Entities are not defined by their attributes in the same way as Value Objects; instead, they have an enduring “Identity”. The canonical example is a person. A person isn’t defined by their name, their age, their height, or any other attribute. Any of these can change and they are still the same person (barring any deep philosophical discussion). In software systems, this is usually implemented by assigning an arbitrary key that uniquely identifies the entity for its lifetime.
Where this key comes from is irrelevant to this discussion, but it is common for it to be a value that is generated in the persistence layer when the data is saved.
So, from this we could say that an entity is an instance with an identity and a collection of mutable attributes. Using a similar validation style to the one I used for Value Objects, I might write this in JavaScript like:
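The original snippet isn’t preserved here, so the following is a sketch of the idea: an entity that throws on wrong types and relies on shallow freezing. The TrailId and TrailFields stand-ins are minimal hypothetical versions so the example runs on its own.

```javascript
// Minimal stand-ins for the real TrailId / TrailFields (illustrative only).
function TrailId(value) {
  this.value = value;
  Object.freeze(this);
}

function TrailFields(name) {
  this.name = name; // mutable on purpose
}

function TrailEntity(id, fields) {
  if (!(id instanceof TrailId)) {
    throw new TypeError('id must be a TrailId');
  }
  if (!(fields instanceof TrailFields)) {
    throw new TypeError('fields must be a TrailFields');
  }
  this.id = id;
  this.fields = fields;
  // Shallow freeze: id and fields can't be reassigned,
  // but the fields object's own properties remain mutable.
  Object.freeze(this);
}
```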
But, I just said that entities are mutable, so why am I still calling Object.freeze? Because I want this object – the Entity – to be immutable: its id can’t change, but its fields are still mutable; I’m taking advantage of the shallow freeze semantics.
Side Note: If you’ve been paying close attention, you may have noticed I’m throwing exceptions here instead of returning them like last time. This is because I consider passing the wrong type to be a developer error, which is (hopefully) an exceptional situation, as opposed to a user entering an invalid value, which is to be expected.
Ignoring the TrailId, which will be a Value Object similar to the previous post, the only interesting part left is the TrailFields object. This will be slightly different from the Value Object and the Entity itself because it’s mutable, so its validation must be performed in the setters instead of the constructor. Using the common setter/getter pattern, it looks like:
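The original listing isn’t preserved here; this is a sketch of the setter/getter pattern described, with hypothetical name/distance properties and validation rules standing in for the real ones.

```javascript
// Mutable fields object: all values required up front, validated in the setters.
function TrailFields(name, distance) {
  this.setName(name);
  this.setDistance(distance);
}

TrailFields.prototype.setName = function (name) {
  if (typeof name !== 'string' || name.length === 0) {
    throw new TypeError('name must be a non-empty string');
  }
  this._name = name;
};

TrailFields.prototype.getName = function () {
  return this._name;
};

TrailFields.prototype.setDistance = function (distance) {
  if (typeof distance !== 'number' || distance <= 0) {
    throw new TypeError('distance must be a positive number');
  }
  this._distance = distance;
};

TrailFields.prototype.getDistance = function () {
  return this._distance;
};
```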
Even though we now have setters, all of the values are still required in the constructor. This makes it impossible* to have a partially instantiated instance running around.
* “Impossible” is an optimistic take on it. If you want to assign invalid values directly to the “internal” fields, this won’t protect you from yourself. This is JavaScript after all.
Another way of modeling the TrailEntity is to have the fields as properties of the entity but leave the id as null until it is persisted, then update it. The advantage of explicitly defining TrailFields is, once again, less mutation and preventing the creation of incomplete instances.
Before the data is persisted we have only TrailFields. Once it is persisted, we have a TrailEntity. The different types represent the different stages in the lifecycle, and the two states are less easily confused.
To explore their use, let’s start by defining a simple Value Object – Distance – because I don’t claim to be creative in these matters.
About the simplest way of defining a Value Object is like this.
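The original four-line snippet isn’t preserved here; a minimal sketch of what that simplest definition would look like:

```javascript
// The simplest Value Object: just capture the attributes.
function Distance(value, unit) {
  this.value = value;
  this.unit = unit;
}
```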
This is a good start and really isn’t much more difficult than using object literals. Given the other advantages (that can be added incrementally), I don’t see a downside.
But we can do better!
My first step would be to make it immutable. Immutability is a great property to have, especially for things being passed around a lot. For Value Objects, I think it is a reasonable default.
Now, Object.freeze isn’t perfect. It only freezes the object itself; nested properties can still be mutated. But, I still think it’s worth it to prevent accidental mutation, or third party code mutating your objects without your knowledge (I’m looking at you, ng-repeat).
The next issue to look at is validation of the fields. new Distance(12, 'parsecs') would ostensibly give a valid Distance, but it probably wouldn’t be very useful. We can use the constructor function to check the parameters, ensuring only valid values can be created. In this case, say: a distance is a positive whole number measured in centimetres, metres or kilometres. It might be implemented this way:
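The original listing isn’t preserved here; the following is a sketch of a validating constructor that returns Errors rather than throwing them. (A constructor called with `new` that returns an object returns that object instead of the new instance, which is what makes this style work.)

```javascript
var VALID_UNITS = ['centimetres', 'metres', 'kilometres'];

function Distance(value, unit) {
  // Returning an object from a constructor overrides `this`,
  // so invalid input yields an Error, not a Distance.
  if (typeof value !== 'number' || value <= 0 || value % 1 !== 0) {
    return new Error('value must be a positive whole number');
  }
  if (VALID_UNITS.indexOf(unit) === -1) {
    return new Error('unit must be one of: ' + VALID_UNITS.join(', '));
  }
  this.value = value;
  this.unit = unit;
  Object.freeze(this);
}
```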
The decision to return the Errors instead of throwing them is probably debatable, but I don’t think validation errors are “exceptional”, and I find it composes better with higher order functions (e.g. map, filter, reduce), which I generally use a lot. Either way, the point is you won’t get back a valid Distance object. And, since they are immutable, a valid Distance will only ever be valid.
Once a Value Object is defined, it becomes a great place to put some logic. Normalization and comparison logic is a good candidate. For example, adding methods to compare two distances.
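A sketch of what such comparison logic might look like (method names and the unit-normalization table are illustrative, not the original code; a minimal constructor is included so the example stands alone):

```javascript
function Distance(value, unit) {
  this.value = value;
  this.unit = unit;
  Object.freeze(this);
}

var METRES_PER_UNIT = { centimetres: 0.01, metres: 1, kilometres: 1000 };

// Normalization: convert any distance to a common unit for comparison.
Distance.prototype.toMetres = function () {
  return this.value * METRES_PER_UNIT[this.unit];
};

// Comparison between two distances, regardless of their units.
Distance.prototype.isFurtherThan = function (other) {
  return this.toMetres() > other.toMetres();
};
```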
It’s also a good place to add domain logic; the application may define how far is considered “walkable”. By having the Value Object, it provides somewhere for this knowledge to go and prevents if (distance < 5) being duplicated across the application.
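A sketch of that domain logic, with the 5km threshold living in exactly one place (the method name and helper are illustrative; a minimal Distance is repeated so the example stands alone):

```javascript
function Distance(value, unit) {
  this.value = value;
  this.unit = unit;
  Object.freeze(this);
}

var METRES_PER_UNIT = { centimetres: 0.01, metres: 1, kilometres: 1000 };

Distance.prototype.toMetres = function () {
  return this.value * METRES_PER_UNIT[this.unit];
};

// The application's single definition of "walkable": 5km or less.
Distance.prototype.isWalkable = function () {
  return this.toMetres() <= 5000;
};
```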
Then, when I decide I’m too lazy to walk 5km, it only needs changing in one place.
]]>~/opt/ directory where possible (then symlinked to ~/bin/).
In this post, I’m looking at how I did this for my Haskell tool-set.
Haskell’s cabal is a great dependency management/build tool with built-in sandbox capabilities. It is also often used to install many Haskell tools, such as hlint, pointfree and doctest.
I was originally installing these tools by creating a subdirectory per tool (e.g. ~/opt/haskell/hlint/) and using cabal sandbox init; cabal install hlint to install the tool (hlint in this case) within the sandbox.
But, I didn’t like having a set of “empty” directories (containing only the hidden .cabal-sandbox and cabal.sandbox.config). With a few extra arguments (thanks to some tips from Ben Kolera at the last BFPG hack night) we can forgo the superfluous subdirectories and reveal the sandboxes.
Here is an example of installing hlint:
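The original snippet isn’t preserved here; the following is a sketch of the commands as described, using cabal’s --sandbox-config-file option and sandbox init’s --sandbox flag (check that your cabal version supports these):

```shell
cd ~/opt/haskell
# Name the sandbox config and make the sandbox directory visible.
cabal --sandbox-config-file=hlint.sandbox.config sandbox init --sandbox hlint
# Install the tool into that sandbox.
cabal --sandbox-config-file=hlint.sandbox.config install hlint
```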
This creates a sandbox config file called hlint.sandbox.config and a visible sandbox in the hlint directory.
You can then symlink the executable onto your path:
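Something like the following (a sketch; the paths assume the ~/opt/haskell/hlint layout above):

```shell
# Ensure ~/bin exists, then link the sandboxed executable onto the path.
mkdir -p ~/bin
ln -sf ~/opt/haskell/hlint/bin/hlint ~/bin/hlint
```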
This is how I have installed all of these tools and is my currently preferred method. There are other variations on this that can also be used, for instance, adding the package’s bin directory to your path, or copying the actual executable out of the sandbox (then you could even delete the sandbox if you choose).
This works for all of the packages I mentioned above, and probably any others that you want to install just for the executable binary.
]]>In that spirit, I wanted a script to create an ssh tunnel and connect my irc client (irssi) to my bouncer (znc), behind my home firewall.
So, this is what I’m using:
#!/bin/bash
ssh -f -o ExitOnForwardFailure=yes -L 6667:localhost:6667 user@home.example.com sleep 30
irssi
The -f -o ExitOnForwardFailure=yes combination makes ssh wait until the forwarded ports have been established before backgrounding. Effectively, this blocks the script until the ports are ready to use.
The sleep 30 keeps the connection open (in the background) for 30 seconds before ssh terminates. However, if there is an open connection on the tunnel, ssh will wait for it to close before terminating. This means you have 30 seconds to connect to the forwarded port, and then it will stay open as long as you’re using it. So, once I quit irssi, the tunnel closes.
Then, irssi configured to connect to localhost:6667 which is tunneled to localhost:6667 on the target machine, where it finds znc!
]]>Netcat (nc) is a really useful little utility, available for most (all?) OSs. It’s often used for low level network tinkering. Recently, I found an everyday (for me) use: testing HTTP “Web Hooks”.
In my specific case, it was the Facebook “Real-Time” API, which POSTs data back to your registered endpoint when a given event occurs on Facebook. But, navigating to Facebook, performing an action and waiting for them to notify your server is a relatively slow process, and makes debugging painfully slow.
To overcome this, we need to be able to consistently repeat a request from Facebook while fine-tuning the handler to perform the required task.
Firstly, we can set up nc to capture the request. We could manually write an HTTP request, but this will ensure it is authentic and actually represents the request that will be sent by the third party.
nc -l 8000 > request.txt
This will cause nc to listen on port 8000 and write any incoming HTTP requests to “request.txt”. Then, we just need to coerce the target service to send us a request at the correct location (you could use port 80, if you don’t need to keep the web server running). Note: the listening nc process will not reply to the request, so the connection will stay open until the client times out or you manually kill the process. Once the request is received, it will be stored in “request.txt”, where we can view it, edit it and — most importantly — replay it.
We can also use nc to handle sending the request for us by piping the saved file through to the target server.
cat request.txt | nc myserver.example.com 80
This will connect to our server and make the exact HTTP request that was captured. The advantage, of course, is that we can replay the request over and over quickly and accurately.
]]>In “Apple Menu” (Top Left), “System Preferences”, “Keyboard”. On the “Keyboard” Tab (not “Keyboard Shortcuts”), there is a “Modifier Keys” button, which opens a dialog and provides a simple interface to remap (or disable) your modifier keys.
I wish I’d realised sooner that it was so simple, now to re-train my hands and free that poor little finger from its curled up hell.
]]>Command-Query separation is probably one of the lesser known principles of object-oriented programming (OOP). Proposed by Bertrand Meyer in “Object Oriented Software Construction”, it states that a function (or method) should perform a Command (do something) or a Query (return a value) but not both. So, any function that returns a value should not modify state. In Martin Fowler’s discussion of the principle he mentions that “it would be nice if the language itself would support this notion”.
It occurs to me that functional languages do indeed grant Mr. Fowler his wish.
Functional programmers aim to compose programs using “pure” functions. In this case, “pure” is used to mean functions that are “referentially transparent”; that is, they depend only upon their parameters. A pure function, given the same set of parameters, will always return the same result. No external influences, including variables, databases, inputs or outputs, can be used or modified by its execution.
Of course, a program that takes no input, can’t access a data store and produces no output isn’t very useful. As such, even the most pure of the functional languages have to allow the nasty real world into their pristine clean-room. The difference, though, is the strict boundaries that are constructed to protect the pure world from the impure. In Haskell, which is often held up as the purest of functional languages, this airlock is provided by the IO Monad. I won’t go into what a Monad is; it doesn’t matter, and I’m not sure I could explain them even if I wanted to. The point is, there are language features that strictly separate functions that alter state from those that “only” perform calculation.
These referentially transparent “pure” functions are revered because they can be reused, re-ordered or parallelized and are guaranteed to always produce the same result. This makes programs more predictable and thus simpler to debug. When you know a function is pure, you need only check its return value; it can’t have changed anything outside of itself.
So, while the languages most of us use daily don’t offer this as a feature, maybe following the principle more often would make for more predictable, bug free code. Wouldn’t that be nice.
]]>I was trying to randomize the order of elements within a list retrieved from an ngResource (answers in a quiz or poll application). The naive approach would be to write the code as if the resource fetch is synchronous.
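The original snippet isn’t preserved here. The following sketch reproduces the problem outside Angular, with a stubbed get standing in for $resource.get: like ngResource, it returns an empty reference immediately and populates it asynchronously, which is exactly why the naive approach fails.

```javascript
// Stub with ngResource-like semantics: return an empty object now,
// populate it when the "server" responds.
function get() {
  var question = { text: 'Pick one:' };
  setTimeout(function () {
    question.answers = ['a', 'b', 'c'];
  }, 0);
  return question;
}

var question = get();
// shuffle(question.answers) would run here — but the data hasn't arrived yet:
console.log(question.answers); // undefined
```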
The problem that will soon become apparent is that the get is not synchronous. So, when shuffle(question.answer) is called, the question object is not yet populated.
One approach that I encountered to circumvent the issue is to use the built-in orderBy filter with a callback that randomly chooses the order. This works because the filter is run when the returning data triggers a new digest. This may be good enough in some circumstances. The problem I encountered with this approach, however, is that the list is re-sorted every digest cycle. So, if there is anything much going on in the scope, the list will re-sort itself, causing the bound elements in the page to jump around.
The solution for me was to sort the list once, after it is retrieved. This can be accomplished by passing a callback, as the second argument to $resource.get, that will be called with the returned data once it becomes available. This allows you to perform any required manipulation on the data before it is assigned to the $scope.
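A sketch of the callback approach, again with a stubbed get in place of $resource.get and a simple in-place shuffle (both illustrative, not the original code):

```javascript
// Stub with ngResource-like semantics: the second argument is a callback
// invoked with the data once it has been fetched.
function get(params, callback) {
  var question = { id: params.id };
  setTimeout(function () {
    question.answers = ['a', 'b', 'c', 'd'];
    callback(question);
  }, 0);
  return question;
}

// Simple in-place Fisher-Yates shuffle.
function shuffle(list) {
  for (var i = list.length - 1; i > 0; i--) {
    var j = Math.floor(Math.random() * (i + 1));
    var tmp = list[i];
    list[i] = list[j];
    list[j] = tmp;
  }
  return list;
}

var question = get({ id: 42 }, function (data) {
  shuffle(data.answers); // runs exactly once, after the data is populated
});
```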
In this instance, the answers to the question are shuffled, but the pattern is useful any time you want to perform an action only after the data is fetched. This may include calculating aggregates or even fetching extra, dependent data.
As an aside, I don’t believe this manipulation belongs in the controller and should probably be encapsulated in a service that the controller consumes. I will have a look at using “fat” services and “lean” controllers in a future post.
]]>To perform this setup, we can create a new session for “shared” applications, opening irssi in a window named “irc”. Either manually create the windows and open the applications in them, or, if you’re lazy and use the same setup regularly, script it, something like so:
tmux new-session -s shared "tmux new-window -n irc irssi"
You can use the automatically-assigned window names, but since I’m referencing them across sessions, I feel explicitly naming them (with the -n) is more robust.
Once the applications are running, we can link the windows to the working session using tmux’s link-window command, which has the basic form:
tmux link-window -s <src-window> -t <dst-window>
Any time a tmux command requires a reference to a window, we can provide an absolute reference to any window in any session using the session:window format, where session is the name of the session and window is the name (or number) of the window in that session.
Using this, we can use link-window to link our “shared:irc” window to index “9” in the current session. Which looks like:
tmux link-window -s shared:irc -t 9
The target parameter is optional; without it, the window will be placed in the next available index, but I like to place it at the end of the list so they don’t get in the way of my “real work”.
]]>I recently grabbed the pre-print release of Lorna Jane Mitchell’s new book “PHP Web Services: APIs for the Modern Web”, now available in Print and Ebook formats through O’Reilly. APIs are such an integral part of modern web application development. Lorna herself specialises in API development and integration, rarely working on anything that could be called front-end.
Every system I have worked on recently has been heavily API based. With the current trend in single-page JavaScript web applications and client-side MV*, the server is being relegated more and more to the role of data provider through custom APIs.
In the preface, Lorna enumerates many scenarios in which building APIs is a prudent strategy and why PHP is a pragmatic way to “solve the web problem”; being built with the web in mind from the beginning it comes with many useful tools for delivering and consuming web services built in.
With this introduction (as well as the usual typographical conventions) out of the way, we are thrown straight into the nuts-and-bolts of the basics of HTTP request/response and how the browser/web server is analogous to the API consumer/provider we will be building. A very practical and useful part of the chapter is the introduction to a number of tools and techniques for making and inspecting requests against HTTP servers, I have personally used the section on Curl as a reference a number of times already.
No introduction to HTTP, especially as a platform for APIs, would be complete without an explanation of the HTTP verbs, and this book doesn’t disappoint, with a thorough explanation of the common verbs and PHP code examples of their usage. Delving deeper into the HTTP protocol, we find a much more comprehensive look at request and response headers, a look at a number of common headers and a detailed look at the very important, but in my experience underutilised, Accept header for content negotiation, including example code to parse it correctly, taking into account the weighting of the preferences. This is great low level information for building a flexible API service.
A chapter each is given to the XML and JSON data formats, and the advantages and disadvantages of each, asking the question, “which format is superior?” (hint: it depends). Some useful guidelines are provided regarding the scenarios in which each format may be an appropriate choice. There are, of course, code snippets showing how to work with each format.
The chapter on RPC and SOAP provides a substantial look into these styles of services, with examples of real-world APIs, snippets to create and consume them, and tools to make working with them more friendly. In spite of the book appearing somewhat biased towards REST style services (or maybe that’s just me), this chapter is full of great information and tips for RPC and SOAP style services, but if this is your primary interest there are probably better books.
The REST chapter brings together many of the concepts explored in the previous chapters about HTTP, verbs, headers, URLs and data formats to describe REST services in a useful, accurate and fairly succinct way. Thorough explanations of how to implement the standard CRUD operations so fundamental to REST are provided. Special mention is also given to hypermedia and content negotiation/media types, which are important aspects of “pure” REST services. Ever pragmatic, the chapter concludes with a section reminding readers to remember not to be caught up in the “trendy”, and that it is more important for an API to be “useful” than it is to be “RESTful”.
The debugging chapter looks into what to do when it all goes wrong. As APIs are generally hidden behind the actual application interface the techniques for debugging them can be a bit different. Lorna introduces a couple of tools for inspecting the requests and responses on the wire to allow the tracking of anomalies without interrupting the data flow and breaking request/response formats.
Towards the end of the book, we step away from the code for a couple of chapters and discuss the design decisions based on the options provided earlier in the book. Including chapters discussing robust, predictable, user/developer friendly APIs, handling errors and writing documentation.
That was longer than I intended, but I see that as testament to the breadth of information contained within the seemingly modest tome; every chapter has something. Weighing in at a little over 100 pages, “PHP Web Services” provides a great foundation in the practicalities of using PHP to build modern web services.
I feel the target audience for the book is the PHP developer who has built a few web sites, but never looked much deeper into the workings of the HTTP protocol and its implications. It provides a broad overview of important concepts, but probably doesn’t dive deep enough for the seasoned professionals. The code snippets are just that, snippets, a demonstration of the core concept, they are not full libraries that should be dropped directly into production code.
All in all, I think the book provides a solid overview of many of the considerations of web service development and would be a great guide to anyone venturing into building such services in PHP.
]]>Of course, as Chris Shiflett in his (now traditional :P) Ideas of March post espouses, the disadvantage of any third party service is the lack of data ownership; “You never know when it’s going to disappear.” citing the recent announcement that Google Reader is to be shut down. This is why I am reluctant to implement such a solution. And given Lorna Mitchell’s suggestion, “don’t read the comments”, I have to wonder if having comments is worth the effort involved?
For now, it looks like I’m going to remain comment free. Which is a shame. The occasional grateful comment from a random stranger on the Internet has a certain way of almost making up for some of the not-so-nice things the Internet thrusts upon us. Without this feedback, where’s the love?
Both Lorna Mitchell and Rob Allen (in his response) assert that they intend to sometimes “turn off comments and encourage others to respond by writing […] on their own blog.” I think this is a fantastic idea (I’m doing so right now), though I do have some concern about how well fragmented blog posts can coalesce into a meaningful conversation, “trackbacks” were an attempt at solving this issue, but I’m not sure that worked too well.
This is one advantage the “walled gardens” currently hold – they have the ability to aggregate the posts into a single congruous conversation.
If you have any solutions to the fragmented conversation problem, write a response on your blog and I’ll… Probably never see it… Ping me on twitter, maybe?
]]>To facilitate this, and because my Wordpress install was so hideously out of date, I’ve decided to change to a static blogging solution: Octopress. As such, the blog is currently without much of a design. This may (or may not) be remedied at a later time. I will write in more detail about the platform and the migration in a future post.
]]>There have been articles debating who is the cause of the confusion; whether it’s the Developers or the Users that need to try harder.
But in all this, what I think has been missed is the success of Facebook Connect. There are hundreds of comments on the ReadWriteWeb article from users who failed (for whatever reason) to find their way to Facebook, yet still managed to use Facebook Connect to leave a comment.
Sounds like a testament to the usability of Facebook Connect to me!
]]>Doctrine 2.0 looks like it might finally be the ORM framework I have been seeking for PHP. While the older versions of Doctrine provided great functionality, they were too intrusive for my taste. I think an ORM should provide a true data mapper; in which the domain entities need know nothing about their persistence.
Matthew Weier O’Phinney has already posted about autoloading Doctrine in Zend Framework, but Doctrine2 presents some new challenges. Mainly that Doctrine2 is fully PHP5.3, including “real” namespaces, so its classes don’t follow the (current) Zend naming standard and the ZF autoloader won’t load them for us.
Good News, Doctrine provides its own autoloader that we can leverage to load its own classes.
Bad News, the Doctrine autoloader automatically registers itself with spl_autoload_register, causing the normal Zend loader to be forgotten (well, pushed down the stack, where it isn’t very useful).
Good News, it’s easy to remove the doctrine autoloader using spl_autoload_unregister, then push it onto the ZF autoloader stack, targeting the Doctrine namespace. Letting the ZF autoloader call it as necessary.
Enough jibber-jabber, how do we do all this? In the bootstrap! Adding this method to your Bootstrap.php will achieve what we want; adding the Doctrine autoloader to the Zend Framework autoloader queue for the “Doctrine\” namespace.
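The original snippet isn’t preserved here. The following is a sketch of the idea as described — let Doctrine’s loader register itself, take it back off the SPL stack, then push it onto the ZF autoloader for the “Doctrine” namespace. The class names and paths are assumptions based on the Doctrine 2 betas of the time, not the original code.

```php
<?php
class Bootstrap extends Zend_Application_Bootstrap_Bootstrap
{
    protected function _initDoctrineAutoloader()
    {
        // Path assumed: Doctrine living under library/ on the include path.
        require_once 'Doctrine/Common/ClassLoader.php';

        $doctrineLoader = new \Doctrine\Common\ClassLoader('Doctrine');
        $doctrineLoader->register(); // registers itself via spl_autoload_register

        // Remove it from the SPL stack so the ZF autoloader stays in charge...
        spl_autoload_unregister(array($doctrineLoader, 'loadClass'));

        // ...and let the ZF autoloader call it for the "Doctrine" namespace.
        $autoloader = Zend_Loader_Autoloader::getInstance();
        $autoloader->pushAutoloader(array($doctrineLoader, 'loadClass'), 'Doctrine');
    }
}
```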
We can use all of Doctrine’s classes anywhere else in our code.
I still have a lot to learn and the documentation on 2.0 is a little sparse as yet, but this is a start.
Based on Yadif and Benjamin Eberlei’s recent look at Using a Dependency Injection Container with Zend_Application, where he replaces Zend_Application’s default container instance (a Zend_Registry instance) with a Yadif_Container, I have created a Zend_Application_Resource to allow configuration-based injection of dependencies into the container via the normal ZF configuration file (application.ini).
The container resource copies any already instantiated objects from the old container into the new one, then replaces the default container.
I’ve also created a simple action helper to allow easy grabbing of resources from the action controllers. Both reside in my extensions repository.
To use the container resource you will need to add the prefix and path to the bootstrapper’s plugin loader:
Then you can add the resources and their dependencies via the normal configuration system. This means adding lines like:
resources.container.objects.Log.class = "Zend_Log"
resources.container.objects.Log.arguments.0 = "Log_Writer"
resources.container.objects.Log_Writer.class = "Zend_Log_Writer_Stream"
resources.container.objects.Log_Writer.arguments.0 = "%Log_Writer.stream%"
resources.container.options.Log_Writer.stream = APPLICATION_PATH "/../log/application.log"
There are 2 resources defined here, the “Log” and the “Log_Writer”.
Log is an instance of Zend_Log and takes a Log_Writer resource as the first (and only) argument to its constructor.
The Log_Writer resource is an instance of Zend_Log_Writer_Stream and takes a scalar as its only argument. The scalar value is defined in the container option specified.
Now, the controller can write a log like this.
While this is a simple example, it can be really beneficial when working with, for example, a service layer. The service you need might depend on another service, both of which may depend on an Authorization service. All the services depend on their data mappers (which themselves depend on a database connection) and their entity factories, etc. Instantiating a dependency tree like this for every object you need can lead to duplicated and hard-to-modify code. Dependency injection coupled with a good container can provide highly versatile code whose behaviour can be drastically changed by only modifying a configuration file.
]]>Note, there are still some bugs in Zend_Tool that prevent this working as it should, I will note the bugs and their fixes as we encounter them.
From the base path of our application (/WORKING/PATH/aza from the last article), we can issue the command to the Zend_Tool CLI to create our guestbook module.
zf create module guestbook
Then, create the index controller within the guestbook module.
zf create controller index 1 guestbook
The “1” argument tells Zend_Tool that we want to automatically create an index action within the new controller. We can get a help listing like this:
zf create controller ?
Once the new module and controller is created we need to tell the application that we are using modules. We do this by adding two lines to the configs/application.ini. The first activates the modules resource. The second configures the front controller, telling it where the modules are located. These lines should be added to the end of the production section of the .ini file.
resources.modules = ""
resources.frontController.moduleDirectory = APPLICATION_PATH "/modules"
To check our module is working, we can navigate to our guestbook at http://aza/guestbook and we should see the default view for the index action.
Bug: Zend_Tool doesn’t prefix the controller names with the module name. The guestbook index controller class IndexController needs to be changed to Guestbook_IndexController.
Bug: The default view for the controller is the same as the main page. It shouldn’t be, but we don’t really care; we’re going to replace it anyway.
In the same way that the Bootstrap.php set up the environment for our main application (also known as the default module), each module has its own Bootstrap.php that adds anything additional that each module needs. Zend_Tool doesn’t create this bootstrap by default, so we need to create application/modules/guestbook/Bootstrap.php, and it should contain:
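The original four-line snippet isn’t preserved here, but following the standard ZF convention it would be an empty module bootstrap class, something like:

```php
<?php
// An empty module bootstrap; extending the module bootstrap base class is
// enough to trigger the default module resource autoloading.
class Guestbook_Bootstrap extends Zend_Application_Module_Bootstrap
{
}
```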
By creating this file, the application will automatically perform module bootstrap tasks such as adding autoloaders for the default resources; including models, forms and services. Any other module specific bootstrapping tasks can be added as _init*() functions. In our case, we don’t need any further bootstrapping.
Important note: All bootstrap functions for every module are run for every request.
The bootstrap process occurs before routing and dispatch, so during bootstrap there is no way to know which module/controller/action is being requested. Therefore, any setup that should be done only if a particular module is requested should be done in plugins, not bootstrap.
Now we have the module skeleton in place, lets start porting the guestbook code to our module. This turns out to be fairly easy; the majority of the changes involve prefixing class names with the module name.
To make life easy, start by acquiring a copy of the completed quickstart application (it’s on the right hand side in zip or tar.gz form).
Once you have downloaded and extracted the files into a temporary folder, we can start copying in the files we need.
We need to copy the GuestbookController from the Quickstart (making it the IndexController) and all of the Quickstart models, views and forms to the appropriate places within our module.
From (Quickstart) → To (Aza):

- application/controllers/GuestbookController.php → application/modules/guestbook/controllers/IndexController.php
- application/models/* → application/modules/guestbook/models/
- application/views/scripts/guestbook/* → application/modules/guestbook/views/scripts/index/
- application/forms/* → application/modules/guestbook/forms/
The controller and the views will require overwriting the existing files.
Now we have the files in the right place, we need to update the files to be modular.
We’ll start with the easiest one, the form. It is simple because it is already prefixed for the Default module; all we need to do is change the prefix to Guestbook_. So the class in application/modules/guestbook/forms/Guestbook.php changes from Default_Form_Guestbook to Guestbook_Form_Guestbook.
Now the models. There are many more changes here, but they are just as simple because the models (like the form) are already prefixed with “Default_”, but the classes also contain references to each other, so we need to change more than just the class names. A simple search and replace of “Default_” with “Guestbook_” in the application/modules/guestbook/models/ directory is all we need.
The controller is a little trickier because it isn’t already prefixed (controllers in the default module aren’t), but it’s still not too hard. The name of the class in application/modules/guestbook/controllers/IndexController.php just needs to be changed from GuestbookController to Guestbook_IndexController, as it has changed from the guestbook controller within default module (no prefix) to the index controller within the guestbook module. We also need to update the references to the models and forms, the same search and replace as we used in the models will suffice.
Finally, we get to the view. In our index view (application/modules/guestbook/views/scripts/index/index.phtml) we need to update the parameters passed to the url helper to reference our controller. Adding the module, and changing the controller leaves the first link looking like this:
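The original snippet isn’t preserved here; a sketch of what the updated url helper call might look like (the “sign” action name follows the Quickstart; the exact parameters may differ from the original):

```php
<!-- Pass module, controller and action to the url helper, resetting any
     existing route parameters with the trailing `true`. -->
<a href="<?php echo $this->url(
    array(
        'module'     => 'guestbook',
        'controller' => 'index',
        'action'     => 'sign',
    ),
    'default',
    true
); ?>">Sign Our Guestbook</a>
```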
Done!
I’ll leave the actual creation of the database to you. It is the same as the Quickstart, and this post is already particularly long. You will need to create the database and add the configuration to the application.ini.
We have just ported the Quickstart guestbook application to a Zend Framework module. Modularizing applications allows for easier code reuse across applications. Hopefully modules will become standardized to the point that there will be a repository of modules that can be added to your application to provide drop-in functionality.
For those who had trouble following along, I’ve made the entire application (including database) available via my github repository.
]]>The first step is to actually get ZF, so start by downloading the package (about 40MB in total) into our working directory and extracting it.
wget http://framework.zend.com/releases/ZendFramework-1.8.4/ZendFramework-1.8.4.tar.gz
tar zxf ZendFramework-1.8.4.tar.gz
We’ll then create a symlink to provide an easy upgrade path (extract the new version and move the symlink), and an easier to remember directory name.
ln -s ZendFramework-1.8.4 ZendFramework
Creating an alias allows the command “zf” to always point to the Zend_Tool shell script, so we can use the command-line tool from wherever we need it.
alias zf=`pwd`/ZendFramework/bin/zf.sh
Now that the installation is complete, we should be able to check which version of the framework we have just installed.
zf show version
# Zend Framework Version: 1.8.4
If this works, we’re ready to start creating our project. The documentation provides some alternate methods of setting up Zend_Tool, including setting it up in a Windows environment.
Once Zend_Tool is working, we can begin creating our project. For the exercise, we’ll call our project “aza” (A Zend Application). Using Zend_Tool, we create the basic structure for the project.
zf create project aza
This should produce a project structure that looks like this:
[Figure: directory listing of the new project]
Finally, we can tell the Apache2 web server about our application by adding a VirtualHost to the server configuration. You will need to replace “/WORKING/PATH/” with the absolute path to the directory in which you are working (run pwd if you’re not sure).
<VirtualHost *:80>
ServerName aza
DocumentRoot /WORKING/PATH/aza/public
<Directory /WORKING/PATH/aza/public>
php_value include_path "/WORKING/PATH/ZendFramework/library"
php_value magic_quotes_gpc 0
php_value short_open_tag "on"
DirectoryIndex index.php
AllowOverride All
Order allow,deny
Allow from all
</Directory>
</VirtualHost>
Don’t forget to restart the web server to enable the site.
We should now be able to navigate to our site and be warmly welcomed to our new Zend Framework application! We’ll stop here for now. In the next post, we’ll start looking at creating our first module.
2009-06-26 Updated for ZF Version 1.8.4
]]>SET @pos := 0; UPDATE Example SET position = @pos := @pos + 1 ORDER BY position
It simply initializes a variable (@pos) to 0, then for each row (updates are done in sequence) increments the variable and assigns it to the position column. The ORDER BY clause ensures the current ordering is maintained. WHERE clauses can also be added as required.
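For example, to renumber the rows of a single category only (the category_id column and the value 3 are hypothetical here), a WHERE clause slots in alongside the ORDER BY:

```sql
-- Renumber positions within one (hypothetical) category,
-- preserving the rows' current relative order.
SET @pos := 0;
UPDATE Example
SET position = (@pos := @pos + 1)
WHERE category_id = 3
ORDER BY position;
```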
]]>I found that this can be done using twitter’s own search function: simply put all the tags into the “Any of these words” box on the twitter advanced search interface and run the search. On the results screen, subscribe to the URL for the “Feed for this query” (at the top right of the page) and put it into your favourite feed reader. Set it to refresh every few minutes (you can probably refresh more often if your software allows, but do you really need to?) and watch the conversation.
This allowed me to keep up with chatter at the conference, even from people I don’t usually follow.
]]>After flying across the country we checked into the hotel, the Hilton Bonaventure in Montreal. The front desk apparently didn’t know we were there for the conference because we didn’t “book with the others”. I’m not sure what that means, or what difference it would have made. The hotel offered free wireless in the foyer and conference halls, but not in the rooms. Luckily, we were close enough to the foyer to access their wireless from our room (once I sorted out some driver issues on my laptop).
After missing most of the opening keynote on the first day in favour of eating (I didn’t know breakfast would be available), I jumped into the sessions, starting with Matthew Weier O’Phinney’s Practical Zend Framework Jutsu with Dojo, which provided a practical overview of an area of the Zend Framework I have been planning to investigate but haven’t yet got round to. This was followed by John Coggeshall explaining the process of Building RIA Applications in PHP. This wasn’t a talk I intended to attend (Sara Golemon’s talk was scheduled for this time, but she was unable to attend due to illness), but it was interesting to see the differences in building a PHP application without an HTML front end.
After lunch, Derick Rethans looked into search and indexing. Of Haystacks and Needles introduced MySQL full-text search, Selenium and Solr. While I have used Selenium, Solr seems like a useful step up for systems with higher requirements. The afternoon sessions were A Tour of MySQL High Availability by Morgan Tocker, which talked about the difference between scaling for performance and scaling for HA, and techniques for the latter. For me there wasn’t much I hadn’t used before, but some of the monitoring tools warrant further research. Stupid Browser Tricks by Sean Coates was in a similar boat: it was a good introduction to some useful browser-side tools (Firebug, YSlow! and Selenium IDE), but I had hoped for a deeper look into Selenium. Isn’t everyone using Firebug by now? FirePHP is a nice addition though.
Day 2 started with a quick breakfast (I’m a fast learner ;)), then PHP Code Review Part #1, with Sebastian Bergmann and Stefan Priebsch delving into some not particularly pretty examples of code from well-known PHP applications such as WordPress. It’s reassuring to know other people write and release bad code too :) I didn’t attend Part #2 of the session, where they took code samples from the audience and critiqued them. Instead I opted to drop in on Chris Hartjes explaining why Deployment Is Not A 4 Letter Word and that, with some planning and the appropriate tools, even the sysadmin should be able to deploy your application with confidence in your absence.
The pre-lunch keynote was John Coggeshall again, discussing RIA in Beyond the Browser. Is the browser dead? Not yet, but it certainly has some growing competition. After a break, I listened to Ilia Alshanetsky talk about Premature Optimization Mistakes, which focused on optimising the server stack itself before delving into application-level optimisation, arguing that this usually provides more results without the risk of breaking the application. PHP for the Enterprise then examined how PHP has reached a level where it is suitable for projects that were once considered the realm of “real” programming languages. Most of the talk discussed the more technical details of scaling PHP to an enterprise level, such as database buffer sizes, performance monitoring and caching at various levels.
The day ended with the career fair which saw a number of, primarily local, employers (including the armed forces?) set up booths and discuss their work and employment with the potential candidates at the conference. While I wasn’t actively seeking employment, I did have a chat with some of the representatives. Given the location of the conference, it wasn’t too big a surprise that the majority were bilingual and in some cases French only offices.
My final day of the conference started with Owen Byrne discussing Growing a Development Team While Building a Huge App at 500 miles/hour, which I attended hoping to garner an insight into building a team and managing agile development on a large project. While the project was an interesting one, Owen seemed to be more interested in giving out t-shirts and I didn’t feel we got very deep into the whole process. Being a fairly heavy user and fan of Zend Framework, I joined Matthew Weier O’Phinney in his search for some of Zend Framework’s Little Known Gems. The talk was targeted at using the components in isolation, and I discovered a number of components that may come in handy in future projects, with or without the MVC stack.
Morgan Tocker then talked further about MySQL, this time focusing on performance as opposed to high availability. There were a few points about the inner workings of the InnoDB storage engine that got my attention, including some builds available from Percona we may need to look at.
The round-table Framework Comparison, featuring Fabien Potencier (Symfony), Derick Rethans (ezComponents) and Matthew Weier O’Phinney (Zend Framework), seemed to indicate that all 3 frameworks solve much the same problem; they even went as far as agreeing you should use components from the other frameworks when your primary framework doesn’t include one. Much different to the “my framework is better” “discussions” we too often see.
Finally, Chris Shiflett’s Security-Centered Design: Don’t Just Plan for Security; Design For It provided an alternative look at some interesting security topics. Instead of focusing on technical details, he primarily looked at security from a user perspective, as “user perception is as important as reality”, giving examples of various recent attacks on high-profile sites that, while not actually the fault of the site, would be perceived as such by most users. He also put forward ideas about using “ambient signifiers” to assist in the fight against phishing, and about how the normal (request-reload) web model can mean important information is missed due to “change blindness”, including a live demonstration. He suggested AJAX might be a suitable solution in this case (as long as it’s still accessible, of course).
All-in-all the conference was a great experience, the hotel was really nice (especially compared to some of the hostels where I usually slum on my travels) and the talks were wide ranging and generally well presented.
Some of the material I found too “introductory”, but I think that may be because I primarily attended sessions on topics I am familiar with hoping to learn more, whereas using the talks as introductions to new topics might have been a better idea. How do people usually select talks?
I also didn’t attend much in the way of “extra-conference” activities; social events outside of the conference schedule. I am a little disappointed about this, as it would have been good to chat to some members of the community in a less formal setting, so this is something I think I would do much more of next time. And there will be a next time.
]]>