
Wednesday, 22 April 2009

Cincom starting work on 2.9

It's been a little while coming but it seems Cincom has begun to take a crack at porting Seaside 2.9. This is excellent news and it sounds like the first push hasn't been too painful, which provides some validation for our portability work. Hopefully this means we'll be able to iron out compatibility issues with VW before going to Beta.

Wednesday, 15 April 2009

Seaside Tutorials from GemStone

James Foster announced that he has posted his Seaside tutorial materials online. The tutorial is released under a Creative Commons Attribution Non-Commercial Share-alike license and takes you through the basics of setting up an image, figuring out Smalltalk, and getting started with Seaside, Monticello, and then GLASS.

Seaside documentation is always welcomed by the community. Thanks to James and GemStone for making it available.

Monday, 30 March 2009

Russian Continuations

Andrey Larionov just posted a Russian translation of my article on the use of continuations in Seaside. Hopefully it's accurate... I certainly have no idea. :) He says he's going to try to start a series of articles on Seaside so keep an eye out for that if you can read Russian.

In other news, I'll be on vacation in Portugal for the next 10 days, likely without Internet access. I'm hoping the weather will get warmer than currently predicted but as long as the rain holds off I won't complain.

Wednesday, 18 March 2009

Seaside at The Chasm

I've seen two references this week to Geoffrey Moore's Crossing the Chasm. I haven't read it yet but I've added it to my list.

Kent Beck describes the book's theory of market segments like this:

  1. Enthusiasts — will try new things for the sake of novelty
  2. Innovators — will try new-ish things for the sake of business improvement in spite of some risks
  3. Early majority — will adopt proven products provided there is no perceived risk
  4. Late majority — will follow in adopting established products
  5. Laggards — the name says it all

He then goes on to highlight these observations of Moore's with regard to high-tech products:

  • The gap between innovators and the early majority is particularly wide (the “chasm” of the book title), stymying many promising innovations
  • Marketing is fundamentally word of mouth, but people in one segment don’t converse with people in other segments. This creates a chicken-and-egg problem in gaining traction in a segment.
  • Messages that work for one segment don’t work for the next. The time to switch is when a product seems to be gaining momentum, because that segment will soon be exhausted.

Eric Sink adds:

One of the ideas in [the] book is that new innovations don't go mainstream until they become a "whole product".

Now Kent is talking about JUnit Max and Eric is talking about Distributed Version Control Systems (like git, Mercurial, and Bazaar) but this got me thinking again about Seaside's positioning and how to grow our user community. I think GLASS, by providing integrated persistence, is a big positive step towards making Seaside seem like a "whole product". But what else is missing? Deployment/admin tools and documentation spring to my mind and I'm hoping to make a start on the latter with some of the posts on this blog. But what else?

Growing support by major Smalltalk vendors helps reduce the apparent risk within a small segment of the market but, for many people, Smalltalk itself is seen as a major risk. So part of making Seaside more appealing would involve either de-emphasizing its use of Smalltalk (not really going to work—people will notice eventually) or reducing the apparent risk of Smalltalk itself. That's no small task and the vendors, I'm certain, have this at the front of their minds. Still, if anyone has any thoughts on how Seaside in particular can contribute to that effort, I would love to hear them.

One of the major perceived risks associated with Smalltalk, of course, is the relatively small number of developers using it. Thus we have a feedback loop there: more users means less risk, which means more users, and so on.

So three important questions, I think, are:

  1. What does Seaside need in order to be considered a complete product?
  2. What can Seaside do (other than growing the community) to reduce the perceived risk of adoption?
  3. What can Seaside do (other than reducing perceived risk) to grow the community?

No answers yet; just questions. If you have any thoughts, please share.

Tuesday, 3 March 2009

Seaside 3: Server Adaptors

This is the next in a series of posts looking at changes in the upcoming Seaside release. Check out the other posts on partial continuations and exception handling.

The upcoming release of Seaside will feature some changes to the now-familiar WAKom. The existence of various subclasses like WAKomEncoded and WAKomEncoded39 was beginning to point at a usability problem and we wanted to facilitate running Seaside on multiple ports out of the same image.

First of all, we have introduced an abstract class, WAServerAdaptor. Server Adaptors provide an interface between Seaside and a particular server or connector (e.g. Comanche, Swazoo, AJP) and are able to start and stop server instances on the correct ports. Most importantly, they are also responsible for translating between Seaside's WAResponse and WARequest objects and the native requests and responses required by each server.

Server Adaptors are configured with a port number and a Request Handler. They are also configured with a Codec, which specifies the character conversion that should be performed when converting the requests and responses. Character encoding issues are complex, however, and we are still playing with those interfaces so I will leave that discussion for another time. Server Adaptors get registered with an instance of WAServerManager (currently a singleton) when they are created and the Server Manager coordinates starting and stopping the servers.

To create and start a new Comanche adaptor, you could simply execute:

adaptor := WAComancheAdaptor port: 8080.
adaptor start.

Alternatively, you can use the new Control Panel (implemented with OmniBrowser) to create and configure your adaptors. You can now create and run multiple adaptors of the same type. You can even configure multiple adaptors on the same port (perhaps to facilitate testing different servers or codecs) though only one can actually be running at any time.

Note that WAKom and its subclasses still exist for backwards compatibility in this release and will start an appropriately-configured instance of WAComancheAdaptor. These classes, however, only support running a single adaptor instance at a time and will be removed in a later release. Once created with WAKom, the adaptor will be visible in the Control Panel and can be manipulated there just like any other adaptor.

I mentioned that Server Adaptors can also be configured with a Request Handler. Why would you want to do that? By default, all Server Adaptors dispatch incoming requests to the same default Dispatcher. But perhaps you have an admin interface that you want to run on a different port so it can be firewalled to an internal network. Perhaps you want to make Seaside's web config tool available through an SSH tunnel. Or perhaps you have an XML-RPC interface or a status page for integration with a service monitoring tool. Maybe you just want to transition your application behind a load balancer to a new instance with different configuration options. Whatever the case, if you just want to configure a new Dispatcher using the web interface, the Control Panel has a context menu option called "Use new dispatcher" which will give you a new blank starting point on that Server Adaptor. If you have an existing Request Handler somewhere you can tell the Server Adaptor to use it like this:

adaptor requestHandler: myRequestHandler

Using a Dispatcher at the root will allow you to configure multiple Request Handlers at different subpaths but you could just as easily use an instance of WAApplication, RRRSSHandler, or any custom Request Handler subclass (more on this in a future post) if you want all requests to be handled in one place.
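
To make the admin-interface scenario above concrete, here's a rough sketch (myStatusHandler is a placeholder for a Request Handler you have already built; only the adaptor protocol shown earlier is assumed): run a second adaptor on an internal-only port and point it at that handler, leaving your public adaptor on port 8080 untouched.

"sketch: a second adaptor serving only an internal status handler"
adminAdaptor := WAComancheAdaptor port: 9090.
adminAdaptor requestHandler: myStatusHandler.
adminAdaptor start.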

Sunday, 1 March 2009

Seaside on Industry Misinterpretations

I spent some time this afternoon talking to James Robertson and Michael Lucas-Smith of Cincom for the Industry Misinterpretations podcast (episode 125). We talked a bit about the history of Seaside, partial continuations, the goals for the next release of Seaside, and some of the challenges (like getting debugging working). And probably some other things I've forgotten.

In fact, I know for certain that when we finished recording there was something I wished I had mentioned. I figured I'd blog about it when I posted about the podcast but, of course, now I have no idea what it was...

Update: The thing I forgot to mention on the podcast was another difference between a partial continuation and a full continuation. Evaluating a full continuation replaces the entire stack of the running process; a partial continuation can be evaluated—much like a block—so that, on completion, it returns to the context where it was evaluated.

On an unrelated note: I'm toying with the idea of making the trek to Potsdam for the German Squeak users meeting. Any other Seasiders planning to attend or have any idea how much someone with only elementary German would get out of it? :)

Tuesday, 24 February 2009

Zug Sprint

Last weekend we ran a successful Seaside sprint in Zug, Switzerland, hosted by Michael Davies.

With two days of solid work, we managed to close around 25 issues and were only occasionally distracted by LEGO Indiana Jones on the Wii. We also submitted a couple of fixes upstream for Comanche and Swazoo. We have nearly all the inter-package dependencies teased out now and the level of churn seems to be starting to ease a bit, which is quite exciting.

We have a very satisfying little centre of Seaside activity in this corner of the world right now.

Sunday, 15 February 2009

Audio and video of my ESUG 2008 presentation

James posted the audio and video of my ESUG 2008 presentation on the evolution and architecture of Seaside. If you missed the talk and are interested in hearing how Seaside got going 7 years ago and where it might be heading, check it out.

Thanks James!

Wednesday, 11 February 2009

Seaside 3: Partial Continuations

This is the second post in a series looking at the upcoming new release of Seaside. Check out the first post on exception handling.

Continuations in Seaside

Seaside is often referred to as a "continuation-based" web framework and certainly in its early days continuations were used throughout to work its magic. Seaside 2.8 still uses first-class continuations (more on what that means in a minute) in three different places:

  1. to abort normal request handling and immediately return a response;
  2. to interrupt a piece of code and resume it when the user clicks on a link or follows a redirect (to send cookies to the user, for example); and
  3. to implement Component call/answer.

The upcoming release of Seaside, however, will completely eliminate the use of continuations within the core framework itself. Case 1 has been reimplemented using exceptions, and the code for cases 2 and 3 has been moved to an optionally-loadable package. This means that you can now choose to install Seaside without using any continuations at all, which is good news for portability to a few Smalltalk dialects that don't currently support continuations.

At the same time, we are also replacing our use of full continuations with partial continuations and this article will look at what that means and why we are making this change. This stuff can get confusing (particularly while debugging it!) so don't worry if you have to let the information mellow and then come back and read it again. I've simplified a few things, sacrificing detail in the hopes of making the subject more comprehensible for people who are just curious about how it works. I'd appreciate any feedback on how well I've struck this balance.

What is a Continuation?

First of all, when I talk about continuations here, I'm talking about first-class continuations. Seaside also uses a continuation-passing style to implement its application render loop (this is the _k you see in Seaside URLs). This is a somewhat-related concept but is not what we're talking about today.

Continuations are often defined as "the remaining computation" but I think this can seem a bit obscure if you don't already understand them. To me, the simplest explanation is that a continuation saves a snapshot of a running process that you can resume later. You call a method, which calls another method, which calls another method, and so on and then at some point you create a snapshot of this chain of methods and save that snapshot object somewhere. At any point in the future you can restore it, abandoning the code you are currently running, and your program will be back in exactly the same place in exactly the same method as when you took the snapshot. That's a first-class continuation.

Smalltalk users should not find this too hard to come to terms with. When you save your Smalltalk image, you can open it later and be back exactly where you left off. You can open that saved image as many times as you like and return each time to the same state. If you save the image into a new file, you can still go back and load the old image. A continuation does basically the same thing but captures, instead of the whole image, only a single process.
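
If you want to play with this yourself, the Continuation class that Seaside 2.8 uses in Squeak lets you take exactly this kind of snapshot from a workspace. Here's a rough sketch, assuming the #currentDo: / #value: protocol; the comments describe what happens on each pass:

| saved result |
saved := nil.
result := Continuation currentDo: [ :cc |
    saved := cc.    "stash the snapshot of the context chain"
    'first pass' ].
Transcript show: result; cr.    "prints 'first pass', then 'second pass' after the resume"
saved isNil ifFalse: [
    | snapshot |
    snapshot := saved.
    saved := nil.    "guard so we only jump back once"
    snapshot value: 'second pass' ]    "resumes above: #currentDo: now answers 'second pass'"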

Implementing Call and Answer

One of Seaside's most-demonstrated features is the ability to write multi-step tasks that query the user for information:

answer := self confirm: 'Do it?'.
answer ifTrue: [ self doItAlready ]

This is exactly the kind of thing we facilitate with continuations: we need to pause in the middle of this method to ask the user for feedback. If they ever answer the question, we want to resume the method where we left off. So let's look at how first-class continuations might be used to make this work.

Understanding the Diagrams

But first a word of explanation. The following diagrams depict context chains (though they are abstract enough that they could just as easily be a stack of frames). Every time you call a method or evaluate a block, a new context is created at the "top" of the chain. Every time a method returns or a block finishes, the topmost context is removed. The method context knows what method is being called, what object it was called on, and the value of any variables defined in the method. It also knows the context below it in the chain. If you need help understanding this process, take a look at this example illustration which shows the process step by step.

The diagrams below each represent a chain of contexts handling a single HTTP request. Each request is the result of clicking on a link and each causes the execution of a callback. Each callback eventually sends either #call: or #answer:.

The diagrams show the context chain at the point in time when #call: or #answer: is sent and then try to illustrate what happens next. The upward-pointing arrows show the progress as methods are called and the downward-pointing arrows show the progress as methods return. I show exceptions with a dashed arrow, the tail coming from the location where the exception is raised and the head pointing to the location where it is handled. In cases where a continuation has been saved, the diagrams show both the currently executing context chain and the saved one and the arrows behave as normal. Obviously these are very simplified illustrations; I'm more interested in getting the general idea across here than in the exact details.

To help make things clearer, each diagram is marked with a gray line. Everything above the gray line is user code: part of the callback that is being executed. Everything below the gray line is part of the internal framework code: reading from sockets, looking up sessions, and so on.

A Naïve Implementation

Ok, so let's look at one possible implementation using continuations. Let's assume a user is staring at a web page with a link that says "do it". Clicking on that link will execute a callback with the example code shown above, which should prompt the user with the question "Do it?". While processing this request, the following things happen:

Full Continuation - Request 1


  1. The framework looks up the correct callback and executes it.
  2. During the callback (inside the #confirm: method in the above example), the #call: message is sent.
  3. This results in every context being saved into a continuation for later use.
  4. An exception is signaled, which stops processing of the callback and returns control to the framework code.
  5. The framework continues its work and returns a response to the browser (in Seaside, a render phase would happen to allow the Components to generate the response, but I'm simplifying here).

The response to the browser should show the prompt "Do it?" and a link or button to confirm the action. When the user confirms the action, they trigger another callback, which will execute self answer: true. When this second request is received, the following happens:

Full Continuation - Request 2

  1. The framework looks up the correct callback and executes it.
  2. The callback sends #answer:.
  3. The current chain of contexts is thrown away and the exact contexts we previously saved in the continuation are retrieved and restored. Note that these methods will now return a second time. This is the weird part about continuations but remember it's no more weird than saving your Smalltalk image in the middle of a calculation. Each time you open the image you will get a result for the same calculation.
  4. Now that we have restored the saved context chain, execution resumes in the first callback as if the #call: method (remember, this is where we saved the continuation) had just returned.
  5. The restored callback finishes executing (in our example, it checks the value of answer and sends #doItAlready).
  6. The framework returns a response to the browser.

The problem here, and why I called this a naïve implementation, is that you can see the response is incorrectly returned to Request 1. The socket associated with Request 1 is, unfortunately, long gone and the browser is no longer waiting for a response over there. The browser is, in fact, waiting for a response that never arrives on the socket associated with Request 2. Ooops!

A (mostly) Working Call and Answer

So the first implementation doesn't work but hopefully you can see what was going on with the continuations. The problem is that, when we restore the continuation, we really don't want to abandon everything the framework is doing. At the very least, we need to keep the contexts that will return the response to the correct socket.

A simple way to limit the contexts captured by a continuation is to create a new process. A new process starts with a new, empty context chain, so when we create a continuation only the contexts in that chain will be captured. We can use a semaphore to cause the first process to wait while the new process handles the request. When the second process is finished, it signals the semaphore and the original process returns the response to the correct place.
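
Boiled down to a sketch (this is the shape of the idea, not Seaside's actual code; #handleRequest: and aRequest are stand-ins), the pattern looks something like this:

| semaphore response |
semaphore := Semaphore new.
[ response := self handleRequest: aRequest.    "runs in its own, much shorter context chain"
  semaphore signal ] fork.
semaphore wait.    "the original process blocks here until the worker finishes"
^ response    "and then returns the response over the correct socket"

Any continuation captured inside the forked block can only reach back as far as the start of that block's process, which is exactly the point of the exercise.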

This diagram shows exactly this (the contexts of the two processes are shown with different symbols):

New Process - Request 1

  1. At some point in the framework code, a new process is created and the original process waits on a semaphore.
  2. The new process finds and executes the correct callback.
  3. The callback sends #call:.
  4. A continuation is saved (note this time that the continuation extends only to the beginning of the new process).
  5. An exception is signaled, stopping callback processing and returning control to the framework.
  6. The framework creates a response and signals the semaphore.
  7. The original process resumes and returns the response to the browser.

So far, the only benefit here is that the continuation is smaller. But when the second request comes in, you'll see how this starts to solve our problem:

New Process - Request 2

  1. At some point in the framework code, a new process is created and the original process waits on a semaphore.
  2. The new process finds and executes the correct callback.
  3. The callback sends #answer:.
  4. The current chain of contexts is thrown away and the exact contexts we previously saved in the continuation are retrieved and restored (but note: this time only the contexts in the new process are abandoned; the suspended bottom process is unaffected).
  5. Now that we have restored the saved context chain, execution resumes as if the #call: method had just returned.
  6. The callback finishes executing.
  7. The framework creates a response and signals the semaphore to tell the bottom process it is finished.
  8. The original process resumes and returns the response (correctly!) to the browser.

So not only is our continuation smaller, but the second response actually makes it back to the right place. This, by the way, is the implementation used by Seaside 2.8 and earlier versions.

There are a few significant problems with this approach though:

  1. Doing multi-process synchronization adds complexity.
  2. Exceptions do not cross the boundary where the new process is created. That is, if you signal an exception in the second process the first process will never see it (technically, this could be simulated to some degree but that adds even more complexity). This means, for example, that error handling has to be done inside the new process. It also adds challenges when working with databases that use exceptions to mark objects as dirty or that key transaction information off the current process.
  3. Exceptions signaled after restoring a continuation will traverse the restored context chain. Also, when the exception is handled, the restored context chain will be unwound, not the abandoned one. Take a look at the framework code contexts highlighted in red in the last diagram: they never have a chance to finish executing and any ensure blocks they defined will never be executed. Trust me when I say that this can be the cause of some pretty subtle bugs.
  4. There is a trade-off between size/accuracy and convenience because of #2 and #3. If you start the new process right before the callback is executed, you get a smaller continuation and more accurate exception behaviour. Unfortunately, your exceptions don't propagate very far and your callbacks end up running in a different process from, say, your rendering code.
  5. Debugging sucks (at least in Squeak) when code depends on running in a certain process. I'm not sure if the debugger ever steps through the code with the actual process where the error occurred but it certainly doesn't always do so.

Partial Continuations

Enter partial continuations. A partial continuation simply means that, instead of saving the entire context chain, we save only the part we are interested in. And when we restore the partial continuation, we don't replace the existing context chain entirely; we only replace the part that we are not interested in. Let's see how they work.

Partial Continuation - Request 1

When the first request comes in, things work much the same as in our very first example so I won't number the steps. In fact, things work exactly the same except for one thing: using partial continuations, we can now specify the exact range of contexts to save in the continuation. In this case, we choose to save only the contexts that are part of the user (or callback) code. Remember the problems from the first implementation? The framework code is handling one particular HTTP request; these framework contexts would be completely useless to us when responding to any future request (and another request for the same URL is still a new request). Since a callback can span multiple HTTP requests, it is only those contexts that make up the callback that need to be saved and resumed later.

Remember also that the context chain is, in reality, much longer than shown in these diagrams: we might be storing five contexts now instead of, say, 40! A nice space savings.

Now let's look at the second request coming in. This illustration is a bit different and a little more complex because the context chain actually changes during execution, so I will explain it step by step:

Partial Continuation - Request 2

  1. A request comes in.
  2. The framework looks up the correct callback and executes it.
  3. The callback sends #answer:.
  4. We look up the saved partial continuation and, in place of the existing callback code, literally graft the saved contexts onto our current context chain by rewriting the senders. I'm waving my hands over the details but you'll have to trust me. The right side of the diagram shows the state after the contexts have been grafted in place. Note that all the framework contexts remain and we are still running in the same process. As far as Squeak is concerned, those methods were called in that order.
  5. We resume processing the saved callback as if the #call: method had just returned.
  6. Once the restored callback contexts have finished executing, they will return (because of the re-written sender) right back into the framework code that is handling the current request.
  7. A response is generated and returned, via the correct socket, to the browser.

Magic! It sure feels like it and it works beautifully. We have tiny saved continuations, we don't need a new process, and all our framework contexts get a chance to complete.

Conclusion

The partial continuation solution is currently implemented in the development version of Seaside and will be in the next release. Squeak and VisualWorks already support the implementation of partial continuations in Smalltalk code. GemStone is nearly finished adding support to their VM. Smalltalk implementations that cannot easily implement partial continuations have three choices:

  • they can simulate partial continuations to varying degrees of completeness using full continuations;
  • they can continue using a system similar to the one in Seaside 2.8; or
  • they can leave them out. As I mentioned earlier, we have removed all use of continuations from the core parts of Seaside: platforms can choose to simply omit support for #call: and this is now as simple as not providing the Seaside-Flow package.

By the way, at the same time as I was working my way through those subtle bugs I mentioned earlier, Eliot posted about stacks and contexts in the new Cog VM he's working on. As well as being very interesting reading, it shed some timely light on the meaning of primitive 198, which pointed me in the right direction and probably saved me an hour or two. Thanks Eliot! Check it out if you have the time.

I hope this was useful or interesting reading and would love your feedback on anything you found challenging or helpful to your understanding. Happy Seasiding.

Saturday, 24 January 2009

GLASS workshop

Cool... this place is offering a week-long workshop on Seaside with GLASS. The page is a bit confusing and light on details but it's cool that this kind of thing is being offered.

Monday, 19 January 2009

The Year of Seaside?

I'm a few weeks late, but I just finished listening to James Robertson and Michael Lucas-Smith doing the year in review on the Industry Misinterpretations podcast. I was amused to hear them use the phrase "the year of Seaside" (a reference to Randal Schwartz's prediction for Smalltalk this year). There did seem to be a lot of Seaside-related news.

But now that 2008 is over, was it the year of Smalltalk (or Seaside)? Sure, there's a lot of Smalltalk code being written, but that's been true for as long as I've been involved in the community. We made some progress with Gartner; attendance at Smalltalk conferences was high (though it sounds like there are fears it could be low again next year); Randal's prediction generated a buzz in the community; and certainly there has been a notable effort to generate publicity outside the community. But is it working?

Have people seen an increase in the size of the community? An increase in Smalltalk-related business? What are our metrics? Was 2008 The Year of Smalltalk?

Monday, 12 January 2009

First Sprint of 2009

This weekend, we completed another very successful Seaside sprint here in Konstanz. We polished off all but a few of our targeted issues and made real progress towards the next alpha release.

Among other things, we added the ability to configure the new session cache, removed our dependency on MessageSend, standardized on the ANSI exception handling protocol and made sure we were signaling meaningful errors.

We also got a shiny new Control Panel implemented in OmniBrowser (see image to the right). There will be more features coming in this area.

Lukas was cleaning up the FileLibrary code on the train on the way here but lost most of it when his image suddenly crashed and ended up only 4MB in size.

I still need to look at some package dependency issues and Philippe and Lukas are working on the ability to add cookies during the callback phase and related bugs.

All in all, a very productive two days!

Wednesday, 7 January 2009

Object Initialization

Making sure your Smalltalk objects are initialized once and only once can be tricky.

(if you'd rather skip the discussion and go straight to the details, here are Seaside's object initialization conventions)

For starters, some Smalltalks implement #new on Object to call #initialize and some don't. But it gets more complicated than that. For example:
  • What should you call a new parameterized initialization method?
  • How should a subclass with one of those make sure its superclass is initialized?
  • How should class-side instance creation methods work?
  • How do you make sure inherited instance creation methods don't result in partially initialized objects?
  • What if someone else has made subclasses of yours and is already overriding your superclass' initialization method?
These are just a handful of the possible issues.

For much of the code you write, you can probably avoid worrying about this too much. You were probably using and testing the code as you wrote it, so you know it works. You probably don't have that many levels of subclasses and you (or at least someone on your team) probably knows (or can trace) all the users and subclasses of each class. In many cases, it probably isn't even the end of the world if an object is initialized twice and you're bound to notice pretty quickly if an object is never initialized. All this breaks down as your project and team get bigger, of course.

As a framework, though, I believe Seaside needs to hold its developers to the very highest possible standards of code quality. Seaside is used in Smalltalk images of various platforms all over the world and there is no way that any developer will ever be aware of all the subclasses and users that are out there. Some of our users will implement classes where it does matter if they're initialized twice and when it breaks it will be them not us who notices the problem. This means that we need to be unambiguous, clear, concise, consistent, and portable (and yes, documented... sigh), as well as vigilant in thinking about how developers will subclass, extend, and use our code.

For Seaside, Avi and I worked out a loose convention for this from day one and it has been fairly consistently applied. Some recent discussion on the mailing list, however, has prompted me to formalize and document our conventions. Our requirements are something like:
  1. Each object must be initialized once and only once.
  2. During initialization, all initialization methods must be called in a predictable order. If my subclass has already overridden my superclass' initialization method, it should still be called. (This means a method should never call super with a selector other than its own).
  3. All inherited instance creation methods must result in a completely initialized object.
  4. The conventions must be consistently applicable in all cases.
  5. It should be very clear to users of the class what parameters are required for initialization.
  6. It is more important to minimize the complexity and effort for users of the framework than for developers of the framework.
  7. Without sacrificing the above points, we should not have to write or override more code than necessary (we don't want to have to override every instance creation method each time we subclass, for example).
So here are the conventions we use in Seaside. Let me know what you think.
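
For flavour, here is one common shape such a convention can take (a made-up Document class, shown only as a sketch; it is not necessarily identical to what the linked page documents): funnel instance creation through a single class-side #new, have every #initialize call super, and pass parameters through setters invoked by the class-side creation methods.

Document class >> new
    ^ self basicNew initialize    "bypass any platform-specific #new so #initialize runs exactly once"

Document class >> title: aString
    ^ self new setTitle: aString

Document >> initialize
    super initialize.    "always call super so every class in the chain gets its turn"
    title := ''

Document >> setTitle: aString
    title := aString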

Monday, 29 December 2008

Seaside 2.9: Exception Handling

Ok, I promised well over a month ago to start documenting some of the new bells and whistles in the Seaside 2.9 Alpha series. I thought I'd start things off with a discussion of the new exception handling mechanism.

Have you ever wanted to customize the look of Seaside's error pages? Want to send yourself an email whenever an exception is raised? Maybe save a copy of the image on errors so you can look at the stack later? It has been possible for ages to implement a custom error handling class for your Seaside application but the process was not necessarily obvious and you were limited to catching subclasses of Warning and Error. In the upcoming Seaside release, we have cleaned up the exception handling code to really simplify the error handlers that come with Seaside and to make your custom handlers simpler and more flexible.

Basic Usage

To create your own exception handler, the first question is what class to use as your superclass. If you are just interested in customizing the appearance of error pages or performing some additional action when an error occurs, you probably want to subclass WAErrorHandler. If your requirements are more complex, you may want to subclass WAExceptionHandler directly (see Advanced Usage below).

Subclassing WAErrorHandler

WAErrorHandler is already configured to handle Error and Warning and is a good starting point if you don't need to drastically change the error handling behaviour. In your custom subclass, you can implement #handleError: and #handleWarning: to define the behaviour you want in each case. The methods take the exception (either an Error or a Warning, respectively) as a parameter and should ultimately either resume the exception or return an instance of WAResponse. By default, both methods call #handleDefault: so you can provide a common implementation there.

For example, assuming you had defined a routine to notify yourself of errors by email, you could provide an implementation like this:

handleError: anError
    self notifyMeOfError: anError.
    ^ WAResponse new
        internalError;
        contentType: WAMimeType textPlain;
        nextPutAll: 'Eek! An error occurred but I have been notified.';
        yourself

Returning an HTML Response

You could of course provide an HTML response but you would want to make sure your HTML is valid (and that means a head, a title, a body, etc.):


handleError: anError
    ^ WAResponse new
        internalError;
        nextPutAll: '<html>
<head><title>Error!</title></head>
<body><p>An error occurred</p></body>
</html>';
        yourself

If you want to use the Canvas API, you can subclass WAHtmlErrorHandler instead. Just implement #titleForException: and #contentForException:on: (it gets passed the exception and a canvas) and you're done.
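
For instance, a minimal sketch (MyHtmlErrorHandler is a made-up subclass, and I'm assuming the familiar #heading: and #paragraph: canvas brushes):

MyHtmlErrorHandler >> titleForException: anException
    ^ 'Something went wrong'

MyHtmlErrorHandler >> contentForException: anException on: html
    html heading: 'Something went wrong'.
    html paragraph: anException description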

Using Your Exception Handler

Once you have written your exception handler, you need to configure your application to use it. Open your web browser and navigate to the Seaside configuration page for your application. There you will find a preference called 'Exception Handler' and you should be able to choose your new class from the list.

Alternatively, from a workspace, execute something like:

(WAAdmin defaultDispatcher entryPointAt: 'myapp')
    preferenceAt: #exceptionHandler put: MyHandlerClass

Advanced Usage

If your needs are more complex than the above, you may want to subclass WAExceptionHandler directly. An exception handler needs to do two things:

  1. choose which exceptions to handle
  2. define how to handle them

Selecting Exceptions

The easiest way to select which exceptions to handle is to implement #exceptionSelector on the class-side of your new handler (note: in Seaside 2.9a1 this method is named #exceptionsToCatch). Your #exceptionSelector method should return either an ExceptionSet or a subclass of Exception. You probably want to continue handling the exceptions handled by your superclass so your implementation might look like:

exceptionSelector
    ^ super exceptionSelector, MyCustomError

If your needs are somehow more complex, look at implementing #handles: and #, yourself on either the class- or instance-side, as appropriate (note: these two methods do not exist in Seaside 2.9a1).

Handling Exceptions

When an exception is signaled and your handler indicates (via #handles:) that it wishes to handle the exception, #handleException: is called and passed the signaled exception. To define your exception handling behaviour, simply implement #handleException: on the instance-side of your handler. Note that the same handler instance is used throughout a single HTTP request even if multiple exceptions are signaled.

The #handleException: method is expected to return a WAResponse object. It may also choose to resume the exception, cause the Request Context stored in its requestContext instance variable to return a response (by using #redirectTo: for example), or otherwise avoid returning at all; but if it returns, it should return a response.
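
Putting those two pieces together, a handler subclassed directly from WAExceptionHandler might look something like this sketch (MyTimeoutHandler, MyTimeoutError, and the notification method are all hypothetical):

MyTimeoutHandler class >> exceptionSelector
    ^ MyTimeoutError

MyTimeoutHandler >> handleException: anException
    self notifyAdminAbout: anException.
    ^ WAResponse new
        internalError;
        contentType: WAMimeType textPlain;
        nextPutAll: 'The backend timed out. Please try again in a moment.';
        yourself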

Internal Errors

WAExceptionHandler also provides a class-side method called #internalError:context:. This method creates a new instance of the handler and calls #internalError:, which should generate a very simple error message. This method is used whenever the exception handling mechanism itself signals an error as well as any other place in the system where we cannot be certain that a more complex error handler will succeed. This method should not do anything that has potential to raise further errors.

Setting Up an Exception Handler

Exception handling for Seaside applications is set up by WAExceptionFilter (more information on filters in a later post) and you do not need to do anything else if you are using WAApplication and WASession. If you are implementing your own request handler, however, and want to use the exception handling mechanism, you will need to set up an exception handler yourself.

Although you could do everything yourself, the easiest thing is to call #handleExceptionsDuring:context: on your handler class, passing it the block you want wrapped in the exception handler and the current WARequestContext object. You can look at RRRssHandler for an example.
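
In other words, something along these lines (the handler class and #produceResponseFor: are placeholders for your own code):

MyTimeoutHandler
    handleExceptionsDuring: [ self produceResponseFor: aRequestContext ]
    context: aRequestContext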

Examples

The Seaside distribution includes a few exception handlers that you can look at for examples of usage. Beware of WADebugErrorHandler and WAWalkbackErrorHandler, though, as they do crazy things to get debugging to work the way we want in Squeak; although they are well commented and may be interesting to look at, they are probably not good examples of general usage. The Seaside-Squeak-Email package includes WAEmailErrorHandler, an abstract class that can be subclassed and used to send an email whenever an error occurs.

Friday, 10 October 2008

Graph Theory

I'm not sure what happened. I don't remember finding graph theory particularly interesting in university but over the past month I've had two occasions of simple pleasure while figuring out (pretty basic) directed graph algorithms.

The first was at our last Seaside sprint, where Lukas and I were working out how to modify his Package Dependency tool to mark cycles on Seaside 2.9's dependency graph in red, so we could immediately spot problems. It's not like there aren't already algorithms for this but it was fun just trying to work one out for ourselves.

The other was this Tuesday. I woke up wondering whether I couldn't make the graph a little easier to look at by removing direct edges when we have another indirect path to the same node. In other words, if package A depends on B and package B depends on C, we could omit a dependency from package A to package C because that dependency is already implied through transitivity anyway. I pictured this as the opposite of a Transitive Closure.

It turns out that this operation is called a Transitive Reduction and there are algorithms for it but they don't seem to be very well documented online. I ended up just working something out myself using depth-first searching and path-length tracking to prune shorter paths to objects. I don't know if there's a faster way but it doesn't really need to be that fast in this context anyway. I currently abort if the graph is cyclic but I'm pretty sure you could do something like pick an arbitrary root for the cycle, treat everything within the cycle as being distance zero from each other, and you would get a reasonable (albeit non-deterministic) answer. In our case, we don't want cycles in our graph so, realistically, you would fix the cycle and rerun the dependency checker anyway. Here's the result.

This got me thinking... is there a good implementation of graph theory algorithms in Smalltalk? I would love to be able to create a Graph of my model objects (maybe implement #edges on each one or something) and then call: "graph isCyclic", "graph transitiveReduction", or "graph topologicalSortFrom: aNode". So many things can map to directed graphs and, if you could work out domain-specific solutions by really easily layering generic graphing algorithms, that would be really cool.
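
To give a flavour of what I mean, here's a rough sketch of how #isCyclic might look on such a hypothetical Graph class, assuming #nodes answers the vertices and #neighborsOf: answers a node's outgoing edges; it's just a depth-first search that keeps track of the nodes on the current path:

Graph >> isCyclic
    | visiting finished |
    visiting := Set new.
    finished := Set new.
    self nodes do: [ :each |
        (self hasCycleFrom: each visiting: visiting finished: finished)
            ifTrue: [ ^ true ] ].
    ^ false

Graph >> hasCycleFrom: aNode visiting: visiting finished: finished
    (finished includes: aNode) ifTrue: [ ^ false ].    "already fully explored, no cycle through here"
    (visiting includes: aNode) ifTrue: [ ^ true ].    "back edge: this node is on the current path"
    visiting add: aNode.
    (self neighborsOf: aNode) do: [ :each |
        (self hasCycleFrom: each visiting: visiting finished: finished)
            ifTrue: [ ^ true ] ].
    visiting remove: aNode.
    finished add: aNode.
    ^ false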

Sometimes solving simple problems with definable right answers is so satisfying.

Gartner on Seaside

Well... praise for Seaside from a Gartner analyst:
If you are BIG fan of dynamics languages (closures, meta programming, and all that cool stuff) then consider giving Smalltalk a look. You might like what you see. Its like Ruby but with bigger muscles. You think Rails is cool? Check out seaside.
His general comments about Smalltalk are in line with Gartner's changing position but the specific reference to Seaside adds another touch of legitimacy when selling a Seaside project to those who hold the purse strings. Very nice.

Saturday, 20 September 2008

Seaside History

There are a lot of questions around the origins and evolution of Seaside, particularly after Avi and I gave up our old domain and the old Seaside and Seaside 2 websites with it.

A couple of months ago, I began (but never finished) a history page for the Seaside website to provide some background information for those who are interested. I had to dust off some of those notes to prepare my ESUG presentation on the past and future evolution of the framework and figured I might as well dust off the history page as well. So here's the story as best as I can recall it (and by "recall", I mean "find in Google" because Google seems to hold the majority of my memories these days).

[Update: now posted at seaside.st/about/History]

Introduction

Seaside made its public debut (version 0.9) in an announcement to the squeak-dev list on February 21, 2002. Avi Bryant and I developed Seaside to support our web application development consulting, specifically the development of a web-based theatre box office sales system.

Seaside took heavy inspiration from Avi's Iowa framework (now here), which was written in Ruby and was itself inspired by NeXT's (and then Apple's) WebObjects. This first release of Seaside provided action callbacks for links and forms, session state management with support for call/return and the back button, and a component system with templates.

Experimentation

Almost immediately after the release of 0.9, we began work on Seaside 2.x (codenamed Borges, a reference to Jorge Luis Borges' short story The Garden of Forking Paths and an allusion to Seaside's support for forking session states). Seaside 2.0 was essentially a complete rewrite with a layered architecture: a Kernel layer providing a continuation-based HTTP request/response response loop and state (back-)tracking; a Views layer providing action callbacks and a rendering API for generating HTML; and a Component layer providing call/return semantics, embedding, and development tools.

Seaside 2.0 was released in October, 2003 with the templating system conspicuously absent. This was an experiment to see whether the development of the HTML rendering API and the wider acceptance of CSS had reduced or eliminated the need for templates. The new layered architecture made it easy for others to experiment with developing their own template engines. Seaside was also ported by Eric Hodel to Ruby, where it kept the name Borges.

Several versions followed in quick succession with major refactorings to the session state tracking and backtracking mechanisms. Seaside 2.3 (mid-2003) also introduced an even more layered architecture that tried to make some of the internals clearer and more accessibly to the project's growing number of users and contributors. It also confirmed that Seaside would not have built-in templates in the near future. Seaside became increasingly well-known around this time with a presentation at ESUG 2002 by Lukas Renggli and Adrian Lienhard and a hands-on development workshop at Smalltalk Solutions in 2003 by myself and Avi.

Stabilization

Seaside 2.4 and 2.5 addressed some growing pains in some of the core parts of the system: the Renderer API, collapsing under the weight of combinatorial explosion, was replaced by the now-familiar Canvas API; and some of the internal workings of the Session object were reified to make its application main-loop metaphor more obvious. Version 2.5 also saw the introduction of Component Decorations, Halos, and response streaming.

As first I and then Avi began to work full time developing applications using Seaside, the community began to carry more of the development load, with the release of Seaside 2.7 being entirely (and very successfully) managed by the community, led by Lukas, Philippe Marschall, and Michel Bany. This release focused heavily on cleaning up the code base by fixing, deprecating, refactoring, and removing code.