Showing posts with label geeky. Show all posts

Saturday, 7 August 2010

Computer Adaptive Testing and the GMAT

Back around Christmas, I had a few weeks free and decided to prepare for the GMAT exam, a "computer-adaptive" standardized test required (or at least accepted) by business schools around the world. With university application deadlines looming, most of the testing sessions were already fully booked, but I managed to find a mid-January test date just a few hours' drive away.

I'll start by saying that I finished university eight years ago, so—ignoring a few intense weeks of German classes—it's been a while since I "studied". And it was about half my lifetime ago that I last wrote a standardized test: you know, with those sealed envelopes, bar-coded stickers, big machine-readable answer papers, and detailed instruction books reminding you to use a #2 pencil and "choose the BEST answer". If, like me, you haven't written one of these in a while, you may be surprised by how much has changed.

The GMAT, like a number of other admissions tests, is now administered exclusively by computer and the test centres even have palm and iris scanners, which are used any time you enter or leave the room. Unlike most computer-based tests, though, which stray little from the well-worn paths of their pencil-and-paper siblings, the GMAT uses a computer-adaptive process undoubtedly conceived by a singularly sinister set of scholars and statisticians. This process is complex and has a number of interesting implications but basically it works like this: when you get an answer wrong, the questions get easier; when you get an answer right, they get harder. The theory is that by adjusting the test to your ability, the computer is able to rate you more precisely against people of about the same level.
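The adaptive mechanics described above can be sketched as a toy loop. This is purely illustrative: real computer-adaptive tests like the GMAT use item response theory over a calibrated question bank, not this naive up/down rule.

```python
# Toy sketch of a computer-adaptive test loop (illustrative only --
# the real GMAT uses item response theory with calibrated item banks).

def run_adaptive_test(answer_question, num_questions=37):
    """Estimate ability by moving difficulty up after correct answers
    and down after wrong ones, with shrinking step sizes."""
    difficulty = 0.5          # start mid-range, on a 0..1 scale
    step = 0.25               # how far to move after each answer
    for _ in range(num_questions):
        correct = answer_question(difficulty)
        difficulty += step if correct else -step
        difficulty = min(1.0, max(0.0, difficulty))
        step *= 0.9           # later answers move the estimate less
    return difficulty         # final difficulty ~ ability estimate
```

The shrinking step size is the key idea: early questions place you roughly, later ones fine-tune, which is why the test can rate you precisely against people of about the same level.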

The material on the test is not really that hard. It's no walk in the park either, but it mostly limits itself to stuff you learned (then unfortunately forgot) in high school. Everything else about the GMAT, though, seems designed to maximize stress:

  • The test is long. Nearly four hours long. There are three sections, each 60-75 minutes long, with a short break between them.

  • The test is timed. Ok, what test doesn’t have a time limit? But this one has a clock counting down in the corner of your screen, taunting you to pick up the pace. Worse than that: you'll probably actually need all the time because the questions keep getting harder as you get them right, remember? The challenge on this test is not usually solving the problems, but rather solving them in time.

  • The breaks are timed. Again, not surprising, I guess. But your break is only eight minutes long and, if you’re not back, the next section simply starts running without you! Inevitably you spend the entire break worrying about how many minutes you have left. Since you need to scan your palm and iris at the beginning and end of each break, your trip to the bathroom is not going to be leisurely.

  • Erasable notepads. No pens or paper are allowed, presumably so you can't smuggle out questions. Try working out math problems quickly on a laminated card with a dry-erase marker.

  • You can't skip a question. Remember that the questions get harder as you get them right and easier as you get them wrong. This means that the next question you see is largely determined by how you answer the current one. The computer needs an answer to the question, so you can’t skip it.

  • You can't go back. Similarly, since your current position is determined by your earlier answers, you can't go back and change them. So if you're used to finishing early and then checking over your work, you'd better start unlearning that habit.

  • You don't know the difficulty level of the questions. Is the test feeling easy because you really know your stuff or are you simply earning yourself easier questions by choosing a lot of wrong answers? The only saving grace here is that you're so busy madly answering questions that you don't have many brain cycles left to worry about this.

  • Some of the questions are "experimental". About 25% of the test is made up of new questions being tested on you to determine their difficulty level, but of course you don’t know which. That's right: that really hard question you just spent 5 minutes working on because you were sure you could solve it... doesn't count!

  • You are heavily penalized for not finishing. Right, ok, so you have a fixed time, you can't skip or come back, and you can’t predict the difficulty of the remaining questions. But if you want a decent score, you still need to pace yourself to answer all of them. Remember that countdown clock? You have about two minutes per question–so keep an eye on that average time! Oh, and the clock counts down but of course the question numbers go up, so you’d better get real quick at subtracting your time from 75 (you’ll be working out your average question time every few questions).

  • Data Sufficiency questions. These nasty little buggers are, I think, unique to the GMAT. Given a math problem and two statements, you are asked whether the problem can be solved using either statement alone or only by combining both. You don't need to work out the answer to the problem, but you do need to partially solve it several times with different information and keep each attempt separate in your mind. Don't think that sounds tricky? Try searching for "sample gmat data sufficiency questions" and try a few. I think I got only about a quarter of these right on my first practice test.
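The running pacing arithmetic from the countdown-clock bullet above amounts to a one-liner. The 37-question, 75-minute figures here are an assumed example (they roughly match "about two minutes per question"), not an official spec:

```python
# The mental arithmetic from the pacing bullet: given the countdown
# clock and your current question number, how long can you afford to
# spend on each remaining question? The 37-question / 75-minute
# defaults are an assumption for illustration, not GMAT specifics.

def minutes_per_remaining_question(clock_minutes_left, current_question,
                                   total_questions=37):
    remaining = total_questions - current_question + 1  # current one included
    return clock_minutes_left / remaining
```

Trivial at a desk; much less so every few questions, under stress, while subtracting the countdown clock from 75 in your head.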

You have to admire the brilliantly evil minds that came up with this thing. The experience for the test-taker is four hours of pure, non-stop stress. At least that was my experience: my brain literally didn’t stop whirring. The adaptive process pushes everyone to their limit, challenging them to keep their feet under them and ensuring that they're sweating right until the end.

The test designers have really optimized the experience around their own needs: the test is easy for them to grade, minimizes cheating, allows new questions to be evaluated automatically, and measures something in a pretty precise, consistent way. I'm not entirely certain what it measures, but I'm pretty confident that people who are generally smarter, better organized, faster to learn and adapt, and better at dealing with stress will obtain a better result.

As a company selling a product, it might seem odd that GMAC (the company that runs the test) can get away with optimizing the test for their own needs. But, although it may appear that the test is the product you’re buying, I think what you’re really buying is the report that is sent to the universities. The cost of this report just happens to be $250 + study time + four hours of stress. If GMAC had competitors, they might be forced to optimize for the test-taker but, as a virtual monopoly, the motivation just isn’t there.

The challenge with the GMAT, I think, is really learning an entirely new test-taking strategy. I used a couple of books (Cracking the GMAT, by Princeton Review and The Official Guide for GMAT Review) to first understand the test and the differences in approach that were required and then to practice as many questions as possible of the specific types that appear. Doing computer-based practice exams is, of course, also essential given that what you’re learning is the test-taking strategy more than the material.

I emerged from the exam feeling absolutely drained but energized by the rush of tackling something so intense and coming out on top. In some ways it was fun but I have no intention of rewriting it any time soon. :)

Friday, 2 July 2010

Seaside 3 "Release Candidate"

You could say it's been a long time coming.

Seaside 3.0 began ambitiously and grew from there. We began (at least I did) with the goal of cleaning up the architecture, revisiting each aspect and asking what could be simplified, clarified, or standardized. As functional layers were teased apart, suddenly pieces became unloadable and a repackaging effort got under way. From this we realized we could make the process of porting Seaside much less painful. Along the way, we lowered response times and reduced memory usage, added 10x the number of unit tests (1467 at last count), standardized code and improved code documentation, added jQuery support, and, oh, did you hear there's a book?

The result? This release runs leaner, on at least six Smalltalk platforms and is, I think, easier to learn, easier to use, and easier to extend. Seaside 3.0 is the best platform out there for developing complex, server-side web applications. Is it perfect? No, but I'll come to that part in a moment. It is the result of literally thousands of hours of work by a small group of people across all six platforms. But this release also exists only due to the generosity of Seaside users who tried it, filed bugs against it, submitted patches for it, and eventually deployed it.

Deployed it?! Yeah, you see, not only have all the commercial vendors chosen to ship our alphas and betas, but our users have also used them to put national-scale commercial projects into production. I alluded last month to a conference session I attended, in which somebody made the statement that
The best way to kill a product is to publicly announce a rewrite. Customers will immediately avoid investing in the "old" system like the plague, starving the product of all its revenue and eventually killing it.
It was a shocking moment as I realized we'd attempted just that. At first we justified the long release cycle because we were "doing a major rewrite"; then we just had "a lot more work to do". Eventually there were "just too many bugs" and things "just weren't stable enough". And, finally, once we realized we desperately needed to release and move forward, we just ran out of steam (no quotes there—we really did).

I still think the original architectural work needed doing and I'm really happy about where we ended up, but here's what I've learned:
  • When your wonderful, dedicated users start putting your code into production, they're telling you it's ready to be released. Listen to them.
  • We don't have the manpower to carry out the kind of QA process that goes along with a Development, Alpha, Beta, RC, Final release process.
  • We need to figure out how to get more users actively involved in the project. This could be by writing code but probably more importantly by writing documentation, improving usability, building releases, managing the website, doing graphical design, or something else entirely. The small core team simply can't handle it all.
Trying to apply these lessons over the past month, I asked for help from a few people (thank you!) and we closed some final bugs, ran through the functional tests, developed a brand new welcome screen, and managed to bundle everything up. We're releasing this today as 3.0RC.

We're not planning a standard multi-RC process. The "Release Candidate" simply signifies that you all have one last chance to download it, try it, and let us know about any major explosions before we do a final release, hopefully at the end of the month. From there we'll be reverting to a simpler process, using frequent point releases to fix bugs. 3.1 will have a smaller, better-defined scope and a shorter cycle. I have some ideas but before we start thinking about that, we all need a breather.

I also have some ideas about the challenges that potential contributors to the project may face. But I'd like to hear your thoughts and experiences. So, if you have any suggestions or you'd like to help but something is stopping you, send me an email or (better yet if you're there) pull me aside at Camp Smalltalk London or ESUG and tell me about it.

Ok, ok. You've waited long enough—thank you. Here's the 3.0RC one-click image, based on Pharo 1.1 RC3 and Grease 1.0RC (just the image here). Dale has promised an updated Metacello definition soon. Enjoy!

Friday, 25 June 2010

The Trouble with Twitter

The thing about Twitter is it's so easy. Sitting down to write a blog post takes time and effort. I want to develop a thesis, establish a reasonable structure, and edit the thing until it flows and becomes a pleasure to read. Ignoring the time spent in advance thinking about the topic, a well-written non-trivial blog post might take me an hour to write (some have taken longer). As a result, I find it increasingly tempting to just dash off 140 characters and toss the result out to the masses.

The trouble is, if you have something to say and you want people to spend their time reading it, you really ought to take the time to craft a proper argument; it seems only fair. I would much rather read a handful of well-written, thought-provoking blog posts than a hundred trivial tweets. And besides, I actually enjoy the writing process.

I'm pretty confident that some ideas are better suited for tweets and others for blog posts, but the line can be fuzzy. And the temptation of laziness persists, so I'm going to need to increase the temptation of effort to counter it. In the meantime, I'll be on Twitter throwing out undeveloped thoughts with everyone else.

Saturday, 12 June 2010

This week's events


The VASt Forum in Stuttgart this week was well attended, with maybe 40 attendees. Unfortunately, as the presentations were all running long and I had to leave before the social event, there was quite limited time for discussion; but it was clear that most people were either past or existing Smalltalk users (though not necessarily current VASt customers). This, combined with the increasing regularity of Pharo sprints and the more than forty people who have already signed up for Camp Smalltalk London, seems to be a very good indication of the enthusiasm and growth in the Smalltalk community these days.

Attendance at the Irish Software Show in Dublin has been lower than we expected. My informal counts suggest about 60-80 people in attendance each day. Of interest to me was Wicket, which I had never looked at before; I was quite surprised to see how similar it is to Seaside in some respects and how similarly Andrew Lombardi, who was giving the presentation, described the framework's benefits and his joy when using it.

The web framework panel discussion had about 30 people watching and we had some good discussion there. Attendance at my Seaside talk was probably closer to 10. It would have been nice to have attracted more of the Java developers at the conference (there were about 20 people at the Wicket session earlier in the day) but it was interesting to find out that the majority of those who came had at least played with Smalltalk before.

Other interesting highlights include Kevin Noonan's talk on Clojure (seq's are much like Smalltalk's collection protocol but available on more classes), Matthew McCullough's presentation on Java debugging tools (interesting to see their progress, and also a few ideas for us to look at ourselves), and Tim Berglund's overview of Gaelyk (reminds me disturbingly of writing PHP but the easy deployability and integration of XMPP, email, and Google Auth are cool). The speakers' dinner at the Odessa Club last night was great and we had a number of good discussions there as well.

The above photograph was humourously hung over the urinals in a restroom here in Dublin. I would have thought the slightly disturbing visual association was accidental if there hadn't been five separate copies!

Saturday, 29 May 2010

Camp Smalltalk is popular

When the UK Smalltalk User Group started planning the Camp Smalltalk London event a few weeks ago, we imagined we might get 20 people. After only four days, 30 have signed up and we're scrambling to figure out how many more people are interested and how many more we can handle. There are certainly worse problems to have!

If you're still interested in attending, please do us a favour and add yourself to the waiting list at cslondon2010.eventbrite.com.

Thursday, 22 April 2010

Upcoming Smalltalk events

There are a number of Smalltalk events coming up in Europe over the next few months (Joachim posted about some of them a little while ago).

[edited to add:] I forgot to mention that I recorded another Industry Misinterpretations podcast with James Robertson and Michael Lucas Smith last week. It's a two-part episode talking about cross-platform Smalltalk development. The audio for the first part is available now; and (if I'm lucky) this link should be the second part once it is posted.

The UK Smalltalk User Group is up and running again and the next meeting will be at 6:30pm this Monday, April 26 at the Counting House.

[added] Markus Gälli pointed me to a talk by Claus Gittinger, creator of Smalltalk/X, on Flow-Based Programming. This will be in Zurich on April 28.

On May 4, Cincom is planning to host an experiment on "Wolfpack Programming" at the eXtreme Tuesday Club's weekly meeting. The idea is to play with how wolves' social structure and hunting strategies can be applied to a large team of programmers working in a single live system (kind of like extreme pair programming). It should be a fun evening and we're providing food and drinks for the night as thanks for your participation. More details will be posted on the May 4 meeting page shortly—please post your name there if you're coming.

May 16-19 is the SPA 2010 conference in London. This isn't a Smalltalk event, per se, but Cincom is sponsoring it and a few of us will be there. We're hoping to have some results from the Wolfpack Programming experiment to discuss.

June 8 is the VA Smalltalk Forum Europe 2010 in Stuttgart. John O'Keefe from Instantiations will be presenting as well as Sebastian Heidbrink, Joachim Tuchel, and a number of others. Lukas Renggli will be talking about Seaside. I'm not presenting but I am planning to attend.

Also starting June 8 and running until the 11th is epicenter 2010: The Irish Software Show in Dublin. They asked me last fall to come talk about Seaside and I'm happy that we've got the details all worked out (though I'm still waiting for my bio to be updated). I'll be talking on Thursday the 10th and also, I think, taking part in a panel at one of the evening events.

On June 10, the 3rd Smalltalk Stammtisch in Köln (Cologne) is happening. I would love to go, but that's not possible as I have to fly to Dublin.

July and August are quiet months in Europe since everyone goes on vacation. But I've heard rumours of a sprint or other event being discussed in London. I'll pass on anything I hear.

Then of course there's the annual ESUG 2010, in Barcelona this year from September 11 to 17. This has been one of my favourite events over the last few years.

Finally, we're also working on something in France (probably in June) and Sweden/Norway (Sep/Oct) but they're still preliminary, so I'll post details as they're available. Toss in some vacation and a couple of weddings this year and I'm going to be busy.

In the meantime, if I missed any events, please pass them along. Also, if you can think of conferences or events where Smalltalk should be represented or groups that would be interested in hearing about Seaside or Smalltalk in general, let me know and I'll see what I can do to make it happen.

Saturday, 30 January 2010

Easing compatibility with Grease

Photo by DarkSide, sxc.hu

In December, I gave a presentation on portability to the NYC Smalltalk group. Seaside now runs on at least seven different Smalltalk distributions. Given the lack of standardization, this is no minor feat; for Seaside’s developers, the need to keep code portable is always on our mind. As a result, we have gradually accumulated a set of tools, patterns, and conventions to help keep our code as portable as possible and to factor out code that needs to be implemented differently on each platform.

In our work on other projects, we found the same portability challenges came up over and over and we wanted to use the tools we had developed for Seaside to address them. So we began to split out the Seaside-specific functionality, allowing us to leverage the generic parts in our other work. And thus Grease was born.

So what exactly is Grease?

  • Grease enhances the ANSI Smalltalk standard. With only a few exceptions, we assume platforms are fully ANSI-compliant. Platforms want to support Seaside and standardization makes this easier for the project’s developers and its porters.
  • Grease defines expected APIs with unit tests. Platforms can quickly determine if they are compatible and users can examine the tests to determine exactly which behaviours they can count on.
  • Grease takes a pragmatic approach to compatibility. Sometimes a method behaves so differently on two platforms, for example, that we are forced to avoid it or to standardize on a new selector. To get standard exception signaling on all platforms, Grease is forced to provide special exception classes that can be subclassed. Sometimes we need to put “right” aside and settle, instead, on a solution that can be implemented everywhere.
  • Grease tries to be concise and consistent. Despite its pragmatic approach, we still want to be “right” as much as possible. Because it’s hard to remove functionality once it has been added, we need to carefully consider each addition before proceeding. We’re moving slowly and looking for methods that are commonly used and that have clear names and semantics.
  • Grease does not try to solve all problems. We are not testing Sockets or HTTP clients. We don’t expect platforms to have standard SSL or graphics libraries. Its scope may grow over time, but for now we’re focusing on extending the functionality of the core classes defined in the ANSI standard (collections, exceptions, streams, blocks, etc.) and on other pieces of functionality that are critical to the Seaside project (e.g. random number generation and secure hashing).
  • Grease is widely adopted. Implementations exist already for all platforms that support Seaside 3.0. As well as Seaside, new versions of Magritte, Pier, and Monticello are already being implemented on top of Grease.
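The idea of defining expected APIs with executable tests translates to any language. Here is a loose analogy in Python; the class, checks, and helper below are invented for illustration (Grease's real contracts are Smalltalk unit tests, not this):

```python
# Loose analogy of Grease's approach: pin down the behaviours that
# portable code may rely on as an executable contract suite, so a
# platform can mechanically check whether it is compatible. All names
# here are invented for illustration; Grease itself is Smalltalk.
import unittest

class StringContractTest(unittest.TestCase):
    """Behaviours that portable code is allowed to count on."""

    def test_trim_removes_surrounding_whitespace(self):
        self.assertEqual("  abc ".strip(), "abc")

    def test_search_reports_absence_consistently(self):
        self.assertEqual("abc".find("z"), -1)

def platform_is_compatible():
    """Run the contract suite and report overall pass/fail."""
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(StringContractTest)
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    return result.wasSuccessful()
```

The point is that users can read the tests to see exactly which behaviours are guaranteed, and porters get an objective finish line.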

If you’re developing on Squeak or Pharo, you can also benefit from Slime, which uses the Refactoring Browser to find and, in some cases, rewrite common compatibility problems. Think of Grease as defining what you can write and Slime as defining what you can’t. It would be nice if Slime could be extended to other platforms, but their RB implementations are currently not compatible enough (a perfect target for Grease!).

Grease will continue to be part of the Seaside project and to be driven, for now, primarily by Seaside’s requirements. But we hope other projects will find it increasingly useful over time. Since each platform has already ported it, you may already be able to leverage it to provide increased consistency and portability for your applications. For the moment, consider Grease a prerelease and subject to major change; it will track Seaside releases for now, though I’m thinking of assigning independent version numbers to Grease releases to make things clearer.

The Grease packages can be found in the Seaside 3.0 repository or through your vendor's standard code distribution mechanism.

Sunday, 24 January 2010

Facebook time

I was poking through some of Seth Godin's eBook What Matters Now this afternoon (apparently, in my case, it didn't matter until 6 weeks later). I like this message from Howard Mann:
There are tens of thousands of businesses making many millions a year in profits that still haven’t ever heard of twitter, blogs or facebook. Are they all wrong? Have they missed out or is the joke really on us? They do business through personal relationships, by delivering great customer service and it’s working for them.
How much time are you spending with your customers?

Wednesday, 16 December 2009

Training in Boston

So I'm most of the way through this jaunt down the US East Coast and have yet to post even a single update (unless you count the occasional tweet). I know, I know... what can I say? I've been busy. My first attempt was overly rambling so I'm going to focus on one aspect here and follow up with a few more posts over the next couple of weeks.

The main reason for the trip was a Product Management seminar led by Steve Johnson of Pragmatic Marketing—and I definitely recommend the course to anyone who's interested in this stuff. One thing I found interesting: in North America smalltalk usually means asking, "so what do you do?"; well at a seminar made up of 30 people who all do the same thing, that gets replaced with, "so where do you work?". Fun watching the puzzled looks on people's faces as they stared at the blank line below the name on this independent consultant's name tag. :)

The main focus of the course is on guiding product development through market problems and on grounding those problems in real data instead of hunches and "wouldn't it be cool if...?". I'm interested in Product Management from two angles: first, as a possible career direction and, second, in its applicability to open source projects, such as Seaside.

In past jobs, I've found myself naturally trying to fill an institutional void. I've been the one asking, "Are you sure the students want an on-campus version of Facebook? I kind of suspect they just want to use Facebook...". Actual demand for what we were doing, the exact problems we were trying to solve, and even the development costs have all been more-or-less-hand-wavy things. How do you know what to implement if you don't know what problem you're solving and for whom? Or, to look at it another way, if you develop without that knowledge, how do you know anyone will find the result valuable? It was revealing for me when I first learned there are people who make a living doing these things I found rewarding.

The applicability to open source is an interesting issue. On the one hand, it is almost intuitively obvious that most of the same factors apply. A project that meets a market need will succeed while one that does not will fail. A project that knows who its users are can be more effectively marketed; one that does not will succeed only through chance or an inefficient shotgun approach. What I'm not sure of yet is what is different: is it the formulas, the costs of the resources, or maybe their units of measurement? Or do we need to tweak one or more of the definitions? As a random example, Product Management makes a distinction between users and buyers of a product; what's the correct mapping for these concepts in open source? I'm still pondering all this... more to come.

Before I leave off, I should mention that the Hilton DoubleTree in Bedford is one of my best hotel experiences in recent memory. Everything was efficient and painless. The room was spacious, modern, and spotlessly clean. The internet was fast and free. And the (three!) extra pillows I tossed on the floor were left there neatly for my entire stay instead of being put back on the bed. They even insisted on comping a meal I had in the restaurant which was, admittedly, slow in arriving but not to the point I was concerned about it. So, I don't know why you'd be in suburban Boston, but if you are, go stay at the DoubleTree.

Wednesday, 2 December 2009

New York presentation confirmed

The details for my talk in New York have been confirmed. We'll be at the Suite LLC offices (directions) on Thursday, December 10; there's an open house at 6:30pm and the presentation is at 7:00 (drinks afterwards).

Here's the planned subject of the talk, though I think I'll play it a bit by ear and see what people are interested in:
Seaside is a rare example of software that runs on all the major Smalltalk platforms: Pharo, Gemstone, GNU Smalltalk, Squeak, VA Smalltalk, and VisualWorks. We’ll take a look at some of the challenges in keeping the framework portable and some of the techniques the team has developed to deal with these. Along the way we may also touch on tools such as Grease, Slime, and Monticello and how they help the process. And then we’ll see where the discussion leads.

Tuesday, 24 November 2009

Boston, NY, Raleigh

I've confirmed a December trip to the US East Coast. In Boston, I'm attending a product management seminar put on by Pragmatic Marketing, meeting up with a few Smalltalkers from the area, and planning to pop in on the Boston Ruby group's monthly meeting if I can squeeze it in.

On Thursday, December 10, it looks like I'll be giving a presentation at the NYC Smalltalk users group—Charles was kind enough to try to schedule something around my timetable. Details are not quite confirmed; I'll try to remember to post an update here but keep an eye on their site if you're interested. I'm planning to talk a bit about the techniques and tools we use to ensure Seaside portability across the various Smalltalk dialects but we'll see where the conversation wanders. I'm also planning to visit with friends, enjoy the pre-Christmas season in New York, and maybe do some shopping.

Finally, I'm making my way down to visit the VA Smalltalk team in Raleigh, North Carolina. John and I are planning to put our heads together on a couple of issues and I think I'll be doing a Seaside tutorial for some of the engineers while I'm there.

I'm looking forward to a productive, if exhausting, trip. Drop me a line if you're in one of these areas and want to meet up.

Friday, 20 November 2009

SIXX port for VASt

I just published an initial port of SIXX to VAStGoodies. Most of the tests are passing and I'll push the minimal changes I made back upstream for integration. Just like the Pier and Magritte ports I recently finished, this one was requested and released back to the community by Nationaal Spaarfonds.

The plan is to see if I can use SIXX for Pier persistency... that'll be the next step.

Tuesday, 3 November 2009

Pier for VASt

I mentioned a couple of weeks ago that I had uploaded an initial port of Magritte for VA Smalltalk. I've spent a couple of days since then (again courtesy of Nationaal Spaarfonds) getting the Pier port cleaned up and posted. Currently none of the add-ons have been uploaded but I have the security package mostly done and it will follow shortly.

Consider these alpha releases: they are being heavily updated to work with the newest Seaside (3.0a5 currently) and to sort out compatibility with different platforms. With that said, though, all of the Pier tests and all but four of the Magritte tests pass, so give them a try. You'll need the B130 development build of VA Smalltalk.

The original Pharo sources for these Seaside 3.0-compatible versions are available: pier repository magritte repository. Again, these packages are still in flux. They're now built on top of the same Grease portability layer as Seaside 3.0a5; I'd encourage interested platforms to give them a try and see how easily portable they are.

Monday, 19 October 2009

Magritte for VASt

Over the past month or so I have been doing some work for Nationaal Spaarfonds, including porting Magritte to VA Smalltalk. They are generously offering this work back to the community and I am happy to announce that I have just uploaded the first version of the VASt Magritte port to VAStGoodies.

You'll want to start with the VASt 8.0.1 [128] developer preview image and then load the configuration map from VAStGoodies. There are currently four failing tests: three caused (I think) by method inlining and one by differences in error handling behaviour. I haven't yet determined what (if anything) can be done about these.

A version of Pier ported to Seaside 3.0 and VASt won't be far behind but I have some more cleanup to do first in order to make sure it loads into a clean image.

Thursday, 8 October 2009

Seaside 3.0a5

The fifth alpha release of Seaside 3.0 is out. Check out the release announcement. It's looking like Cincom, Instantiations, and GemStone will all include this version in their upcoming releases; Pharo users can use the Seaside Builder to generate a load script. Squeak users will probably have success using the Builder as well, but we are looking for one or more people to actively test and maintain a Squeak port. Get in touch if you're interested.

We're expecting this to be the final alpha release, so now is the time to actually send in any bug reports you've been sitting on.

Monday, 28 September 2009

Seaside at Amsterdam.rb

I'm just back from the monthly meeting of the Amsterdam Ruby User Group at De Bekeerde Suster in Amsterdam. The cheeseburger was delicious, though I was slightly offended when the guy who brought the food said, "Let me guess... you want ketchup?" It's not like I'd even said a word; how could he have decided I was North American? :)

We talked about their plans for the RubyEnRails conference coming up on October 30 and shared some of our experiences from ESUG conferences. There was quite a bit of discussion about how to encourage programmers to give lightning talks. I also took a few minutes to give an overview of Seaside. Everyone there was quite interested in Smalltalk, and the level of awareness was already high; we had some interesting discussions of the language's history, benefits, and limitations, which ended up filling a good part of the evening.

Thanks for the warm welcome.

Friday, 18 September 2009

Smalltalk on AppEngine

Torsten posted a link to the announcement of GwtSmalltalk, which compiles to JavaScript and runs on top of the Google Web Toolkit and, thus, on AppEngine. This is interesting coming only weeks after Avi's announcement of Clamato; there's clearly some interest in combining Smalltalk and JavaScript at the moment.

You can try out a demo. Hint: to create new instances, you need to use:

Kernel instanceOf:
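So where plain Smalltalk would send `#new` to a class, the demo wants something like the sketch below. This is only an illustration of the hint above; the `Point` argument and the surrounding expression are my assumptions, not from the GwtSmalltalk announcement, and the snippet only makes sense inside the demo environment:

```smalltalk
"In the GwtSmalltalk demo, instead of 'Point new', ask Kernel
 for an instance (the receiver class Point is illustrative):"
| p |
p := Kernel instanceOf: Point.
```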

Monday, 7 September 2009

ESUG 2009 wrapup

Well, as I recover from another busy but very fruitful ESUG, it's interesting to look at what made it such an enjoyable conference. There is a real sense of community there that makes it a pleasure to attend every year.

There were some interesting presentations but, for me anyway, the true value was in the networking and personal conversations. I made some interesting new contacts, renewed some old ones, and rounded up some consulting work that will keep me in Europe a little longer. The organizers made some last-minute changes this year to help encourage these sorts of meetings, and I hope we'll see more of this next year.

My overall impression is that these are interesting times for the world of Smalltalk. There seems to be a sense of common purpose and renewed life at the moment and it's satisfying to think that Seaside has played at least a small role in making that happen. I'm not sure what lies ahead, but I think opportunities will arise that we need to take advantage of. I'm also not yet sure exactly what part I want to play but I'm starting to think seriously about it.

My tutorial with Lukas was well received. As usual, we didn't quite manage to get through all of our material, but it went pretty smoothly and I think the thirty or so participants all picked up some new tricks to use in their Seaside projects.

The Seaside sprint was very successful, even though we didn't quite meet our target of finishing a 3.0 beta release. Keep an eye out for an announcement when we do get it done.

I'll close with links to a few people's photos:
Hope to see you all next year in Barcelona!

Wednesday, 2 September 2009

Seaside 3.0 and Documentation

For those who aren't at ESUG this year and missed Lukas' tweet, we announced yesterday that the Seaside 2.9 alpha series will become Seaside 3.0 when we go to beta.

We feel the name is well earned: a cleaner architecture, increased flexibility, better documentation, improved portability, and jQuery support make Seaside 3.0 an even more solid base for developing powerful web applications. These changes also pave the way for more incremental improvements in the future and should make life easier for anybody who wants to build tools or other frameworks on top of Seaside.

We will be running a Seaside Sprint here in Brest from Friday afternoon through Saturday and the goal is to get the remaining issues resolved for a first beta release. Please join us if you have the opportunity.

Also announced at ESUG was the release of the online book Dynamic Web Development with Seaside. It's a great resource: be sure to check it out and contribute comments and content.

Saturday, 29 August 2009

ESUG and Keychain integration for Firefox

I arrived this afternoon in Brest, France, for the ESUG 2009 conference. I didn't write much Smalltalk, but I got caught up with a few people and had a couple of interesting discussions.

There will be much Seaside to come but, taking a break from that over the past few days, I also managed to release a beta version of my Keychain Services Integration extension for Firefox, which allows OS X users to store their logins and passwords in Apple's Keychain. The passwords can then be shared with other browsers like Safari and Camino, and you can take advantage of features like Keychain locking to protect your stored passwords. If you use Firefox 3.x on OS X, give it a try and let me know how it goes; it's scratching an itch for me, anyway.