Monday, November 15, 2010

Getting the Rehearsal Timings for Multiple Slides in PowerPoint

When practicing a presentation, you can have PowerPoint record how much time each slide takes using rehearsal mode. PowerPoint shows the time for each slide in the slide sorter and in the transition information, but I couldn't figure out how to get the total time for multiple slides. This is useful when you want to find out how long different sections of the presentation are, so that you know if you're spending too long on certain parts. In the end, I just wrote a quick PowerPoint macro to do it: select some slides in the slide sorter and then run the macro, and it will tell you the total time needed for the selected slides.

Sub TotalSlideTiming()
    ' Sums the rehearsal timings of the slides currently selected
    ' in the slide sorter and shows the total as minutes and seconds.
    Dim sel As SlideRange
    Set sel = ActiveWindow.Selection.SlideRange

    Dim s As Slide
    Dim totalTime As Single
    totalTime = 0
    For Each s In sel
        ' AdvanceTime holds the rehearsal timing for the slide, in seconds
        totalTime = totalTime + s.SlideShowTransition.AdvanceTime
    Next s

    Dim totalSeconds As Long
    Dim minutes As Long, seconds As Long
    totalSeconds = CLng(totalTime)
    minutes = totalSeconds \ 60
    seconds = totalSeconds Mod 60

    MsgBox CStr(minutes) & "m " & CStr(seconds) & "s"
End Sub

Wednesday, November 10, 2010

Public Transit in Ottawa in the Age of Cars

This blog is supposed to be mainly about coding, but for engineering types, the efficient movement of people around a city is always an interesting puzzle, so though this blog post is off-topic, I don't think it's uninteresting. (Note: I'm not a transit expert; I'm just a transit fan who likes sounding off like a blowhard.)

Well, anyway, now that there's a new(ish) mayor of Ottawa, there have been a lot of newspaper articles about whether the old mayor's light rail (LRT) plans should be continued or scrapped. Personally, I always thought the old mayor was pretty dumb (or at least naive) regarding transit. It takes decades to put together transit plans and get contracts signed, so for the old mayor to cancel the original light rail contracts when he got elected was stupid. The plans may not have been perfect, but they were better than nothing, and by canceling the contracts, the mayor left the city with nothing for at least another decade (the electric light rail line proposed by the old plan sure would have been useful when gas prices went through the roof!). I always assumed that the new light rail plans proposed by the old mayor would be impractical, and everything I heard about them seemed to confirm that. But with all the talk in the newspapers, I finally took a glance at Ottawa's 2008 transportation master plan, and I can now definitively say that it's a bad plan. It's a boondoggle that will cost a fortune but not actually improve public transit at all. It was obviously designed by a car driver who took a look at some other cities, became impressed with the shiny trains and subways, and decided that Ottawa should get them too, without any thought about money, efficiency, or political realities.

The reality is that this is the age of the car. In the age of the car, trains only make sense in certain limited situations. It does not make sense to copy the transit systems of Europe because those cities were built before the age of the car. It does not make sense to copy the transit systems of large North American cities because those cities are larger than Ottawa and evolved before the age of the car. Ottawa grew big during the age of the car, and it built a transit system appropriate for the age of the car with its Bus Rapid Transit (BRT) Transitway. As a result, Ottawa is now a leader in designing transit systems for the age of the car, and Ottawa can't really look to other cities for patterns to copy because most cities comparable to Ottawa are behind it, not ahead of it. Ottawa must innovate in order to come up with appropriate solutions to its transit problems.

Cities that developed during the age of the car are notable because people live all throughout the city, and they might work all throughout the city. It's the urban sprawl model of urban development. It is true that cities in the age of the car still have a downtown core where a large minority of the population works, but it's definitely not New York, where masses of people commute from the suburbs into downtown. In the age of the car, people on the east side will work in an industrial park on the west side, people in downtown will work at a shop in the south, and people in the south end will work in the south-west. To be useful, public transit can't just take everyone to a central hub or shuttle people between two points--it has to be able to take a person from anywhere and move them to anywhere in a reasonable amount of time. The hub-and-spoke model doesn't work in the age of the car; public transit needs to provide a network or lattice that can move people from anywhere to anywhere. Buses are the most appropriate transit option in the age of the car because they can easily go anywhere in the city, and they can use dedicated busways to go long distances quite quickly. LRT is an obvious choice if you already have a bunch of unused rail lines sitting around (Ottawa dug up its rail lines a long time ago) or if you have a lot of extra money that you can splurge on indulgent infrastructure (Canada hasn't had extra money for decades). If those conditions don't hold, then you have to be a lot more strategic about when you build LRT.

So although I love trains, you can't let the romanticism of trains influence your decisions. In reality, trains are expensive. In modern medium-sized cities, it almost always makes more sense to use buses for public transit than trains. As far as I can tell, these are the factors that must be considered in evaluating new transit projects:
  1. Do the proposed lines go anywhere interesting? In particular, they should go among high density residential, commercial, and industrial districts containing people who are too poor to own cars. Some rail advocates argue that a train station magically makes a place an interesting destination. I think this is false. In the age of the car, a train station can add to the attractiveness of an area, but you can't build something from nothing. If a real estate developer has a choice between building a condominium tower in a place with good amenities and car access (which is good for the 70% of a city's population who are car drivers) or building a condominium tower in a place with nothing except a train station (which is only good for the 30% of the population who either prefer a lifestyle where they are within walking distance of amenities or who are too poor to own a car, let alone a condo), I think they'll skip over the train station. They may build a slightly taller apartment building in a place if there's a train station there, though. It's also important that transit lines go through as many interesting places as possible due to the network effect. A line going from point A to B can carry people from A to B or from B to A. A line going from A to B to C can carry people from A to B or C; from B to A or C; or from C to A or B (that's six combinations vs. two). A line going between more locations allows for a lot more possible journey combinations, making the line a lot more useful.
  2. How long is the waiting time? I think the rule of thumb is that transit riders perceive waiting time as being twice as long as it actually is. So if you have to wait 10 minutes for a bus and then wait another 5 minutes for a transfer, you'll feel like you wasted 30 minutes of your life even though you only spent 15 minutes waiting. As a result, a good transit line should minimize transfers and have vehicles that come often.
  3. Capacity: a line should have enough capacity to handle the expected ridership. Trains can potentially carry more people than buses, and they can board passengers more efficiently than buses. Ottawa does have a capacity problem in the downtown area in that the roads cannot handle the number of buses that pass through the area.
  4. Cost of construction: The cheaper the better. Also, exotic technology should be avoided at all costs. When you build any piece of infrastructure, you end up having to rebuild it every few decades as part of the maintenance. If you use standardized technology, you benefit from lower costs due to mass production since everyone else in the world is using the same technology. If you use exotic stuff, you have to pay elevated prices to rebuild the same custom, exotic stuff every few decades.
  5. Cost of operation: the most successful transit lines are the ones that make money (in reality, I don't think any transit line in the world supports itself without subsidies or financial wizardry--though they probably pay for themselves indirectly by improving the efficiency of the economy and reducing the need to build highways). Electric trains have lower operating costs than buses because they require fewer drivers and can hold more people. There's a bit of a trade-off needed to get this lower operating cost, though--you save money by running fewer trains than buses, but that means people have to wait around longer for trains than for buses. It's not clear to me how mostly empty trains or diesel trains compare to buses in terms of costs. Again, exotic technology like hybrid buses, diesel-electric trains, etc. often ends up costing more money in the long run.
  6. Speed: I think the speed hierarchy, from slowest to fastest, is: bus/streetcar in mixed traffic; bus/tram with dedicated lanes; bus/LRT with dedicated lanes and traffic priority; busway/dedicated rail line; bus/LRT with grade separation (i.e. tunnel, trench, or elevated line); automated underground trains. As far as I can tell, trains do not go any faster than buses. The main determining factors in the speed of transit are whether a line is separated from car traffic and how many stops there are on a line.
  7. Risk minimization through diversification: things go wrong, so it's useful to have a backup plan. With trains, people are always throwing themselves onto the line or whatnot, and that leads to the whole line being backed up. Buses are often useful in that you can reroute them around traffic incidents. With multiple possible routes serving an area, a transit system can more easily withstand problems such as accidents, long construction projects, blackouts, demonstrations, etc.
  8. Every phase of the project must provide an incremental benefit: The reality of big infrastructure projects is that they often get canceled due to lack of money or a change of politicians. Although politicians can make 10-20 year plans, the plan usually only survives for 3-5 years. If that first phase provides significant benefits, then if you're lucky, the other phases might get funded. So each phase must be nicely self-contained and provide a benefit in and of itself.
So given these criteria, this is why I think the 2008 transit master plan for Ottawa is a waste of money:
  1. Do the proposed lines go anywhere interesting? No. The proposed LRT will follow the existing Transitway line, so it won't provide access to anywhere new.
  2. How long is the waiting time? Wait times will be longer, especially during its construction. Anyone crossing the city will need to make two additional transfers.
  3. Capacity: the LRT should solve the capacity problems on the downtown portion of the Transitway.
  4. Cost of construction: Any sort of grade separation like a tunnel will cost a fortune.
  5. Cost of operation: I imagine the number of passengers will be high enough that the cost of operation will be lower than running buses along the Transitway.
  6. Speed: As far as I can tell, there's no reason to believe that LRTs will be faster than buses along the Transitway. Any speed-ups will be solely due to the tunnel through downtown. This tunnel will probably save about 5-10 minutes per trip that goes through downtown.
  7. Risk minimization through diversification: the LRT means Ottawa is actually increasing its risks of unusual disruptions. With buses, you can reroute buses around problems. Although a tunnel will mean fewer disruptions due to traffic, the use of trains means that every time someone throws themselves onto the tracks (I imagine it'll happen once every month or two), the whole system will grind to a halt.
  8. Every phase of the project must provide an incremental benefit: This is the big weakness of the plan. The proposed new LRT system only works if the whole thing gets built. If you only build the first phase, you actually end up with a worse system than you started with. Let's say you build the LRT tunnel and then you run out of money. You've then spent a billion dollars building a train that doesn't go anywhere--basically, you end up with a train that crosses downtown and nothing more. Ottawa is not Toronto. There isn't enough density in downtown Ottawa to support a train that only goes from one end of downtown to the other. People in the suburbs have to catch a bus for one part of their journey anyway, so it simply makes sense for them to stay on the bus and ride all the way to or across downtown. The only way to get them to ride the train would be to force them--bus journeys would purposely have to halt at the edge of downtown, and everyone would be forced to transfer to a train. People going from the west end to the east end would have to make two transfers, if not more. It's just a disaster. The LRT only works if it's long enough to stretch across the city, and I seriously doubt the funding will last that long.
So in my opinion, the proposed LRT plan involves spending two billion dollars to build a train that doesn't go anywhere new but simply cuts travel times by 5-10 minutes. Of course, if a problem develops in the plan and it isn't completed, Ottawa will instead end up with a train that increases everyone's travel times by 5-10 minutes due to increased transfers. This plan is a huge risk and a huge cost with only a small possible benefit. If the risks or the costs were small, then it might be worthwhile. If the possible benefits were huge, it would also be worth it. But this plan has too many negatives. The old mayor used to claim that making transit faster would get more people to use it. This is one of those delusions of car drivers--that they would switch to using transit if it could be made faster than driving. It is, of course, totally ridiculous. It's pretty much impossible to make transit faster than driving, and most people still wouldn't switch.

Of course, I have to admit that solving this transit problem is tricky. The most pragmatic plans aren't necessarily the ones that are politically feasible. I also don't have the numbers about costs and ridership, so I can't really make well-informed proposals. But in the interests of being constructive, I'll throw out some completely uninformed suggestions.

The primary transit problem that needs to be addressed by Ottawa is that the city has reached the capacity for buses in downtown. Any transit plan needs to address this problem. It would be nice if a proposed transit plan also provides other benefits, but the current system will run fine for another decade or so as long as the downtown problem is resolved. Although everyone wants to get rid of the Transitway through downtown, this is simply not an option in the next few decades. BRT is the fundamental backbone of the Ottawa transit system. If you mess with it, you might break the entire system, and that's too risky. So all planning decisions must start with the assumption that the Transitway will continue to go through downtown for the foreseeable future. Starting from that assumption quickly narrows down the number of feasible plans for solving the downtown capacity problem:
  1. A grade separated busway through downtown (it doesn't have to be a tunnel--a trench or elevated line might be cheaper and less risky) should probably be enough to solve the capacity problems because you can have really long bus platforms without having to be limited by the size of city blocks and traffic lights. Travel times through downtown would also probably decrease by 5-10 minutes. This plan would be expensive though and not very glamorous, so I'm not sure that it's politically feasible to go this route.
  2. A supplemental bus transit line through downtown to pull traffic off of the Transitway. Expanding the downtown portion of the Transitway into a four-lane bus highway would probably work but would never get political support. Having a second bus rapid transit line would probably also work (it would be something like buses that eventually go north or south will run along Laurier while east-west buses will continue to use Albert and Slater), but I don't think it would work politically either.
  3. Any supplemental downtown transit line would have to be light rail to be politically viable. One advantage of LRT is that it's easier to drum up support for it. Car drivers would rise up in arms if you converted part of a busy road into dedicated bus lanes, but if you convert part of a busy road into a dedicated LRT, people will often grudgingly agree (though they'll mutter about Toronto streetcars the whole time). A surface light rail line would probably be sufficient because it's cheaper, and the LRT only needs enough capacity to bring the Transitway bus traffic down to reasonable levels. Unfortunately, a short LRT will hardly have any traffic at all--though riders might save 3-5 minutes by taking the LRT through downtown instead of riding the bus, they'll lose all that time by needing to transfer to a bus later for the rest of their journey. You need a long LRT to have any hope of getting reasonable ridership. There's a problem with the placement of the line, though. Running an LRT along the Transitway makes the most sense, but it's probably politically infeasible because Ottawa was originally going to do just that back in 2006 before the plan was canceled by the old mayor; it would look bad to simply revive the old plan. One possibility is to run an LRT along Carling, then through south downtown or centretown, and then somehow possibly down Montreal Road. This LRT configuration would serve underserved neighbourhoods that are good candidates for densification, which will help drum up political support for the plan. It would be comparatively cheap as a surface rail line. It would hopefully pull traffic off the Transitway, though I admit there's no guarantee of that. It also provides a nice diversity of transportation options for people in downtown. It wouldn't be very fast, though.
  4. Some percentage of the traffic on the Transitway through downtown is caused by people traveling through downtown but not actually stopping there. It might be possible to pull some of the traffic off of the Transitway by developing the O-train into a proper diesel commuter train. The O-train can be extended both south past the greenbelt and north into Hull (not that anyone expects much ridership to/from Gatineau with only a single station, but until there's an actual train station on the other side of the river, no one in Gatineau is going to make appropriate transit plans that incorporate that possibility. Plus, responsibility for the O-Train can then be dumped on the NCC and outsourced). There's a bit of a risk that a commuter train will lead to more urban sprawl, making transit problems worse, but these sorts of risks are impossible to predict. It might also be worthwhile to build an experimental commuter train line that runs from Orleans, to the Via train station, to Billings Bridge, and then up to Bayview. I'm not sure if the O-train line has the capacity and signaling to handle the traffic though.
  5. It might be possible to gamble that the future development of the city will lead to a change in traffic patterns as jobs and housing start distributing themselves more evenly around the city instead of having the jobs mostly be in downtown and housing mostly be in the suburbs. In that case, it makes sense to ignore the problem in downtown and focus on making sure that there is a sufficient lattice of BRT lines crisscrossing the city to allow for the efficient movement of people from anywhere in the city to anywhere. If that is the case, it might make sense to create a Transitway ring around the city (possibly two--one inside the Greenbelt and one outside).

Saturday, October 30, 2010

MVC is not Elegant, It's the Only Thing that Works

I was doing some Swing programming over the summer. Although I've long been familiar with MVC concepts and although I've been doing Swing programming almost since Swing first came out, it was only this summer when I *really* started understanding MVC.

In the past, I always understood MVC in terms of "elegance." Designing UI frameworks using MVC concepts supposedly resulted in a more elegant design. One could easily change the look of a component and reuse behaviours in different widgets. Separation of presentation from data and control logic led to a cleaner design where orthogonal issues would be stored in different places in the code instead of all grouped together.

In fact, this is all wrong. People don't use MVC because it is elegant. People use MVC because for certain user interfaces, MVC is the only design that works. As such, you don't have much choice. If you have a UI design that allows for two views on the same piece of data, you need MVC (for example, if you have two word processing windows open on the same document--so when you type things in one window, these changes should be reflected in the other window as well). Once you need to support this situation in a UI, then MVC just naturally falls out.

  • If you have two windows for editing the same data, you need code that lets the user edit it: that's the controller.
  • You need to store the data in one spot (not separately in each window), or else the two windows will get out of sync: there's your data model.
  • Finally, whenever the data changes, both windows need to be updated to reflect the change: that's the view.

Once you want to support two views of the same piece of data, the reasoning behind MVC becomes obvious. MVC is not elegant. In fact, for simple UIs that only need to support a single view on a piece of data, MVC is unnecessarily complicated. MVC exists because it is the only design that works for complicated UIs. They really should state this outright in documentation instead of rambling on about this elegance nonsense.
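The two-windows scenario can be sketched in a few lines of plain JavaScript (all the names here are my own invention, not from any framework):

```javascript
// Model: the single copy of the data, plus a list of change listeners.
function DocumentModel() {
   this.text = '';
   this.listeners = [];
}
DocumentModel.prototype.setText = function(text) {
   this.text = text;
   // Notify every registered view that the data changed.
   for (var i = 0; i < this.listeners.length; i++)
      this.listeners[i](this.text);
};

// View: each window renders the model and re-renders whenever it changes.
function makeView(model) {
   var view = { rendered: '' };
   model.listeners.push(function(text) { view.rendered = text; });
   return view;
}

// Controller: user edits go through here, never straight into a view.
function typeText(model, text) {
   model.setText(text);
}

var model = new DocumentModel();
var window1 = makeView(model);
var window2 = makeView(model);
typeText(model, 'hello');   // typing in one window...
// ...updates both views, because both observe the same model.
```

Since both windows observe the one model, the synchronization problem disappears: neither window holds its own copy of the data.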

Monday, October 11, 2010

Double-Click Detection in IE and Other Browsers

Several months ago, I wrote some code for cross-browser double-click detection. It was sort of tricky to put together, so I should have posted it at the time, but I forgot. Here it is:

function hookIEDblClickToClickCatcher(object, handler)
{
   var isDown = false;
   var isMismatchedUp = false;
   object.onmousedown = function() { isDown = true; isMismatchedUp = false; };
   object.onmouseup = function() { if (isDown) isDown = false; else isMismatchedUp = true; };
   object.ondblclick = function() { if (isMismatchedUp) { handler(); isMismatchedUp = false; } };
   object = null;  // break the circular reference to avoid IE memory leaks
}

The general idea of this code is this. You might add an onclick handler to some HTML element in JavaScript. But then you notice that, in IE, if people click an object repeatedly (like in a game), every 2nd click counts as a double-click and is ignored. So you want to register a double-click handler and have that also trigger a click event.

The problem is that this results in too many clicks for non-IE browsers. Other browsers trigger both a click event and a double-click event on the second click of a double-click, so in total your click handler ends up triggering three times. So what you want is a way to trigger a click handler on a dblclick for IE only (but you don't want IE specific code in case IE changes its behavior in the future).

Fortunately, IE has a distinctive sequence of mousedown and mouseup events that lets you distinguish an IE double-click from the double-click of other browsers: in IE, there is no mousedown event before the dblclick event.
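As a sanity check, the two event sequences can be simulated with a plain object standing in for a DOM element (the hook logic is repeated here so the snippet runs on its own):

```javascript
// Same logic as the hook function in the post, minus the DOM specifics.
function hookIEDblClickToClickCatcher(object, handler) {
   var isDown = false;
   var isMismatchedUp = false;
   object.onmousedown = function() { isDown = true; isMismatchedUp = false; };
   object.onmouseup = function() { if (isDown) isDown = false; else isMismatchedUp = true; };
   object.ondblclick = function() { if (isMismatchedUp) { handler(); isMismatchedUp = false; } };
}

// IE swallows the second mousedown, so the sequence is
// down, up, up, dblclick -- the second up has no matching down,
// which marks this as an IE double-click and synthesizes the lost click.
var ieClicks = 0;
var ieEl = {};
hookIEDblClickToClickCatcher(ieEl, function() { ieClicks++; });
ieEl.onmousedown(); ieEl.onmouseup();
ieEl.onmouseup(); ieEl.ondblclick();
// ieClicks is now 1

// Other browsers fire down, up, down, up, dblclick -- every up is
// matched by a down, so no extra click is synthesized (the browser
// already fired a click event for each press).
var otherClicks = 0;
var otherEl = {};
hookIEDblClickToClickCatcher(otherEl, function() { otherClicks++; });
otherEl.onmousedown(); otherEl.onmouseup();
otherEl.onmousedown(); otherEl.onmouseup(); otherEl.ondblclick();
// otherClicks is still 0
```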

Data URI Test

In games, I want to use data URLs for images, but IE only started supporting this type of URL in IE8. Here is some code for testing for data URI support. You supply a callback function, and it will be passed true or false depending on whether data URIs are supported (the data URI used is a 1x1 transparent GIF--I couldn't really find anything smaller).

function testDataURLSupport(callbackfn)
{
   var img = new Image();
   img.onload = function() { if (callbackfn) callbackfn(true); callbackfn = null; };
   img.onerror = function() { if (callbackfn) callbackfn(false); callbackfn = null; };
   img.onabort = function() { if (callbackfn) callbackfn(false); callbackfn = null; };
   // The 1x1 transparent GIF mentioned above, encoded as a data URI
   img.src = 'data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7';
}

Sunday, October 10, 2010

Scrolling Games in JavaScript

So I'd previously tried to program a game in JavaScript with a scrolling background and repeatedly ran into performance problems. The background of the game was made up of about 100 absolutely positioned images that had to scroll as the player moved. These images were part of a larger map involving thousands of image tiles, so not all of the images could be created in advance--images that scrolled out of view had to be removed, and new images had to be created for map tiles coming into view.

All the benchmark information available at the time talked a lot about how using innerHTML to create new objects was much faster than manipulating the DOM by hand in JavaScript. This makes sense. JavaScript is slow, and in most browsers, DOM objects are separated from JavaScript by heavy COM or XPCOM abstraction layers, so if you can minimize the number of DOM calls you make, your JavaScript should be faster. Based on this reasoning, I initially wrote my scrolling code by erasing all my objects and creating replacements with a single innerHTML call. Unfortunately, this was really slow (too slow for most games), but I was too busy to work out a better approach.

I recently found some time to take a second look at this, and I found that the performance issues were more complex than that reasoning suggests. There is indeed an overhead for manipulating the DOM, but it's actually not too bad; creating (and, to a lesser extent, deleting) DOM objects involves a much larger overhead. Also strange is that IE8 is much slower than IE6 for a lot of this DOM manipulation (probably because it's properly synchronized, etc.).

So the correct way to create a scrolling background is not to restrict oneself to a single DOM call. Instead, to scroll a bunch of images, it's fastest to go through the images and move each one by an appropriate amount (by setting and to new pixel values). This results in hundreds of DOM calls, but it's much faster than creating new images. I was thinking of creating some sort of complicated DOM hierarchy so that I could move multiple images with a single DOM call, but that wasn't necessary--moving each image individually was sufficiently fast. Of course, you still need to create new images and remove old ones as areas come into view or move out of view, but the performance saved by creating fewer new HTML elements outweighs the cost of the extra DOM calls.
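The per-element move can be sketched like this, with plain objects standing in for DOM image elements (in a browser these would be absolutely positioned img nodes created with document.createElement, and only the style assignments would touch the DOM):

```javascript
// A tile tracks its own coordinates so we never have to read them
// back out of the DOM (reading style values forces extra DOM work).
function makeTile(x, y) {
   return { style: { left: x + 'px', top: y + 'px' }, x: x, y: y };
}

// Move every visible tile by (dx, dy): one pair of style assignments
// per tile, which is much cheaper than rebuilding tiles via innerHTML.
function scrollTiles(tiles, dx, dy) {
   for (var i = 0; i < tiles.length; i++) {
      var t = tiles[i];
      t.x += dx;
      t.y += dy;
      t.style.left = t.x + 'px';
      t.style.top = t.y + 'px';
   }
}

var tiles = [makeTile(0, 0), makeTile(128, 0), makeTile(0, 128)];
scrollTiles(tiles, -5, 0);   // scroll the background 5px to the left
// tiles[0].style.left is now '-5px', tiles[1].style.left is '123px'
```

Tiles whose coordinates move outside the viewport would then be removed (or better, repositioned and reused) rather than destroyed and recreated.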

Some quick benchmarks: 100 moves of around 25 tiles (128x128 pixels each), in 5 pixel increments, take the following amounts of time (in milliseconds):

IE8, innerHTML complete refresh of all objects:       14563
Chrome, innerHTML complete refresh of all objects:     5843
FF3.5, innerHTML complete refresh of all objects:     10918
IE8, simply moving elements around:                    1919
Chrome, simply moving elements around:                  605
FF3.5, simply moving elements around:                  1691
IE8, moving elements around and reusing old elements:  1641

This means that it's probably possible to do simple games in IE8, so there's no immediate need to switch to using canvas and restricting oneself to non-IE browsers. I'm hoping that developers won't program their HTML5 games using only the canvas element because I think more traditional methods are sufficiently fast and more compatible. In fact, if I do move to use HTML5 features, I'm vaguely thinking of going with SVG, but I'll have to see how the performance evolves.

Thursday, July 08, 2010

Miscellaneous Java Swing stuff

I was doing some Java Swing coding over the summer, and I found a few issues that took a lot of time to figure out due to insufficient documentation, so I thought I would discuss the solutions that I eventually found here. (Note: I started writing this post in July, but I only finished it at the end of October.)

One thing that has bugged me about clipboard operations is how to efficiently support an Edit menu in the menu bar with cut/copy/paste menu items. If a user selects Cut, how is the UI supposed to know where to cut from? Do you need to write some sort of crazy handler that tracks which UI widget last had input focus and then, if that widget supports cut operations, performs the cut there? In fact, Swing has classes to do this. If you only support cut & paste operations in text widgets, there's a TextAction class you can use. If you want to support more than text widgets, you might be able to use TransferHandler.getCutAction()--I haven't investigated this second option too deeply, but I think it automatically performs a cut operation on the last widget that supports clipboard operations through a TransferHandler interface.

Another thing that bugged me with Swing is that in the Windows Look and Feel, the JTextArea defaults to using a fixed-width font. This is a little bizarre because the AWT TextArea doesn't use a fixed-width font, and the JTextArea under the default look and feel doesn't use one either. I would normally just set the font of a JTextArea to something else, but I couldn't find sufficient information in Swing about how to do this properly. In order to change the font, you need to specify a font size, but what size is appropriate? You can't use a "reasonable" value like 12pt, because the user might have configured their OS to use extra large fonts. Operating systems usually provide a place where you can look up which default font should be used in user interfaces, but it seemed like Swing didn't provide access to such information. In the end, the best compromise I could find was to grab the font used in JTextField widgets and apply it to the JTextArea. Presumably, Swing will configure JTextField widgets to use an appropriate font based on the user's settings, so applying this font to a JTextArea will give you a reasonable default. It's not a great solution, but it will have to do until this bizarre JTextArea problem is fixed (it's been there for many, many Java versions though, so I doubt it will be fixed any time soon) or until Java comes with a default look & feel that isn't so confusing and disorienting (even I have problems using the file dialogs in the default Metal L&F).

Another thing that I had a hard time finding in the Java documentation is how newlines/linebreaks are handled in a JTextArea. Do they use the platform defaults? If you save the contents of a JTextArea to a file, do you have to manually convert between CR, CR/LF, and LF in order to get things in a neutral format? This is all nicely documented, but it's stored in an unusual spot: the DefaultEditorKit class documentation explains that Swing text components always use '\n' internally, and that conversion to and from the platform's line separator happens when a document is read or written.

Finally, the last Swing issue that took me a long time to figure out was how to support a JTextArea that you can resize horizontally but that grows vertically as you type into it. So for the purposes of word-wrapping, it behaves like it has a fixed width, but you can still change that width by resizing the window. A normal understanding of the Java UI layout model suggests that this isn't possible: the preferredSize()/size()/minimumSize()/maximumSize() model can't express a widget whose height depends on whatever width the layout happens to give it. In fact, if you code up a JTextArea with no scroll bars but with a layout manager that can control the width of the widget, it works fine. Apparently, Java has various hacks for supporting this sort of setup, so it just works.

Saturday, June 12, 2010

Upgrading from Windows Vista to Windows 7

I recently did an upgrade from Windows Vista to Windows 7, and I encountered a few difficulties with the install process, which I documented here.

Windows Mail is Missing

In Windows 7, they removed Windows Mail. Although Windows 7 can still run the Windows Mail program, the upgrade process will uninstall Windows Mail, which is somewhat annoying (I'm not sure when the Windows dev team started having the bright idea of uninstalling functionality that was working fine before the upgrade, but the new Windows team seems to have some weird priorities that previous Windows teams didn't).

During the upgrade process, Windows 7 will tell you that it will keep your old Windows Mail settings around, and it will be easy to install Windows Live Mail Desktop later. This is untrue. Windows Live Mail Desktop will automatically import your old Windows Mail settings only if it detects that you have Windows Mail. Of course, Windows 7 uninstalls Windows Mail, so Windows Live Mail Desktop will not automatically import your old settings. So if you use Windows Mail in Vista, you should install Windows Live Mail Desktop FIRST, have it pick up all your old settings, and then upgrade to Windows 7 SECOND.

If you didn't do this, then you have to manually re-enter your account settings, then import your old mail from C:\Users\[userid]\AppData\Local\Microsoft\Windows Mail, and then go into contacts to import your old contact list from Windows Contacts.

East Asian Handwriting Recognition

The whole reason I upgraded to Windows 7 was that the Chinese handwriting recognition in Vista was a little flaky. It didn't integrate properly with Internet Explorer and Google (it kept losing focus and changing its language), it was hard to resize the input panel, etc. (oddly, it seemed to work fine in Firefox). I was hoping that they had finally fixed this problem in Windows 7. And they did. They removed East Asian handwriting recognition support from Windows 7. Instead, they want me to shell out an extra $150 for Ultimate in order to get access to this feature, which they removed during my upgrade to Windows 7. To the product manager who earned a nice bonus for thinking of this way to increase Microsoft's revenue: "Thanks a lot, asshole." I remember a time when Microsoft was mean and dangerous, but at least they would try to do right by their customers. Nowadays, their main area of innovation seems to be finding fiendish ways to increase revenue.

Fortunately, Windows 7 still includes limited East Asian handwriting recognition in the IMEPad (to be removed in Windows 8, no doubt). And it seems to work more reliably than in Windows XP or Windows Vista. The instructions for enabling the Chinese handwriting recognition in IMEPad are the same as for Vista.

No Going Back

I was so annoyed with losing all this functionality during the upgrade that I was going to revert back to Windows Vista. Windows has a long history of allowing you to undo even really invasive upgrades (e.g. you could uninstall Windows XP and revert back to Windows 98 after such an upgrade). But, apparently, they stopped providing this functionality once Sinofsky took over the Windows team (it might also be related to Windows Activation). So I'm stuck with it. I'm starting to think this will be the last Windows upgrade I ever do though, so I'm not too annoyed.

Sunday, April 11, 2010

PowerPoint 2007 VBA Macro for Expanding Custom Animation into Multiple Slides (for Export to .PDF)

When I make presentation slides, I like using a lot of diagrams to explain algorithms, and often I use animations to show the different processing stages of these algorithms. In the past, I would just make a different slide for each stage in an animation sequence, but this gets tedious when I have to make lots of edits to a slide (I then have to go back and change all the slides in the animation sequence). So I started using the custom animation feature of PowerPoint, but when PowerPoint exports a presentation or prints notes, it sticks all of the animation elements on a single slide, so it's not possible to follow the animations in an exported presentation. This is especially important when exporting to PDF so that I can put my presentation online or so that I can have a backup copy of the presentation around in case there are portability problems with PowerPoint.

There were a couple of scripts online for doing this (here and here), but I wanted to write my own because one of them came with an installer, which I don't like, and the other one seemed too limited. Figuring out how to write my script ended up being more difficult than I expected though. I've programmed a few things in Visual Basic for Applications (VBA) before--the standard way to write macros for Office--but I vaguely remembered that when designing Office 2007, Microsoft said it would be phasing out VBA in subsequent versions in favour of Visual Studio .Net. In fact, the macro recording feature (a common way people bootstrap their first macros) wasn't even included in PowerPoint 2007. Programming things in .Net is overkill when you just want to write a little macro that's easy to distribute, so that wasn't feasible either. I thought about writing a little VBScript or JavaScript/JScript script that would do what I wanted, but then I found out that Microsoft was deprecating the whole ActiveScripting framework as well. Since the new Office file format OOXML is supposedly XML, I thought about just reading in and modifying the files directly, but after playing with them a bit, I found that OOXML is very much an XML dump of the legacy Office file formats. There are so many crazy cross-links between the different sub-files inside an OOXML zip file that it's impossible to read and modify a small part of the file without breaking it. In order to modify it properly, you really need to read and parse the whole thing into an intermediate form, modify it there, and then write a new file from scratch, which is clearly too much work for a little script. I briefly considered making my presentation in OpenOffice instead, and I actually would have if the OpenOffice people had bothered to add a couple of extra features to their application instead of simply making a brain-dead clone of Office (e.g. being able to animate the color changes of individual words of text instead of being restricted to whole paragraphs), but I was beginning to like the automatic text sizing and animation number indicators of PowerPoint 2007.

After much digging around though, I found out that Microsoft had changed their mind and decided not to deprecate VBA after all. Starting with Office 14 / Office 2010, Microsoft has renewed their commitment to maintaining VBA. So I wrote a little VBA script to split up an animation into multiple slides. I believe my script handles all entrance and exit animations, but I haven't really tested this. It makes use of Office 2007's new API for exposing the different sub-parts of an animation, which should allow a script to handle new animations in a more uniform way. Unfortunately, this new API is still somewhat broken, so it needs various error-handling code to handle anomalies like SetEffect objects without any properties etc.

To use the script, go into PowerPoint 2007, go to View...Macros, create a new macro called "ExpandAnimations", replace the macro code with the code below, and then run it.

Private AnimVisibilityTag As String

Sub ExpandAnimations()
    AnimVisibilityTag = "AnimationExpandVisibility"

    Dim pres As Presentation
    Dim Slidenum As Integer

    Set pres = ActivePresentation
    Slidenum = 1
    Do While Slidenum <= pres.Slides.Count
        Dim s As Slide
        Dim animationCount As Integer
        Set s = pres.Slides.Item(Slidenum)

        If s.TimeLine.MainSequence.Count > 0 Then
            PrepareSlideForAnimationExpansion s
            animationCount = expandAnimationsForSlide(pres, s)
        Else
            animationCount = 1
        End If
        Slidenum = Slidenum + animationCount
    Loop
End Sub

Private Sub PrepareSlideForAnimationExpansion(s As Slide)
    ' Set visibility tags on all shapes
    For Each oShape In s.Shapes
        oShape.Tags.Add AnimVisibilityTag, "true"
    Next oShape

    ' Find initial visibility of each shape
    For animIdx = s.TimeLine.MainSequence.Count To 1 Step -1
        Dim seq As Effect
        Set seq = s.TimeLine.MainSequence.Item(animIdx)
        On Error GoTo UnknownEffect
        For behaviourIdx = seq.Behaviors.Count To 1 Step -1
            Dim behavior As AnimationBehavior
            Set behavior = seq.Behaviors.Item(behaviourIdx)
            If behavior.Type = msoAnimTypeSet Then
                If behavior.SetEffect.Property = msoAnimVisibility Then
                    If behavior.SetEffect.To <> 0 Then
                        ' Entrance effect, so the shape starts out hidden
                        seq.Shape.Tags.Delete AnimVisibilityTag
                        seq.Shape.Tags.Add AnimVisibilityTag, "false"
                    Else
                        ' Exit effect, so the shape starts out visible
                        seq.Shape.Tags.Delete AnimVisibilityTag
                        seq.Shape.Tags.Add AnimVisibilityTag, "true"
                    End If
                End If
            End If
        Next behaviourIdx
        On Error GoTo 0
NextSequence:
    Next animIdx
    Exit Sub

UnknownEffect:
    MsgBox ("Encountered an error while calculating object visibility: " + Err.Description)
    Resume NextSequence
End Sub

Private Function expandAnimationsForSlide(pres As Presentation, s As Slide) As Integer
    Dim numSlides As Integer
    numSlides = 1

    ' Play the animation back to determine visibility
    Do While True
        ' Stop when animation is over or we hit a click trigger
        If s.TimeLine.MainSequence.Count <= 0 Then Exit Do
        Dim fx As Effect
        Set fx = s.TimeLine.MainSequence.Item(1)
        If fx.Timing.TriggerType = msoAnimTriggerOnPageClick Then Exit Do

        ' Play the animation, then remove it from the sequence
        PlayAnimationEffect fx
        fx.Delete
    Loop

    ' Make a copy of the slide and recurse
    If s.TimeLine.MainSequence.Count > 0 Then
        s.TimeLine.MainSequence.Item(1).Timing.TriggerType = msoAnimTriggerWithPrevious
        Dim nextSlide As Slide
        Set nextSlide = s.Duplicate.Item(1)
        numSlides = 1 + expandAnimationsForSlide(pres, nextSlide)
    End If

    ' Apply visibility by deleting shapes that shouldn't be shown on this slide
    rescan = True
    While rescan
        rescan = False
        For n = 1 To s.Shapes.Count
            If s.Shapes.Item(n).Tags.Item(AnimVisibilityTag) = "false" Then
                s.Shapes.Item(n).Delete
                rescan = True
                Exit For
            End If
        Next n
    Wend

    ' Clear all tags
    For Each oShape In s.Shapes
        oShape.Tags.Delete AnimVisibilityTag
    Next oShape

    ' Remove animations (since they've been expanded now)
    While s.TimeLine.MainSequence.Count > 0
        s.TimeLine.MainSequence.Item(1).Delete
    Wend

    expandAnimationsForSlide = numSlides
End Function

Private Sub assignColor(ByRef varColor As ColorFormat, valueColor As ColorFormat)
    If valueColor.Type = msoColorTypeScheme Then
        varColor.SchemeColor = valueColor.SchemeColor
    Else
        varColor.RGB = valueColor.RGB
    End If
End Sub

Private Sub PlayAnimationEffect(fx As Effect)
    On Error GoTo UnknownEffect
    For n = 1 To fx.Behaviors.Count
        Dim behavior As AnimationBehavior
        Set behavior = fx.Behaviors.Item(n)
        Select Case behavior.Type
            Case msoAnimTypeSet
                ' Appear or disappear
                If behavior.SetEffect.Property = msoAnimVisibility Then
                    If behavior.SetEffect.To <> 0 Then
                        fx.Shape.Tags.Delete AnimVisibilityTag
                        fx.Shape.Tags.Add AnimVisibilityTag, "true"
                    Else
                        fx.Shape.Tags.Delete AnimVisibilityTag
                        fx.Shape.Tags.Add AnimVisibilityTag, "false"
                    End If
                Else
                    ' Log the problem
                End If
            Case msoAnimTypeColor
                ' Change color
                If fx.Shape.HasTextFrame Then
                    Dim range As TextRange
                    Set range = fx.Shape.TextFrame.TextRange
                    assignColor range.Paragraphs(fx.Paragraph).Font.Color, behavior.ColorEffect.To
                End If
            Case Else
                ' Log the problem
        End Select
    Next n
    Exit Sub

UnknownEffect:
    MsgBox ("Encountered an error expanding animations: " + Err.Description)
    Exit Sub
End Sub

Saturday, March 20, 2010

Transportation Simulation Games

When SimCity 4 was released, I was very disappointed by the broken transportation model, the assumptions built into the underlying economic model that resulted in all cities looking like California, and the weird game design that assumed I was more interested in building stories about Sims than about the city itself. Since then, I've been interested in figuring out how the simulations in these games work, but I always got hung up on a small mathematical detail. I finally figured out the problem a couple of weeks ago. Some simple grade 11 math is all you need, but I just couldn't see it until now.

The most intuitive scale to model these simulations at is the level of individual people. For each "person" in the city, you code up a model of the person's preferences and behaviour, then "run" this model and record what happens. So if a person wants to drive from one building to another, you increment the traffic on each road segment along the person's route to account for their car. If you simulate enough people, you can figure out the traffic along each street, resulting in a simulation of the entire city.

The problem is that the city is constantly changing and the people being simulated will change their behaviours, so you want to have the simulation running constantly to take these changes into account. This makes the simulation complicated. If your simulator decides that a person will take an alternate route to get to a destination, the simulator needs to reduce the traffic along the old route of the person (since the person no longer drives on those roads) and increase the traffic along the new route. Keeping track of the routes of all these people is complicated, memory-intensive, and slow. If you don't remove the person's traffic from the old route though, then the traffic along the old route will keep on increasing (since there's no way for the traffic to go down to reflect the fact that fewer people are using it).

One solution is to divide your simulation into rounds. In each round, you simulate the actions of all the people in the city once, record the result, and update the UI with these results. Then you start a new round by discarding all the results (i.e. set the traffic of all roads to zero) and running a complete cycle of the simulation again. This approach isn't satisfactory because one cycle of the simulator might take several minutes to complete. So if you change something in the city, it might take several minutes to see the effect of these changes in the simulator because the UI won't be updated until the next simulation cycle.

We want to constantly update the traffic values, but this results in traffic growing without bound. Resetting the traffic to zero isn't practical because we want to display the simulation values as they are updated. What we need is some way to "decay" the traffic along road segments over time. The traffic along each road segment will then constantly decrease, but as you simulate the actions of different people, the traffic along each road segment will increase again. For this to work though, the speed of the decay must balance out the speed at which you add new traffic, so that the simulation converges to reflect the correct traffic along each road. But I could never figure out the math for this problem. If traffic decays too quickly, then the traffic will be close to zero in most places but will spike up randomly wherever people's new routes are being simulated. If traffic decays too slowly, then the total traffic along each road will increase without bound.

Simulating transportation always results in new traffic being accumulated additively: as you simulate the route of one person, the traffic along the route is incremented by a constant to reflect the traffic of that one person. But the decay must be in the form of a rate. The traffic along a route can't be decreased by a constant each time; it must be decreased by a percentage of its existing traffic (because heavily trafficked routes should have their traffic decreased by a larger amount than lightly trafficked routes--otherwise the traffic on light routes will fall to zero while the traffic on heavy routes increases without bound). Previously, I tried to think of things as being like a physics simulator, where the simulator would try to balance things out so that the amount of traffic added would be counter-balanced by removing an equivalent amount of traffic due to decay, but this isn't quite the right approach.

In fact, the solution is the geometric series. For example, assume that you don't change your city at all, so that the simulation should converge to a steady state. Then every complete simulation cycle of all the people in the city adds the same amount of traffic to a particular road segment (say, a). If, once every complete simulation cycle, you decay the traffic on that road segment by multiplying it by a factor r (with r < 1), then over time this produces the geometric series a + ar + ar^2 + ..., which converges to a/(1-r). This is exactly what we need! We don't have to fiddle with balancing the rate of new traffic against old traffic or whatever--the geometric series shows that the simulation will converge to nice stable values. And we can then scale a and r to get the convergence rate and overall stability that we want. It all becomes nicely straightforward.
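A tiny simulation makes the convergence concrete (a sketch with arbitrary numbers: a is the traffic added per cycle, r is the fraction of traffic that survives each decay step):

```java
public class TrafficDecayDemo {
    // Run the decay-then-add update for the given number of cycles.
    public static double simulate(double a, double r, int cycles) {
        double traffic = 0.0;
        for (int i = 0; i < cycles; i++) {
            // Decay the existing traffic, then add this cycle's new traffic.
            traffic = traffic * r + a;
        }
        return traffic;
    }

    public static void main(String[] args) {
        double a = 10.0, r = 0.9;
        // After many cycles this settles at the geometric series limit a/(1-r),
        // i.e. close to 100.0 here, no matter how the two rates are chosen.
        System.out.println(simulate(a, r, 500));
        System.out.println(a / (1.0 - r));
    }
}
```

Note that the result is stable for any a and any 0 < r < 1; the two constants only change where the series settles and how fast it gets there.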

Wednesday, February 17, 2010

Background Music in JavaScript without Flash

So for the last half a year, I've been playing a bit with making larger and more complicated games in JavaScript. I've found a few neat new tricks, but I haven't gotten around to writing about them until now. One thing I figured out is how to play background music in modern browsers without using Flash. This is useful for JavaScript games where you need some music to help set the mood in a game.

For non-IE browsers, you can use HTML5. Although the APIs seem to work reasonably well, I'm not sure how stable the specification is. There are occasional weird things in the specification that suggest it's immature. For example, the audio specification makes heavy use of "boolean attributes," which are HTML attributes whose meaning depends on whether they are present at all rather than on their value (so setting such an attribute to "false" is still equivalent to defining it as true). This sort of bizarre, unnecessarily complex, probably XML-incompatible stuff suggests that the specification still contains idiosyncrasies and pet projects of individual developers instead of being polished into a consistent spec. I'd be concerned about a repeat of ECMAScript 4, where a lot of things ended up changing once Microsoft and Yahoo! applied some common sense to the specification. So I suggest using HTML5 with caution--you should always check that every method and field you want to use is supported by your browser before relying on its existence.

Well, anyways, for HTML5 audio, you first need to check if it's supported:

var hasAudio = true;
try {
var audiotest = new Audio();
if (!audiotest.canPlayType) hasAudio = false;
} catch (e) { hasAudio = false; }

Every browser supports different audio formats, so you need to make an MP3 version of your files (you can use iTunes for that) for Safari/Chrome and an OGG version (you can use Audacity for that) for Firefox/Chrome. For some reason, the Chrome browser doesn't support WAV files because they claim the files are "too large," and I think this is a clear example of the immaturity surrounding the HTML5 specification in general. There are many good reasons for supporting WAV files in browsers, and for short sound effects, WAV files can even be smaller than OGG or MP3 files. The reason for not including WAV support in Chrome is simply the result of bias and ignorant opinions as opposed to a mature approach to the problem. So anyways, you need to test which background music file to load depending on which formats are supported.

var clip = new Audio();
clip.autoplay = false;
clip.autobuffer = true;
var canOgg = clip.canPlayType("audio/ogg");
var canMp3 = clip.canPlayType("audio/mpeg");
var canMp3Alt = clip.canPlayType("audio/mp3");
var canWav = clip.canPlayType("audio/wav");
if (canMp3 != "" && canMp3 != "no")
clip.src = mp3File;
else if (canMp3Alt != "" && canMp3Alt != "no")
clip.src = mp3File;
else if (canOgg != "" && canOgg != "no")
clip.src = oggFile;
else if (canWav != "" && canWav != "no")
clip.src = wavFile;

And from there, playing the file is easy (though Firefox 3.5 doesn't support looping music, so you need to do that manually).

clip.addEventListener("ended", function() {
// I'm not sure if this setTimeout() thing is necessary
window.setTimeout(function() { clip.play(); }, 1);
});
clip.play();

For Internet Explorer, you can use Windows Media Player to play music. First, you have to check to see if Windows Media Player is available using ActiveX:

var hasWmp = true;
try {
var player = new ActiveXObject("WMPlayer.OCX.7");
} catch (e) { hasWmp = false; }

For some reason, when you create the ActiveX object, it isn't bound to your web page properly, so you can't use relative URLs to refer to music files. If you do want to use relative URLs, you actually have to create the player object as an HTML object in your document.

var tmp = document.createElement('div');
tmp.innerHTML = '<OBJECT CLASSID="CLSID:6BF52A52-394A-11d3-B153-00C04F79FAA6"></OBJECT>';
player = tmp.getElementsByTagName('OBJECT')[0];
tmp = null;
if (!player || !player.versionInfo) hasWmp = false;

From there, playing an mp3 file is easy:

player.settings.setMode("loop", true);
player.URL = mp3File;

Update 2010-5-3: Apparently, using MP3 files on a web page or in a game requires an mp3 license. In the future, I guess I will only support Ogg Vorbis and let users of Safari and IE endure the silence.

Friday, January 22, 2010

We're Currently in a Golden Age of Literacy, and It's All Downhill from Here

During the past few months, I've slowly started to notice that a lot of the things I used to enjoy reading about, I now prefer watching videos about instead. For example, I used to enjoy reading reviews of video games on the web, but recently I'll search for video reviews first, and I'll only fall back to text reviews if I can't find any videos. I'm starting to do the same for educated political commentary and satire. In fact, if there were enough video content on my favourite topics, I might prefer watching videos to web browsing entirely.

The logical conclusion of all this is that this signals the end of literacy as we know it. Sure, we might now complain about the poor literacy skills of our kids, we decry their unintelligible text messages, and we bemoan their incoherent essays, but ten years from now, we're going to look back on 2010 as a golden age of literacy. It will be considered the pinnacle of literacy in the world--people all over the world actually had to type words in order to find information! People were required to read information and instructions! Children preferred sending messages to each other in the form of text! In between the television age of the 1980s and the Internet video age of the 2020s, the period around 2010 will be considered a renaissance of literacy where people actually had to know how to read and write text to survive in the world!

In ten to twenty years, there will no longer be a need to read and write information; everyone will use video. Instead of web pages, there will be videos. Instead of SMS and tweets, kids will send videos to each other. Instead of textbooks, there will be slides and accompanying video. Society will complain about how kids can't form well-thought-out text any more, preferring stream-of-consciousness videos. Kids might not even write text any more but simply make videos, edit them together somehow, and then dump them out. Similar to how people ignore someone talking to themselves in the street (they assume the person is using a hands-free cellphone) and ignore people checking e-mail during conversations, I think in the future people will simply get used to the idea of others quietly recording short video messages in the middle of a meeting (of course, it will seem extremely disruptive to us, but I imagine it's something future generations will simply get used to).

There are a few weaknesses of video--because it is a linear medium, it's hard to scan through it quickly, so there will still be some text that people will read. This text will mostly be in the form of "headlines," "search results," and "transcripts." When we want to find videos to watch, we'll tell our computer what videos we want to watch (through a webcam, of course!), and the computer will show various video clips with meaningful text headings that we can scan through quickly to find what we want. And every video will have an accompanying transcript (in both text and screencap form) that people can use to quickly jump around to the parts of videos that they're interested in.

I see two impediments to this video future:
  1. patents
  2. speech recognition (specifically video transcription) technology

The main barrier to this future right now is patents on video technology. These patents inhibit standardization (no two companies can agree on a video e-mail standard or a webcam standard, for example), which prevents a network effect from forming around video technology. The cost and licensing difficulty of patents also inhibit innovation and entrepreneurship in this area. New companies and researchers don't want to develop new ideas in this space because licensing technology here is too complicated and too expensive, and the potential rewards (most of this consumer Internet technology has zero revenue) don't justify it. I believe this patent issue will work itself out once patents expire or once storage and bandwidth become so cheap that we'll just use raw video for everything. The day after a useful video patent expires, a standard for video e-mail will be made (or perhaps there will be some sort of proprietary Facebook thing that comes out before then that everyone will use--unfortunately, it will be a bit of a kludge getting it to work with cellphones and other devices).

The other barrier is the lack of accurate video transcription technology. As I mentioned before, video is a linear medium, so we need a way to search it and to scan through it. The minimum technology we need for this to be practical is automatic video transcription. Currently, speech recognition isn't quite at the point where this is possible, but I imagine this technology will be ready by 2020, perhaps even by 2015 (there might be patent issues on this technology that will delay its widespread use for a few more years).

It's sort of weird knowing that despite all the complaints that people have about literacy in 2010, this is actually the best it's going to get. I'm not sure about the ultimate implications and ramifications of the future age of video on society, but I think it's inevitable, so we might as well start preparing for it now. I suppose other people have also been evangelizing that video is the future, so my musings are perhaps not the most original, but they never tell you that the trade-off is the end of literacy.