Wednesday, April 27, 2005

Fedora Core 3 and User-Mode Linux

Previously, I made a Fedora Core 3 disk image for User-Mode Linux by simply copying the filesystem of a running system into a loopback disk image and booting User-Mode Linux from it. That works, but I later found that some of my services didn't start up correctly under UML.

It seems that MySQL doesn't work with current User-Mode Linux builds based on 2.6 kernels. Apparently, UML doesn't implement some of the new threading features of 2.6 kernels (NPTL), which causes various problems. To avoid the problem, you have to hide the libraries that use these new threading features. I found a forum post that listed this fix

mv /lib/tls /lib/tls.backup

that prevents the libraries that make use of the new threading features from being found. While you're at it, it's probably also a good idea to disable SELinux, since it wasn't designed to be easily configurable in Fedora Core 3 anyway. Just modify /etc/selinux/config so that SELINUX=disabled (instead of "enforcing").

I also had problems with PostgreSQL, but that was because I forgot to make /tmp world-writable (chmod 1777 /tmp restores the usual permissions). Unfortunately, PostgreSQL didn't seem to log anything, so I couldn't really diagnose the problem. I tried playing with the boot commands a bit to get it to log something, but in the end, I found it was easiest to simply run the postgres postmaster from the command line with

su postgres
postmaster -D /var/lib/pgsql/data

Currently, my Fedora Core 3 image for User-Mode Linux seems to be running fine. httpd does seem to crash the UML kernel when it exits, but I can live with that for now.

Sunday, April 17, 2005

U3D is Half-Baked

So during the weekend, I decided to try writing a U3D file viewer. I had high hopes for U3D because it seemed to avoid many of the weaknesses of the competing X3D specification. After working with U3D for a bit, though, I've concluded that the U3D specification is half-baked. It's as if the 33 members of the 3D Industry Forum performed no real due diligence in reviewing the specification. Personally, I think the spec is unworkable and unimplementable. This is a real shame because Adobe went and added U3D support to Acrobat, meaning that a lot of marketing muscle and mindshare has just been put behind a losing proposition. Even if a better spec were designed in the future, it wouldn't be supported in Acrobat like U3D is, so it would likely fail.

I can ignore minor annoyances, like the fact that the spec itself isn't well written: it treats things like arrays in several different ways throughout, goes out of its way to avoid simplifying the descriptions of the fields, and so on.

But the killer is the spec's ridiculously overbearing bit encoding scheme. I cut and pasted the code from the specification, translated it to Java, and it's just awful. The code needs something like 30 lines to read an uncompressed 8-bit integer. And the encoding/compression scheme has characteristics that simply ruin the rest of the specification.

For example, the compression scheme sometimes requires you to feed in parameters of live data structures in order to read subsequent data. This means that you can't simply read the data into data structures and then process it later. You literally have to create a live mesh data structure, manipulate the mesh as data is being read in, and feed the resulting properties of the mesh back into the compression scheme in order to read the next bit of data.

And the encoding scheme has this weird property whereby previously read data values can influence how future data values are read. So even though the file format is supposedly tag/block-based (meaning you should be able to process only the types of data you're interested in and skip the rest), you still need to read, process, and understand every single bit of data in the file in order to read later parts, because otherwise the decoding state won't be properly configured when you get there. This even holds true for uncompressed data, which is just wonky. I wouldn't have believed it if I couldn't see it right there in the code that I cut and pasted from the spec.
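A toy illustration of the problem (this is not U3D's actual scheme, just the same flavour of statefulness): suppose each value is stored as a delta from the previously decoded value. A reader that skips a block it doesn't care about decodes every later block incorrectly, because its decoder state no longer matches the encoder's:

```python
def encode(values):
    # store each value as a delta from the previous value
    out, prev = [], 0
    for v in values:
        out.append(v - prev)
        prev = v
    return out

def decode(deltas, skip=None):
    # a reader that skips an "uninteresting" delta silently corrupts
    # every value decoded after the skipped one
    values, prev = [], 0
    for i, d in enumerate(deltas):
        if i == skip:
            continue  # decoder state now diverges from the encoder's
        prev += d
        values.append(prev)
    return values

deltas = encode([10, 12, 15, 15, 20])
print(decode(deltas))          # [10, 12, 15, 15, 20]
print(decode(deltas, skip=1))  # [10, 13, 13, 18] -- later values all wrong
```

In U3D the coupling runs through its compression contexts rather than simple deltas, but the practical consequence is the same: there is no safe way to skip a block you don't understand.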

After having spent something like 5+ hours trying to read a CLOD Progressive Mesh Continuation Block from a file and failing miserably, I can only conclude that it's simply not worth the trouble to implement. I am now very suspicious about why the 3DIF still hasn't released a reference encoder/decoder three months after the release of the specification. In any case, as far as I'm concerned, U3D is a write-off. It might gain a little mindshare because of its Acrobat support, but I think it will eventually die off: the only way to create U3D files is to use the rumoured Intel SDK (since it's essentially impossible to write one's own encoder), and most 3D tool developers will decide to ignore it.

Friday, April 15, 2005

Some First Impressions on the U3D Format

I haven't actually examined any U3D files yet, but I did a quick skim through the specification, and I have a couple of first impressions.

The first thing I noticed was that they put a "file size" field in the file format. Personally, I think no self-respecting file format includes a "file size" field. The inclusion of such a field makes it impossible to stream data to stdout. Instead, you have to write out all the data and then rewind all the way back to the beginning of the file just to fill in that field. It would be different if the field served some useful purpose, but I imagine all U3D file readers will just ignore it.
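The awkwardness is easy to see in code: a writer has to reserve the field, emit everything, then seek back and patch it, which is exactly what you can't do on a pipe. A sketch with a made-up layout (4-byte magic, then a 32-bit size field), not U3D's actual header:

```python
import io
import struct

def write_with_size_field(out, payload):
    # hypothetical layout: magic, 32-bit "file size", then the payload
    out.write(b'U3D\x00')
    size_pos = out.tell()
    out.write(struct.pack('<I', 0))    # placeholder for the size field
    out.write(payload)
    total = out.tell()
    out.seek(size_pos)                 # the rewind -- impossible on stdout
    out.write(struct.pack('<I', total))
    out.seek(0, io.SEEK_END)

buf = io.BytesIO()
write_with_size_field(buf, b'mesh data goes here')
```

This only works because BytesIO (or a regular file) is seekable; replace `buf` with a pipe and the `seek` fails.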

Then there's the fact that the basic geometry type of the file format is the Continuous-Level-of-Detail (CLOD) triangle mesh. This is a little unusual because I thought the consensus in the 3d graphics community was that no one actually uses this stuff (of course, not being in the 3d graphics community myself, I can't be 100% sure, but that's the impression I get from reading papers). The main uses of CLOD meshes are for progressively refining 3d models during data streaming and for animation. I think the Internet experience has been that progressively refining anything during streaming just isn't worth the trouble. For example, who uses the "interlace" option on their PNGs and GIFs? And properly getting the synchronization right in browsers so that they can render partially downloaded files is just a big mess. Just look at the old Java image APIs that supported these features: they were almost unbearable to use. It's easiest to simply use less-detailed thumbnails/models.

And I don't think CLOD meshes are used for animation either. If refined versions of CLOD meshes are computed by the CPU, they have to be transferred to the video card for rendering, which is a waste of video bandwidth. It may be possible to do the refinement on the video card, but I think no one does this because a) it's hard, b) CLOD meshes take up a lot of memory, c) refining CLOD meshes isn't easily parallelizable and exhibits poor memory access behaviour, and d) you can get nicer-looking low-polygon models by editing them by hand and using bump maps and such. I suspect the only reason CLOD meshes were included in the specification is that Intel wanted them there, but I would have thought the rest of the 3DIF members would have convinced Intel to leave the feature out of the first version and only include it in version 2.0 (and then decide never to release a version 2.0 of the specification). It does seem possible to avoid using the CLOD mesh features, though.

The U3D file format also uses a large variety of data types. I guess it isn't a real problem, but in this age where bandwidth is cheap (and things can be compressed fairly efficiently anyways), I think it makes more sense to have a couple of floating point types, an integer type, a byte type, and that's it. I haven't really examined what all the different types are used for though--there might be a good reason.

And the "U3D" way of doing things means that most resources are given string names. This seems a little inefficient (especially with something like UTF-16 encoding). It would seem to me that named resources are more the exception than the rule, so it would make sense for most resources to be unnamed, with an optional naming mechanism for those resources that do, in fact, need names so they can be accessed externally.

Also, the file format uses some weird lossless compression scheme. If the compression scheme were decent (like, say, the entropy coding used in jpg, mpg, and png), then I guess it might be worthwhile, but it looks like it's possible to get better compression just by feeding the data into a zip compressor, so I'm surprised they even bothered.
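This kind of claim is easy to sanity-check yourself: run a general-purpose compressor over the bytes and see how much it squeezes out. A quick sketch (the file name is hypothetical; the sample bytes below are just a stand-in, not real U3D data):

```python
import zlib

def zip_ratio(data):
    # compressed size over original size; a ratio well below 1.0 means
    # a general-purpose compressor still finds plenty of redundancy
    return len(zlib.compress(data, 9)) / len(data)

# with a real file, something like:
#   zip_ratio(open('model.u3d', 'rb').read())
sample = b'vertex 1.0 2.0 3.0\n' * 100  # stand-in data for illustration
print(zip_ratio(sample))
```

If a format's own compression were doing its job, zipping the output afterwards shouldn't buy you much.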

I haven't really dug all that deep into the U3D file format yet, though, so I can't yet say anything too intelligent about the standard. Hopefully, I'll get around to doing that in the future.

Finding U3D Files

For the past few weeks, I've been wanting to play with the U3D file format a bit. Unfortunately, no actual U3D files are available on the web. There are U3D files embedded in pdf documents that can be viewed with Acrobat Reader, but if you want to look at an actual U3D file, you're out of luck. Intel apparently has an SDK for creating and reading U3D files, but they haven't released it to the public yet, so the most widespread way to get a working U3D file right now is to grab a trial version of commercial 3d file-format conversion software (which has Intel's U3D SDK embedded in it) and convert some existing files over to U3D.

I tried e-mailing the 3DIF, asking them to put up a plain U3D file somewhere so that I could take a look at it, but after a week they still hadn't replied. Fortunately, after a little research, I found a utility called Multivalent. It comes with a tool called Uncompress, which takes a pdf document and uncompresses all the files attached inside. Of course, afterwards you still have a pdf document with U3D files embedded in it, but since the U3D data is now uncompressed, you can examine the raw U3D bytes.

Since all U3D files start with the string "U3D\0", you can just discard all of the pdf data that comes before that string, and everything after it is a valid U3D file (I presume). I used a little Python script like this to discard the initial pdf data:

# the pdf produced by Multivalent; the filename will vary
data = open('uncompressed.pdf', 'rb').read()
# You need to take the second "U3D"
idx = data.index(b'U3D', data.index(b'U3D') + 1)
data = data[idx:]
open('out.u3d', 'wb').write(data)  # "out.u3d" is just an example name

Saturday, April 02, 2005

Maybe X3D Isn't Such a Bad Spec After All

After previously denouncing the X3D specification, I decided to take a look at the competing U3D specification. I was skimming through it, trying to think about what would be required to implement a U3D reader (the spec mandates a binary encoding with special compression), when I realized that there aren't any U3D files available for testing U3D implementations. Nowhere. Except embedded in pdf documents. Perhaps one has to be a member of the 3DIF to get access to them, but the problems I have with the X3D specification pale in comparison to the no-no of developing a specification that uses a complicated encoding without providing an example file anywhere. Well, I e-mailed the 3DIF. Maybe I was just looking in the wrong places.

3D Tools for the Casual 3D Artist

Every few months, I get the urge to learn how to do rudimentary 3d art, so I do a quick tour of 3d tools available for casual users, and I always come back disappointed. For casual users, it's important that tools be cheap, not require users to have artistic talent in order to create simple objects, and be easy to use.

Immediately, the big, mainstream 3d modelling packages can be ruled out because they're too expensive. Of course, it's possible to use illegal copies of these programs, but I prefer not to do that. Then there are the mid-tier tools, but I'm always hesitant to shell out several hundred dollars for little-known tools from little-known companies that may or may not be any good.

This pretty much leaves low-cost and open-source tools. The tool with the most mindshare currently is Blender. Unfortunately, Blender is a good example of a software application where little or no attention has been paid to the user interface; if you read the development docs, this was an intentional decision on the part of the developers. For example, up until a couple of months ago, Blender proudly touted the fact that it had no undo functionality (telling you to save often instead). Personally, I'm tempted to use terms like "amateur-hour" here, but honestly, if I were a professional 3d artist who had just shelled out thousands of dollars for this tool, I would probably be ok with learning all the weird UI quirks and hotkeys needed to be productive with it. Since I'm not a professional 3d artist and the tool is free, it's probably unfair of me to apply my standards to it. I have made a few honest attempts at learning Blender, but all the hotkeys made my head spin and it seemed to crash quite often for me. Plus, based on the tutorials, it seems users need powerful 3d visualisation skills to make effective use of Blender--skills that I don't possess. There's no way I can think in terms of "take a sphere, extrude this section while rotating it like so, perform a lathe on that object, and then edit the mesh." At best, I can look at a 3d object and play with the vertices a bit to get the shape I'm looking for.

Milkshape3D is another popular 3d tool, and I've been tempted on many occasions to shell out the cash for it. Milkshape3D is nice because it does only a few small things, but it does those things reasonably well. Unfortunately, every time I'm about to mail my money to Switzerland, I start to think about the tedium of clicking in multiple windows every time I want to add a single point to the model, and how I always get confused by having to look at different windows to know where I am, and I decide to wait for something better. I'm probably being hopelessly optimistic, because manipulating 3d geometry on a 2d screen with a 2d input device (the mouse) is inherently complex. But still, when I've used scaled-back versions of professional tools (e.g. GMAX), they always provide nice facilities for manipulating polygon vertices without much hassle, so maybe that's what I'm looking for.

Art of Illusion seems reasonably powerful, but I don't think its 3d graphics are hardware accelerated, so it always seems a little sluggish to me. Plus, I can never figure out what the icons mean, and the tooltips take too long to show up, so I spend much of my time hovering the mouse pointer over icons to figure out which one does what I want.

Recently, I tried Wings 3D, and I really liked it. It's possible to perform lots of complex operations with complete ease. Pretty much all of the UI is self-explanatory and mouseable. The main advantage of the UI is that since everything is explained on-screen, you only have to memorize five things to be productive--namely, what G, L, Q, the space bar, and the middle mouse button do. For everything else, all you need is a general idea of what you want to do, and you'll probably be able to find it in the menus somewhere. The only problem with Wings 3D is that it doesn't work with many graphics cards. I tried it on an ATI Rage card and on Intel integrated Extreme Graphics, and the UI ended up too horribly mangled to be usable. Finally, on an ATI 7500, the UI still didn't quite render correctly, but it was usable enough, and it was a joy to use. I was tempted to dig into the code to see if I could find workarounds for my problems, but Wings 3D is coded in Erlang, so...yeah.

So, in conclusion, if you want to dip your toe into 3d graphics on the cheap, try Wings 3D, and if it doesn't work, give up. Of course, I may be the only person who finds xfig an easy-to-learn program that one can quickly become productive in, so my experiences may differ from those of others.