tag:blogger.com,1999:blog-93506402024-03-07T17:24:14.902-05:00Ming's Coding BlogA summary of issues I've encountered during coding and the solutions that I've found.Minghttp://www.blogger.com/profile/01458103015154082202noreply@blogger.comBlogger110125tag:blogger.com,1999:blog-9350640.post-19871140453906685742023-06-08T18:54:00.001-04:002023-06-08T20:03:17.228-04:00Using My Own Programming Language in a Game Jam<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEidRK5RrEltV9EOemL1w1Z80Sy3bvnWN4w5q8Dr1Cj71u3NSr3p1GCuG9bB2s-FBoQhkeLdTUL5FBHYWdFBCXmiJ1-j9uBucFQ4wrRfzVNyBDtBSg7MGr-qbS2VzQa5zSWpm8QUWz8eB8F4FAtGT8S8bnHy4s3eAOaN4KHN0LiSGiJd55-iLzs/s3264/IMG_20230608_200021.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="3264" data-original-width="2448" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEidRK5RrEltV9EOemL1w1Z80Sy3bvnWN4w5q8Dr1Cj71u3NSr3p1GCuG9bB2s-FBoQhkeLdTUL5FBHYWdFBCXmiJ1-j9uBucFQ4wrRfzVNyBDtBSg7MGr-qbS2VzQa5zSWpm8QUWz8eB8F4FAtGT8S8bnHy4s3eAOaN4KHN0LiSGiJd55-iLzs/w240-h320/IMG_20230608_200021.jpg" width="240" /></a></div><br /><p>For the past while, I've been working on creating a new programming language called <a href="https://www.plom.dev/">Plom</a>. This is a bit unusual because I don't actually believe in creating new programming languages. I think there are enough programming languages. I don't think there's much benefit to creating new ones beyond personal satisfaction. And I even dedicated a whole PhD to researching how to add new features to existing programming languages so that you don't have to create new ones.</p><p>The reason I ended up making a new programming language was that I was interested in getting more people into programming by making it easier for people to program using cellphones. 
In a world where more and more people have cellphones, we need to bring programming to cellphones instead of expecting people to buy computers before they can do any programming. But then I faced a standard research dilemma: should I adapt an existing programming language, resulting in something more practical but full of compromises, or should I design a new language entirely from scratch, giving me more avenues to explore in finding the best solution but resulting in something less practical for real-world use? In this case, I decided that making a new programming language would offer considerably more gains than adapting an existing one, and I felt it was important to show as many gains as possible. I need to convince others that this direction in programming language design is important, and I feel that I need to show clear improvements over existing techniques in order to do that.</p><p>When creating a new programming language, I have a theory that it's important for the language to be usable for real programming. It doesn't matter if it's a toy programming language or an educational programming language or whatever: to fully understand the main problems and issues behind a new programming language, it must be used. It's also important for marketing, because the primary evangelists for programming languages are other programmers. So even if a programming language is intended for a non-programmer audience, the people who will evangelize the language to that audience are, in fact, programmers. As such, the programming language must be usable for real programming by real programmers if you want it to gain any traction.</p><p>So all this is a long-winded explanation for how I ended up game jamming with my own programming language. 
I've been working on my Plom programming language for a while, and though it's still in rough shape, I felt that I really needed to start using it for something real to get a feel for the real issues facing the language and where it needed improvement. If I were to just make some toy programs in a relaxed environment, I would end up working in starts and stops: seeing one annoyance in the language, taking a break to fix it, and then switching back and forth, again and again. But game jams are useful because you have a limited time and must absolutely focus on the language to drive it through any problems and issues. Instead of taking a break to fix an annoyance, I had to keep programming despite the annoyance, which sometimes revealed deeper, more important issues. So I entered a <a href="https://itch.io/jam/tojam-2023">weekend game jam</a> with the intention of making a game entirely in Plom, allowing me to focus entirely on Plom for one weekend so that I could understand its strengths and weaknesses. I spent the weeks before the game jam making sure that Plom had at least rudimentary support for being used in a game jam, such as basic support for importing external resources (like images for games), a rudimentary runtime so that code could be run, and the ability to export everything as a game. I then installed everything on my iPad and Android phone and headed off to the jam.</p><p>Overall, programming with Plom worked better and worse than expected. </p><p>It worked better than expected in that I was actually able to finish making <a href="https://my2iu.itch.io/adventurers-backpack">a small game with the language</a>. I kept worrying that I would encounter some fundamental flaw in the language that would make the project go awry. I was sure that I must have overlooked some implementation detail that would cause the language to behave unreliably, requiring hours and hours to debug, derailing things. 
In the end though, the language itself seemed to work fine, and it behaved as it was designed to. Its implementation seemed to scale properly to support a small, real program, and its performance was adequate. PlomGit, the little git version control app I wrote earlier, was solid, and I was able to move between developing on Android, iOS, and web depending on where I was without any complications. I did most of my programming on my iPad because it was a faster machine, but I did do some programming on my phone on the subway too. I really only had to stop game jamming and focus on improving Plom once, which was to add support for importing Plom projects made in iOS/Android directly into the web version, so that I could debug them more easily. It wasn't a fundamental issue with Plom itself. So overall, I think the general design that I have for Plom seems solid.</p><p>Plom worked much worse than expected in that some aspects of Plom simply weren't ready for real programming yet. Mainly, the error reporting and logging were totally inadequate. I knew of this shortcoming when entering the game jam, but I expected to be able to set-up a full build environment for Plom at the game jam where I could debug Plom and dig into its internals at the jam site to figure out what was going wrong. But the computers at George Brown College were shared, and I didn't want to store my ssh keys there, and installing xcode on those machines to allow me to deploy new iOS versions seemed a little dubious (curse you, Apple and your closed developer ecosystem that requires complicated pipelines and signing credentials just to deploy small programs). 
So even if I made small coding errors in my game, there was inadequate feedback about where the errors were, and I wouldn't be able to properly debug them until I brought the code home and ran the entire Plom environment in a Chrome debugger (I've never really been able to get the Safari debugger to work properly with GWT code running in frames, so debugging the Plom environment on iOS never really worked for me). As a result, at the beginning, when I wasn't confident about which parts of Plom were reliable and didn't have much experience programming in Plom, I made several coding mistakes that weren't obvious, couldn't track them down, and pretty much became stuck for hours at a time because I didn't know whether the bugs were in Plom, in my game code, or somewhere else. In fact, halfway through the jam, I still couldn't reliably display images on the screen, let alone make a game. I then had to discard my initial plan and come up with an entirely different game idea that was much simpler, because there was no way I could make my original game idea at the pace I was proceeding. By the end of the second day, I was finally able to get some small things running, and with a game idea that had a more manageable, smaller scope, I didn't feel so stressed out. By the third day, I was more confident in using Plom, and more confident that any bugs I encountered were bugs in the game and not in Plom, so I was really able to focus my efforts and finish the game.</p><p>So overall, the game jam was pretty stressful. Usually with game jams, I spend the first day coming up with an idea and programming the basic groundwork for the game. I then have the game playable by the end of the second day. And I spend the third day polishing the game up so that it's enjoyable to play. With this game jam, I went in with a simple game idea already, and I spent the first day getting used to Plom and trying to draw sprites onto the screen. 
By the second day, I was still fighting to get a basic game framework going, and disheartened by how things were going, I had to change game ideas. I was even writing a new Array implementation for Plom that could interface better with JavaScript code. It was only on the final day that I was able to program most of the functionality of the game itself. I always felt like I was behind and struggling to catch up. But I did catch up, and though the game isn't remarkable by any means, it is a real program and it was made with Plom. Plom still has a long road ahead before it's a usable language. During the jam, I must have made 2-3 pages of notes on things that needed improvement. But I'm encouraged by how things went, and I'm beginning to think that Plom might actually work as a language.</p><p><br /></p>Minghttp://www.blogger.com/profile/01458103015154082202noreply@blogger.com0tag:blogger.com,1999:blog-9350640.post-57554073163453938692022-11-30T13:16:00.001-05:002022-11-30T13:22:01.439-05:00Steam Deck is Like the DOS Era All Over Again<p>I recently purchased a Steam Deck for my parents, hoping that it would be an easy-to-use gaming machine for the occasional times when they want to game. The promise of the Steam Deck was that it would be an easy-to-use gaming machine like a gaming console, but for PC games. Instead, I've found the Steam Deck to be like DOS-era gaming, where I have to spend huge amounts of time on configuration and setup, and afterwards, everything is still sort of fiddly and difficult to use. </p><p>In the end, I now realize that the Steam Deck is not actually for PC gaming. It can play PC games, but the hardware and software have been specifically designed as a new gaming console to play <i>Steam Deck games</i> for <i>Steam Deck gamers</i>. What I mean by that is that the Steam Deck isn't really designed for more general gamers, and it isn't really designed for non-Steam Deck games. 
I had a whole library of old Steam games that I've accumulated through Humble Bundles and elsewhere over the years, and I assumed that they would work okay on the Steam Deck. In fact, the experience of playing these games on the Steam Deck isn't that great. The Steam Deck is designed for playing Steam Deck games--games that have been customized and programmed specifically for running on the Steam Deck. If you have a lot of those games, then that's great. I think those are mostly action-oriented games, especially if they have been ported from other gaming consoles. The Steam Deck is also not designed for casual gamers. To use the Steam Deck, you have to learn a bunch of UI quirks and memorize several shortcuts. Non-tech-savvy people will never remember all these things and will become frustrated by the device. A lot of fit and polish issues needed for a general audience are lacking. For example, just turning on the device is a little complicated. There's a one or two second delay between pressing the power button and anything showing up on the screen. So when I press the button, I can never figure out whether the press was registered, and whether I should press the button again, which might turn it off, or long-press it to actually turn it on or whatever (a lot of other UI actions have long delays with insufficient feedback like that too--I'm looking at you, "return to game mode"). And when it does finish booting up, it dumps you on a non-customizable "home" screen, which doesn't actually list the games that you can play on your device. Instead, it lists a jumble of games that you recently purchased on Steam, some that you've played recently, etc. You have to press an unmarked shortcut (the B button) or navigate through the Steam menu to get to the games list, and then you have to navigate around that to get to your list of installed games. There's no way my parents or young kids will remember all those steps to get to their favorite game. 
You would think that this convoluted UI is a scheme to get you to buy more Steam games, but that's not the case either because you have to navigate the menus to get to the Steam store as well. I just don't understand why the UI is this way.</p><p>I've watched several videos about how the Steam Deck can be used as a computer. In fact, it only makes a suitable computer if you plug in a monitor and mouse and keyboard. The Steam Deck designers did not bother refining that aspect of the experience to make it practical if you're using just the Steam Deck itself. For example, I'm not sure if the hardware digitizer is poor quality or the touch drivers are poor, but all touchscreen actions are pretty janky. Swiping to scroll in web browsers and elsewhere doesn't really work smoothly. The virtual touch keyboard always misses key presses, so you can't really type quickly using it. I'm not sure if the soft keyboard is part of the OS or if it's a Steam thing because in some programs, the program loses keyboard focus when I'm in the virtual keyboard, which is annoying. There's no dedicated button to pull up the soft keyboard. Instead, you need to use the Steam-X shortcut, which normal people won't remember. That shortcut is also a hassle because it requires two hands to press (a good portable device should be usable one-handed), and I often end up accidentally pressing the grip buttons on the back of the device when I have to shift my hands over. When using the trackpad like a mouse, the R2 trigger is used for left-click, and the L2 trigger is used for right-click, which is going to throw beginners off. Also, the L2 and R2 triggers are analog triggers, so it's a pain having to squeeze them all the way down just to do a mouse-click. In particular, double-clicking is a real pain, and sometimes, I have to shift my hand a bit to fully depress the trigger, causing my thumb to shift on the trackpad a bit, moving my mouse pointer before clicking. 
Personally, I think R1 for left-click and R2 for right-click might have been better. You can install your own programs and games, but Steam discourages that, requiring you to add 4 different pieces of artwork in 3 different locations to get your own programs to integrate nicely with the Steam interface. </p><p>Playing games that aren't optimized for the Steam Deck isn't too great either. Part of the problem is that the device is optimized for Steam Deck games at the expense of being good for general PC gaming. For example, besides the keyboard being janky, the Steam Deck doesn't have enough buttons for it to act like both a mouse and a game controller at the same time. With PC games not made for the Steam Deck, there's an assumption that even if you have a gamepad, you might sometimes have to click on things or type to configure things. But the Steam Deck can't be configured as both. You have to go into a mouse mode to do your mouse things, then switch back to controller mode to do your controller things. And there's no button for doing that switch, so you have to navigate menus or whatever every time you need to switch. If the Steam Deck were designed for general PC gaming, they would have dropped one of the trackpads and added dedicated left-click/right-click mouse buttons, plus a keyboard button. That way, you could easily switch between mouse/gamepad/keyboard for non-Steam Deck games without much hassle. Instead, games need to be customized specifically for the Steam Deck to work well.</p><p>I still think the Steam Deck is a nice device, but a lot of the hype oversells what it is. It's not easy to use like a gaming console at all. It isn't great at general PC gaming, and you aren't going to pull out your old collection of Steam strategy games or whatever to play on it. It's not a general computer. You aren't even going to browse the web with it. I think there's still a lot of room for other manufacturers like GPD, AYN, etc. 
to make better devices that are easier to use and better for gaming.</p>Minghttp://www.blogger.com/profile/01458103015154082202noreply@blogger.com0tag:blogger.com,1999:blog-9350640.post-6029765164053237922019-09-10T19:04:00.001-04:002019-09-10T19:14:58.458-04:00Swift ASN.1 Decoder for iOS Receipt ValidationIf you want to have in-app purchases in an iOS or MacOS app, you need a way to check what purchases have been made. Annoyingly, Apple does not provide developers with any code for doing this. Apple's APIs will give your program a receipt, listing what was purchased, but the receipt is encoded in a weird format, and Apple doesn't provide any code for reading this format. Apple's reasoning is that not providing code for this is like a very limited form of DRM/copy protection. If every program has custom code for parsing and interpreting the receipt, software pirates will need to do extra work to crack your software.<br />
<div><br />
</div><div>It is true that software piracy is rampant on Android, and it probably exists on iOS too. Some of us aren't really too concerned with this software piracy issue though, and we just want to implement some quick and dirty handling of IAP with the assumption that most software pirates wouldn't have purchased the software anyway. </div><div><br />
</div><div>Apple's preferred solution is for you to create your own receipt validation server that your programs can connect to, which will then contact Apple's servers to parse the receipt and confirm that it's valid. This is a bit of a hassle because you have to build an online service, keep it running, and protect it from hackers, and it makes your app more fragile because the app will always depend on connecting to this service.</div><div><br />
</div><div>The other solution is to do receipt validation in the app itself. This is annoying because Apple doesn't provide code for parsing the receipt, the receipt stored on the device contains less information than what Apple provides to servers, and iOS doesn't always keep the receipt up to date, meaning you often have to go out of your way to refresh the receipt yourself. The most common way to do the receipt parsing is to include a copy of OpenSSL in the app, but that involves some annoying interfacing with C code.</div><div><br />
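As a rough sketch of that on-device approach (assuming iOS and StoreKit; `ReceiptLoader` is a hypothetical helper name, not something Apple provides), the raw receipt bytes can be read from `Bundle.main.appStoreReceiptURL`, falling back to an `SKReceiptRefreshRequest` when no receipt file is on disk yet:

```swift
import Foundation
import StoreKit

// Minimal sketch: load the raw App Store receipt from disk, and ask
// StoreKit to refresh it when it's missing (common in sandbox builds).
final class ReceiptLoader: NSObject, SKRequestDelegate {
    private var completion: ((Data?) -> Void)?

    func loadReceipt(completion: @escaping (Data?) -> Void) {
        if let url = Bundle.main.appStoreReceiptURL,
           let data = try? Data(contentsOf: url) {
            completion(data)
            return
        }
        // No receipt on disk yet: ask the system to fetch one
        self.completion = completion
        let request = SKReceiptRefreshRequest()
        request.delegate = self
        request.start()
    }

    func requestDidFinish(_ request: SKRequest) {
        let data = Bundle.main.appStoreReceiptURL.flatMap { try? Data(contentsOf: $0) }
        completion?(data)
    }

    func request(_ request: SKRequest, didFailWithError error: Error) {
        completion?(nil)
    }
}
```

In the sandbox, a freshly installed build often has no receipt file at all until a refresh request or a purchase completes, which is why the fallback matters.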
</div><div>I just wanted something quick & dirty, and I'm not too concerned about doing all the signature checking and whatnot, so I went looking for some simple Objective-C or Swift code for parsing the receipt. I searched around online a lot but couldn't find any, so eventually I rolled my own. It's pretty rough since I only polished it to the point where it worked for my own app, so use it at your own risk. Here it is:</div><br />
<pre>struct Asn1BerTag : CustomStringConvertible {
    var constructed: Bool
    var tagClass: Int
    var tag: Int
    var description: String {
        return String(tagClass) + (constructed ? "C" : "-") + String(tag)
    }
}

struct Asn1Entry {
    let tag : Asn1BerTag
    let data : Data
    let len : Int
}
// TODO: This parser is sort of insecure because it doesn't really do bounds-checking
// on anything, but it's only used for reading internal data structures, so whatever
class Asn1Parser {
    // Parse a single ASN.1 BER entry starting at startIdx
    static func parse(_ data: Data, startIdx: Int = 0) -> Asn1Entry {
        var idx = startIdx
        // Parse the tag byte: bits 7-6 are the class, bit 5 is the constructed
        // flag, and bits 4-0 hold the tag number (31 marks a high tag number)
        var val = data[idx]
        idx += 1
        let tagClass = Int((val >> 6) & 3)
        let constructed = (val & (1 << 5)) != 0
        var tagVal = Int(val & 0x1F)
        if tagVal == 31 {
            // High tag number: the tag is encoded base-128 in the following octets
            tagVal = 0
            val = data[idx]
            idx += 1
            while (val & 0x80) != 0 {
                tagVal <<= 7
                tagVal |= Int(val & 0x7F)
                val = data[idx]
                idx += 1
            }
            tagVal <<= 7
            tagVal |= Int(val & 0x7F)
        }
        let tag = Asn1BerTag(constructed: constructed, tagClass: tagClass, tag: tagVal)
        // Try to parse the size
        var len = 0
        var nextTag = 0
        val = data[idx]
        idx += 1
        if val & 0x80 == 0 {
            // Short form: the byte is the length itself
            len = Int(val)
            nextTag = idx + len
        } else if val != 0x80 {
            // Long form: the low 7 bits give the number of length octets
            let numOctets = Int(val & 0x7f)
            for _ in 0..<numOctets {
                len <<= 8
                val = data[idx]
                idx += 1
                len |= Int(val) & 0xFF
            }
            nextTag = idx + len
        } else {
            // Indefinite length. Scan until we encounter 2 consecutive zero bytes
            var scanIdx = idx
            while data[scanIdx] != 0 || data[scanIdx + 1] != 0 {
                scanIdx += 1
            }
            len = scanIdx - idx
            nextTag = scanIdx + 2
        }
        return Asn1Entry(tag: tag, data: data.subdata(in: idx..<(idx + len)), len: nextTag - startIdx)
    }

    // Parse consecutive BER entries until the data is exhausted
    static func parseSequence(_ data: Data) -> [Asn1Entry] {
        var toReturn : [Asn1Entry] = []
        var idx = 0
        while idx < data.count {
            let entry = Asn1Parser.parse(data, startIdx: idx)
            toReturn.append(entry)
            idx += entry.len
        }
        return toReturn
    }

    // Parse a big-endian two's complement integer
    static func parseInteger(_ data: Data) -> Int {
        let len = data.count
        var val = 0
        for i in 0..<len {
            if i == 0 {
                val = Int(data[i] & 0x7F)
            } else {
                val <<= 8
                val |= Int(data[i])
            }
        }
        if len > 0 && data[0] & 0x80 != 0 {
            // Negative number: subtract the place value of the masked-off sign bit
            let complement = 1 << (len * 8 - 1)
            val -= complement
        }
        return val
    }

    static func parseObjectIdentifier(_ data: Data, startIdx: Int = 0, len: Int? = nil) -> [Int] {
        let dataLen = len ?? data.count
        var idx = startIdx
        var identifier: [Int] = []
        while idx < startIdx + dataLen {
            // Each subidentifier is encoded base-128, with the high bit marking continuation
            var subidentifier = 0
            var val = data[idx]
            idx += 1
            while (val & 0x80) != 0 {
                subidentifier <<= 7
                subidentifier |= Int(val & 0x7F)
                val = data[idx]
                idx += 1
            }
            subidentifier <<= 7
            subidentifier |= Int(val & 0x7F)
            identifier.append(subidentifier)
        }
        return identifier
    }
}
class IapReceipt {
    var quantity: Int?
    var product_id: String?
    var transaction_id: String?
    var original_transaction_id: String?
    var purchase_date: Date?
    var original_purchase_date: Date?
    var expires_date: Date?
    var is_in_intro_offer_period: Int?
    var cancellation_date: Date?
    var web_order_line_item_id: Int?
}

class AppReceipt {
    var bundle_id : String?
    var application_version : String?
    var receipt_creation_date: Date?
    var expiration_date: Date?
    var original_application_version : String?
    var iaps: [IapReceipt] = []
}
class ReceiptInsecureChecker {
    // Unwrap the PKCS#7 container and return the signed payload
    // (without verifying the signature, hence "insecure")
    func parsePkcs7ReceiptForPayload(_ data: Data) -> Data? {
        // Root is a sequence (tag 16 is sequence)
        let root = Asn1Parser.parseSequence(data)
        guard root.count == 1 && root[0].tag.tag == 16 else { return nil }
        // Inside the sequence is some signed data (tag 6 is object identifier)
        let rootSeq = Asn1Parser.parseSequence(root[0].data)
        guard rootSeq.count == 2 && rootSeq[0].tag.tag == 6 && Asn1Parser.parseObjectIdentifier(rootSeq[0].data) == [42, 840, 113549, 1, 7, 2] else { return nil }
        // Signed data contains a sequence
        let signedData = Asn1Parser.parseSequence(rootSeq[1].data)
        guard signedData.count == 1 && signedData[0].tag.tag == 16 else { return nil }
        // The third field of the signed data sequence is the actual data
        let signedDataSeq = Asn1Parser.parseSequence(signedData[0].data)
        guard signedDataSeq.count > 3 && signedDataSeq[2].tag.tag == 16 else { return nil }
        // The content data should be tagged correctly
        let contentData = Asn1Parser.parseSequence(signedDataSeq[2].data)
        guard contentData.count == 2 && contentData[0].tag.tag == 6 && Asn1Parser.parseObjectIdentifier(contentData[0].data) == [42, 840, 113549, 1, 7, 1] else { return nil }
        // Payload should just be some bytes (tag 4 is octet string)
        let payload = Asn1Parser.parse(contentData[1].data)
        guard payload.tag.tag == 4 else { return nil }
        return payload.data
    }

    // The payload is a SET of attributes; each attribute is a SEQUENCE of
    // (type, version, value), where value is an octet string holding more ASN.1
    func parseReceiptAttributes(_ data: Data) -> AppReceipt? {
        let appReceipt = AppReceipt()
        // Root is a set (tag 17 is a set)
        let root = Asn1Parser.parse(data)
        guard root.tag.tag == 17 else { return nil }
        // Read set entries
        let receiptAttributes = Asn1Parser.parseSequence(root.data)
        // Parse each attribute
        for attr in receiptAttributes {
            if attr.tag.tag != 16 { continue }
            let attrEntries = Asn1Parser.parseSequence(attr.data)
            guard attrEntries.count == 3 && attrEntries[0].tag.tag == 2 && attrEntries[1].tag.tag == 2 && attrEntries[2].tag.tag == 4 else { return nil }
            let type = Asn1Parser.parseInteger(attrEntries[0].data)
            let version = Asn1Parser.parseInteger(attrEntries[1].data)
            let value = attrEntries[2].data
            switch type {
            case 2:
                let valEntry = Asn1Parser.parse(value)
                // tag 12 = utf8 string
                guard valEntry.tag.tag == 12 else { break }
                appReceipt.bundle_id = String(bytes: valEntry.data, encoding: .utf8)
            case 3:
                let valEntry = Asn1Parser.parse(value)
                guard valEntry.tag.tag == 12 else { break }
                appReceipt.application_version = String(bytes: valEntry.data, encoding: .utf8)
            case 12:
                let valEntry = Asn1Parser.parse(value)
                // tag 22 = IA5 string
                guard valEntry.tag.tag == 22 else { return nil }
                appReceipt.receipt_creation_date = parseRfc3339Date(String(bytes: valEntry.data, encoding: .utf8) ?? "")
            case 17:
                if let iap = parseIapAttributes(value) {
                    appReceipt.iaps.append(iap)
                }
            case 19:
                let valEntry = Asn1Parser.parse(value)
                guard valEntry.tag.tag == 12 else { break }
                appReceipt.original_application_version = String(bytes: valEntry.data, encoding: .utf8)
            case 21:
                let valEntry = Asn1Parser.parse(value)
                guard valEntry.tag.tag == 22 else { return nil }
                appReceipt.expiration_date = parseRfc3339Date(String(bytes: valEntry.data, encoding: .utf8) ?? "")
            default:
                break
            }
        }
        return appReceipt
    }

    func parseIapAttributes(_ data: Data) -> IapReceipt? {
        let iap = IapReceipt()
        // Root is a set (tag 17 is a set)
        let root = Asn1Parser.parse(data)
        guard root.tag.tag == 17 else { return nil }
        // Read set entries
        let receiptAttributes = Asn1Parser.parseSequence(root.data)
        // Parse each attribute
        for attr in receiptAttributes {
            if attr.tag.tag != 16 { continue }
            let attrEntries = Asn1Parser.parseSequence(attr.data)
            guard attrEntries.count == 3 && attrEntries[0].tag.tag == 2 && attrEntries[1].tag.tag == 2 && attrEntries[2].tag.tag == 4 else { return nil }
            let type = Asn1Parser.parseInteger(attrEntries[0].data)
            let version = Asn1Parser.parseInteger(attrEntries[1].data)
            let value = attrEntries[2].data
            switch type {
            case 1701:
                let valEntry = Asn1Parser.parse(value)
                guard valEntry.tag.tag == 2 else { return nil }
                iap.quantity = Asn1Parser.parseInteger(valEntry.data)
            case 1702:
                let valEntry = Asn1Parser.parse(value)
                guard valEntry.tag.tag == 12 else { return nil }
                iap.product_id = String(bytes: valEntry.data, encoding: .utf8)
            case 1703:
                let valEntry = Asn1Parser.parse(value)
                guard valEntry.tag.tag == 12 else { return nil }
                iap.transaction_id = String(bytes: valEntry.data, encoding: .utf8)
            case 1704:
                let valEntry = Asn1Parser.parse(value)
                guard valEntry.tag.tag == 22 else { return nil }
                iap.purchase_date = parseRfc3339Date(String(bytes: valEntry.data, encoding: .utf8) ?? "")
            case 1706:
                let valEntry = Asn1Parser.parse(value)
                guard valEntry.tag.tag == 22 else { return nil }
                iap.original_purchase_date = parseRfc3339Date(String(bytes: valEntry.data, encoding: .utf8) ?? "")
            case 1708:
                let valEntry = Asn1Parser.parse(value)
                guard valEntry.tag.tag == 22 else { return nil }
                iap.expires_date = parseRfc3339Date(String(bytes: valEntry.data, encoding: .utf8) ?? "")
            case 1719:
                let valEntry = Asn1Parser.parse(value)
                guard valEntry.tag.tag == 2 else { return nil }
                iap.is_in_intro_offer_period = Asn1Parser.parseInteger(valEntry.data)
            case 1712:
                let valEntry = Asn1Parser.parse(value)
                guard valEntry.tag.tag == 22 else { return nil }
                iap.cancellation_date = parseRfc3339Date(String(bytes: valEntry.data, encoding: .utf8) ?? "")
            case 1711:
                let valEntry = Asn1Parser.parse(value)
                guard valEntry.tag.tag == 2 else { return nil }
                iap.web_order_line_item_id = Asn1Parser.parseInteger(valEntry.data)
            default:
                break
            }
        }
        return iap
    }

    func parseRfc3339Date(_ str: String) -> Date? {
        let posixLocale = Locale(identifier: "en_US_POSIX")
        let formatter1 = DateFormatter()
        formatter1.locale = posixLocale
        // XXXXX matches both a literal "Z" and "+hh:mm" style offsets
        formatter1.dateFormat = "yyyy'-'MM'-'dd'T'HH':'mm':'ssXXXXX"
        formatter1.timeZone = TimeZone(secondsFromGMT: 0)
        let result = formatter1.date(from: str)
        if result != nil {
            return result
        }
        // Fall back to a format with fractional seconds
        let formatter2 = DateFormatter()
        formatter2.locale = posixLocale
        formatter2.dateFormat = "yyyy'-'MM'-'dd'T'HH':'mm':'ss.SSSSSSXXXXX"
        formatter2.timeZone = TimeZone(secondsFromGMT: 0)
        return formatter2.date(from: str)
    }
}
</pre><br />
<div>To use the code, you would write something like this:<br />
<br />
</div><pre>let data = Data(base64Encoded: "... BASE 64 DATA ...")
let receiptChecker = ReceiptInsecureChecker()
let payload = receiptChecker.parsePkcs7ReceiptForPayload(data!)
let appReceipt = receiptChecker.parseReceiptAttributes(payload!)
print(appReceipt!.iaps)
</pre><br />
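To make the BER byte layout more concrete, here is a small standalone sketch (with a hypothetical helper name, separate from the Asn1Parser above) that decodes the tag byte and length of a single entry; the input 02 01 05 is the standard encoding of INTEGER 5:

```swift
import Foundation

// Hypothetical helper for illustration: decode the first tag byte and a
// short/long-form length from raw BER bytes, mirroring what Asn1Parser.parse
// does for the common (non-high-tag-number) case
func decodeTagAndLength(_ bytes: [UInt8]) -> (tagClass: Int, constructed: Bool, tag: Int, length: Int) {
    let first = bytes[0]
    let tagClass = Int((first >> 6) & 3)    // bits 7-6: class (0 = universal)
    let constructed = (first & 0x20) != 0   // bit 5: constructed flag
    let tag = Int(first & 0x1F)             // bits 4-0: tag number (short form)
    var idx = 1
    var length = Int(bytes[idx])
    if length & 0x80 != 0 {
        // Long form: the low 7 bits give the number of length octets that follow
        let numOctets = length & 0x7F
        length = 0
        for _ in 0..<numOctets {
            idx += 1
            length = (length << 8) | Int(bytes[idx])
        }
    }
    return (tagClass, constructed, tag, length)
}

// INTEGER 5 encodes as 02 01 05: universal class, primitive, tag 2, length 1
let (c, con, t, l) = decodeTagAndLength([0x02, 0x01, 0x05])
print(c, con, t, l)   // 0 false 2 1
```

Long-form lengths work the same way as in the parser above: a first length byte of 0x82 means "the next 2 bytes hold the length", so a SEQUENCE header of 30 82 01 00 declares 256 content bytes.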
<div>Note: I'm not a Swift coder. I only started learning Swift about a month ago, so I apologize if the code is not very Swift-y.</div><br />
Minghttp://www.blogger.com/profile/01458103015154082202noreply@blogger.com0tag:blogger.com,1999:blog-9350640.post-86796078763530742832019-07-31T23:22:00.000-04:002019-07-31T23:24:39.968-04:00CorelDraw Graphics Suite 2019 ReviewSince I'm originally from Ottawa, I've always used CorelDRAW for vector graphics. This actually works out well. Since I'm not an artist or designer, I rarely need to do any vector graphics work, so CorelDRAW has worked for me because it comes with a lot of functionality, I could make a one-time purchase of a perpetual license to the software, and I was occasionally able to get good deals when buying it.<br />
<br />
I previously used CorelDraw X5, and it did what I needed it to do, but the menus didn't work quite right on Windows 10. I was looking to upgrade if the upgrade price ever dropped to around $100-$150, but it never dropped that low, so I just kept using my old version. Especially since I now make my own vector graphics package, I rarely needed CorelDraw except for the occasional obscure feature. Unfortunately, Corel declared that 2019 would be the last year they would offer upgrade pricing on CorelDraw, so I decided to pick up a copy of CorelDraw 2019 since it would be my last chance to get an upgrade.<br />
<br />
I have to say that I feel a little disappointed with CorelDraw 2019. CorelDraw has always been a buggy piece of software, but usually it's the new features that are buggy; if you stick with the core vector graphics stuff, then it works fine. Usually, the new features would be so buggy as to be unusable, but Corel wouldn't bother fixing them until a later version, so you would just have to pay for an upgrade to fix those bugs and get working versions of the new features. Unfortunately, it seems like they rewrote the core user interface code in this version, so now the core vector graphics functionality is buggy. I suspect it might be related to the fact that they've rewritten things to work on the Mac (previously, CorelDraw was Windows only). This is annoying because CorelDraw 2019 is too buggy for basic vector graphics work, but it likely won't be fixed unless I buy an upgrade to a later version, and Corel isn't going to be selling upgrades any more. I'm sorely tempted to keep using CorelDraw X5. The bugs are just annoying little things, like the screen blanking out if you scroll the window using the scrollbar, requiring you to press ctrl-W to manually refresh the screen. Groups also no longer snap properly to grids. If you try to move a group, CorelDraw will choose one of the objects in the group (I think it's the top one?) and snap that to the grid instead of aligning the group as a whole. This makes grids sort of useless to me. CorelDraw also doesn't let you snap to grids and snap to objects at the same time. It gets confused, tries snapping to objects, and completely ignores any possible grid snapping. If basic functionality like scrolling and snap to grid doesn't work, then how is anyone supposed to get any productive vector graphics work done with CorelDraw?<br />
<br />
On top of that, CorelDraw feels slow and sluggish. To be fair, CorelDraw has always felt slow and sluggish, but if you keep using an old version, then after a few years, your computer gets fast enough that it feels snappy and usable. Still, I was hoping that Corel would have left well enough alone and stopped meddling with the old code so that it would stay fast. That's not the case. It feels sluggish. After all these years, Corel still has not learned that responsiveness is one of those magic unspoken features that make a graphics package feel good to use. Even though Corel Photo-Paint has many more features than my old copy of Photoshop Elements, I still use Photoshop Elements as my primary paint program because it's just so much faster and more responsive. CorelDraw 2019 also just stops and hangs for a couple of seconds sometimes. I think it might be that the saving code is now very slow for some reason. Since CorelDraw autosaves fairly often (due to its buggy nature), I think CorelDraw will just occasionally become unresponsive as its incredibly slow autosave happens.<br />
<br />
In the end, I feel like I've wasted my money. I bought CorelDraw 2019 because it was the last upgrade version they would offer. But CorelDraw 2019 is really buggy and not very usable. These bugs likely won't be fixed until a later version of CorelDraw, for which there won't be any upgrade pricing available. Every time I use CorelDraw 2019, I keep wanting to go back to using my old version of CorelDraw X5 instead, which I sometimes do. I think the verdict is that if you are in a rush to upgrade CorelDraw because it's the last upgrade version available, DON'T get CorelDraw 2019 because it's too slow and buggy. If you can find an upgrade to an older version of CorelDraw, that might actually be a better buy. Otherwise, just stick to your old version.Minghttp://www.blogger.com/profile/01458103015154082202noreply@blogger.com0tag:blogger.com,1999:blog-9350640.post-28623563009352685232018-06-18T05:07:00.000-04:002018-06-18T05:08:03.770-04:00Ranking of Racism against Asian Americans at Ivy League SchoolsIt has long been assumed that the Ivy League schools are racist against Asians, but it's been hard to understand the extent of the racism. There are some universities that try to run purely meritocratic admissions systems involving fewer subjective evaluations. For example, Caltech has an Asian enrollment of 43%, but it's not clear whether that's comparable to other schools because of the heavy engineering focus of the school. Berkeley is a more well-rounded university and it has an Asian enrollment of 41%, but it's a public university in a state with a high Asian population, so it's not clear if it's comparable to universities in other parts of the country.<br />
<br />
Fortunately, <a href="https://slate.com/news-and-politics/2018/06/harvard-admissions-lawsuit-alleges-bias-against-asian-american-applicants.html">Harvard ran the numbers back in 2013</a>, and they found that if they ranked students solely by academic qualifications, then Asians would make up around 40% of the admissions. Even if Harvard continued to set aside spots for athletes and undeserving rich people, Asians would make up 31%. If extra-curriculars and other subjective measures were included as well, then Asians should still make up around 26% of the student population (even though in 2013, only 19% of the admitted class was Asian). I believe that the Asian population has only grown since 2013.<br />
<br />
These numbers agree with the Berkeley and Caltech numbers, so I feel it's safe to use these numbers to do back-of-the-envelope calculations for how racist against Asians each university is. The Harvard numbers should be comparable with other prestigious, well-rounded, private universities that attract students from across the country. So it should be safe to compare the numbers to other Ivy League universities.<br />
<br />
So I visited the websites of the Ivy League universities, and grabbed their reported diversity statistics on Asian admissions. The numbers are hard to compare because different universities categorized their students differently. If universities had a separate category for unknown and/or foreign students, then I left them out of the total. If there was a category for multi-racial, I did not include that number in the count of percentage Asians. As a result, the numbers are very noisy, but I think they still give a basis for comparing universities. I think that universities with Asian enrollment in the high 20s or low 30s are demonstrating low amounts of racism against Asians.<br />
<br />
So here are the results:<br />
<br />
1. Brown (18.16%) - most racist<br />
2. Yale (21%)<br />
3. Dartmouth (21.74%)<br />
4. Harvard (22.2%) - probably worse than Dartmouth, but the numbers are hard to compare<br />
5. Cornell (23.26%)<br />
6. UPenn (23.56%)<br />
7. Princeton (25.29%)<br />
8. Columbia (29%) - least racist<br />
<br />
And here's Stanford's numbers even though they aren't an Ivy League university:<br />
<br />
Stanford (26.44% Asian)<br />
<br />
Initially, the numbers seem to suggest that Brown is the most racist against Asians of the universities. They also have the lowest African American enrollment of any of the <i>Ivy League</i> universities, so they just fail on diversity in general. They do have a large number of students classified as multi-racial, which makes things unclear, but things still look bad after removing them from the totals. It's possible that Yale or Harvard are, in fact, the worst universities because they don't have a "multi-racial" category and their percentage of Asian enrollment is pretty low.<br />
<br />
There's a bunch of universities in the middle that seem to be sort of racist. Princeton seems to be the best of the middle.<br />
<br />
The least racist, by far, seems to be Columbia, which achieves Asian enrollment in the high 20s, which is the safe zone. They also have strong African American enrollment. It demonstrates that it is possible to have diverse minority enrollment without unduly punishing Asians.<br />
<br />
So what's going on? These universities (other than Columbia) are using the implicit bias effect to willfully keep down Asian enrollment. These universities intentionally add subjective measures into student evaluations that are known to be subject to bias. They then hire admissions officers of <a href="https://www.thecrimson.com/article/2007/4/26/mit-admissions-dean-resigns-after-fake/">dubious qualifications</a> who don't understand the Asian experience or don't like Asians in general, who implicitly prefer applicants who are more like themselves, and who are told to find applicants who match an Ivy League "culture" or "character profile" that runs contrary to Asian stereotypes and biases. As a result, they end up giving lower subjective scores to Asians. Is the applicant who plays badminton more or less "brave" than the applicant who plays football? Is the atheist applicant more or less kind than the church-going applicant? There is no way of knowing those things, but people will inevitably form an opinion based on their implicit biases. For example, Harvard gave lower <a href="https://www.theguardian.com/education/2018/jun/15/harvard-sued-discrimination-against-asian-americans">"personality" ratings to Asian applicants</a> in general. I can imagine that, yes, some universities might prefer people with certain personalities over others. But that mix of personalities should be evenly distributed among all people. If one race consistently scores poorly on the personality rating versus all other races, then there's some implicit racism going on there that needs to be fixed. Strangely, no personality deficiencies were found <a href="https://www.thecrimson.com/article/2018/6/15/admissions-internal-report/">during alumni interviews</a>. The bias only appeared during the ranking of personal qualities by the Admissions Office.<br />
<br />
Here are some more in-depth articles if you want a deeper dive into the issues involved: <a href="https://www.chronicle.com/article/In-Court-Battle-Over-Harvard/243689">1</a>, <a href="http://www.theamericanconservative.com/articles/the-myth-of-american-meritocracy/">2</a>, <a href="https://harvardlawreview.org/2017/12/the-harvard-plan-that-failed-asian-americans/">3</a>.Minghttp://www.blogger.com/profile/01458103015154082202noreply@blogger.com0tag:blogger.com,1999:blog-9350640.post-72713286693970171692018-06-12T13:02:00.001-04:002018-06-15T23:19:15.931-04:00Nan Native Module Asynchronous Callbacks in Electron with GWTThis problem has been causing me frustration for weeks, and I think I've finally figured out what was wrong.<br />
<br />
I have a GWT application that I'm running as a desktop application using Electron. To access some Windows services, I wrote a <a href="https://electronjs.org/docs/tutorial/using-native-node-modules">native module</a> in C++ that my JavaScript code can call into to invoke some Windows functions. Some of the newest Windows APIs are asynchronous and long-running, so I made use of Nan's <a href="https://github.com/nodejs/nan/blob/master/doc/asyncworker.md">AsyncWorker</a> framework for running C++ code in another thread and then calling a callback function in JavaScript with the result afterwards.<br />
<br />
But the code would always crash. If I executed the commands from the Electron/Chrome debugger console, it would run fine. But if I ran the same instructions in my compiled GWT application, the application would crash when the callback function was invoked from C++.<br />
<br />
I spent weeks looking over the code and trying different variations, tearing my hair out, and I could never figure it out. Native modules (much like everything else in node.js and electron) are underdocumented, but my code looked the same as the examples, and I couldn't find any reports of other people having problems. Maybe I was compiling things incorrectly? Was my build set-up wrong? Maybe mixing in winrt and managed code was causing problems? But I think I've finally figured things out.<br />
<br />
The problem is that the GWT code runs in an iframe, so the callback functions are defined in the iframe, and somehow, this leads to a crash when the C++ code tries to call these callback functions. To solve this problem, I've created a separate JavaScript shim that creates the callback functions in the context of the main web page. My GWT code can call into the main web page to create the callback functions and to pass them to the native module. Then the native module can safely call back into JavaScript from the AsyncWorker without any crashes.<br />
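The shim itself can be very small. Here's a rough sketch of the idea; the names (makeNativeCallback, nativeModule, startAsyncWork) are made up for illustration and aren't from any real API:

```javascript
// main-page-shim.js -- loaded by the top-level page, NOT by the GWT iframe.
// All names here (makeNativeCallback, etc.) are hypothetical.
function makeNativeCallback(onResult, onError) {
  // This closure is created in the main page's JavaScript context, so the
  // native AsyncWorker can invoke it later without touching any function
  // objects that belong to the iframe.
  return function (err, result) {
    if (err) onError(err);
    else onResult(result);
  };
}

// The GWT code in the iframe would then do something like:
//   var cb = window.parent.makeNativeCallback(handleResult, handleError);
//   window.parent.nativeModule.startAsyncWork(cb);
```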
<br />
Side Note: When running Electron in Windows, it seems that the Windows message queue is managed from the main process. So if you have a Windows API that needs to be called from the UI thread, it should probably be called from the main process, not the renderer process.Minghttp://www.blogger.com/profile/01458103015154082202noreply@blogger.com0tag:blogger.com,1999:blog-9350640.post-23343761019455273072018-05-01T02:46:00.001-04:002018-05-01T03:11:04.989-04:00ES6 Modules: Limp and OvercookedI've been eagerly awaiting a module system for JavaScript for many years. Although plans for a standardized module system have been floating around even for ECMAScript 4, it's only become standardized and available during the last year or so. Usually, modules are a pretty intuitive language concept. You briefly look at a couple of examples, and then you dive in and start using it, and everything just works. For some reason though, when I tried using ES6 modules in a project, my mind absolutely refused to accept ES6 modules. I literally spent hours staring at these lines of code, and my brain couldn't do it:<br />
<blockquote>
<pre>import foo from './library.js';
import {foo} from './library.js';</pre>
</blockquote>
Both lines of code are valid ES6 Modules code. Only one line is correct though, and it depends on how you've set up your modules. The difference is so confusing and the error messages are so cryptic that I just couldn't get my feeble brain to understand it.<br />
<br />
Apparently, the JavaScript module system spent so long in the standardization oven that it has become overcooked and ruined. It's limp and dry and completely unappetizing. ES6 Modules are actually two completely different module systems that have been thrown together into JavaScript with no attempt to unify them at all. What's worse is that the two module systems use very similar syntax, and it's very easy to get things mixed up. Some misplaced squigglies result in you using the wrong module system, one that's incompatible with the library you want because the library was built with the other system. What's doubly-worse is that one of the module systems is already deprecated, and the preferred module system has the more complicated syntax. If one system is preferred, then why does the other one exist? If the two module systems are different, why couldn't they have two completely different syntaxes for them?<br />
<br />
What's weird is that they could have unified it. There would have been a lot of weird corner cases, but they could have made a consistent syntax. When I see the two lines above, I think of destructuring assignment.<br />
<blockquote>
<pre>let pair = getFullName();
let [firstName, lastName] = getFullName();
let point = getPoint();
let {x, y} = getPoint();</pre>
</blockquote>
It's a bit unusual, but it's consistent. My mind could accept that.<br />
<blockquote class="tr_bq">
<tt>import foo from './library.js';</tt></blockquote>
could be for importing everything in the library into an object named foo<br />
<blockquote class="tr_bq">
<tt>import {foo} from './library.js';</tt></blockquote>
could be for importing foo from a library.<br />
<br />
But, no, that's not how it works. Instead, the first line is for doing imports from modules built using the default module method, and the second line is for doing imports from modules built using the namespace module method. Oh, you can also build your libraries so that they are compatible with both types of modules, but since the two module systems are completely distinct, you can design your libraries to export completely different things depending on whether they are imported using the default module method or the namespace module method.<br />
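To make the two systems concrete, here is a minimal two-file sketch (file names are illustrative) showing both export styles and the import line that matches each one:

```javascript
// library.js -- a module can define named exports, a default export, or both:
export const foo = 'named export';    // matched by: import {foo} from './library.js';
export default 'default export';      // matched by: import foo from './library.js';

// main.js
import def from './library.js';       // def === 'default export'
import {foo} from './library.js';     // foo === 'named export'
import * as ns from './library.js';   // ns.foo === 'named export';
                                      // the default export hides under ns.default
```

Note that the name on a default import is arbitrary; only the presence or absence of the squigglies decides which of the two systems you're using.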
<br />
After several hours of my mind rejecting the ES6 approach to modules, I think I've finally gotten it accepted. I explained to my mind that the ES6 module system is complete garbage, but that's all that there is to eat, and it better not barf it all out like last time. It's not happy about it though.Minghttp://www.blogger.com/profile/01458103015154082202noreply@blogger.com0tag:blogger.com,1999:blog-9350640.post-81593059070307366552018-03-10T02:47:00.002-05:002020-09-22T14:02:19.176-04:00Copying Hard Disks with Bootable Windows GPT Partitions Using Linux<i>This is just a placeholder blog post. I keep intending to do a proper blog post on this topic, but I never get around to it. Unfortunately, I always forget the steps I need to do when I'm in the middle of copying my hard disks and can't consult my notes, so I'm going to just put some placeholder notes here and flesh them out properly later.</i><br />
<br />
<b>Note</b>: Copying hard disks is tricky. I disclaim all responsibility if you use these steps and you lose data or your BIOS becomes corrupted or whatever. This blog post mainly serves as notes to myself.<br />
<br />
<b>About GPT Partitions</b><br />
I still don't fully understand how a modern UEFI system boots from GPT partitions. I think with GPT and UEFI, your hard disk contains multiple partitions. One of them is a FAT partition that's special because it contains boot loader programs with the instructions needed to load an OS from one of the other partitions. That partition is called the EFI System Partition. The UEFI BIOS of a computer will start Windows in one of two ways:
<ol>
<li>Usually, the BIOS stores the specific UUID label of the boot EFI System Partition and the name of the boot loader program on that partition to load. The BIOS can then quickly load the boot loader and then continue on to load the OS.</li>
<li>When you first install Windows, the BIOS doesn't have that information yet, so the BIOS is able to find the EFI System Partition on the hard disk itself, find the default Windows boot loader on that partition, and then run that to start Windows.</li>
</ol>
<div>
<b>Why It's Tricky Copying Windows</b></div>
<div>
Copying Windows GPT partitions is hard because</div>
<div>
<ul>
<li>Windows makes it hard to copy Windows partitions</li>
<li>the bootloader program on the FAT partition has to be changed to load things from the new partition</li>
<li>the BIOS has to be changed to know about the new bootloader</li>
</ul>
<div>
<b>Steps for Doing the Copy</b></div>
</div>
<div>
By default, Windows is configured with some settings that let it keep the file system in an inconsistent state on shutdown (in order to have faster shutdowns and bootups), which makes shutdown a bad time to copy the disk. I tried various methods for disabling that, but the only reliable approach seems to be to turn off hibernation entirely. You need to start a command prompt in Administrator mode. Then run</div>
<div>
<br /></div>
<div>
powercfg -h off</div>
<div>
<br /></div>
<div>
If you're copying to a smaller hard disk, you might want to use Windows Disk Management (right-click "This PC", choose "Manage", then choose "Drive Management" under the "Storage" category on the left) to shrink your partitions in advance, but that rarely works, and Linux can shrink your partitions anyway.</div>
<div>
<br /></div>
<div>
Also make sure that you have a bootable version of a Windows rescue CD or the <a href="https://www.microsoft.com/en-ca/software-download/windows10">Windows installation media</a>.</div>
<div>
<br /></div>
<div>
Now, you can start up Linux to start copying your hard disk. I always use the <a href="https://gparted.org/livecd.php">GParted LiveCD</a> to do this. GParted can be slow to start up because it scans through all your hard disks very slowly to find all the partitions. I think it might also do some really slow thing with Windows partitions as well. That scanning step gets slower the bigger your hard disk is too. But I've found it to be pretty reliable. Use this to copy your partitions to the new hard disk.</div>
<div>
<br /></div>
<div>
<b>msftres Partition</b></div>
<div>
You may find that your hard disk has a msftres (Microsoft reserved) partition. GParted is unable to copy this partition. It is not necessary to copy the partition. It is just a partition that Microsoft reserved so that if you ever install Windows Bitlocker disk encryption, Microsoft can store the decryption code there. If you do want to keep this msftres partition, you can manually create one. First, copy all the partitions before the msftres partition. Then boot into the Windows installation CD (you did remember to <a href="https://www.microsoft.com/en-ca/software-download/windows10">make one</a>, right?). Go into the Advanced Repair settings and get to the command prompt. Then use the "diskpart" program to create an msftres partition. I forget the exact steps. You can type stuff like "help," "help select disk," "help list partition," "help create partition msr" or something to get the exact commands you need. But you need to select the new hard disk, then use something like "create partition msr size=128" to create a 128MB msftres partition. Then, you can reboot into GParted to copy the rest of the partitions.</div><div> </div><div>Update 2020-9-22:</div><div style="margin-left: 40px; text-align: left;">> list disk</div><div style="margin-left: 40px; text-align: left;">> select disk <i>n</i></div><div style="margin-left: 40px; text-align: left;">> list partition</div><div style="margin-left: 40px; text-align: left;">> create partition msr size=128 <br /></div>
<div>
<br /></div>
<div>
<b>GParted Fix-ups</b></div>
<div>
GParted will make the copied partitions have the exact same IDs as the old partitions. This can be a problem if you keep both the new hard disk and the old hard disk in the same computer (e.g. when moving from a hard disk to an SSD, you might want to keep the old hard disk around in the same computer as a backup). It's not clear which boot partition the BIOS will use when starting up. And when Windows is loaded, the wrong one can potentially start up. And even if the right one starts up, the wrong partition might end up being mapped to the C drive. To be safe, if you intend on keeping both hard disks in your system, you should go in and change the UUIDs of all the partitions on either the new drive or the old drive. I'm not 100% sure about Windows recovery partitions. To have the recovery work properly on the new hard disk, it might be better to have it keep the same UUID, so then generating new UUIDs for the old hard disk may be better. But I could never get that recovery partition to work properly anyway, and I would rather have at least one proper working copy of my hard disk in case the copy goes bad, so maybe it's better to generate new UUIDs for the new hard disk.</div>
<div>
<br /></div>
<div>
GParted also often doesn't copy the partition labels and flags correctly. You can go in and set those manually so that they're the same on both disks. I've forgotten to do this before, and everything still seemed to work, so this might not be necessary.</div>
<div>
<br /></div>
<div>
<b>Update UEFI BIOS with Location of Bootloader</b></div>
<div>
Now that Windows is copied, you need to update the BIOS with which bootloader to use on start-up. If you didn't change the UUIDs of the newly copied partitions, then this might not be necessary since the existing BIOS entry for the location of the old bootloader should still work. You might need to swap hard drive cables to ensure that the new hard drive takes over the old drive number from the old hard drive. Or maybe not? </div>
<div>
<br /></div>
<div>
If you did change the UUID of the boot partition, then this is definitely necessary. Open a Linux terminal. Become root by using "sudo bash". Then use the "efibootmgr" program to create the necessary entries. </div><div> </div><div>Update 2020-9-22:</div><div style="margin-left: 40px; text-align: left;">> efibootmgr or efibootmgr -v </div><div style="margin-left: 80px; text-align: left;">to list boot entries</div><div style="margin-left: 40px; text-align: left;">> efibootmgr -o <i>i,j,k</i></div><div style="margin-left: 80px; text-align: left;">to change boot order e.g. efibootmgr -o 2,1,3<br /></div><div style="margin-left: 40px; text-align: left;">(check man pages for other commands)<br /></div>
<div>
<br /></div>
<div>
I can never figure out how to create new BIOS EFI entries using efibootmgr, so I sometimes try to use some other approach to create these new entries. Sometimes, if you start up your system with only your new hard disk, the BIOS won't find the default bootloader, so it will default to searching for the Windows bootloader itself, and it will then add an entry for it in the BIOS itself. Or you can try starting a Windows Recovery CD or installation DVD, going to the command-line repair tools, and trying to use "<a href="https://docs.microsoft.com/en-us/windows-hardware/manufacture/desktop/bcdedit-command-line-options">bcdedit</a>", "<a href="https://docs.microsoft.com/en-us/windows-hardware/manufacture/desktop/bcdboot-command-line-options-techref-di">bcdboot</a>", or "<a href="https://support.microsoft.com/en-ca/help/927392/use-bootrec-exe-in-the-windows-re-to-troubleshoot-startup-issues">bootrec /rebuildbcd</a>" to do this. I'm not sure what these programs do, but I think one of them will create a new EFI BIOS entry for the bootloader partition.</div>
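For future reference, creating an entry directly with efibootmgr looks something like the following; the disk and partition number below are examples only, so check your own layout first (e.g. with lsblk), and note I haven't verified this against every BIOS:

```shell
# List the current boot entries and the boot order.
efibootmgr -v

# Create a new entry: -d is the disk holding the EFI System Partition,
# -p is that partition's number on the disk (both are examples here),
# -L is the label, and -l is the path of the boot loader on the partition.
efibootmgr -c -d /dev/sda -p 1 \
    -L "Windows Boot Manager" \
    -l '\EFI\Microsoft\Boot\bootmgfw.efi'
```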
<div>
<br /></div>
<div>
Once you have an EFI entry in the BIOS for the bootloader, you can go back to using "efibootmgr" to rearrange the order of your boot entries so that it comes first, which is a lot easier than creating a new entry from scratch.</div>
<div>
<br /></div>
<div>
<b>Update Bootloader with New Location of Windows Partition</b></div>
<div>
Now the BIOS can find the bootloader to start loading Windows, but the bootloader may not point to the correct partition to actually start the OS. Again, you can try starting a Windows Recovery CD or installation DVD, going to the command-line repair tools, and try to use "<a href="https://docs.microsoft.com/en-us/windows-hardware/manufacture/desktop/bcdedit-command-line-options">bcdedit</a>", "<a href="https://docs.microsoft.com/en-us/windows-hardware/manufacture/desktop/bcdboot-command-line-options-techref-di">bcdboot</a>", or "<a href="https://support.microsoft.com/en-ca/help/927392/use-bootrec-exe-in-the-windows-re-to-troubleshoot-startup-issues">bootrec /rebuildbcd</a>" to do this. Again, I'm not sure what these programs do. I don't actually know what the BCD is, and the Microsoft documentation is very vague on that fact. I think it refers mostly to the configuration files for the bootloader on the EFI FAT system partition, but I don't know. Sometimes, everything has gone bad, and you need to use "bcdedit" to start a completely new BCD store. In any case, after randomly running some combination of those programs, Windows will somehow fix itself and become bootable.</div>
<div>
<br /></div>
<div>
<b>Checking Windows</b></div>
<div>
When you do manage to successfully boot into Windows again, go into Drive Management to make sure that the correct drive is listed as your boot drive and that it is the C drive. You might also need to manually remove drive letters from some of your recovery drives and other drives. Don't forget to reenable hibernation by going to the command prompt as an administrator and using</div>
<div>
<br /></div>
<div>
powercfg -h on</div>
<div>
<br /></div>
<div>
<b>Fixing Up Your Linux Bootloader</b></div>
<div>
I normally use Windows, but I keep a copy of Linux on my drives for occasional use. Normally, I just let Linux install a boot loader into the EFI system partition and add an entry to the BIOS. I change the boot order to normally boot to Windows, but I use the "boot from alternate drive" key on startup to show all the EFI boot entries, and then I manually choose the Linux one. </div>
<div>
<br /></div>
<div>
After copying a Linux partition to a new hard disk, you have to reinstall the grub bootloader on the EFI system partition to point to the new Linux partition. Reinstalling grub is mostly impossible, so I find it easier to simply reinstall Linux over the old version (I keep my Linux data on a separate /home partition, so that it's safe to do that without losing data).</div>
<div>
<br /></div>
Minghttp://www.blogger.com/profile/01458103015154082202noreply@blogger.com0tag:blogger.com,1999:blog-9350640.post-2148101280395174452018-03-04T17:47:00.000-05:002018-03-04T18:01:15.784-05:00Creating a SDF Texture for a Font at RuntimeI was recently trying to implement text for the graphics engine behind <a href="https://www.wobastic.com/omber/">Omber</a>. Unfortunately, although I had a complete vector graphics engine, I hadn't gotten around to implementing support for shapes with holes in them, so I couldn't just take the vector representation of each font glyph and render them directly. Instead, I ended up using the standard approach used in many 3d game engines, which is to use <a href="https://github.com/libgdx/libgdx/wiki/Distance-field-fonts">Signed Distance Field</a> fonts.<br />
<br />
Most people pre-generate their SDF textures, but that didn't really seem feasible to me. I wanted my code to let people use their own fonts in their drawings, so I couldn't precalculate SDF textures for those fonts. Also, international fonts might contain thousands of characters, and it would be too memory intensive to calculate textures for all of them in advance. So I went about trying to figure out how to generate my own SDF textures, and I learned some good lessons about how to do it.<br />
<br />
At first, I didn't really understand how SDF worked, so I tried using the dumb approach of drawing a character on a bitmap, and then manually trying to calculate the SDF values. This actually takes a bit of time to code up, is really slow to run, and gives results that are sort of poor. I didn't really understand that the shader really only cares about SDF values for the one or two pixels near the edge of a glyph, and the SDF values need to have subpixel accuracy. True, you might see <a href="https://youtu.be/X5eHU0VUMbs?t=1m41s">some demos of people adjusting SDF cut-off values to make variations of a font with different font weights</a>. But for normal situations, you only care about SDF values within a pixel or two of the edges of a glyph because when those values are linearly interpolated, you get a good approximation of the angle of the edge in that area. And you really do need subpixel accuracy or your linear interpolation will simply give you back your chunky pixels. To get that subpixel accuracy using a raster approach, you would have to draw your glyphs at a really big size and then scale the bitmaps down, but that's even slower and you lose a lot of accuracy.<br />
<br />
Instead, it turned out to be both faster and more accurate to generate the SDF directly from the vector representation. I already had a vector graphics engine, which made it easier, but you actually don't need much vector logic. Basically, you only need a few things. You need a way to extract the bezier curves of each glyph. I was working in JavaScript, so typr.js and opentype.js were available libraries for that. Then you need a Bezier subdivider to convert all the bezier curves to lines. Then you take that bag of lines and throw them into a Point-in-Polygon routine (that calculates whether you cross an even or odd number of lines to see if you're inside a polygon or not) to get the sign and a Distance-to-Line routine to get the distance, and you're done. Since you're working with floating-point values, you get very high precision with sub-pixel accuracy. And since you don't have to scan through lots of pixels to calculate distances, it turns out it's really fast. That actually makes sense because even old computers could render vector fonts at a reasonable speed, so it should be possible to calculate a low-resolution SDF quite quickly too.<br />
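The core computation can be sketched in a few lines. This assumes the bezier curves have already been flattened into line segments (the flat `[x1, y1, x2, y2]` segment format and the sign convention are just illustrative choices, not from any particular library):

```javascript
// Signed distance from one sample point to a glyph outline given as a bag of
// line segments. Positive means inside the glyph, negative means outside.
function signedDistance(px, py, segments) {
  let minDist = Infinity;
  let crossings = 0;
  for (const [x1, y1, x2, y2] of segments) {
    // Distance-to-Line: find the closest point on this segment.
    const dx = x2 - x1, dy = y2 - y1;
    const lenSq = dx * dx + dy * dy;
    let t = lenSq === 0 ? 0 : ((px - x1) * dx + (py - y1) * dy) / lenSq;
    t = Math.max(0, Math.min(1, t));
    const cx = x1 + t * dx, cy = y1 + t * dy;
    minDist = Math.min(minDist, Math.hypot(px - cx, py - cy));
    // Point-in-Polygon (even-odd rule): count segments crossed by a
    // horizontal ray extending to the right of the sample point.
    if ((y1 > py) !== (y2 > py)) {
      const xCross = x1 + ((py - y1) / (y2 - y1)) * dx;
      if (xCross > px) crossings++;
    }
  }
  return (crossings % 2 === 1) ? minDist : -minDist;
}

// Example: the unit square as four segments.
const square = [[0, 0, 1, 0], [1, 0, 1, 1], [1, 1, 0, 1], [0, 1, 0, 0]];
console.log(signedDistance(0.5, 0.5, square));  // 0.5 (inside, at the center)
console.log(signedDistance(-0.5, 0.5, square)); // -0.5 (outside)
```

In practice, you'd run this for every texel of the low-resolution SDF texture, and since the shader only cares about the band within a pixel or two of the edge, the distances can be clamped to that range.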
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEigxizJmW89LrWvV6Jka2p-YZ_FoVQW0pl4DapRuhFRX4aUDbPa7MHonpfG4Bm_M_0-hujp5he5Su_2tpjSIppLfzpywtm4g3cHoO1pJw52ynX0wTifBI49Od2XZlOVYlf2C8roog/s1600/sdf.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="201" data-original-width="539" height="119" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEigxizJmW89LrWvV6Jka2p-YZ_FoVQW0pl4DapRuhFRX4aUDbPa7MHonpfG4Bm_M_0-hujp5he5Su_2tpjSIppLfzpywtm4g3cHoO1pJw52ynX0wTifBI49Od2XZlOVYlf2C8roog/s320/sdf.jpg" width="320" /></a></div>
<br />
So, yeah. Calculate your SDF textures at runtime straight from the vector representation because it's not much code and it's faster.Minghttp://www.blogger.com/profile/01458103015154082202noreply@blogger.com0tag:blogger.com,1999:blog-9350640.post-68345737373220292022017-10-26T15:41:00.004-04:002017-10-26T15:42:17.626-04:00glTF 2.0: I Like It!Although I'm not a 3d graphics person, I have worked with several 3d file formats<a href="http://my2iu.blogspot.com/2004/11/understanding-x3d.html" style="vertical-align: super;">1</a><span style="vertical-align: super;">, </span><a href="http://my2iu.blogspot.com/2004/12/starting-with-x3d.html" style="vertical-align: super;">2</a><span style="vertical-align: super;">, </span><a href="http://my2iu.blogspot.com/2004/12/x3d-ambitions.html" style="vertical-align: super;">3</a><span style="vertical-align: super;">, </span><a href="http://my2iu.blogspot.com/2004/12/x3d-and-importance-of-high-quality.html" style="vertical-align: super;">4</a><span style="vertical-align: super;">, </span><a href="http://my2iu.blogspot.com/2005/02/u3d-vs-x3d.html" style="vertical-align: super;">5</a><span style="vertical-align: super;">, </span><a href="http://my2iu.blogspot.com/2005/02/extensibility-in-x3d.html" style="vertical-align: super;">6</a><span style="vertical-align: super;">, </span><a href="http://my2iu.blogspot.com/2005/04/maybe-x3d-isnt-such-bad-spec-after-all.html" style="vertical-align: super;">7</a><span style="vertical-align: super;">, </span><a href="http://my2iu.blogspot.com/2005/04/finding-u3d-files.html" style="vertical-align: super;">8</a><span style="vertical-align: super;">, </span><a href="http://my2iu.blogspot.com/2005/04/some-first-impressions-on-u3d-format.html" style="vertical-align: super;">9</a><span style="vertical-align: super;">, </span><a href="http://my2iu.blogspot.com/2005/04/u3d-is-half-baked.html" style="vertical-align: super;">10</a><span style="vertical-align: super;">, </span><a 
href="http://my2iu.blogspot.com/2005/06/fbx-file-format.html" style="vertical-align: super;">11</a>. In general, I've been very disappointed in the design of these file formats. But I've finally found a 3d file format that I've liked. <a href="https://www.khronos.org/gltf">glTF</a> <a href="https://github.com/KhronosGroup/glTF/blob/master/specification/2.0/README.md">2.0</a> is actually pretty <a href="https://godotengine.org/article/we-should-all-use-gltf-20-export-3d-assets-game-engines">nice</a>.<br />
<br />
It's a mostly straight-forward, easy to understand file format that's pretty unambiguous. It doesn't try to implement any fancy features like U3D. It doesn't contain weird legacy baggage like X3D or COLLADA. Its design isn't so overly configurable and flexible that, as with COLLADA or TIFF, it's impossible to know whether what you store in it can be read by other programs. It just holds a bunch of triangles and associated data structures. It seems like it was built from the ground up as a proper file format for interchange instead of growing out of some existing system with all sorts of strange behavior based on how the codebase for the original system evolved. It also has good extension points making it easy to store additional application-specific data in a file.<br />
<br />
I think part of the reason why it came out so well is that it was originally designed for one purpose only: for sending 3d models to be displayed by WebGL. With a well-defined and basic use case, the designers had the focus to make something straight-forward and easy to work with. With glTF 2.0, the file format has been extended to support more general use cases, but the core use case--holding 3d models--hasn't been diluted by that. Storing 3d models in glTF 2.0 is still clear and concise without a lot of confusion.<br />
<br />
I still have a few niggles with it that could be improved, though. Right now, the file format doesn't have widespread support yet, but adoption is starting to grow. Still, given that this is a file format specification, I feel like there should have been at least one proper reference importer/exporter before it was finalized. There are multiple implementations of the spec, which is good, but none of them is complete and comprehensive enough to allow proper bidirectional interfacing with a real 3d application, so it's hard to know whether the files I've created are correct or whether all the corners of the file format have been fully tested.<br />
<br />
Some parts of the specification don't really give proper explanations or context for why they are needed. For example, I still don't understand why <span style="font-family: Courier New, Courier, monospace;">accessor.min</span> and <span style="font-family: Courier New, Courier, monospace;">accessor.max</span> exist. I'm sure there's a good reason, but they just seem like an unnecessary hassle to me. Especially given that it's impossible to exactly encode a 32-bit floating point number as a decimal string, I can't see what use an inaccurate record of the min and max x,y,z values of some points is. Having more context there would be useful for implementors. Another example is the set of buffer, bufferview, and accessor objects needed to refer to memory. It took me a long time to figure out what the difference was. At first, I thought you could put the data for everything in a single bufferview and just use different accessors to refer to different chunks of it. Only later, when I read that bufferviews were intended to refer to OpenGL memory buffers, did I finally understand what each level of memory reference is for. The different buffers are meant to refer to different data stored on disk. Usually, you'll only have one buffer, but if you have different models that share data, you can put the shared data in a separate file/buffer that both models reference. A bufferview refers to a single in-memory chunk of data loaded for a model, so having a single bufferview for an entire scene would be wrong; you would normally have one or more bufferviews for each 3d object in a scene. In general, when accessing data from a bufferview, you should always read from the start of the bufferview. If you find yourself reading from an offset into the bufferview, you should probably just use a separate bufferview instead. The accessors describe how to read the individual data fields of a bufferview. Notably, the bufferview has a byteStride property that breaks it up into records or entries, and an accessor describes how the different fields are stored/interleaved inside each record. An accessor's byteOffset is meant to be an offset into a record or entry, not an offset into the bufferview.<br />
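To make the three levels concrete, here's a simplified sketch of how a loader might walk them (my own illustration, not spec-blessed code: it assumes float components and the whole binary buffer already in memory):

```javascript
// Sketch of glTF's three-level indirection: a buffer holds raw bytes, a
// bufferview slices it into strided records, and an accessor picks one
// field out of each record.
function readAccessor(gltf, accessorIndex, binaryBuffer) {
  const accessor = gltf.accessors[accessorIndex];
  const view = gltf.bufferViews[accessor.bufferView];
  const componentCount = { SCALAR: 1, VEC2: 2, VEC3: 3, VEC4: 4 }[accessor.type];
  const elementSize = 4 * componentCount;       // assuming FLOAT components
  // Stride defaults to the tightly-packed element size when unset.
  const stride = view.byteStride || elementSize;
  const dv = new DataView(binaryBuffer, view.byteOffset || 0, view.byteLength);
  const out = [];
  for (let i = 0; i < accessor.count; i++) {
    // accessor.byteOffset is the field's offset *within* a record
    const base = (accessor.byteOffset || 0) + i * stride;
    const record = [];
    for (let c = 0; c < componentCount; c++) {
      record.push(dv.getFloat32(base + 4 * c, true)); // glTF is little-endian
    }
    out.push(record);
  }
  return out;
}
```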
<br />
glTF 2.0 also offers a convenient format for storing all the 3d data in a single file called GLB. The GLB specification is nice in that it's really basic and straight-forward, but its design is a little sloppy. The GLB file format has its total file size encoded in it, which is unnecessary and prevents the data from being streamed. Even if that were fixed, the design of the chunks inside the file also prevents writing out the data in a single stream. All the parts of the file have to be generated separately first, their sizes determined, and only then can they be assembled and written out into a GLB file. This is caused by the fact that there can only be a single buffer chunk, and the JSON chunk (which will contain references into the buffer chunk) has to be written out before the buffer chunk.<br />
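Here's a sketch of what assembling a minimal GLB looks like (my own simplified illustration: one JSON chunk, one binary chunk, no extension chunks). It shows why everything has to be sized before the header can be written:

```javascript
// Assemble a GLB container: 12-byte header, then a JSON chunk, then a
// binary chunk. The header needs the *total* file length up front, which
// is why a GLB can't be emitted in a single streaming pass.
function buildGlb(jsonBytes, binBytes) {
  const pad4 = n => (4 - (n % 4)) % 4;
  const jsonLen = jsonBytes.length + pad4(jsonBytes.length); // padded with spaces
  const binLen = binBytes.length + pad4(binBytes.length);    // padded with zeros
  const total = 12 + 8 + jsonLen + 8 + binLen;
  const out = new Uint8Array(total);
  const dv = new DataView(out.buffer);
  dv.setUint32(0, 0x46546c67, true);  // magic: ASCII "glTF"
  dv.setUint32(4, 2, true);           // container version
  dv.setUint32(8, total, true);       // total length -> blocks streaming
  dv.setUint32(12, jsonLen, true);
  dv.setUint32(16, 0x4e4f534a, true); // chunk type "JSON"
  out.fill(0x20, 20, 20 + jsonLen);   // space padding for the JSON chunk
  out.set(jsonBytes, 20);
  const binStart = 20 + jsonLen;
  dv.setUint32(binStart, binLen, true);
  dv.setUint32(binStart + 4, 0x004e4942, true); // chunk type "BIN\0"
  out.set(binBytes, binStart + 8);
  return out;
}
```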
<br />
Overall though, I really like the glTF 2.0 format. I really hope it gets widespread adoption. I definitely see it displacing the .OBJ format in the long term.Minghttp://www.blogger.com/profile/01458103015154082202noreply@blogger.com0tag:blogger.com,1999:blog-9350640.post-70770313486702343772017-07-27T15:01:00.000-04:002017-07-27T23:09:36.300-04:00WKWebView for Clueless Mac ProgrammersI've recently been trying to package up my <a href="https://www.wobastic.com/omber/">vector design web app Omber</a> as a Macintosh app. Unfortunately, I had zero knowledge about Mac programming. Like, I never owned a Mac. I didn't even know how to get the cursor to go to the start of a line or skip a word using the keyboard without having to look up Stack Overflow. I tried using Electron, but after spending a long time going through various <a href="https://medium.com/@flaqueEau/releasing-an-electron-app-on-the-mac-app-store-c32dfcd9c2bd">documentation</a> to <a href="https://github.com/electron/electron/blob/master/docs/tutorial/application-distribution.md">rebrand</a> and <a href="https://github.com/electron/electron/blob/master/docs/tutorial/mac-app-store-submission-guide.md">package</a> things (the <a href="https://github.com/nwjs/nw.js/wiki/Mac-App-Store-%28MAS%29-Submission-Guideline">nw.js documentation</a> is so much better and always a joy to read compared to the Electron docs), I wasn't too satisfied with the result. It worked, but it was sort of clunky, and I think there was some weird sandbox thing going on that caused file reading to sometimes work but sometimes not. With Windows, it makes sense to use Electron because the Windows default browser engine has weird behaviour and not everyone has the latest version of Windows. But on the Mac, everyone gets free OS upgrades to the latest version and the browser engine is fairly decent, so there's no need to include a 100MB browser engine with an app.
So I figured I could whip together a quick Mac application that's just a window with a web view in it in about the same amount of time it would take to debug the Electron sandbox issues.<br />
<br />
<b><span style="font-family: "courier new" , "courier" , monospace;"><rant about Mac programming></span></b><br />
<i>Programming for the Mac is just like using a Mac. Apple hides important details and tries to force you to do things their way. Apple keeps changing things underneath you so all the documentation online or in books is always vaguely out of date. It's also expensive. I bought the cheapest Mac mini with 4GB RAM and a hard disk for development, thinking I could do mostly command-line stuff, but that's not the case. You really need to work from Xcode, and Xcode is a pig of a program that takes up a lot of RAM and is sort of slow. I almost immediately had to switch to using an external SSD on USB to get any reasonable responsiveness from my system. Apple is really trying to stuff Swift down everyone's throats, but I opted to go with Objective-C because of my Smalltalk background. It's not bad except the syntax is somewhat awful. My main issue with it is that part of what makes Smalltalk so productive is that it comes with an advanced IDE that's super fast and makes it easy to browse around large application frameworks to figure out how to use an API. Objective-C comes with an overwhelmingly huge application framework as well, but Xcode is slow and pokey and doesn't come with good facilities for diving through the framework. Code completion is not good enough. There should be a quick way to find how other people call the same method, check out the documentation for a method, and check out the inheritance tree. Xcode is more of a traditional IDE with some code completion. It would be nice if Xcode actually labelled all of its inscrutable icons too. No one knows what any of those buttons mean, but using those buttons isn't optional either. The latest MacOS/OSX versions do include a web view, but I always get the feeling that Safari developers don't really understand web apps and want to discourage people from making them.
I find that they only implement just enough features in Safari to support their own uses and then lose all interest in implementing things in a general way that can have multiple uses. For example, for the longest time, they refused to implement the download attribute on links because Apple didn't need it, so why should anyone else need it? Then, when they did implement it, it initially didn't work on data-urls and blobs because they didn't understand how important that was for web apps. Similarly, the new WKWebView initially could only show files from the Internet and not load up anything locally, making it useless for JavaScript downloadable apps. Then, even when they did fix it, things like web workers or XMLHttpRequest are still broken, really limiting its usefulness. </i><br />
<b><span style="font-family: "courier new" , "courier" , monospace;"></rant about Mac programming></span></b><br />
<br />
Anyway, I found a great <a href="http://www.lostdecadegames.com/how-to-embed-html5-into-a-native-mac-osx-app/">blog post</a> that shows how to make a one window app with a web view in it. It lists every step, so it's easy to follow along even with no understanding of Mac programming. It worked for me, but Xcode has changed its default app layout to use storyboards so some of the instructions don't work any more, and it used the old WebView which is very limited. The new WKWebView is better because it allows for JIT compilation of the JavaScript, and it comes with proper facilities for letting the JavaScript send data to native code (the old web view required a bad hack to do that). So here are some updated instructions:<br />
<ol><li>Get Xcode and start it up</li>
<li>Create a new Xcode project</li>
<li>Make a MacOS Cocoa Application</li>
<li>Fill in the appropriate application info, choose Objective-C for the language</li>
<li>That should bring you to the screen where you can adjust the project settings</li>
<ol><li>If you want to run in a sandbox, I think you have to turn on signing. I think Xcode will take care of getting the appropriate certificates for you (I had already gotten them earlier).</li>
<li>At the bottom of the General settings, under "Linked Frameworks and Libraries", you should add the WebKit.framework</li>
<li>In the Capabilities tab, you can turn on the App Sandbox if you want (I think this is needed for the Mac App Store). Be careful, there seems to be a UI bug there. Once you turn on the app sandbox, you can't turn it off from the UI any more.</li>
<li>If you do enable the App Sandbox, you also need to enable "Outgoing Connections (Client)" in the Network category. This is required even if you don't use the network. WKWebView seemed to have problems loading local files if the network entitlement wasn't enabled.</li>
</ol><li>Go to your ViewController.h, and change it to</li>
<pre>#import <Cocoa/Cocoa.h>
#import <WebKit/WebKit.h>
@interface ViewController : NSViewController
@property(strong,nonatomic) WKWebView* webView;
@end
</pre><li>When using storyboards, the app delegate doesn't have direct access to the view, so you have to control the view from the view controller instead.</li>
<li>Then go to your ViewController.m. Usually, you would draw a web view in the view of the storyboard and then hook it up to the view controller. Although this is possible with the WKWebView, all the documentation I've seen suggests manually creating the WKWebView instead. I think this might be necessary to pass in all the configuration you want for the WKWebView. To manually create the WKWebView, add these methods that show a basic web page:</li>
<pre>- (void)loadView {
[super loadView];
_webView = [[WKWebView alloc] initWithFrame:
[[self view] bounds] ];
[[self view] addSubview:_webView];
// Instead of adding the web view as a subview as
// in above, you can also just replace the whole
// view with the web view using
// [self setView: _webView];
}
- (void)awakeFromNib {
[_webView loadRequest:
[NSURLRequest requestWithURL:
[NSURL URLWithString:@"https://www.example.com"]]];
}
</pre><li>Now when you run the program, you should see the web page from example.com there.</li>
<li>The next step is to create a directory with all your local web pages that you want to show. Create a folder named html in the Finder (i.e. just a normal folder somewhere outside of Xcode)</li>
<li>Drag that folder onto your project in the file list. Enable "Destination: Copy if needed" and "Added folders: Create folder references"</li>
<li>You should now have an html folder in your project. You can delete the original html folder that you created earlier in the Finder since you no longer need it. (You can confirm that the html folder will be included in your project properly by looking at your project file under the Build Phases tab; the html folder should be listed in the Copy Bundle Resources section)</li>
<li>Create an index.html file in your new html folder. Put some stuff in it.</li>
<li>To show that page, go to your ViewController.m and change the awakeFromNib method to this:</li>
<pre>- (void)awakeFromNib {
NSString *resourcePath =
[[NSBundle mainBundle] resourcePath];
NSString *htmlPath = [resourcePath
stringByAppendingString:@"/html/index.html"];
NSString *htmlDirPath = [resourcePath
stringByAppendingString:@"/html/"];
[_webView
loadFileURL:[NSURL fileURLWithPath:htmlPath]
allowingReadAccessToURL:
[NSURL fileURLWithPath:htmlDirPath isDirectory:true]];
}
</pre></ol>Minghttp://www.blogger.com/profile/01458103015154082202noreply@blogger.com0tag:blogger.com,1999:blog-9350640.post-53846431820843816722017-06-03T00:42:00.002-04:002019-09-11T02:50:50.460-04:00Nw.js vs. ElectronToday, I tried porting an <a href="https://www.wobastic.com/omber">html5 web app of mine</a> into a desktop application. When it comes to running JavaScript programs on the desktop, there are two main choices: <a href="https://nwjs.io/">node-webkit (nw.js)</a> and <a href="https://electron.atom.io/">Electron</a>. I wasn't sure which one to choose. I didn't think that my web app was very complicated, so I decided to use nw.js. It's simpler, older, has an easier programming model, and I've been happy when I've used apps based on nw.js in the past.<br />
<br />
Using nw.js was great. It was so simple and easy to use. I just unzipped nw.js somewhere, dropped my own web pages there, and off it went. It was nothing like the days and days of agony involved in making a <a href="https://cordova.apache.org/">Cordova app</a>. The amount of documentation was very manageable, so I was soon diving through it to figure out various polishing issues. And it was all pretty simple. Fixing the taskbar icon was one line. Making it remember the window size from when it was last closed--also one line. Putting in a "save as" dialog was a little more work, but, again, nothing to sweat about.<br />
<br />
Then, I decided that I wanted the save dialog to default to saving to Windows' My Documents folder. And that was hours and hours of agony. The nw.js API is pretty small, so I went through all the documentation with a fine-tooth comb, looking for how to do it, and I couldn't find anything. I then thought that maybe that API was in node.js, so I went through all the node.js documentation to find out how to do it--nothing. Then I thought there might be an NPM package to do it. After much searching, I turned up nothing. I think most people use node.js for server stuff, so they never need to store stuff in a user's Documents folder.<br />
<br />
After hours of this, I took a peek at Electron, and it was right there. Electron has an extensive platform integration API for not only getting at the documents folder, but also for crazy MacOS dock stuff, etc. Electron is used by bigger companies that ship more complicated applications, so they care deeply about all the subtle platform integration issues needed for a polished app. As a result, Electron has a much deeper and much more extensive platform integration API than nw.js. Of course, the Electron programming model is more complicated than nw.js, so it seems like it will require a lot more code to be written to get things going. And there's a lot more documentation, so I don't think it's possible to read it all, like I could with nw.js. And I'm concerned there might be annoying configuration issues. But it looks like I'll have to move to using Electron.<br />
<br />
So if you need extensive platform integration APIs, use Electron, despite the fact that it's more complicated. If you're making something more self-contained, like, say, a game, then nw.js is probably fine though, and you'll save time because it's so easy to set-up.<br />
<br />
<b>Update (2017-6-7)</b>: Apparently, there's another difference in philosophy between nw.js and Electron too. nw.js tries to create a programming environment that imitates a normal web environment as much as possible. Platform integration is implemented as minor embellishments on existing web APIs with reasonable defaults chosen. With Electron, using normal web APIs will work, but not well. Lots of platform integration features are available, but the programmer has to explicitly write separate Electron code to take advantage of those features, and the API isn't that nice (due to Electron's multi-process architecture and lots and lots of optional parameters). For example, to open a file dialog in nw.js, you can simply reuse the existing html5 file dialog APIs, and the return results are augmented with some extra path information that you can use to open files. To open a file dialog in Electron, you can't reuse your existing html5 file dialog code because Electron's implementation is missing a couple of features, so instead you have to make use of Electron's file dialog APIs. Electron's file dialog APIs are fine, but a little messy to set-up, and by default they aren't modal, so you have to jump through some hoops to get normal file dialog behavior.<br />
<br />
<b>Update (2017-8-28)</b>: Despite what some people say, the nw.js documentation is much better than the Electron documentation. Electron has a lot of documentation, but it's not well-written. For example, I found the Electron docs would often just list a bunch of method names and method parameter names without really saying what the parameters do (this is similar to the node.js documentation, actually). The documentation that Intel and others have provided for nw.js is very clear and almost a pleasure to read. To show you how good the nw.js documentation is, when I was making a Mac App Store version of an Electron app, I consulted the nw.js documentation on how to do it because the nw.js documentation was just so much more clear and detailed.<br />
<br />
<b>Update (2019-9-11)</b>: This is only indirectly related to this topic, but I gave a talk earlier in the year on the topic of <a href="https://youtu.be/7V0Q2EkerCo">Java on JavaScript VMs</a>. As part of the talk, I give a survey of different ways of running JavaScript on the client such as Cordova, nw.js, Electron, WKWebView, UWP, etc. Just to warn you in advance, the talk is intended for a Java users group, so the tone of the talk is light-heartedly derogatory about JavaScript. But it's intended in good humor--don't take it too seriously.Minghttp://www.blogger.com/profile/01458103015154082202noreply@blogger.com15tag:blogger.com,1999:blog-9350640.post-59186625834830221572017-05-12T02:25:00.000-04:002017-05-12T02:25:11.445-04:00Building a Basic City Builder<i>This post is me rambling about trying to understand how city builder games work by making a very, very simple city simulation model.</i><br />
<br />
I love city builder games, but I always get very frustrated playing them. I think the problem is that most game designers and programmers of city builder games don't actually obsess about cities and don't spend hours reading about and thinking about cities. Now, Will Wright, the designer of the first SimCity, did spend a lot of time reading about the philosophy of cities, and the original SimCity was a great simulation for a game that had to be able to run on a 4.77MHz computer. It was built around themes of how residents needed a balance of residential, commercial, and industrial zones and of how land values and transportation access were important. Later city builder games do include much more complex city models, but I find they lack any over-arching theme or philosophy about the nature of cities. How does New Urbanism, one of the biggest movements in urban planning over the last few decades, not have any of its tenets reflected in any of the most recent city builders? The designers of the latest city builder games simply focus too much on the game aspects of city builders and not enough on the urban planning. They seem to design game mechanics and simulation parameters based on what seems "fun" instead of reflecting upon a philosophy or theme of what constitutes a city.<br />
<br />
Part of the joy of cities is that they are full of stories. Every neighbourhood has a story about how it evolved and grew and all the little things that people do there. Where do people shop? How do they get to work? What do they do for fun? Every city has a different story. But most city builder games have their simulation parameters set in such a way that they can only tell one story. For example, SimCity 4, which I consider to still be the pinnacle of the SimCity series, has its simulation set up in such a way that you almost inevitably end up with a city that looks like northern California. The simulated residents are highly biased in favour of driving, and you have little ability to influence that. The city simulation is resistant to the levels of densification typical of non-American cities. The simulation doesn't allow farms in built-up areas, but I encountered plenty of urban farms when I lived in Switzerland. Even a basic assumption of the game like the fact that you need to supply water and sewage infrastructure to have even a basic level of housing development isn't actually true. Dubai was able to build many towering skyscrapers that <a href="http://gizmodo.com/5857475/without-trucks-the-tallest-building-in-the-world-would-become-the-tallest-mountain-of-poop">weren't hooked up to a sewage system</a>. All of these sorts of assumptions and fixed parameters in the city simulation constrain what sort of cities can be produced in the game and restrict the types of stories that players can tell. Even worse, SimCity 5 completely abandoned all pretense of accurately simulating a city and embraced purely game-based mechanics for modelling cities.<br />
<br />
There's a lot of buzz about Cities: Skylines, which was made by a Finnish developer that previously made some awful transportation simulation games. Their transportation simulator never worked well for trains and buses, and it still doesn't, but they did get it to work well enough for cars that they were able to make a financially successful city simulator. Similar to how the developers built many transportation games that focused on modelling minute details of bus scheduling and bus stops while completely missing the big picture understanding of how mass transit actually works, Cities: Skylines has a detailed city model underneath that provides a simulacrum of a city when viewed at scale, but has no meaningful philosophy in its design and doesn't make much sense when you poke into it. One of the major aspects of the simulation models the player's ability to move dead bodies through the city! I'm currently living in Toronto, and I can't help but think that Jane Jacobs would cry if she knew how many YouTube videos there were of people building multi-block, multi-story so-called "optimal" intersections in Cities: Skylines. The sheer prevalence of these videos is a sign that the underlying simulation model and theme of the game is broken. Note to armchair city builders: if you're building a continuous flow intersection in your city, you've already failed.<br />
<br />
Of course, it's easy to complain about things. It turns out that I don't really know how to make a better system. Although I think I figured out how to make reasonable <a href="http://my2iu.blogspot.ca/2014/09/setting-up-simulations-of-drl.html">transportation models</a> <a href="http://my2iu.blogspot.ca/2010/03/transportation-simulation-games.html">many years ago</a>, I've never figured out how the underlying economic models of city simulators should work. In fact, I'm not entirely sure how the economic models of existing city simulators are designed. As such, it's hard to know what their underlying assumptions are, how they might be wrong, and how they might be fixed. The economic models of games are obviously biased in favour of growth. If a player lays out tracts and tracts of residential zones in the middle of nowhere, people will suddenly build houses in those zones for no apparent reason. Admittedly, in many places in the world, this is a reasonable assumption. In the areas near large, booming, metropolitan centres, if the government were to spend millions to build out sewage, power, and highway infrastructure to an area and then zone it for a subdivision, developers would quickly build tracts and tracts of suburban housing there. And for gameplay purposes, it's important for the city simulation to be biased towards growth because players love the feel of an expanding city where bigger and better things are constantly being built (though playing a dying city where the infrastructure must be slowly rolled back as people move out and where its role has to be reinvented might make an interesting scenario). But is this biasing towards growth done in a heavy-handed way that restricts the ways that a city can evolve or in a subtle way that still lets players design a city the way that they want?<br />
<br />
To get a better insight into the way these economic models might work, I dabbled a bit in reading academic papers on urban planning models, but I never could figure them out. I tried out a trick I figured out in high school and tried to find the oldest paper I could find on the subject, and I actually found one that was somewhat comprehensible: Kenneth Train's "A Validation Test of a Disaggregate Mode Choice Model." My takeaway from the paper is that real-world urban planning models are based on polling of a population and building statistical/fitting models of how this population weighs decisions on choices they make on where to live or get around. For people building a computer game simulation, a micro-economic agent simulation should capture this. Basically, you have a statistical distribution where, for every 100 people, 30 prefer a house with a yard, 20 choose their home based on the quality of the schools, 35 need to live within 10 minutes of work, and 15 like having a lot of cultural amenities. Then during the game, you randomly generate people based on the statistical distribution, throw them into the city, and have them make individual choices based on their preferences. Then, you just have to choose an appropriate statistical model of people to get the biases you want for your game. In hindsight, this is pretty obvious: if you model a bunch of individual, different people, then in aggregate, you will get an accurate city model. This still left a big problem though. This agent simulation will accurately model the residents in a city, with all of its assumptions explicitly encoded, but this approach doesn't really work for modelling a city's growth. Why do people move to a city? How do you bootstrap an initial population for a city that has no buildings, no residents, and no infrastructure?
If a game just regularly generates random people based on a statistical distribution and throws them into the city, then the whole simulation is inherently biased towards growth again. It seems like too blunt an approach to the problem. Surely, there must be a more nuanced way of modelling growth that has a better philosophy behind it other than the theme of unlimited growth? Is there a way of modelling growth that provides more adjustment knobs that can be used to encode different assumptions about growth?<br />
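To illustrate the sampling-plus-individual-choice idea (the preference categories, shares, and scoring scheme below are completely made up for illustration, not from Train's paper or any real model):

```javascript
// Residents are sampled from a fixed preference distribution...
const PREFERENCES = [
  { type: 'wantsYard',      share: 0.30 },
  { type: 'wantsSchools',   share: 0.20 },
  { type: 'wantsShortTrip', share: 0.35 },
  { type: 'wantsCulture',   share: 0.15 },
];

// Inverse-CDF sampling over the preference shares; `rand` returns [0, 1).
function sampleResident(rand) {
  let r = rand();
  for (const pref of PREFERENCES) {
    if (r < pref.share) return pref.type;
    r -= pref.share;
  }
  return PREFERENCES[PREFERENCES.length - 1].type;
}

// ...then each resident makes an individual choice: pick the home that
// scores best on that resident's own preference.
function chooseHome(residentType, homes) {
  let best = null;
  for (const home of homes) {
    if (!best || (home.scores[residentType] || 0) > (best.scores[residentType] || 0)) {
      best = home;
    }
  }
  return best;
}
```

Tuning the shares in `PREFERENCES` is then the knob for biasing the aggregate city one way or another.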
<br />
I thought about this growth problem for a few years, but I could never make any headway with it. Regularly generating new random people to move to a city in order to get some growth might work for competitive, multiplayer games of SimCity: if there are different cities with different amenities, then depending on how your city compares to others, newly generated people might choose to move to other cities instead of yours. But I couldn't figure out how a growth model would work for a single-player city building game. I decided that the only way to find a reasonable approach to this problem would be to actually build a small game where I could dig into the details of the problem. Hopefully, after being enmeshed in the details, I would be able to see something that I couldn't see from far away. I came up with a design for a small city simulator that would focus on the economic model (since I felt I already understood how to design the transportation model), and then it was just a matter of finding the time to build it.<br />
<br />
Finally, last week on Thursday, I received a last minute e-mail saying a spot opened up at the TOJam game jam that was running during the weekend, so I decided that it was time to dive in. I had worked out a design for a simplified city builder earlier. The city builder would present the side view of a single street. Since the focus was on the economic model and streetscaping and not on transportation issues, there was no need for a full 2d city. Having a side view also meant that the game could have a simplified interface that might even work ok on cellphones. In the game, players would place individual buildings and not zones. I think most city builder players like to customize the looks of their cities, but placing individual buildings doesn't work well at a large scale. On the small scale of a side-view game, though, I was hoping that placing individual buildings would be feasible. During the first day, I was able to finish coding up a basic UI that would let players plop buildings on the ground and query them. There was a floating artist at TOJam, Rob Lopatto, who drew some amazing pixel art of a house and two types of stores for me.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhB91xSTnSRTue5bjavGlJsW-tiajgh9d3CjnCWEq3fQ2BGQI4Tz9RLYFX_c9KDcMyw-4BIICSMKm-9ER8Mq2TpQXZIR61LLeQUVedomON9r3jLBlv9vF24_5tpq6lG4C8p7F9nOw/s1600/houses.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="105" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhB91xSTnSRTue5bjavGlJsW-tiajgh9d3CjnCWEq3fQ2BGQI4Tz9RLYFX_c9KDcMyw-4BIICSMKm-9ER8Mq2TpQXZIR61LLeQUVedomON9r3jLBlv9vF24_5tpq6lG4C8p7F9nOw/s400/houses.png" width="400" /></a></div>
<br />
<br />
On the second day, I coded up a basic traffic model. Since I was just trying to make something as simple as possible in a limited time, I only modelled people walking between buildings at a fixed speed. Similar to SimCity 1-4, I modelled the aggregate effect of people walking around instead of actually modelling the specific, individual movements of those people on the road. I think the lesson of SimCity 5 and Cities: Skylines is that modelling the movement of individual cars can be slow and can lead to strange anomalies, especially when there is extreme traffic. In real life, during extreme traffic, people shift their schedules to travel during non-peak times, or they change routes, or they move. It is rare for a traffic situation to become so dire that people end up in multi-day traffic jams and never reach their destinations. The problem with modelling the aggregate effect of traffic is that the simulation simply outputs some traffic numbers for chunks of road. There's nothing to see, and players like seeing little people and cars moving around. So I had to code up a separate traffic visualization layer that would show people moving around in proportion to the amount of traffic there is. I wasn't sure if I would end up showing the right traffic if I generated people doing whole trips (my queuing theory is really bad), so instead I used the SimCity 4 trick of randomly generating people to walk around for short sections of road and then having them disappear again. I could then periodically generate new people on sections of road that weren't showing enough traffic over time in their visualization. Surprisingly, even though my simulation was small enough that I could simulate the whole world 60 times a second, I still ended up using my <a href="http://my2iu.blogspot.ca/2010/03/transportation-simulation-games.html">geometric series approach</a> in both the traffic visualization and parts of the simulation. It worked really well!<br />
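The visualization trick is simple enough to sketch. Here's a minimal, hypothetical reconstruction in JavaScript (the actual game was written in Plom, and all names here are invented): each road segment keeps a geometrically decaying record of the traffic it has recently shown, and short-lived walkers are spawned whenever that record falls below the simulation's aggregate traffic number for the segment.

```javascript
// Hypothetical sketch of the traffic visualization layer (not the actual
// game code): spawn short-lived walkers on road segments that aren't
// showing enough traffic, and let the record of shown traffic decay
// geometrically so a steady trickle of new walkers is needed to keep up.
function makeSegment(targetTraffic) {
  return { targetTraffic, shownTraffic: 0, walkers: [] };
}

function stepSegment(segment, walkLength, decay) {
  // Decay the memory of recently shown traffic.
  segment.shownTraffic *= decay;
  // Top up under-showing segments with walkers that only do a short
  // stretch of road before disappearing (the SimCity 4 trick).
  while (segment.shownTraffic < segment.targetTraffic) {
    segment.walkers.push({ remaining: walkLength });
    segment.shownTraffic += 1;
  }
  // Advance every walker one square and remove the finished ones.
  for (const w of segment.walkers) w.remaining -= 1;
  segment.walkers = segment.walkers.filter(w => w.remaining > 0);
}
```

Run every animation frame, something like this keeps the number of visible walkers roughly proportional to the simulated traffic without ever modelling whole trips.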
<br />
By the end of the second day though, I had hit a wall. I still couldn't figure out how to model city growth. I could simulate people in the city, but I couldn't figure out how to get new people to move in. I didn't want to explicitly encode a rule for having people automatically move into the city. Perhaps I could come up with some sort of hacky rule for when new residents would be induced into moving into the city. The new rule would likely still have an emergent behaviour of causing an implicit bias towards growth in the city, but if the rule still made thematic sense, then it would be more satisfying and could be tweaked and improved later on. I started leaning towards the idea of using jobs to induce people to move to the city. If there were companies looking to hire in the city, then people would move there. That mostly makes sense, and avoids explicitly biasing the city simulation towards growth.<br />
<br />
I still had a bootstrapping problem though. The companies in a city won't hire people unless they have customers and are making money. But if there's no one living in a city, then companies will have no customers and hence have no jobs. I could make companies hire people even when they have no customers, or I could maybe implement a hack where companies might tentatively hire people to see if they can make money and then fire them if it doesn't work out. I think games like SimCity and Cities: Skylines have a hack where cities with small populations have an explicit macroeconomic boost to industrial jobs. If you zone some industrial areas, some companies will move in and create some factories to employ people even if they have no customers and no one lives in the city. This seemed like just another artificial bias towards growth, even if it was in a different form, so I wanted something different.<br />
<br />
Instead, I went with a different cheat: I created a type of building that was self-sufficient and could supply an initial boost of employment without depending on the existence of other people or infrastructure. I opted for subsistence farming plots. They could provide a minimal income even in a city with no other people or infrastructure, thereby attracting a population base. 100-150 years ago, the Americas were settled by offering free plots of farming land to immigrants, so it's not entirely unheard of, though I'm not sure how realistic that assumption would be now. Once the simulated city developed a sufficient population, there would be enough collective demand to make stores or small workshops profitable, so they would employ people, resulting in a positive feedback loop of growth. This ends up supporting a theme that a city is dependent on investments of infrastructure to support certain types of economic activity and growth. Or maybe it says that people power a city, but it requires infrastructure to improve efficiency and productivity to unlock that power. In any case, I think those are reasonable philosophies around which a city simulation can be designed. I'd be a little bit afraid of making the rules so deterministic that it feels more like a game than a story-generating city simulator (e.g. you need electricity to have a factory over size 3, you need an outside road link to let your industrial population grow to more than 3000, or stuff like that). And there's another danger of inadvertently building an arbitrary civilization simulator instead (e.g. you need iron mines, coal mines, and an iron smelter to build the steam engine, which is then a prerequisite to industrial age buildings, etc.). But it does show that this philosophical approach is broad enough to capture many different city models.<br />
<br />
On the third and final day, I polished up the UI and tweaked the city model a bit. Since there were only three different building types, and given the shortness of time, the ending city model was still very simple, but it seemed to work well enough, and it helped me work out a different way to simulate growth in a city builder game. Here's an overview of the final simulation code:<br />
<ol>
<li>People without a job will go to work at the building with the greatest demand for workers (demand must be at least one full worker). Farms always need one worker while stores need workers proportional to the number of visitors/customers they have</li>
<li>People without a home will move to any home they can find</li>
<li>People will move to a home that's closer to their work than their current home</li>
<li>People will visit the closest store to their home to buy things, but if the number of visitors exceeds the store's capacity, the people will move to the next closest store (and so on)</li>
<li>People without homes or jobs will leave the city</li>
<li>People will cause 1 unit of traffic for each square of the street they need to walk on to get from their home to their work</li>
<li>Any building that still has a demand for workers that can't be filled from the local population will hire someone from outside the city (provided that person can find housing)</li>
<li>Every 100 turns, rent will be collected from each building's residents and workers. Upkeep costs for each building will also be deducted.</li>
</ol>
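As a rough illustration of how rules like these fit together, here is a hypothetical JavaScript sketch of rules 1, 2, and 5 above. The actual game was written in Plom, so none of this is the real game code, and every name (and the one-worker-per-10-visitors ratio) is invented for illustration.

```javascript
// Hypothetical sketch of simulation rules 1, 2, and 5; not the actual
// game code, which was written in Plom.
function workerDemand(building) {
  if (building.kind === 'house') return 0;
  // Rule 1: farms always need one worker; stores need workers in
  // proportion to their visitors (one worker per 10 visitors here).
  if (building.kind === 'farm') return 1 - building.workers.length;
  return Math.floor(building.visitors / 10) - building.workers.length;
}

function simulationTick(city) {
  for (const person of city.people) {
    // Rule 1: jobless people take the job with the greatest demand,
    // provided the demand is at least one full worker.
    if (!person.job) {
      const openings = city.buildings.filter(b => workerDemand(b) >= 1);
      openings.sort((a, b) => workerDemand(b) - workerDemand(a));
      if (openings.length > 0) {
        person.job = openings[0];
        openings[0].workers.push(person);
      }
    }
    // Rule 2: homeless people move into any home with a free spot.
    if (!person.home) {
      const home = city.buildings.find(
        b => b.kind === 'house' && b.residents.length < b.capacity);
      if (home) {
        person.home = home;
        home.residents.push(person);
      }
    }
  }
  // Rule 5: people with neither a home nor a job leave the city.
  city.people = city.people.filter(p => p.home || p.job);
}
```

The farm's unconditional demand for one worker is what bootstraps the whole loop: it hires even when the city is empty, which is exactly the role the subsistence farming plots play above.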
<div>
Here's the <a href="https://my2iu.itch.io/street-simulator">final game</a>.</div>
Minghttp://www.blogger.com/profile/01458103015154082202noreply@blogger.com0tag:blogger.com,1999:blog-9350640.post-80209365048919644402017-01-24T17:01:00.000-05:002017-01-24T17:01:30.333-05:00Trying Out Some Emscripten on Chrome<a href="http://www.wobastic.com/omber/">Omber</a> is the GWT JavaScript app that I'm currently working on. It runs in a browser, and I've also created an Android version using <a href="https://cordova.apache.org/">Cordova</a>. It has some computationally-intensive routines, so it's sometimes a little sluggish on cellphones, which is understandable given how under-powered cellphones are. I've been looking at whether there are ways to improve its performance.<br />
<br />
The cellphone version of Omber runs on Chrome (specifically, the <a href="https://crosswalk-project.org/">Crosswalk version of Chrome</a>). It's unclear how to get optimal performance out of Chrome's V8 JavaScript engine. The Chrome developers talk a lot about how great its Turbofan optimizer is, but they never actually give any advice on how to write your code to get the best code generation from Turbofan. My code does a lot of floating point math, and I really need the numbers to be packed tightly to get the best performance out of the system. Should I be manually using Float64Arrays to do this? Or is V8's Turbofan smart enough to put them directly into objects? Are there ways I can add type hints to arrays and other methods? Can I reduce the number of array bounds checks? In a language like C++, I could simply write my code in a way that would produce the code generation that I wanted, but how do I guide Chrome into generating the code that I want?<br />
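To make the layout question concrete, here is the difference I mean, sketched in plain JavaScript (a hypothetical example, not Omber's actual code):

```javascript
// Array-of-objects layout: each point is its own heap object, which the
// optimizer may or may not lay out as unboxed, contiguous doubles.
const points = [{ x: 1.5, y: 2.5 }, { x: 3.5, y: 4.5 }];

// Manually packed layout: all coordinates stored contiguously in one
// Float64Array, at the cost of explicit index arithmetic everywhere.
const packed = new Float64Array([1.5, 2.5, 3.5, 4.5]);
const pointX = i => packed[2 * i];
const pointY = i => packed[2 * i + 1];
```

The open question is whether V8 makes the second form necessary for tight packing, or whether Turbofan will lay out the first form just as well on its own.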
<br />
Mozilla has their <a href="http://emscripten.org/">Emscripten</a> project that can compile C++ to JavaScript asm.js style code. Firefox then has a special optimizer for translating JavaScript written in the asm.js style into highly optimized machine code. Personally, I think asm.js isn't a great idea. The asm.js subset is very limiting and sort of hackish. As far as I can tell, the code it produces is not very portable either. Basic things like memory alignment and endianness are ignored or simply handled poorly. For these reasons, most of the other browsers don't support asm.js-specific code optimization, but they claim that their optimizers are so good that their general optimization routines will still get good performance out of asm.js code.<br />
<br />
So is it worth using Emscripten or not then? To try things out, I made a small test where I took my polygon simplification code and rewrote it in C++, compiled it using Emscripten to JavaScript, and compared the performance to my original GWT code. I was too lazy to record the actual numbers I was getting during my benchmarking runs, but here are the approximate numbers:<br />
<br />
<b>Original code on Chrome:</b> ~280ms<br />
<b>Emscripten code on Chrome:</b> ~230ms<br />
<b>Emscripten code with -O2 on Chrome:</b> ~300ms<br />
<b>Original code on Firefox:</b> ~4000ms<br />
<b>Emscripten code on Firefox:</b> ~160ms<br />
<b>C++ code:</b> ~150ms<br />
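Rough numbers like these only need a crude wall-clock harness. A minimal sketch (the names are invented, and the polygon-simplification routine itself isn't shown):

```javascript
// Hypothetical timing harness for rough numbers like the ones above:
// time several runs and keep the best one, to reduce the influence of
// GC pauses and JIT warm-up on the measurement.
function benchmark(fn, runs) {
  let best = Infinity;
  for (let i = 0; i < runs; i++) {
    const start = Date.now();
    fn();
    best = Math.min(best, Date.now() - start);
  }
  return best; // best run, in milliseconds
}
```

Usage would look like `benchmark(() => simplifyPolygon(points), 5)`, where `simplifyPolygon` stands in for whichever implementation is being compared.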
<br />
Takeaways:<br />
<br />
<br />
<ul>
<li>The Firefox code optimizer isn't very good, so having a special optimizer for asm.js is really useful for Firefox. Firefox was able to get performance that was pretty close to that of raw C++ code when dealing with asm.js code though.</li>
<li>The Chrome optimizer is so good that the performance of the normal JavaScript code is almost as good as the Emscripten code. In fact, it probably wasn't worthwhile rewriting everything in C++ because I could have probably gotten similar performance by trying to optimize my Java(Script) code more</li>
<li>Since the Chrome optimizer isn't specifically tuned for Emscripten code, the Emscripten code might actually result in worse performance than JavaScript depending on whether Turbofan is triggered properly or not. For example, compiling Emscripten code with more optimizations (i.e. -O2) actually resulted in worse performance from Chrome</li>
</ul>
<div>
I was a little worried that Chrome's V8 engine might be tuned differently on cellphones, meaning that I might not get similar performance numbers when running on a cellphone. So I also ran the benchmarks on Cordova:</div>
<div>
<br /></div>
<br />
<b>Original code on Chrome:</b> ~2600ms<br />
<div>
<b>Emscripten code on Chrome:</b> ~1600ms<br />
<b>Emscripten code with -O2 on Chrome:</b> ~2800ms</div>
<div>
<br /></div>
<div>
Here, we can see that the Turbofan optimizer is still triggered even on cellphones, and the resulting code performs much better than the original JavaScript code. The Turbofan optimizer still isn't reliable though, so you might actually get worse performance depending on the Emscripten code output.</div>
<div>
<br /></div>
<div>
I'll probably stick with the Emscripten version for now, but I'll later try to optimize my original JavaScript and see if I can get similar performance out of it. It would be nice if I could just link my C++ code directly with JavaScript, but Cordova doesn't allow this. In Cordova, all non-JavaScript code must be triggered asynchronously through messages, which isn't a good fit for my application. It might be possible to do something with Crosswalk, but it seems messy and I'm too lazy. </div>
<div>
<br /></div>
<div>
Alternatively, I could try using Firefox on cellphones since its optimizer can get performance that's near that of C++, but the embedding story is a little unclear. The Mozilla people <a href="http://chrislord.net/index.php/2016/03/08/state-of-embedding-in-gecko/">abandoned support for embedding their Gecko browser engine</a>, and they ceded that market entirely to Chrome/Blink. They've now realized that it was a mistake and they're trying to get back in the game with their <a href="https://github.com/mozilla/positron">Positron project</a> <a href="https://www.google.ca/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&cad=rja&uact=8&ved=0ahUKEwj4_-_Z49vRAhUCchQKHY7MCi0QFgguMAM&url=https%3A%2F%2Fmedium.com%2F%40david_bryant%2Fembed-everything-9aeff6911da0&usg=AFQjCNEhOA24IuJVZKGVMlABm0Qg1OXTgg&sig2=N5vvPiZyDscJ7KjmS0U-DQ">etc</a>, but I think they've entirely missed the point. They're building an embedding API that's compatible with Chrome's CEF, but Chrome's CEF already works fine, so why would anyone want to use Mozilla's version? The space to play in is the mobile market. Instead of wasting time on FirefoxOS, they should have spent more time working on embedded Firefox for mobile apps. An embedded Firefox for iOS with a JavaScript precompiler would be really useful, and Mozilla could dominate that space. Well, whatever.</div>
Minghttp://www.blogger.com/profile/01458103015154082202noreply@blogger.com0tag:blogger.com,1999:blog-9350640.post-22080892703229671322016-09-23T20:04:00.001-04:002016-09-23T20:26:35.337-04:00It's Impossible to Write Correct JavaScript ProgramsDuring a coding session involving JavaScript, UIs, and new asynchronous APIs, I realized that it's no longer possible to write correct programs with user interfaces. The JavaScript language has long had a problem with isolated language designers who tinker on their own part of the system without seeing how things fit together as a whole. Though their individual contribution may be fine, when everything gets put together, it's a mess that just doesn't work right. Then, later generations of designers patch things over with hacks and fixes to "smooth things over" that just make the language more and more convoluted and complicated.<br />
<br />
Right now, the language designers of JavaScript are all proud of their asynchronous JavaScript initiatives like promises, async, and whatnot. These "features" shouldn't be necessary at all. They are "fixes" to bad decisions that were made years earlier. It's clear that the designers of these asynchronous APIs mainly do server-side work instead of front-end work because asynchronous APIs make it next to impossible to write correct user interfaces.<br />
<br />
In all modern UI frameworks, the UI is single-threaded and synchronous. This is necessary because UI code is actually the trickiest and hardest code to write correctly. People who write back-end code or middleware code or computation code actually have it easy. Their code can rely on well-defined interfaces and expected protocols to ensure the correctness of their algorithms. As long as your code calls the libraries correctly and follows proper sequences, then everything should work fine. By contrast, UI code interfaces with the user. The user is messy and unpredictable. The user will randomly click on things when they aren't supposed to. You might think, how hard can it be to write some code for clicking on a button? But what happens when they start clicking on different buttons with their mouse and finger at the same time? What happens when they start dragging the scrollbar in one direction while pressing the arrow keys in the opposite direction at the same time? Users will drag things with the mouse while spinning the mousewheel and then press ctrl-v on the keyboard and get angry when the UI code becomes confused and formats their hard drive. Then when you fix that problem, some other user will get angry because they were using that combination as a quick shortcut for formatting hard drives and want the old behavior back. Reasoning about the correctness of UI code is very hard, and the only thing that makes it tractable at all is that it's all synchronous. There is one event queue. You take an event from the queue and process it to completion. When you take the next event off the event queue, you don't know what it is, but it will be dependent on the new state of the UI, not the old one. You don't know what crazy thing the user is going to do next, but at least you know the state of the UI whenever the next event occurs.<br />
<br />
Asynchronous JavaScript inconveniently breaks the model. All of these asynchronous APIs and promises are based on the idea that you start an action in another thread and then execute some sort of callback when the execution is complete. This is fine for non-UI code because you can use modularity to limit the scope of how crazily the state of the program will change from when you invoke the API and when the callback is called. It's even fine if these sorts of APIs are needed occasionally in UIs. During the rare time that an asynchronous XMLHttpRequest is needed, I can spend the day mapping out all the mischief that the user might do during that pause and writing code to deal with it when the request returns. But these asynchronous APIs are now becoming so widespread that I'm just not smart enough to be able to work out these details any more. The user clicks a button, you call an asynchronous API, then the user navigates to a different view, then the asynchronous call comes back to show its result, but all the UI elements are different now. The textbox where you wanted to show the result is no longer there. So now in your promises code, you have to write all sorts of checks to validate that the old UI actually still exists before displaying anything. But maybe the user clicked on the button twice, so the old UI still exists, but the 2nd asynchronous call returned before the 1st one, so now you need to write some custom sequencing code to make sure things are dispatched in the proper order. It's just a huge unruly mess.<br />
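One common defensive pattern for the out-of-order problem is to tag each request with a generation number and let only the newest one touch the UI. This is a generic sketch, not the API of any particular library:

```javascript
// Hypothetical guard against out-of-order async responses: each call
// bumps a generation counter, and a response is applied only if no
// newer request has started in the meantime.
function makeLatestOnly() {
  let generation = 0;
  return async function runLatest(asyncFn, onResult) {
    const myGeneration = ++generation;
    const result = await asyncFn();
    // The user may have clicked again while we were waiting; stale
    // responses are silently dropped instead of clobbering the UI.
    if (myGeneration === generation) onResult(result);
  };
}
```

This handles the sequencing half of the problem, but not the deeper one described above: it still can't tell whether the textbox the callback wants to update even exists any more.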
<br />
The only practical solution I can find is to suppress the user interface during asynchronous calls, so that the user can't go crazy on the user interface while you're doing your work. This is a little dangerous because if you make a mistake, you might accidentally forget to unsuppress the user interface during some strange corner cases, but dealing with these corner cases is a lot easier than dealing with the corner case of the user generating random UI events while you're waiting on an asynchronous call. There was one proposal to add an "inert" attribute to html to disable all events, but that was <a href="https://www.w3.org/Bugs/Public/show_bug.cgi?id=24983">eventually killed</a>. Right now, the only hope for UI coders is to misuse the <a href="https://developer.mozilla.org/en/docs/Web/HTML/Element/dialog">&lt;dialog&gt; tag</a>, but very few browsers support it currently.<br />
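The forget-to-unsuppress danger is easier to contain if the suppress/unsuppress pair lives in one wrapper. A generic sketch (the `ui` object and its methods are invented; in a real page, `suppress` might show a modal blocker or set `inert`):

```javascript
// Hypothetical helper: suppress the UI for the duration of an async
// call, and guarantee it is unsuppressed again even if the call throws.
async function withUiSuppressed(ui, asyncFn) {
  ui.suppress(); // e.g. show a modal blocker over the page
  try {
    return await asyncFn();
  } finally {
    ui.unsuppress(); // runs even in the error corner cases
  }
}
```

Because the `finally` clause runs on both success and failure, the corner case of leaving the interface permanently dead is handled in exactly one place.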
<br />
The annoying thing is that these things are just sad hacks that make programming more and more convoluted. Despite all the pride that the JavaScript designers have in their clever asynchronous promises API, that too is just a hack to paper over previous questionable decisions. The root cause of all these issues is the arbitrary decision that was made many years ago that there would be no multithreading in JavaScript. As a result, the only way to run something in parallel is to use the shared-nothing Web Worker system to run things in, essentially, separate processes. Although the language designers proudly proclaimed that there would be no concurrency errors because the system didn't allow shared objects or concurrency mechanisms, this system ended up being so limited that no one really used it. There were no concurrency errors in JavaScript programs because no one used any concurrency. (Language designers are now trying to "fix" Web Workers by creating a convoluted API that adds back in <a href="https://tc39.github.io/ecmascript_sharedmem/shmem.html">shared memory and concurrency primitives</a>, but only for JavaScript code that is translated from C++.) Once JavaScript multithreading was killed, a certain old dinosaur of a browser company (no, not Microsoft, I meant <a href="https://commons.wikimedia.org/wiki/File:Mozilla_dinosaur_head_logo.png">dinosaur literally</a>) discovered that their single-threaded browser kept hanging. Although every other browser maker moved to multi-process architectures that ensured that browser remained responsive regardless of the behavior of individual web pages, this single-threaded browser would become unresponsive if any tab made a long-running synchronous call. Somehow, the solution to this problem was to remove all synchronous APIs from JavaScript. And now we can't write correct UI code in JavaScript any more.<br />
<br />
JavaScript is getting to be a big mess again. The fact that it's no longer possible to write correct user interface code any more is a clear signal that something has gone wrong. The big browser vendors need to call in some legendary language gurus to rethink the language and redirect it down a more sane path. Perhaps they need to call in some academics to do some original research work on possible better concurrency models. This has actually happened in the past, when Guy Steele was brought in for the original JavaScript standardization or when Douglas Crockford killed ES4. It looks like something like that is needed again.Minghttp://www.blogger.com/profile/01458103015154082202noreply@blogger.com0tag:blogger.com,1999:blog-9350640.post-12845284916947486012016-01-10T00:27:00.001-05:002016-01-10T17:16:56.377-05:00Java Metaprogramming Is Widespread But Slowly DyingMetaprogramming is one of those cool academic topics that people always talk about but never seem all that practical or relevant to real-life programming. Sure, the idea of being able to reprogram your programming language sounds really cool, but how often do you need to do it? Is it really that useful to be able to change the behavior of your programming language? How often does a programmer need to do something like that? Shouldn't you be able to do everything in the programming language itself? It seems a lot like programming in Haskell--technically cool, but totally impractical.<br />
<br />
I've recently started realizing that metaprogramming features in programming languages aren't important for technical reasons. Metaprogramming is important for social reasons. Metaprogramming is useful because it can extend the life of a programming language. Even if language designers stop maintaining a programming language and stop updating it with the new features, metaprogramming can allow other programmers to evolve it instead. Basically, metaprogramming wrestles some of the control of a programming language away from its main language stewards to outside programmers.<br />
<br />
One of the best examples of this is Java. Traditionally, Java isn't really considered to have good metaprogramming facilities. It has some pretty powerful components though.<br />
<ul>
<li>It has a reflection API for querying objects at runtime. </li>
<li>It has a nice java.lang.reflect.Proxy class for creating new objects at runtime. </li>
<li>By abusing the classloading system, you can inspect the code of classes and create new classes. </li>
<li>The JVM instruction set is well-documented and fairly static, making it feasible for programs to generate new methods with new behavior. </li>
</ul>
The main missing pieces are<br />
<ul>
<li>The instruction set is so big and complicated that it's cumbersome to analyze code or to generate new methods</li>
<li>You can't really override any of the JVM's behaviors or object behaviors</li>
<li>You can't really inspect or manipulate the running code of live objects</li>
</ul>
<div>
The crowning piece of the Java metaprogramming system though is annotations. To be honest, most of the real metaprogramming stuff is too complicated to figure out. Annotations, though, are simple. It's just a small bit of user-specified metadata that can be added to objects and methods. Its simplicity is what makes it so powerful. It's so simple to understand that many programmers have used annotations to trigger all sorts of new behaviors in Java. Annotations have been used and abused so much that their use is now widespread throughout the Java ecosystem. This type of metaprogramming is probably the most used metaprogramming facility in programming languages right now. </div>
<div>
<br /></div>
<div>
I believe that metaprogramming through annotations has allowed Java to evolve and to add new features despite long periods of inactivity from its stewards. For example, during the 10 years between Java 5 and Java 8, there weren't any major new language features to the Java language. While Java was stagnating during that period, other languages like C# or Scala were evolving by leaps and bounds. Despite this, Java was still considered competitive with others in terms of productivity. One of the reasons for this is that Java's metaprogramming facilities allowed library developers to add new features to Java without having to wait for Java's stewards. Java gained many powerful new software engineering capabilities during those 10 years that put it on the leading edge of many new software practices at the time. Metaprogramming was used to add database integration, query support, better testing, mocking, output templates, and dependency injection, among others, to Java. Metaprogramming saved Java. It allowed Java to be used in ways that its original language designers didn't anticipate. It allowed Java to evolve and stay relevant when its language stewards didn't have the resources to push it forward.</div>
<div>
<br /></div>
<div>
What I find worrisome, though, is that the latest language developments in Java are weakening its metaprogramming facilities. Java 8 weakened metaprogramming by not providing any reflection capabilities for lambdas. Lambdas are completely opaque to programs. They cannot be inspected or modified at runtime. From a functional/object-oriented cleanliness perspective, this is "correct." If an object/function exports the right interface, it shouldn't matter what's inside of it. But from a metaprogramming perspective, this causes problems because any metaprogramming code will be blind to entire sections of the runtime. Java 9 will further weaken metaprogramming by imposing extra visibility restrictions on modules. Unlike previous versions of Java, these visibility restrictions cannot be overridden at runtime by code with elevated security privileges. From a cleanliness perspective, this is "correct." For modules to work and be clean, normal code should never be able to override visibility restrictions. The problem is that the lack of exceptions hampers metaprogramming. Metaprogramming code cannot inspect or alter the behavior of huge chunks of code because it is prevented from seeing what's happening in other modules. </div>
<div>
<br /></div>
<div>
Although it's great to see the Java language finally start improving again, the gradual loss of metaprogramming facilities might actually cause a long-term weakness in the language. As I mentioned earlier, I think the benefits of metaprogramming are social, not technical. It's a pressure valve that allows the broader programming community to add new behaviors to Java to suit their needs when the main language stewards are unable or unwilling to do so. With the language evolving relatively quickly at the moment, it's hard to see the benefits of metaprogramming. The loss of metaprogramming features will be felt in the future when outside developers can't extend the language with experimental new features and, as a result, the language fails to embrace new trends. The loss will be felt if there's ever another period of stagnation or conflict about the future direction of the language, and outside developers can't use metaprogramming to independently evolve the language. Hopefully, this gradual loss of metaprogramming support in Java is just a temporary problem and will not prove detrimental to the long-term health of the language.</div>
Minghttp://www.blogger.com/profile/01458103015154082202noreply@blogger.com0tag:blogger.com,1999:blog-9350640.post-49775047609615236482015-12-09T01:29:00.001-05:002015-12-09T01:30:11.625-05:00Transcoding Some VideosOne of my websites has some videos on it, and I usually just embed some YouTube videos there. You don't have to pay for hosting it, YouTube takes care of encoding the videos so that it can be used on multiple devices, and you can potentially get some views from people searching for stuff on YouTube. But recently, I've started to get concerned about embedding third-party widgets like that. It's a little unclear how compliant my website can be with its privacy and cookie policy if these third party widgets can change their own cookie and privacy policies at will.<br />
<br />
So I looked into what's involved in hosting the videos myself. It turns out the hit wouldn't be too bad. Since the videos were slideshows, they actually compress really well. I played with different ffmpeg settings, and I found that I could drop from the 50MB files that my video program produced to 5MB files by using two passes, variable bit rate, and a large maximum duration between key frames.<br />
<br />
Now the second problem. There are two main video formats on the web: webm and mp4. Apple owns patents on mp4, and they purposely refuse to support any video formats except mp4 on their devices so that anyone who wants to provide video content to Apple users must pay for Apple's patents. I couldn't just use ffmpeg to transcode my videos to mp4 format because it doesn't come with a proper patent license (licenses are needed to decode and encode the h.264 video and AAC audio). I tried scouring around the Internet for a properly licensed version of ffmpeg that I could buy, but I had no luck in this. I could have just purchased a whole new video program with its codec packs, but it's hard to tell whether the codecs that come with a video program would expose the tuning parameters I needed to get the small sizes I wanted.<br />
<br />
In the end, I went with a cloud transcoder since they presumably purchase a patent license for their services. It turns out most of the cloud transcoding services have gone bankrupt, so there are only a few big ones left like Amazon Elastic Transcoder, Zencoder, and Telestream cloud. Initially, I was leaning towards Zencoder because they were pretty upfront about the fact that they let you set all the ffmpeg parameters yourself, and they said they support 2-pass encoding. But the system seemed sort of messy--you needed to copy your files into s3 and give them read rights to it. At that point, it seemed easier just to go with Amazon since I already had an account with them. At first, I couldn't get the Amazon stuff to start, but apparently, the Amazon Transcoding console won't start until you upload a video file to s3 first, which is a little bizarre, but whatever. In the end, Amazon actually exposed the parameters I needed to get my slideshow to compress well, and the final file sizes seemed to be competitive with what I was getting from 2-pass encoding with ffmpeg myself, so I suspect that Amazon must be enabling 2-pass encoding without saying so in their documentation. The web interface requires you to manually enter the settings for every single file you want to transcode, which is a pain, but I only had 15 videos or so, so it wasn't too bad. It's possible to script the transcoding using their APIs, but I was too lazy to do that.<br />
<br />
Actually serving mp4 videos on a website also requires a patent license (separate from the patent license for encoders and decoders). Fortunately, I was offering free educational Internet videos, and those are exempt from royalties.<br />
<br />
It's sort of annoying that the technical aspects of putting a video up on my website only took me an hour or two to figure out, but the process of trying to figure out how to do so legally ended up taking two days. I really hate how Apple is doing everything possible to sabotage the web and extract maximum profit from it--they patent important parts of the HTML specification, they refuse to support formats that can be used without patents, they refuse to support new standards in their browsers if it makes them competitive with apps--it's just ridiculous sometimes.Minghttp://www.blogger.com/profile/01458103015154082202noreply@blogger.com0tag:blogger.com,1999:blog-9350640.post-41646167457587634752015-11-24T21:29:00.000-05:002015-11-24T21:29:40.133-05:00How John Tory Can Get Out of Building SmartTrack While Still Building SmartTrackWhen Mayor John Tory was campaigning for his position, a key part of his platform was that he would solve Toronto's transportation issues by building a transit system called SmartTrack. Unfortunately, SmartTrack never made much sense as a transit plan: it's expensive for the limited transit benefits it provides; it's unlikely to deliver any of its promised benefits; and it's not actually within the mayor's powers to build it. In fact, the majority of Toronto voters voted for candidates who wanted to build an alternate transit plan, the Downtown Relief Line, instead. As a major campaign promise, though, he has to deliver something, yet it doesn't make sense for Toronto to waste money building SmartTrack. A wily politician would be able to get out of that promise without wasting all that money. But how can John Tory "build" SmartTrack without actually building it? Or, alternately, how can John Tory get out of spending all the money and political capital needed to implement his SmartTrack plan while still being able to face voters at the next election and claim that he's building it?<br />
<br />
This blog post will look at what SmartTrack is, why it won't work, and how John Tory can get out of building it.<br />
<h3>
Why People Want SmartTrack</h3>
On its surface, the SmartTrack proposal sounds pretty promising. SmartTrack is supposedly able to provide a fast, frequent, high-capacity train service that covers most of the city and links together several major employment centers in the GTA such as downtown, business parks near the airport, and business parks in Markham. By taking advantage of existing railway tracks and unused lands throughout the city, the system can supposedly be built quickly and affordably. Who wouldn't want something like that? If you can build a useful transit service for not a lot of money, why wouldn't you do that?<br />
<br />
The full system is 53km long and comprises 22 stations. It runs along "unused land" from the Airport Corporate Centre eastwards to the Kitchener GO train line. From there, it follows the same path to Union Station in downtown. From Union Station, SmartTrack would extend north-east along the same path as the Stouffville GO train line up to the in-development Markham downtown. The whole plan would supposedly only cost $8 billion and be built in under seven years.<br />
<h3>
Why SmartTrack Doesn't Work</h3>
When SmartTrack was proposed, many people were confused because transit planners had never proposed building such a system before. Since the proposal was new, no one had actually studied whether it would be possible to actually build it, so no one could intelligently argue against it. On the surface, it seems like it could be feasible. Don't we already have train tracks running through Toronto? Surely, we could just build a bunch of stations and run a service on them?<br />
<br />
In reality though, the reason that no transit planner had ever proposed such a system before was that it wasn't that useful and it was much more difficult to build than suggested. No one had done a formal study of the issue, so no one could authoritatively criticize the project. But just by looking at maps, looking at ridership numbers of existing services, and by listening to statements that transit planners have made in the past about the capacity of the existing train track, it was pretty clear that SmartTrack would not be an easy system to build and run. I suspect that the transit planners for the provincial government could have easily rebutted the claims made about SmartTrack, but they were told to keep quiet so as to not interfere with the election.<br />
<br />
So why isn't SmartTrack feasible? Well, let's look at its promises:<br />
<br />
<ul>
<li><b>Lots of Stations</b>: One of the claimed benefits of SmartTrack is that there would be lots of stations around the city where people can get on the train. The problem with having lots of stations is that when a train is stopped at a station, no other train can pass by. On a subway or LRT, this isn't a problem, but SmartTrack runs along other people's train tracks, and those owners won't be happy if their trains have to stop every few hundred metres while the SmartTrack train pulls into station after station. SmartTrack probably can't get approval for building so many stations unless they also build a lot of extra track so that SmartTrack trains don't interfere with existing trains using the corridor.</li>
<li><b>Frequent Service</b>: SmartTrack supposedly will offer "frequent" service. When people think of frequent service, they usually think of a subway-like service that comes every 5 minutes. In reality, SmartTrack would at best be able to offer service every 15 minutes, and service would most likely only run at 30-minute intervals. If you have a choice between waiting 30 minutes for a train or just taking the local bus that comes every 5 minutes, most people would rather take the bus. The reason that SmartTrack is so infrequent is that the train tracks have limited capacity. Just because a train track exists doesn't mean you can just run an infinite number of trains on it. Trains take a while to speed up and slow down, so you need to carefully manage the trains to prevent them from colliding with each other. Although it's possible to increase the capacity of the system through electrification, improved signalling, better train management, and building more track, these aren't straightforward changes to make. The other problem with frequent service is that running a frequent service is expensive, and there likely isn't enough demand to justify running that many trains. This is discussed in more detail later on.</li>
<li><b>Fast</b>: SmartTrack is supposedly faster than other transit alternatives because it runs on its own train track and doesn't have to worry about traffic lights or car traffic. Although that is true, the SmartTrack train has a lot of train stations. Because trains have steel wheels, they don't have much traction, so they are slow to speed up and slow down. The more stations there are, the more time it has to spend slowing down at each stop, waiting for passengers, and then speeding up again. With so many stations, SmartTrack will likely be a lot slower than promised. It will almost definitely be slower than driving.</li>
<li><b>Cheap to Build</b>: Because SmartTrack runs along an existing rail corridor and other unused land, it will supposedly be cheap to build. If you don't need to build new tunnels or bridges, then it should be pretty cheap to build, right? The SmartTrack plan says that it can be built for only about $8 billion (still a HUGE sum of money). The problem, though, is that the existing rail corridor might not have the capacity to handle all the SmartTrack trains, so to make room for the SmartTrack trains, you would have to build a lot of new tunnels and bridges. The railway corridor on the eastern leg of SmartTrack only has a single track, so expanding it to support SmartTrack will require expropriating land, adding extra track, and building new tunnels and bridges when it crosses roads. The western leg of SmartTrack is already jammed with trains, so new track might need to be built there. The western track extension to the airport is supposed to run on unused land, but that land is now being used by condo projects. The downtown leg of SmartTrack is <a href="http://stevemunro.ca/2011/12/02/union-station-rail-corridor-capacity/">so near to capacity</a> that the province was thinking of diverting trains to an alternate train station or building a giant tunnel in the future. Adding SmartTrack to downtown could exceed the capacity of those lines and force the building of those expensive projects.</li>
<li><b>Useful</b>: Well, SmartTrack might be expensive and might not be as quick or as frequent as promised, but it would still be a nice service to have, right? True, but SmartTrack will be an expensive service to run, and it's not clear how many people will actually use it. GO Transit already runs trains along that route. Although it doesn't have that many stops and doesn't come too frequently, we can look at <a href="http://www.metrolinx.com/en/docs/pdf/board_agenda/20140905/20140905_BoardMtg_Regional_Express_Rail_EN.pdf">its ridership numbers</a> to give us an idea of how much demand there might be for train service along that route. Those trains carry the highest demand part of the line: passengers commuting to downtown during peak periods. The reality is that there isn't that much demand. Although traffic in the northwest and northeast parts of the city isn't great, it still usually makes more sense to drive there than to take the train. Those parts of the city were specifically designed for driving and have a decent road system. Is it worthwhile spending hundreds of millions of dollars a year to run a service that won't be used by that many people? </li>
<li><b>Quick to Build</b>: One of the strangest parts of SmartTrack is that John Tory was promising to build it at all. It's strange because it's not within his power to build it. SmartTrack will run along a rail corridor that belongs to the province and some private companies. It's a variation of an existing train service that is owned and run by the province. The bulk of the financing is supposed to come from the federal and provincial governments. It's not clear what the city would contribute and how it would be within its power to build and run such a service. It would be as if John Tory made a campaign promise that Air Canada would run more frequent flights between Toronto and Windsor, using funding from the federal government. Although more frequent flights would be nice for the city, the mayor has no influence over Air Canada or the federal government. How could he make a promise on behalf of someone else?</li>
</ul>
<div>
The main problem with SmartTrack though is that it has been made completely redundant by the province's plans for a Regional Express Rail train running along mostly the same route. The Regional Express Rail train provides many of the same benefits of SmartTrack but is much cheaper (in fact, the province does not expect any financial contributions from the city beyond, perhaps, moving some utilities).</div>
<br />
<h3>
How to Get Out of Building SmartTrack</h3>
Given all the problems with the SmartTrack proposal, how can the mayor get out of building it? With the right messaging, it shouldn't be too hard. The province has seen that the mayor has painted himself into a corner and has provided him with ways to make a face-saving exit. But the mayor seems oblivious to this and is mishandling his communication in such a way that he can't change direction.<br />
<br />
The province has seen the mayor's problems, and they have started building their own transit system that provides 70-80% of the benefits of SmartTrack at absolutely no cost to the city of Toronto. The Ontario government's Regional Express Rail project was originally supposed to only serve the west of the city. It provides fast service to a smaller number of stations along the same route as SmartTrack. In light of SmartTrack, the provincial government has decided to extend it to the east of the city along the same route as SmartTrack (even though existing ridership numbers didn't justify the building of such an extension) and they decided to add several new stations. They're also strongly considering the electrification of the tracks even though earlier studies suggested that it would be more beneficial to electrify a different set of tracks. Regional Express Rail makes SmartTrack a redundant transit service. Why spend money building SmartTrack if it duplicates an existing transit service? If John Tory were to do absolutely nothing, then he would get most of the benefits of his system without having to spend any money or political capital. The province will build it for him. But if he does nothing, he looks like he's abandoning his campaign promise. How can John Tory back off from building SmartTrack without looking like he's abandoning his campaign promise?<br />
<br />
It's all about messaging. Instead of focusing on SmartTrack as a specific plan involving 53km of track and 22 stations, John Tory can redefine SmartTrack so that it can include the Regional Express Rail. He can define it as a plan to leverage Toronto's existing rail corridors to help move Torontonians through the city. Instead of specifically requiring a heavy rail link to the airport business centres, he can just say something like, "we need to find a way to connect downtown with other important employment centers throughout the city, including Markham and the airport area." Instead of requiring there to be 22 stations, he can just say that the existing rail corridors don't serve Torontonians well because there aren't enough stations. Instead of SmartTrack being a specific transit plan, he can describe SmartTrack as being "smart" about taking advantage of Toronto's existing infrastructure to quickly build new infrastructure connecting as much of the city as possible. In particular, instead of getting the city's planners to study the building of the specific SmartTrack plan (a mistake he already made, unfortunately), he should have told the city's planners to come up with a plan that would leverage Toronto's existing rail corridors to better connect downtown with other employment centres throughout the city. He should define SmartTrack in terms of the outcomes and the benefits it will provide instead of its implementation. He should focus on the ends, not the means. That way, any transit plan that delivers the same benefits can be labelled as "SmartTrack." By defining SmartTrack more generally, it gives him more leeway to alter the plan to accommodate the realities on the ground.<br />
<br />
It also allows him to build something cheaper while still "building SmartTrack." For example, building a light rail to the airport is much cheaper and more appropriate than an underground heavy rail line. By defining SmartTrack in terms of "connecting other employment centers to downtown," he could credibly build a light rail line while still claiming it to be part of SmartTrack. By defining SmartTrack as "leveraging the existing train tracks that cross the city to provide better transit service for Torontonians," he could get the TTC to pay the province to build more Regional Express Rail stations in Toronto and to let TTC riders ride it while paying a regular TTC fare. The outcome is the same, and he can apply his political pressure to ensure that the final Regional Express Rail system is good for Torontonians, but the actual financial and political cost is much less. And at the end of the seven years, he can still take credit for "building SmartTrack" even though the final system might not be exactly what he promised on the campaign.<br />
<br />
The original SmartTrack plan promised by John Tory has been made redundant by other transit systems being built by the province. He needs a way to back out of those plans without looking like he's abandoning his promise. He can do that by redefining SmartTrack in terms of its outcomes instead of its implementation. By describing SmartTrack in terms of how it will help Torontonians move through the city instead of as a specific set of stations and train lines, he gains the flexibility needed to adapt the plan to the changing circumstances.Minghttp://www.blogger.com/profile/01458103015154082202noreply@blogger.com0tag:blogger.com,1999:blog-9350640.post-62380806706895952042015-07-03T16:13:00.002-04:002015-07-03T21:37:16.905-04:00UIs and Layout Managers Using HTML and CSSFor the past few years, I've decided to stop learning new UI frameworks and to make all my user interfaces using HTML5. Making user interfaces is HARD. The idea that I should constantly throw away my old UI code and rewrite things from scratch every few years using new, half-baked UI frameworks is preposterous. It takes ages to learn the ins and outs of a UI framework and figure out how to get the behaviour "just right." Why would I want to discard code that works perfectly well and which I spent ages fine-tuning and replace it with new code based on a new, buggy UI framework? Since I do all my coding in Java, I went and ported GWT Elemental to JavaFX so that I could use HTML5 in my UIs from my Java code. I can now take my same UI code and reuse it on websites, on desktop applications, and for mobile UIs.<br />
<br />
In the past, I've found using HTML for user interfaces to be problematic. HTML has traditionally used a word processor layout model. There's a central flow of text, and you can position pictures and other elements to the sides of the text. HTML really wants you to lay things out this way. If you try to do something different, you end up really fighting against the layout model and causing yourself grief. The standard components of a desktop UI -- widgets and toolbars that dock on the sides and status bars on the bottom -- really don't fit in well with HTML layouts.<br />
<br />
HTML also often works at the wrong abstraction level for making good UIs.<br />
<ul>
<li>the UI engine has to be on guard against exploits by non-trusted code, so you can't easily capture the mouse or manage the clipboard or talk to other applications, etc</li>
<li>you can't really do pixel fiddling. Sometimes, you just want to get in there and just tweak the pixels to get the perfect look, but since HTML is a retained mode UI, you can't easily do that. In the end, that has worked out ok because it made adding support for high dpi screens fairly painless. But you can't do things like make rounded buttons that have pixel-perfect shading without lots and lots of hoops to jump through. You can't take existing widgets and buttons and tweak the look a little bit by fiddling with the pixels. If you want to fiddle pixels on a button, you have to write all the logic for the button yourself (some of the accessibility stuff can get hairy!). You can't take an existing button and just fiddle with how it gets painted.</li>
<li>HTML has poor support for text input and internationalization. Even after many years of studying it, it's still unclear to me how to do rich-text internationalized input in a web browser. Maybe it's easy, maybe it's not. There's just not much talk about it.</li>
<li>HTML doesn't really have a concept of widgets. This is coming in the form of web components, shadow DOM and templates, but these things are very much a work in progress and it isn't clear when they'll be available for widespread use. In the meantime, HTML doesn't really support the idea of having self-contained UI components. If you make a custom UI widget, the "guts" of your widget are exposed in the HTML. Other components might accidentally move things around inside your widget or restyle its elements with their CSS because there is no way to modularize your own markup to prevent accidental tampering by other widgets.</li>
<li>the event model doesn't have easy hooks for doing common UI stuff like keyboard shortcuts, context menus, menu bars, enabling/disabling widgets, modal dialog boxes, file choosers, etc. Handling these things requires awkward flows of events, so regular UI frameworks like win32 or Swing have special hooks that allow you to tap into this event flow without having to build your own convoluted event handling framework.</li>
<li>it's hard to lay things out at their "natural size." If you have a short form that you want people to fill in, it can get a little tricky to set its width and height to the minimum size needed to hold the form. Often you simply need to guess at an appropriate size.</li>
<li>since HTML is designed for making web pages, it does a poor job exposing platform dependent behaviour to the application. What's the default font on the system? What's the default keyboard button used for keyboard shortcuts? What keyboard events is it safe to intercept without destroying accessibility of the platform? What's the default language?</li>
<li>you have to code the common UI widgets yourself because HTML doesn't come with any. Things like toolbars, menu bars, context menus, spin buttons, and scrollbars are all things you have to do yourself.</li>
</ul>
<div>
Despite all these major deficiencies in using HTML for traditional UIs (and I'm sure there's many more too), HTML does have many advantages over other UI frameworks.</div>
<div>
<ul>
<li>there are many more developers working on improving HTML5 than there are developers working on other UI frameworks, so it advances quickly</li>
<li>it's well-supported on new hardware and is easily cross-platform</li>
<li>it embraces certain features much earlier than other UIs (e.g. touch support and high dpi)</li>
<li>it has easy support for printing</li>
<li>it ages well, so old HTML code generally still works even on modern systems</li>
</ul>
<div>
So given that we want to use HTML for a traditional UI, how do we go about doing it? In the last few years, the layout options available using HTML and CSS have improved dramatically with endless new features that cater to people designing UIs as opposed to word processor print layouts. With all of these features though, it has taken me a while to figure out how to use those options to make a traditional looking UI. Here are some of the tricks that I've used.</div>
</div>
<div>
<br />
The first thing is to make sure to zero out the margin, padding, and borders of all your html, body, div, and span elements. In the past, it was also necessary to set the height and width of the html and body elements to 100%, but I don't think that's necessary any more. I also don't think it's necessary to add "position: absolute;" or "position: relative" on the html and body elements any more. This is all necessary so that you can accurately stick things in the corners and sides of the page using absolute positioning. In a word processor layout, you want to have a margin on the sides, but in a proper UI, you want to have toolbars and menus there.<br />
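As a rough sketch, the kind of reset described above looks something like this (the selector list just covers the elements mentioned; a real project might reset more):

```css
/* Zero out the word-processor-style defaults so that absolutely
   positioned elements land exactly in the corners and along the edges */
html, body, div, span {
  margin: 0;
  padding: 0;
  border: 0;
}
```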
<br />
In the past, it was important to avoid using pixels for positioning because people with poor eyesight would increase the font size to make things easier to read. Most designers couldn't handle this, so the modern approach is to use pixel positioning, but let users with poor eyesight adjust what the size of a pixel is. I'm a traditionalist though, and I still try to use layouts based on font size where possible while resorting to pixel sizes when I actually need containers that hold images with a known pixel size. HTML5 now has <a href="https://developer.mozilla.org/en/docs/Web/CSS/length">new measurement units</a> that make laying out resizable things easier.<br />
<br />
The "rem" unit is based on the font size of the root html element. You can lay things out based on how many characters should fit in a certain area. Unlike the old "em" unit, which is based on the font size of the current element, you don't have to worry that you might be nested inside another element that changed the size of the font or something.<br />
<br />
Similar to the "rem" unit, HTML5 also has the new measurement units "vw" and "vh", which express things in terms of percentage of the viewport width and height (i.e. width and height of the browser window). If you want your UI to resize when you resize the browser, then you need to express things in terms of percentages. Unfortunately, the old "%" unit was always a little confusing because it sized things in terms of percentage of the parent (and sometimes, it was not of the parent but of the first relatively or absolutely positioned parent). Often, you need to position div elements inside other div elements to get the right layouts, but you still want things to resize globally, so using "vw" and "vh" units lets you do that. There's even a "vmin" unit that's useful for making elements that have a certain ratio of height to width but that still resize when you resize the browser. I suspect that "vw" and "vh" units might have similar problems to "%" when used for widths. Sometimes, when you set two things with a width of "50%" beside each other, the actual size in pixels might be something like 500.5, and the browser might round those values up or down, meaning the final width might leave an extra pixel somewhere or it might overflow the width of the browser. I think modern browsers actually use floating point numbers for sizes and are a bit more generous about half pixels at the edge of the screen because I haven't had an issue with things like that in a while. There's also a "vmax" unit. That unit might be useful for scaling images when used in combination with max-width and max-height. I haven't had an occasion to use it yet though.<br />
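For example, a few rules mixing these units might look like this (the class names are made up for illustration):

```css
/* A sidebar sized in characters, filling the full window height */
.sidebar {
  width: 20rem;    /* scales with the root font size */
  height: 100vh;   /* always the full height of the browser window */
}
/* A main area sized relative to the window, regardless of nesting */
.main {
  width: 75vw;     /* 75% of the window width, even inside nested divs */
}
/* A square element that resizes with the window but keeps its ratio */
.thumbnail {
  width: 20vmin;
  height: 20vmin;
}
```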
<br />
Using physical units like inches and picas is still ill-advised, I think. In the past, there was an issue where some browser makers would actually use real units there. So if you said you wanted something to be one inch, but you were using a 60-inch TV, the browser would actually make your element only a few pixels wide because that was what one inch was on a large TV (whereas on a tiny mobile phone, one inch might be half the screen). I think most browsers just set one inch to be 96 "pixels" now, but if that's the case, you might as well just use pixels directly for sizing things.<br />
<br />
Once you have your units figured out, you need a way to stick things in different places in the window in order to make a traditional UI. To do that, you can use "position: fixed" or "position: absolute".<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEikvODl38rz4pQkLVO6Xy1P0HnORm5Tfs9nJYLQF3eEkg-XEn6hdo6Me-v7DJGTDFoHtPz8pfQ4-BmjC0W9AOqukMchDsiEMaibXWu9C_g90bSC4oWO5YwBEpcBuvKtrLpAYXLTuA/s1600/fixedlayout.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="306" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEikvODl38rz4pQkLVO6Xy1P0HnORm5Tfs9nJYLQF3eEkg-XEn6hdo6Me-v7DJGTDFoHtPz8pfQ4-BmjC0W9AOqukMchDsiEMaibXWu9C_g90bSC4oWO5YwBEpcBuvKtrLpAYXLTuA/s320/fixedlayout.png" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<i>fixed positioning</i></div>
<br />
In the past, Apple sabotaged fixed positioning because the iPhone wouldn't follow it, but modern iPhones do behave properly now. I just find absolute positioning to be more flexible and easier to use though. If your UI has a central resizable area, but some fixed-size things on the side, then you could possibly use fixed positioning. The scrollbars for the whole web page will only control the central area, but that might be what you want. When using a keyboard to control a UI, this is useful because using the cursor keys to move around will always scroll the central area even if the keyboard focus is on one of the side panels. With absolute positioning, for example, your keyboard focus might be on a side panel when you first create your UI, so when the user tries using the cursor keys to scroll things, the central area won't scroll, contrary to their expectations. But this is fixable, so I'm not sure it's worth using "position: fixed" just for that. People might get confused by having the main scrollbar only control the central area too.<br />
<br />
Funnily enough, in the past, Apple also sabotaged absolute positioning on the iPhone too. Sometimes, you wanted side panels in your UI that scroll, and the iPhone wouldn't show scrollbars on them, so people wouldn't realize that they scroll, and the iPhone gesture needed to actually scroll them was really confusing (some sort of two finger thing). That's fixed now though.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjQjTqOGafArbnhH80f55VmD5sOYBvvKKxklu1WhLIXzr-v8r61xWjJ9PGxw0vXHBYrcOZe6K_zzbLMpv-yXuCL7OtKHAQVeqXcEYopzZaHPfewt4XpHJz7rwiqo5l4V12jjtaXag/s1600/floatingpanel.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="175" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjQjTqOGafArbnhH80f55VmD5sOYBvvKKxklu1WhLIXzr-v8r61xWjJ9PGxw0vXHBYrcOZe6K_zzbLMpv-yXuCL7OtKHAQVeqXcEYopzZaHPfewt4XpHJz7rwiqo5l4V12jjtaXag/s320/floatingpanel.png" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<i>floating panel</i></div>
<br />
Absolute positioning is obviously useful for free-floating toolboxes and windows, but you can also use it in UIs to provide the functionality of a BorderLayout layout manager. You can easily create one expandable center area, with fixed-size components above, below, to the left, and to the right of it. Unlike fixed positioning, elements with absolute positioning can be nested, so any element or UI component can, in turn, use absolute positioning to lay out its internal elements using this border-style layout. HTML's absolute positioning is also limited because you have to specify sizes for the elements that you place on the sides. You can't let those components be laid out "naturally" and let the UI automatically figure out a natural width and height for them. You must explicitly give them a size.<br />
<br />
To use absolute positioning properly in this way, you need to watch for some things:<br />
<ul>
<li>By default, the width and height of an element given in CSS specifies the size of the content only and does not include the border and padding. This makes it hard to get boxes to line up properly beside each other because the size of an element is often given in different measurement units from the size of its border and padding. Typically, you would specify the width of an element in vw, its padding in rem, and its border in px. In the past, you would need to get around this problem by using nested div elements, but CSS now offers two better ways to deal with this problem. One is the calc() function in CSS that lets you calculate a measurement that mixes different measurement units. This is still a bit of a pain to use though, so the easier approach is to use the CSS "box-sizing: border-box" property to explicitly state that measurements should include the border and padding.</li>
<li>You have to make sure that you've positioned everything perfectly to fit inside the browser window, or you'll end up with scrollbars on the browser, which will throw the whole layout off. Sometimes, it's useful to sprinkle "overflow: auto;" and "overflow: hidden" on various elements to make sure content doesn't accidentally spill over and become larger than the browser window, triggering the appearance of scrollbars.</li>
<li>If you've worked with other UI frameworks or even drawing frameworks like the HTML5 Canvas, you get into a habit of specifying the sizes and positioning of things using left, top, width, and height. With absolute positioning, this can get you into trouble because you have to mix different measurement units, so things can get confusing really quickly. To get the most use out of your absolute positioning, you have to remember that CSS lets you specify the sizes of things in terms of right too (i.e. distance from the right side). So a sidebar on the left can be positioned using "left: 0; width: 20rem;", a sidebar on the right can be positioned using "right: 0; width: 30vw;" and the central area that expands as the window is resized can be positioned using "left: 20rem; right: 30vw;". Notice how a width isn't even specified for the central area. Its size is specified by simply giving the positions of its left and right sides, and different measurement units are used for the two sides too.</li>
</ul>
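For example, the two approaches for a padded, bordered sidebar might look something like this (the class names are made up for illustration):

```css
/* Option 1: subtract the padding and border from the width with calc() */
.sidebar-calc {
  width: calc(30vw - 2rem - 4px);
  padding: 1rem;
  border: 2px solid #333;
}

/* Option 2: tell the browser the width already includes padding and border */
.sidebar-border-box {
  box-sizing: border-box;
  width: 30vw;
  padding: 1rem;
  border: 2px solid #333;
}
```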
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjlqYj2ysvzV_4mQEiZsR6ST45DJR2dlWogV1rzDlmDcEmsAyUJ8H4IyhjR6om8KI_WGFjYlncNCEXOa41-an5v1JgTzAlIMfO0SICdQqHBnL2fke3amLkhPwAbwat3kl2KfmFfkA/s1600/absoluteborderlayout.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="307" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjlqYj2ysvzV_4mQEiZsR6ST45DJR2dlWogV1rzDlmDcEmsAyUJ8H4IyhjR6om8KI_WGFjYlncNCEXOa41-an5v1JgTzAlIMfO0SICdQqHBnL2fke3amLkhPwAbwat3kl2KfmFfkA/s400/absoluteborderlayout.png" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<i>using absolute positioning for a border layout</i></div>
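The border layout in the figure can be sketched in CSS roughly as follows (class names are made up; the widths are the example values from the list above):

```css
/* all three panels absolutely positioned inside a full-window container */
.container     { position: relative; width: 100vw; height: 100vh; }
.left-sidebar  { position: absolute; top: 0; bottom: 0; left: 0;  width: 20rem; }
.right-sidebar { position: absolute; top: 0; bottom: 0; right: 0; width: 30vw; }
/* no width at all: the centre panel is defined by its two edges */
.center        { position: absolute; top: 0; bottom: 0; left: 20rem; right: 30vw; }
```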
<br />
Recent web browsers also support new layout tools such as <a href="https://css-tricks.com/snippets/css/a-guide-to-flexbox/">flexbox</a>. Flexbox is nice because it lets you do things like vertical centering, aligning elements, mixing different measurement units, and making some limited use of the natural sizes of elements when doing layout. Unfortunately, there are a lot of knobs that you need to adjust to get a flexbox to work, and those knobs have confusing names, so I always forget what they are and have to spend a lot of time looking things up every time I want to use a flexbox.<br />
<br />
I sometimes end up using flexbox layouts for really mundane things that should be easy in CSS, but that I always forget how to do, like making a line of boxes or a line of images. I always forget to set the vertical-align property on those boxes, so they end up being positioned inconsistently depending on what their contents are. Flexbox uses its own alignment rules, so you can avoid that whole mess.<br />
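For instance, the line-of-boxes case might look like this (the class name is made up; the property names come from the flexbox spec):

```css
.row {
  display: flex;
  flex-direction: row;      /* lay children out horizontally */
  align-items: center;      /* vertical alignment, replacing vertical-align */
  justify-content: center;  /* horizontal centering of the whole line */
}
```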
<br />
In the future though, I'm eagerly awaiting the arrival of grid layouts to CSS. Although you can sort of do the same thing using tables, grid layouts should provide much more layout power than flexbox while reducing the amount of confusing HTML verbiage you need to write. With grid layouts, you can actually align things both horizontally and vertically! And you don't need to put elements in your HTML just to designate how things should be laid out. You just specify the different pieces of content you want in HTML, and then the CSS is used to position them in a grid. There's a bit of a concern that Apple has no desire to add support for grid layouts to Safari, but hopefully, they'll be swayed in time.</div>
Minghttp://www.blogger.com/profile/01458103015154082202noreply@blogger.com1tag:blogger.com,1999:blog-9350640.post-85170294500962133462015-03-24T13:28:00.000-04:002018-03-10T01:03:19.411-05:00LibreOffice vs. OpenOfficeIf you're using Windows, use <a href="https://www.openoffice.org/">OpenOffice</a>. If you're using Linux, use <a href="https://www.libreoffice.org/">LibreOffice</a>.<br />
<br />
OpenOffice was stagnating a while ago, so I switched to LibreOffice. Both systems were a little sloppy back then, but LibreOffice seemed a little nicer. When OpenOffice was revived as Apache OpenOffice, I tried it out, but it seemed to have poor support for file formats and it couldn't import some of my old LibreOffice files (even though they're both supposedly using the same standard ODF file format), so I stayed with LibreOffice.<br />
<br />
But LibreOffice has always been problematic for me on Windows with lots of features not working quite right for many years. I just tried OpenOffice right now, and it just "works." No rendering artifacts. No problems with PDF export. It just works. The download and installation experience isn't quite as polished as with LibreOffice, but it just works.<br />
<br />
I think the LibreOffice developers are mostly Linux developers who work for Linux distributions, so it just works better there and is better integrated with those distributions. OpenOffice is used in commercial Windows products, so the developers make sure it works properly on Windows.<br />
<br />
<b>Update (2018-3-10)</b>: Development on OpenOffice has mostly stopped, so I decided to try the latest LibreOffice 6. <a href="https://bugs.documentfoundation.org/show_bug.cgi?id=37559">PDF export was still broken</a> on Windows. Performance on Windows has now gotten so bad that it was painfully slow to just type up some text on some slides for a presentation. I went back to OpenOffice.Minghttp://www.blogger.com/profile/01458103015154082202noreply@blogger.com0tag:blogger.com,1999:blog-9350640.post-14890689851126510922014-10-09T02:11:00.002-04:002014-10-09T02:11:46.121-04:00Comparing DRL Plans for TorontoIn my previous <a href="http://my2iu.blogspot.ca/2014/09/setting-up-simulations-of-drl.html">blog post</a>, I mentioned that I was setting up some simulations to see if I could learn some insights into some of the proposals for downtown relief lines for Toronto. I've now finished running the simulations.<br />
<br />
<h3>
Baseline</h3>
Here is the baseline map from the previous blog post. It shows what the fastest route to King and Bay is from various points in the city. Yellow dots show that the fastest route involves taking the Yonge subway. Green dots show that the fastest route involves taking the Bloor-Danforth subway and then transferring to the Yonge subway at the Bloor-Yonge subway station. Orange dots show that the fastest route doesn't require the use of the Yonge subway or Bloor-Danforth subway on its busiest sections. A good plan for relieving pressure from the Yonge subway should involve turning as many yellow and green dots into orange dots as possible.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhgtyfB2rGL4RDiSGZpxzuV4j6USQoxBgBUjbYJJnS4TKaXkYJ1tbG1C84vXAVSCUgVa2InMDv-UABjXz4DLT_rH6uNyf1hE2N4zfmcRTIKzu6jfWTjH5Q8IYuyDs2LUaWCXAZE8Q/s1600/before.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhgtyfB2rGL4RDiSGZpxzuV4j6USQoxBgBUjbYJJnS4TKaXkYJ1tbG1C84vXAVSCUgVa2InMDv-UABjXz4DLT_rH6uNyf1hE2N4zfmcRTIKzu6jfWTjH5Q8IYuyDs2LUaWCXAZE8Q/s1600/before.jpg" height="287" width="320" /></a></div>
<br />
<br />
<h3>
Downtown Relief Line</h3>
For the downtown relief line, I modeled the shortest possible line. It runs from Danforth and Pape southwards, then travels west until it hits Wellington and Bay. The simulation assumes six-minute headways. Although the DRL will probably come less often than the Yonge subway, it is expected that people who would normally take the Bloor subway and then transfer onto the Yonge subway will prefer to take the DRL because it will be slightly faster due to it having fewer stops than the Yonge subway. It will also be less crowded. As can be seen from the simulation results, the DRL does seem like it can intercept people who ride the Bloor subway to the financial district and redirect them away from the Yonge subway. Pretty much all the green dots become orange.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi1eVLm_Gw37x1SpLMf-8xwHTFnRJxrhjRbNDElsUXHIaDtIck0hCdRnw0s-NOKN3ag2oIBh5wm1bZ5GxpDP1vki7ymX93T_wjbkHz_tgzxWloBTeYJv4Jzg_gNBSYMRXbGUHjkmg/s1600/drl.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi1eVLm_Gw37x1SpLMf-8xwHTFnRJxrhjRbNDElsUXHIaDtIck0hCdRnw0s-NOKN3ag2oIBh5wm1bZ5GxpDP1vki7ymX93T_wjbkHz_tgzxWloBTeYJv4Jzg_gNBSYMRXbGUHjkmg/s1600/drl.jpg" height="287" width="320" /></a></div>
<br />
<br />
<h3>
Regional Express Rail - Lakeshore</h3>
The provincial government has recently expressed an interest in upgrading its GO train service to a faster and more frequent regional express rail service. It seems that they are looking primarily at upgrading the Lakeshore and Weston lines initially. In theory, the provincial government has been working on this plan for more than a decade already, but they've recently implied that they're going to prioritize this improvement much higher than before. I modeled this improvement by taking the existing schedule of the Lakeshore GO train line and adding new trips so that it would come every 15 minutes instead of every 30 minutes like it currently does. The simulation shows that increased frequency of the Lakeshore line won't draw any riders away from the Yonge subway line. This might be due to a limitation of the simulation. With more frequent Lakeshore GO train service, the TTC might offer more frequent and better bus connections to the Lakeshore line's stations, which might alter the simulation results somewhat.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhiNFeugrU81Jt-bXT49x9PswEQkVD5ap2VaYcYM_BbFHupFmTK4RuJjqTYu81ybMr1efTZG6jsLNSbodvLPvIEAtDcTHe2Q3hjxthVJTUY-DW7hDinNfHhg21dapIDcpEkDYEeHg/s1600/lakeshore.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhiNFeugrU81Jt-bXT49x9PswEQkVD5ap2VaYcYM_BbFHupFmTK4RuJjqTYu81ybMr1efTZG6jsLNSbodvLPvIEAtDcTHe2Q3hjxthVJTUY-DW7hDinNfHhg21dapIDcpEkDYEeHg/s1600/lakeshore.jpg" height="287" width="320" /></a></div>
<br />
<br />
<h3>
Regional Express Rail - Stouffville</h3>
The provincial government could potentially build a regional express rail on the Stouffville GO train line. I think this is unlikely because it requires double-tracking and other expensive track upgrades. Despite many small improvements to this track by previous governments over the years, I don't think any accommodation was made to make it easy to upgrade to a much higher capacity line. Also, current ridership on this line is poor, so it would be hard to justify increased frequency on the line. I modeled this improvement by taking the schedule of the Stouffville GO train line and adding new trips so that it would come every 15 minutes. The simulation results show that anyone who normally rides down to Kennedy station to take the Bloor subway to downtown would benefit from upgrading the Stouffville line to a regional express rail. One effect that I did not model was that of a possible extension of the Bloor subway to Sheppard. This change might affect the relative benefits of taking the subway vs. taking a less frequent regional train line, causing people to still prefer taking the subway.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEigDcYfNk_axZuE4s1RZygQULrbIzvy2chNrBuWPaa5uAjc1AlLi3ftOLQIzcN-pBHVhW7um6UyPnlL7wYbe1Y2RFfCLn5nZRLbSk1feB7tbyg6NW_6amXWk1N391iVdKvWPYem-A/s1600/stouffville.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEigDcYfNk_axZuE4s1RZygQULrbIzvy2chNrBuWPaa5uAjc1AlLi3ftOLQIzcN-pBHVhW7um6UyPnlL7wYbe1Y2RFfCLn5nZRLbSk1feB7tbyg6NW_6amXWk1N391iVdKvWPYem-A/s1600/stouffville.jpg" height="287" width="320" /></a></div>
<br />
<br />
<h3>
SmartTrack</h3>
I don't quite understand the SmartTrack plan, so I wasn't sure how to model it. It's probably best understood as being equivalent to the Regional Express Rail - Stouffville plan shown above. Although the plan does seem very intriguing, I'm not actually sure it's within the power of the City of Toronto to actually build it. The plan seems to involve convincing the federal and provincial governments to pay for upgrades to a provincial regional train service that runs on track owned by private companies. I don't really see how the City of Toronto would have any power to actually get the thing built. The plan is also premised on the idea that there is room to add large numbers of trains onto existing track running through the city. It's not clear if that's actually the case. Those lines might already be packed with other trains. The province has already expressed an interest in building a new downtown train station or new downtown train tunnels due to bottlenecks in moving trains through downtown. And running frequent local train service through the city would impede fast, frequent regional GO train service, so the province might not be willing to make that sacrifice. As far as I can tell, the SmartTrack plan might not involve building anything at all. One possible interpretation of the SmartTrack plan is that Toronto will do absolutely nothing for 10 years, wait for the provincial government to build a Regional Express Rail, and then the city will retroactively call the Regional Express Rail "SmartTrack."<br />
<br />
<h3>
Compromise? Downtown Tunnel (DRL-lite?)</h3>
One problem with all of these different plans is that politicians will end up arguing over them for years and nothing will get built. All of these plans do have a common element though in that they all probably require new train tunnels through downtown. One compromise might be to start building a tunnel through downtown as early as possible that can later be repurposed for a subway, regional express rail, SmartTrack or whatever once a final plan is agreed on in ten years' time. The tunnel can run from near Exhibition (the possible second downtown train station) to just east of the Don Valley (where it could later be extended to the existing rail right of way or north as a DRL). Since a short tunnel is probably useless on its own, it can initially be outfitted for streetcar use, so that the tunnel could actually be used until the politicians secure funding for some bigger plan.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhrV19jvnop1IWbHkF8MWXOorDfAN9AotdWc13lLQrqAW7YD29TkGw0cjsPI8J3Y5DDBj4_MUC4CKa1TwYu-AsBGcHDRkEiso_qaYzMEhytXSA5LFo4G5cmaQ6jfCzs4c0-0om8vg/s1600/tunnel.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhrV19jvnop1IWbHkF8MWXOorDfAN9AotdWc13lLQrqAW7YD29TkGw0cjsPI8J3Y5DDBj4_MUC4CKa1TwYu-AsBGcHDRkEiso_qaYzMEhytXSA5LFo4G5cmaQ6jfCzs4c0-0om8vg/s1600/tunnel.jpg" height="287" width="320" /></a></div>
<br />
<br />
<br />
<br />
<br />
<br />Minghttp://www.blogger.com/profile/01458103015154082202noreply@blogger.com0tag:blogger.com,1999:blog-9350640.post-52306955194183491662014-09-24T01:08:00.000-04:002014-09-24T01:08:14.721-04:00Setting up Simulations of the DRL<div>
</div>
With all the talk this year of transit issues in Toronto and the importance of building some sort of downtown relief line (DRL), I thought I would try to grab some open data and see if it's possible for amateurs to gain some insight into the problem.<br />
<br />
The current argument behind the downtown relief line is that most people want to go downtown for work. Also, the population of downtown itself is exploding because both the millennial generation and the aging boomer generation are favouring the downtown lifestyle over the suburban lifestyle. This is supposedly causing a number of problems. The primary means for getting into downtown is the Yonge subway, and it's always full. It's so full that one of the main arguments against extending the subway system in the suburbs is that these extensions would simply feed more traffic onto the Yonge subway, which can't handle the additional load. Since the whole system relies on the Yonge subway line to move people into downtown, if there are any problems on that line (which happens often), the whole transit system grinds to a halt. There is very little redundancy in the system to provide riders with alternate ways to get downtown if the Yonge line has problems. A related problem is that the Bloor subway feeds riders onto the Yonge subway at the Bloor-Yonge station, and supposedly that station is also becoming a bottleneck in the system in that it's becoming physically difficult to transfer all the people from the Bloor subway to the Yonge subway because there's just too many people. And then there's also a concern that the primary means of moving people east and west through downtown--the streetcar system--can no longer handle all the people who have now moved downtown.<br />
<br />
The primary goal of the DRL is to build a new north-south subway line to relieve the pressure on the Yonge line. Since people primarily want to ride the subway into downtown, this DRL will also have to go into downtown somehow. If the DRL also happens to relieve pressure on the east-west streetcar system, that's a bonus.<br />
<br />
To gain some insight into different DRL plans, I've started setting up a simulation that shows who is riding the Yonge subway now. I took the GO Transit and TTC schedules for September 24, 2014, and I calculated the optimal route for people who need to get to work at Bay and King at 8:55am and 9:00am. I tracked whether a route used the Yonge subway south between Queen and King as an indication that it used the Yonge subway. I also tracked which routes used the Bloor line through Bloor-Yonge station and also used the Yonge subway between Queen and King as an indication of riders who transferred from the Bloor to the Yonge subways. Since I calculated routes for two different times, it's possible that different routes are optimal for those two times. Since the two times are only 5 minutes apart, I assumed that riders would take the route that avoided the Yonge subway, if possible, and if not, then one that avoided a transfer from the Bloor subway to the Yonge subway.<br />
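The classification logic described above can be sketched roughly like this; the leg tuple format and the line/station strings are made up for illustration and are not the actual simulation code:

```python
YONGE_CORE = ("Queen", "King")  # the indicator segment from the post

def classify_route(legs):
    """Classify a commute by how it uses the Yonge subway.

    legs: sequence of (line, origin, destination) tuples (hypothetical format).
    Returns "green" (Bloor then Yonge), "yellow" (Yonge), or "red" (avoids both).
    """
    # did any leg ride the Yonge subway over its busiest stretch?
    uses_yonge = any(line == "Yonge" and (a, b) == YONGE_CORE
                     for line, a, b in legs)
    # did the rider also come through Bloor-Yonge on the Bloor-Danforth line?
    transfers_at_bloor_yonge = uses_yonge and any(
        line == "Bloor-Danforth" and "Bloor-Yonge" in (a, b)
        for line, a, b in legs)
    if transfers_at_bloor_yonge:
        return "green"
    if uses_yonge:
        return "yellow"
    return "red"
```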
<br />
The result of the simulation is shown below. The red dots show areas where people can get to downtown without needing to use the Yonge subway. The yellow dots are areas where people's optimal route to downtown involves taking the Yonge subway. The green dots are areas where people's optimal route to downtown involves riding the Bloor subway to Bloor-Yonge station, and then taking the Yonge subway into downtown.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg8Ysap05bKygc4MB2A3nAF2Thn3kCZV31vXbzfpiZsgD9rJ43YsdNybmYUz5bVZLIdM-vNF9SkhI1d-SeyJ3TV40kSG4u13RgOLLRqWuR_R1WTK1DtFVHvV1nt5OM0slnFVI06eg/s1600/drl_trips.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg8Ysap05bKygc4MB2A3nAF2Thn3kCZV31vXbzfpiZsgD9rJ43YsdNybmYUz5bVZLIdM-vNF9SkhI1d-SeyJ3TV40kSG4u13RgOLLRqWuR_R1WTK1DtFVHvV1nt5OM0slnFVI06eg/s1600/drl_trips.jpg" height="271" width="400" /></a></div>
<br />
<div>
As can be seen on the map, everyone in the west of the city can take the University-Spadina subway line into downtown, so that area is all red. Surprisingly, it is often optimal for people just to the east of the University-Spadina subway to still travel further east to the Yonge subway line to get downtown.<br />
<br />
In southern East York, the fastest way downtown seems to involve taking the streetcar. In south-east Scarborough, I think the fastest way downtown might involve taking a bus down to the Lakeshore GO train? Or TTC express buses direct to downtown? Or GO buses? It's hard to tell.<br />
<br />
There's an area around the DVP where I think it's fastest to use express buses from the TTC or GO to get downtown rather than travel east or west to a subway line. There are also sporadic red spots around GO bus stations and GO train stations. These stations offer quick ways to get downtown, but the buses and trains there don't come often enough to make it worthwhile to take them if you need to get somewhere by a certain time unless you live really near to the stations.<br />
<br />
Anyway, that's just a preliminary map of what I can calculate using the readily available transit schedule information. I'll later try plugging in some suggested DRL plans to see what effect they have on the map.</div>
<div>
<br /></div>
Minghttp://www.blogger.com/profile/01458103015154082202noreply@blogger.com0tag:blogger.com,1999:blog-9350640.post-11292158738238785342013-12-01T05:26:00.002-05:002013-12-01T05:33:41.365-05:00Which Casual Gameplay Mechanics are Good for Building NarrativesI was recently playing some computer pinball. I've always been fascinated with the narrative aspects of computer pinball. From when I saw my first pictures of Devil's Crush, I was captivated by the idea that you could have a pinball game with enemies that you could fight by playing pinball. I imagined that game designers could build whole war games and strategy games that you could control by playing pinball.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhMae5-suVOwXiqb-61R7HP8kyD5VoAMfy2tnwvxJq6Y603ze9VN76XSRdI0aqWhgXWf-HwUozgVRo9GkfuWG1Zlne_fVYhk09U6CN1H0SgtlZ-DBW07XTGhl2n1yii20_IpAs6Cw/s1600/devilscrushsmall.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhMae5-suVOwXiqb-61R7HP8kyD5VoAMfy2tnwvxJq6Y603ze9VN76XSRdI0aqWhgXWf-HwUozgVRo9GkfuWG1Zlne_fVYhk09U6CN1H0SgtlZ-DBW07XTGhl2n1yii20_IpAs6Cw/s320/devilscrushsmall.png" width="123" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
Anyway, I actually don't ever play pinball in real life, but it was interesting to see how Pinball FX2 builds missions and progression inside its pinball games. That got me thinking about what sort of casual gameplay mechanics lend themselves to having narratives built on top of them.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjaXbVL7YspvtM6gkE5bzgKXNbv-RifZHXwFz55XQpkF4lK3OxeoNs_utZxAMiUR3mxNdsumM-o4_v1NYa6Kx1Yu35p5XXNTBqmxCF2hLe2FkZAlqprt-seIS_KZ3oS6JFDSX-dDw/s1600/Pinball_FX2_Zen_Classic_Tesla_screenshot01.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="180" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjaXbVL7YspvtM6gkE5bzgKXNbv-RifZHXwFz55XQpkF4lK3OxeoNs_utZxAMiUR3mxNdsumM-o4_v1NYa6Kx1Yu35p5XXNTBqmxCF2hLe2FkZAlqprt-seIS_KZ3oS6JFDSX-dDw/s1600/Pinball_FX2_Zen_Classic_Tesla_screenshot01.jpg" width="320" /></a></div>
<br />
<br />
For the last few years, there have been several games that have built narratives on top of match-three games. In games like Puzzle Quest or 10000000, you have a generally linear plot, and you control the action by creating matches. Making matches of different colours results in different types of attacks or defenses. There are also RPG elements where you can upgrade to gain new powers and abilities. The problem is that the action of the match-3 game is tightly bound to the action of the narrative. The gameplay doesn't lend itself to letting the player make "choices" in the narrative, so choices must be made outside of the gameplay mechanic.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://upload.wikimedia.org/wikipedia/en/9/9d/Puzzle_quest_360.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="200" src="http://upload.wikimedia.org/wikipedia/en/9/9d/Puzzle_quest_360.jpg" width="320" /></a></div>
<br />
<br />
One of the cool things with pinball is that your actions <i>indirectly</i> affect the flow of the narrative. The game can involve quite deep stories and exciting interactions that you can influence with a limited number of levers (unlike match-3, where you have shallow scenarios that you directly control through your matches). You can have exciting scenarios that would be too complicated to build a casual game around (like controlling the events of a war, building an economy, controlling complicated machinery, being an archaeologist), abstract the mechanics so that a player can control these scenarios with simple gameplay levers (to the point that it's so simple that it would be boring if the player were given direct control of those levers), and let the player adjust these levers through the gameplay mechanic (thus keeping things interesting). Also, you have choices in that the ramps and targets that you aim for allow you to choose different paths in the narrative. For example, in a pinball game, you can have a story where the protagonist needs to gather five hidden gems and combine them to defeat a boss. The player can control the locations to be searched by targeting different areas with their ball. That sort of plot can be represented as a pinball game.<br />
<br />
The main problem with pinball though is that regular unskilled players don't have enough talent to carefully aim their balls, so the game ends up being mostly random. If you don't have enough control over the aim, then you can't really control the flow of the narrative. It might be possible to build a random narrative. For example, you could build a giant map that you can play pinball on (like in Snowball). Even though unskilled players won't be able to control where on the map that they go, the map can be used to represent a protagonist questing through life or through a real map.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj8_wQtDdDGnAjacRbXIl9ecQ6eqcm5LMA21y3c-5dmSKfpYhEDHr-sY1zkzb3BmM6WUMIcl55MO-tcxQHwcyLyJvi2YEMYrevEXfKhIzSFs_aSBwUoe9MesJY_gc3Oz2pnkgcH0w/s1600/snowballsmall.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj8_wQtDdDGnAjacRbXIl9ecQ6eqcm5LMA21y3c-5dmSKfpYhEDHr-sY1zkzb3BmM6WUMIcl55MO-tcxQHwcyLyJvi2YEMYrevEXfKhIzSFs_aSBwUoe9MesJY_gc3Oz2pnkgcH0w/s320/snowballsmall.png" width="168" /></a></div>
<br />
<br />
But that got me thinking about whether there are casual gameplay mechanics that would be even better for building narratives on. Ones that provide more control, offer interesting narrative possibilities, yet are interesting to play in themselves.<br />
<br />
A good game narrative for interactive games lets the player make choices. But we don't want the player to make choices directly. Choice can be represented as "aiming." Could a narrative game be built around Puzzle Bobble? Worms or Scorched Earth? Breakout? Marble Madness? Pachinko? Minigolf? Or maybe the pinball mechanics could be made even more casual? Instead of flippers, maybe you directly send out pulses or something to more directly influence the movement of the ball?<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjHG1RiMuudG_fYCTSGxNTD3I8-OeN4DEecctrUEGyUjGL4f0xppUOPJRZTKYEq0LvS7IhH9oVy3oJqUwaplQg6MVCGG8p4r6-OuKrZXQXGaeg0KhmiPXJDyxgkW08fl5_fQi9mgw/s1600/wonderputtsmall.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="266" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjHG1RiMuudG_fYCTSGxNTD3I8-OeN4DEecctrUEGyUjGL4f0xppUOPJRZTKYEq0LvS7IhH9oVy3oJqUwaplQg6MVCGG8p4r6-OuKrZXQXGaeg0KhmiPXJDyxgkW08fl5_fQi9mgw/s320/wonderputtsmall.png" width="320" /></a></div>
<br />
<br />
These games focus on a single ball, meaning they're great for narratives with a single protagonist who quests around. But what about a more complex tale? Is there a casual gameplay mechanic that lends itself to resource allocation? If so, you could build RPGs or simulations by letting players assign values to different resources somehow. You could build a game of civilization where you assign your populace to do research or wage war or grow food. Pachinko might work, but it's a bit too random and too slow. Puzzle Bobble? Peggle? Is there a game mechanic that is time constrained and that can become more difficult over time?<br />
<br />
Anyway, I think there should be a way to build some more interesting narrative casual games if someone were to put enough thought into this topic.Minghttp://www.blogger.com/profile/01458103015154082202noreply@blogger.com0tag:blogger.com,1999:blog-9350640.post-5359563931702578432013-09-17T04:13:00.002-04:002013-09-18T03:46:51.506-04:00Gradients 5: Refinement of the Precomputed GradientOne issue with the precomputed gradient was that the triangle mesh was too coarse to capture the details of the gradient. To solve this problem, I progressively refine the triangle mesh until it is detailed enough to show the gradient in the detail that I want. What I do is I first find all the edges of the triangle mesh. I then go over them and look for one that can be split, which creates a new vertex, which can be assigned a new colour, improving the detail in the gradient. When scanning over the edges, I arbitrarily go over them in order from longest edge to shortest edge. I was hoping that doing it that way would help prevent the creation of "skinny" triangles, but in practice it didn't seem to help too much. For each edge, I split the edge, recompute the gradient, and then compare the new gradient with the old gradient at the vertex points. If the colours of the vertices change by above a certain amount, I keep the split. Otherwise, I revert the split. Then, I move on to the next edge and continue the process until I can't find any edges worth splitting.<br />
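The refinement loop can be sketched as a simplified one-dimensional analog; here the keep-or-revert test just compares the true and interpolated values at the prospective new point, as a stand-in for recomputing the whole gradient (the function and tolerance are illustrative, not the actual mesh code):

```python
def refine(f, xs, tol):
    """Adaptively refine sample points xs for function f.

    Mimics the mesh-refinement loop in 1D: repeatedly try splitting the
    longest interval, and keep the split only if the true value at the
    midpoint differs from the linearly interpolated value by more than
    tol, i.e. only if the split adds visible detail.
    """
    xs = sorted(xs)
    while True:
        # candidate edges, longest first, as in the triangle mesh version
        edges = sorted(zip(xs, xs[1:]), key=lambda e: e[1] - e[0], reverse=True)
        for a, b in edges:
            mid = (a + b) / 2
            interpolated = (f(a) + f(b)) / 2
            if abs(f(mid) - interpolated) > tol:
                xs = sorted(xs + [mid])
                break  # re-scan with the new point included
        else:
            return xs  # no edge worth splitting remains
```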
<br />
Below, I have the triangle mesh generated when splitting an edge doesn't result in any colour changes above 0.1 in any one component (where colours range from 0 to 1).<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhBE_-Yr6azOBD4AnjrleqlK2VHW9B58o8_fXdLPeyBdkh169olNWsHuZ9qtQxjE4Nh9YXMeFAeurAqarYCw3QK_kGApd3qdBZJMYcy6ka-cnn03lDGTiXwttlK0uvKM6GZY-L7MQ/s1600/refined-10percent-triangle.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhBE_-Yr6azOBD4AnjrleqlK2VHW9B58o8_fXdLPeyBdkh169olNWsHuZ9qtQxjE4Nh9YXMeFAeurAqarYCw3QK_kGApd3qdBZJMYcy6ka-cnn03lDGTiXwttlK0uvKM6GZY-L7MQ/s1600/refined-10percent-triangle.png" /></a></div>
<br />
And here is a triangle mesh when I only split edges that result in a colour change above 0.05. Because I never split exterior edges, there tend to be very long and skinny triangles along the exterior. I guess it would make sense to develop some sort of heuristic to figure out when it makes sense to make a split there.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgwyChX1gKh9kbQv98PrlAqC8D0_aj0z2jLAaLCkcZWZ-M_dMt87J0PIihs7tMS98NHrVTbeyWeVS-Vr0X45zlGOQtVTJvCg21SlPSrPHNxt2ZL2EHU0FtIpQVHshp7mesrnHjKSg/s1600/refined-5percent-triangle.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgwyChX1gKh9kbQv98PrlAqC8D0_aj0z2jLAaLCkcZWZ-M_dMt87J0PIihs7tMS98NHrVTbeyWeVS-Vr0X45zlGOQtVTJvCg21SlPSrPHNxt2ZL2EHU0FtIpQVHshp7mesrnHjKSg/s1600/refined-5percent-triangle.png" /></a></div>
<br />
And if we remove the triangle mesh and just look at the resulting gradient, here is the final result:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg2J8BWPuSQMeqiJQhv9kZQnLNd_OI5-VfmEXpphwd42jRCmsJSlbK4Wn2If4j-ksNdtxVthsPF3HxYDysAptm1fomY9o1OZGa6rbTymfeAOKlvI6dES0Mt5uRpJHRx6JtJ0nSPwA/s1600/refined-5percent-final.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg2J8BWPuSQMeqiJQhv9kZQnLNd_OI5-VfmEXpphwd42jRCmsJSlbK4Wn2If4j-ksNdtxVthsPF3HxYDysAptm1fomY9o1OZGa6rbTymfeAOKlvI6dES0Mt5uRpJHRx6JtJ0nSPwA/s1600/refined-5percent-final.png" /></a></div>
<br />
The result isn't too bad. The gradient is clearly there, but the colour transitions aren't completely smooth (due to the triangulation). The effect of Gouraud shading across the edges of the triangles is also visible. The white of the center vertex seems to extend deeper into the polygon than it really should.<br />
<br />
But if you compare the generated gradient with those calculated by diffusion or by pure mean value coordinates, the results are decent. Below, the precomputed gradient is in the middle, the diffusion gradient is on the left, and the mean value coordinates gradient is on the right.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjYO9a5ZiCcH-IY165fRVJtPTtSw6iIkYG6DKRWwx5YauljFGttY7mCTgZMULbGx8y4CTAVTcBcMmEtFuQgMPOQEWpQ9Msb-KsgJ1vtE73Uu_eFgKkPTwAkp4x51Bc0dESvNIBdZg/s1600/refine-5percent-comparison.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="125" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjYO9a5ZiCcH-IY165fRVJtPTtSw6iIkYG6DKRWwx5YauljFGttY7mCTgZMULbGx8y4CTAVTcBcMmEtFuQgMPOQEWpQ9Msb-KsgJ1vtE73Uu_eFgKkPTwAkp4x51Bc0dESvNIBdZg/s400/refine-5percent-comparison.png" width="400" /></a></div>
<br />
One of the main reasons for not using mean value coordinates directly in the gradient was that, for concave polygons, they could produce illegal colour values that aren't in the range of colours specified along the border of the gradient. As we can see from the concave polygon below, we don't encounter any illegal colour values like we do when using pure mean value coordinates.<br />
<br />
Although we still use mean value coordinates, because of the way we set them up, it shouldn't be possible to get illegal values. Suppose we have an interior vertex for which we're computing mean value coordinates. Since the vertices adjacent to each interior vertex can all be arranged in a consistent clockwise order, the interior vertex ends up with a mix of the adjacent colours with all the weights positive and summing to one. So an interior vertex can't have a colour outside the range of colours of the vertices it is adjacent to, and, by extension, you shouldn't be able to get any colour value for an interior vertex that is outside the range of colours assigned to the boundary of the polygon. The problem with mean value coordinates on a concave polygon is that the vertices sometimes wind clockwise and sometimes anti-clockwise, resulting in negative weights, which can produce the illegal values.<br />
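For the curious, the weight computation can be sketched with Floater's mean value coordinate formula. This is a minimal illustration assuming 2D points as `(x, y)` tuples and neighbours listed in a consistent winding order (the names are my own, not the blog's actual code). With consistent winding, the normalized weights all come out positive and sum to one, which is exactly why the interior colours stay legal.

```python
import math

def mean_value_weights(v0, neighbours):
    """Floater's mean value coordinates for an interior vertex v0 whose
    neighbours are listed in a consistent winding order.
    w_i = (tan(a_{i-1}/2) + tan(a_i/2)) / |v_i - v0|, then normalized."""
    k = len(neighbours)

    def angle(a, b):
        # Signed angle at v0 between the directions to a and b.
        ax, ay = a[0] - v0[0], a[1] - v0[1]
        bx, by = b[0] - v0[0], b[1] - v0[1]
        return math.atan2(ax * by - ay * bx, ax * bx + ay * by)

    weights = []
    for i in range(k):
        prev_a = angle(neighbours[i - 1], neighbours[i])
        next_a = angle(neighbours[i], neighbours[(i + 1) % k])
        r = math.hypot(neighbours[i][0] - v0[0], neighbours[i][1] - v0[1])
        weights.append((math.tan(prev_a / 2) + math.tan(next_a / 2)) / r)
    total = sum(weights)
    return [w / total for w in weights]
```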
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjbQ5b0zxN1LeTq0EUgKGf3o9ejmB90QtDEZAq2picFlsU4xPz_9FS8kV5Sj8g1lzgJE58CebrD3S76VMZYOgpKAx8AR1Cb21r9XqyBQaTzirHfg0kzq9B7WtaATx7JlhUg7f4Bfg/s1600/refine-3percent-legal.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjbQ5b0zxN1LeTq0EUgKGf3o9ejmB90QtDEZAq2picFlsU4xPz_9FS8kV5Sj8g1lzgJE58CebrD3S76VMZYOgpKAx8AR1Cb21r9XqyBQaTzirHfg0kzq9B7WtaATx7JlhUg7f4Bfg/s1600/refine-3percent-legal.png" /></a></div>
<br />
So that's my gradient. The resulting precomputed gradient can be rendered quickly on graphics hardware and has all the nice properties we want from a gradient. The only issue is that it's still a little slow to compute. It would be better if there were a better policy for creating the triangulation of the polygon and its refinement. If I were better at the math, it might be cool to try to develop some sort of process for the diffusion, or for generating the weights, that would result in only local changes to vertex colours when an edge is split. I'm not sure if that's actually feasible, but it would make the mesh refinement process much faster!

Gradients 4: Precomputing Mean Value Coordinates and Diffusion (2013-09-11)

Although I have already shown two really good techniques for calculating gradients using diffusion or mean value coordinates, they don't quite do everything that I want them to do. One of the main benefits of vector drawings over raster drawings is that they are easier to animate, but mean value coordinates and diffusion aren't fast enough for real-time animation. Graphics hardware is optimized for displaying triangles on the screen, so the ideal gradient algorithm would break a gradient-shaded polygon down into a set of triangles that can be blasted to the screen quickly by the graphics hardware. We can use techniques similar to 3d animation where the animated geometry is precomputed: we'll try to break down the gradient polygons into triangles in advance, and those triangles can then be animated easily. This does mean that the gradients can't change during an animation, but hopefully allowing the geometry to change during an animation provides sufficient flexibility for artistic expression.<br />
<br />
Precomputing a diffusion gradient is a little messy since it relies on a pixel grid instead of the triangles that we want in our final output. If I had paid more attention in my classes on calculus and numerical methods, I might be able to rederive the diffusion equations for use on a triangle mesh instead of on a grid, but that's really beyond my mathematical ability at the moment. On the other hand, applying mean value coordinates to a triangle mesh is straightforward, but mean value coordinates can potentially produce bad values when used on concave polygons. Instead, I've tried to combine both approaches: I'm precomputing a diffusion gradient by using the mean value coordinates as the basis for diffusing values through a triangle mesh. Basically, instead of finessing the problem, I'm going to bash it with a brute-force hammer until I get something that seems to work. It may have no proper mathematical basis, but it should hopefully produce something good enough for real use.<br />
<br />
The first step is to create a triangle mesh over which I can diffuse a gradient. The scientific computing community has all sorts of techniques for computing triangle meshes that are optimal for various purposes, but I don't know any of that work, so I'm just going to put together something hacky: I use some bog-standard polygon triangulation algorithm to create an initial triangulation.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj-Hy74XoDX_dNqkYY5rG54KUVS9r2liyfSiNerKiDz9CKaMduhpqDl_7KCNotzB9Yygiwzfo821sPmyJCgU7GQMf7dABfHu0twM4KcPdaeor5qnjt_xipRQMiOilRoAtQbhQJN1w/s1600/precompute-basictriangulati.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj-Hy74XoDX_dNqkYY5rG54KUVS9r2liyfSiNerKiDz9CKaMduhpqDl_7KCNotzB9Yygiwzfo821sPmyJCgU7GQMf7dABfHu0twM4KcPdaeor5qnjt_xipRQMiOilRoAtQbhQJN1w/s1600/precompute-basictriangulati.png" /></a></div>
<br />
The edges in a minimal triangulation of a polygon always go between corner points of the polygon (i.e. no new points or interior points are necessary). Since these points already have colours, we can build a gradient using barycentric coordinates for the triangles in the triangulation. The resulting overall gradient for the polygon has the correct colours along the exterior edges of the polygon but looks inconsistent and odd in its interior.<br />
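Shading a triangle with barycentric coordinates, which is what the hardware does per pixel, can be sketched like this. This is a minimal illustration with my own names, assuming colours as tuples of channel values:

```python
def barycentric_colour(p, tri, colours):
    """Interpolate the three vertex colours of triangle tri at point p
    using barycentric coordinates (signed-area ratios)."""
    (ax, ay), (bx, by), (cx, cy) = tri

    def signed_area(px, py, qx, qy, rx, ry):
        # Twice the signed area of triangle (p, q, r).
        return (qx - px) * (ry - py) - (rx - px) * (qy - py)

    total = signed_area(ax, ay, bx, by, cx, cy)
    u = signed_area(p[0], p[1], bx, by, cx, cy) / total  # weight of vertex a
    v = signed_area(ax, ay, p[0], p[1], cx, cy) / total  # weight of vertex b
    w = 1.0 - u - v                                      # weight of vertex c
    return tuple(u * ca + v * cb + w * cc
                 for ca, cb, cc in zip(*colours))
```

At the centroid all three weights are one third, so the three vertex colours mix equally.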
<br />
Since the main primitive in graphics hardware is the triangle shaded using barycentric coordinates, if we want a different colouring at the interior of the polygon, we're going to have to add some new points to the interior of the polygon and change the triangulation. As a heuristic, I generate these new points this way: I find triangle edges that join points that aren't adjacent in the original polygon, and I split each such edge. This gives me extra points that I can use to control the colouring at the interior of the polygon.<br />
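One pass of that heuristic can be sketched as follows, assuming the polygon's corners are indexed in boundary order and triangles are index triples (this representation is my own, and retriangulating around the new midpoints is omitted):

```python
def interior_edge_midpoints(points, triangles):
    """Find triangle edges whose endpoints are non-adjacent corners of
    the original polygon, and return a midpoint for each one -- the new
    interior vertices for one refinement pass."""
    n = len(points)
    new_points = {}
    for tri in triangles:
        for a, b in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
            edge = (min(a, b), max(a, b))
            # Corners are boundary-adjacent if their indices differ by
            # one, or if they are the first and last corner.
            boundary = abs(a - b) == 1 or {a, b} == {0, n - 1}
            if not boundary and edge not in new_points:
                ax, ay = points[a]
                bx, by = points[b]
                new_points[edge] = ((ax + bx) / 2, (ay + by) / 2)
    return new_points
```

For a square triangulated with one diagonal, only the diagonal qualifies, and its midpoint becomes the single new interior vertex.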
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi09mlbe3bWuy8kP6ORLYIP0zJmUvRzB53bll9cGjbcw15O2x-eC7cFpSXS0Xk-jnSvdG7RDtCKKXssy_ad3aVZWpGIEZQH1zzOhpqdfPlPt9PsSh7TTTffAOMLyNQw_biGDXNYvA/s1600/precompute-split.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi09mlbe3bWuy8kP6ORLYIP0zJmUvRzB53bll9cGjbcw15O2x-eC7cFpSXS0Xk-jnSvdG7RDtCKKXssy_ad3aVZWpGIEZQH1zzOhpqdfPlPt9PsSh7TTTffAOMLyNQw_biGDXNYvA/s1600/precompute-split.png" /></a></div>
<br />
From there, I calculate new colours for all the interior vertices I've created inside the polygon. I do this by diffusing colours inwards from the boundary of the polygon: I iterate over all the interior vertices, setting the colour of each vertex to a mix of the colours of its adjacent vertices in the ratios given by the mean value coordinates, and I keep doing that until the colours converge. In actuality, the mean value coordinates of all the interior vertices form a linear system of equations that should be small enough to solve directly, so that might be a better way of computing the final gradient than iteratively diffusing colours through the mesh (in fact, I'm not sure diffusing colours with mean value coordinates will actually converge to the correct values). But since I already had code for diffusion and none for solving a linear system of equations, I went with the diffusion route.<br />
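The diffusion pass can be sketched like this, assuming each interior vertex stores a list of `(neighbour, weight)` pairs from its mean value coordinates. The names and representation are my own, and a fixed iteration count stands in for a real convergence test:

```python
def diffuse_colours(colours, fixed, neighbour_weights, iterations=200):
    """Gauss-Seidel-style diffusion: repeatedly reset each interior
    vertex to the weighted mix of its neighbours' colours, using the
    mean value coordinate weights. Boundary vertices (in `fixed`) keep
    their assigned colours throughout."""
    colours = [list(c) for c in colours]
    for _ in range(iterations):
        for v, nbrs in neighbour_weights.items():
            if v in fixed:
                continue
            for ch in range(len(colours[v])):
                colours[v][ch] = sum(w * colours[u][ch] for u, w in nbrs)
    return [tuple(c) for c in colours]
```

When every neighbour of an interior vertex lies on the boundary, a single pass already lands on the final mix; deeper interiors need the repeated sweeps.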
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEin_4y1uE38XT57zp8sPM-Ap84nQFnt3lmaeoOjnfkL7vT0BmKBxVEE0xStNw6YdTp3U2ymiaOh_3FgBfA-DfxUI3x5FEfhVZpoilAengRsKmJirD71COHSCX1otMIuhcSD8pbUsw/s1600/precompute-diffuse-mvc-tria.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEin_4y1uE38XT57zp8sPM-Ap84nQFnt3lmaeoOjnfkL7vT0BmKBxVEE0xStNw6YdTp3U2ymiaOh_3FgBfA-DfxUI3x5FEfhVZpoilAengRsKmJirD71COHSCX1otMIuhcSD8pbUsw/s1600/precompute-diffuse-mvc-tria.png" /></a></div>
<br />
If we remove the triangle mesh, we arrive at the final result.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEihqySqfMILufFX7qv80_Xgox_wU9Y3qZ4HBlBrZwzIWmTn07NwABDMqiwPSS-aiUprS_CaJtsHQA88lSvKAQmrwW25JM3pDv7AN8y2xmZ_jyI0BpIpXylrR8g-pkuJD5eulzdu8g/s1600/precompute-diffuse-mvc-fina.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEihqySqfMILufFX7qv80_Xgox_wU9Y3qZ4HBlBrZwzIWmTn07NwABDMqiwPSS-aiUprS_CaJtsHQA88lSvKAQmrwW25JM3pDv7AN8y2xmZ_jyI0BpIpXylrR8g-pkuJD5eulzdu8g/s1600/precompute-diffuse-mvc-fina.png" /></a></div>
<br />
The result is similar to the gradient created by diffusing colours, but it still needs more refinement. The area around the white vertex in the middle of the polygon has too much white because the triangle mesh is too coarse there.