Bloodpact Blogging

July 06, 2011

Benjamin McGraw

The Tiniest Gruedorf

All of my time is currently being spent moving all of the websites on this VM to a new VM in the cloud.

Whee, operations!

by mcgrue at July 06, 2011 06:35 PM

June 26, 2011

Benjamin McGraw

A lapse?!

Crap, things have been busy folks. Been getting ready to leave my old job and start a new one.

A new one where I have it in writing that they don’t have any interest in any game-related side-project I ever work on! ;D

Anyways: This week’s gruedorfing activities include working on a refurbished breadbros.com (not yet launched), and starting a Hello World for android.

(Oh, I got a Sensation 4G, and so am incentivized to put games upon it.)

by mcgrue at June 26, 2011 11:47 PM

June 17, 2011

Benjamin McGraw

Golden

The boys at Gaslamp Games just cut a gold of Dredmor tonight.

I think it’s safe to say that I identified at least one major hole (an item dupe), and helped sort out any number of annoying issues alongside all of the other testers in #dredmor.

Ustor and I have been continuing to augment the Achievements list as we play. Here are some samples:
Yer a Wizard, ‘Arry! – Master the Magic Training Skill Tree – Winner of The Tom Marvolo Riddle Award for Excellent Scholastic Achievement
Tastes Like Pennies – Master the Blood Mage Skill Tree
Gettin’ Ley’d – Master the Ley Walker Skill Tree

(No promises on final names or actual achievements there.)

by mcgrue at June 17, 2011 09:30 AM

June 14, 2011

Joshua McKenty

What It Means to be OpenStack

What does it mean to be OpenStack?

OpenStack has become something of a lightning rod for media attention – attention which, sadly, focuses reliably on the fear, uncertainty and doubt that any disruptive ecosystem will naturally produce. Much like the eddies that form on the edges of fast-moving water, this open-source project throws off a fair amount of confusion, conjecture and rumor.

I’m hoping to get in front of the upcoming round of media frenzy (which is certain to spin out as soon as someone from the Register, CIO Magazine or Network World bothers to skim recent posts to the OpenStack-PPB list), and try to answer a simple question – what does it mean for a project to be “part of” OpenStack?

Ignoring, for just a moment, the eloquent-but-high-level mission statement of the OpenStack project, I will make three simple observations:

1. OpenStack means: a cloud operating system that meets Rackspace’s needs.
2. OpenStack means: a cloud operating system that meets NASA’s needs.
3. OpenStack means: a community effort that meets the needs of its members.

While the relative *order* of these observations is likely to provoke outrage from many members of the OpenStack community, I think it’s still reasonably fair to describe it in this fashion – if OpenStack ceases to meet the needs of Rackspace or of NASA, the founding partners, then it will cease to be OpenStack.

What constraints does this put on new projects?

Many.

What has been talked about, at length and by many of the most prominent members of our community, is scale. Rackspace (and indeed many others in our ecosystem) is a service provider, with (at least) tens of thousands of customer accounts. OpenStack has got to scale.

Also discussed, at least in the earliest days, were development philosophy and methods. No, I’m not talking about “development in the open” (although I think it’s generally a good idea); I’m talking about agile, fail-fast, and test-driven development. I’m talking about continuous integration and good unit test coverage. I’m talking about a working-code-trumps-designs-or-promises attitude.

One thing that *hasn’t* been discussed much is language. Or more precisely, language consistency. Yes, a foolish consistency is the hobgoblin of little minds – and yet, I think it’s no accident that OpenStack is, thus far, 98% Python (plus or minus a smattering of bash).

I sit on the OpenStack Project Policy Board, and I’m getting ready to make myself fairly unpopular. There are a *large* number of projects in the “affiliated with OpenStack” phase, and a few of them are gearing up to apply for official OpenStack status.

Which brings me back to what it *means* to be OpenStack, and particularly NASA’s needs.
(DISCLAIMER: I no longer work for, at, or in any particular association with NASA. I’m speaking merely from my historical position as the architect of the NASA Nebula project, the precursor to OpenStack.)

NASA, like any US Federal agency, is under near-constant IT attack. Far more than any service provider, we have had to focus on the security of our platform. No, I’m not saying that Python is more secure than any other language (and NASA has had its share of security issues with EVERY language and platform) – simply that limiting OpenStack to *as few languages as necessary* keeps the attack surface smaller. It keeps system complexity down. And it makes OpenStack monitoring and maintenance a more straightforward process.

Which brings me back to the process of becoming unpopular.

To the best of my knowledge, there are “related” projects in:

Every one of these projects will be considered on the basis of its own merits, by the Project Policy Board and in good time. But for my part, I’ll be looking at security. I’ll be looking at test coverage. I’ll be looking at scale and, yes, I’ll be looking at language.

This applies equally to new project submissions and to legacy projects from both Rackspace and NASA – even a few that I had always thought *were* part of OpenStack.

Oh, and a final thought – to me, OpenStack has always been about being *pythonic*, even in the rare case when we weren’t (yet) writing Python. Which means, as opposed to Perl, there really is “one way to do it”. I love the fact that OpenStack supports every possible hypervisor – but I hate the fact that the test coverage is not equal for each.

I love that we have several (many?) different network models available – but I find the current proposal to write, big-design-up-front, a new-from-scratch OpenStack network component misguided at best, and counter-productive at worst. Ditto for the efforts to write a new-from-scratch replacement for the nova-volumes component. Again, back to the principles of agile, test-driven development – unless there’s something fundamentally *wrong* with the architecture of the existing components, this strikes me as a bit of NIH syndrome. Iterative improvement, a long set of carefully reviewed changes to an existing codebase, will keep code quality high and interface pain down.

So if I vote against your project tomorrow – it’s not personal. Really. I’ll be voting against some of my own as well – and yes, that really *is* as weird as it sounds.

by admin at June 14, 2011 10:04 AM

June 10, 2011

Benjamin McGraw

Achievement Unlocked

Still beta testing Dredmor. It feels awful playing a game so much without earning achievements.

Oh, and I’m assembling the achievement list for Dredmor. I guess that’s like earning them.

by mcgrue at June 10, 2011 02:22 PM

June 03, 2011

Benjamin McGraw

Beta Testing

I was in the middle of writing a big post on my 7-year-long involvement with the upcoming indie RPG Dungeons of Dredmor by Gaslamp Games…but then my browser crashed and wordpress doesn’t save drafts as you write them like everything by google does.

Screw you, wordpress.

I’ll put the history post in later.

Anyways: for this week’s Gruedorf I’ve been beta-testing a lot.

A lot.

This is actually work-ish, but I am enjoying the game. I’m also filing a lot of bugs and making vaguely Executive Producerish suggestions like “screw balance, it was more fun when you got cool shit faster, and it didn’t make it easier per se.”

Yeah, I say balance is for losers. I just wanna feel like a badass when I play games.

And so do you.

by mcgrue at June 03, 2011 09:23 AM

May 27, 2011

Benjamin McGraw

Even more boring (and beta-testing)

This week I spent a small amount of time maintaining vrpg. This is not exciting.

I also spent way too many hours beta testing Dungeons of Dredmor by Gaslamp Games (http://gaslampgames.com/). This is a game I funded back in 2005, and I have been annoying mordred about it off-and-on ever since.

It’s fun.

by mcgrue at May 27, 2011 10:08 AM

May 20, 2011

Benjamin McGraw

Harvest Sole.

This week’s update brings base item spawning, item deletion, and the base dialogs for interacting with characters (ie, quest givers).

Next up is actually filling in the game content.

(Maaan, I should work more than an hour a week on this. I need more time)

by mcgrue at May 20, 2011 12:20 PM

May 13, 2011

Benjamin McGraw

More ways to combine things, refactoring.

At the all-night SHDH last weekend I added more tools to the mix and started in on some necessary helper functionality, the most important of which is the ability to add items to your inventory programmatically. (Previously, your starting inventory existed only in the markup, and items could only be converted, never created.)

Next up is base item spawning (an unending supply of rocks, water, and cats!) and a trashbin.

https://github.com/mcgrue/Harvest-Soul to check out the goods.

by mcgrue at May 13, 2011 07:26 AM

May 12, 2011

Benjamin McGraw

Identity Crisis

grue@box:~$ whoami
grue
grue@box:~$ sudo whoami
root

by mcgrue at May 12, 2011 07:15 PM

May 06, 2011

Benjamin McGraw

Smashing success

Adding some features to harvest soul (mainly, the pickaxe slot.)

Fixed some bugs.

Next up is making the main inventory more dynamically generated (at present all of the starting inventory is actually in the html itself.)

https://github.com/mcgrue/Harvest-Soul to check out the goods.

by mcgrue at May 06, 2011 06:49 AM

April 29, 2011

Benjamin McGraw

Drophooks

Today’s gruedorf entry comes in the form of two hours where Jeff Lindsay and I codecasted the beginnings of a webhook system on top of dropbox.

You can watch the live streaming video from progrium at livestream.com!

Hopefully I’ll have some awesome screenshots of Harvest Soul for #screenshotsaturday, too. Also Kildorf will be on my couch this weekend, so there’s that!

by mcgrue at April 29, 2011 08:20 AM

April 22, 2011

Benjamin McGraw

Perseus

This week on gruedorf, I worked a bit on an oooold 3d trpg project with Andy. The work this week largely involved ripping temp sprites, getting stats in place, and making some skills.

by mcgrue at April 22, 2011 11:42 AM

April 15, 2011

Benjamin McGraw

Shaking, Moving, Mixing, Smashing.

The harvest soul demo has some basic verbs implemented. I expect the full experience demo of “golem creation quest” to be implemented this weekend…

by mcgrue at April 15, 2011 10:23 AM

April 08, 2011

Benjamin McGraw

Ikon

So, I’m working on a vertical demo: a semi-polished section of a game to see if it is, in fact, any fun.

This game needs a bit of art. And Sophia sadly is being hammered at her workplace, so she wasn’t as available as she hoped.

This left me to do the prototype art.

It’s been a while since I manually arted.

Here is my artings:

The original scan, done on 1" grid paper.

The final icons (48x48), colored in photoshop via Hue/Saturation "colorize".

…oh, how the mighty have fallen. ;(

by mcgrue at April 08, 2011 08:38 AM

April 01, 2011

Benjamin McGraw

Toys and Games

I’ve been toying with a gameplay prototype for a simple UI-only game.

You can play with the prototype here: https://github.com/mcgrue/Harvest-Soul

Sadly, I am having issues finding the fun. ;(

by mcgrue at April 01, 2011 06:40 PM

March 31, 2011

Chuck Rector

Ode to a beach house

The beach is cool.
The beach is nice.
The waves on my feet
Are like sugar and spice.

*jazz hands*

by noreply@blogger.com (卡车 Chuck) at March 31, 2011 06:33 AM

March 25, 2011

Benjamin McGraw

gruedorf maintenance

This week’s update is not sexy game development, but website maintenance. I fixed problems on vrpg concerning inefficient database queries (a specific query was getting hammered too much and was better replaced with a non-db code-defined data structure), updated the files page (static content) to point to the current verge executables, and started debugging problems with the image uploader (still in progress.)

by mcgrue at March 25, 2011 06:56 PM

March 20, 2011

Chad Austin

HTML5 – New UI Library for Games [Annotated Slides]

At GDC 2011, on behalf of IMVU Engineering, I presented our thesis that HTML is the new UI library for developing desktop applications and games.

The annotated slides from our presentation are now available at the IMVU Engineering Blog.

by Chad Austin at March 20, 2011 10:14 AM

March 18, 2011

Benjamin McGraw

Menu Quest

The menus continue apace. Behold:

Now with this development comes need for concepts such as “party” and “inventory” and “strife syllabuses”.

RPGs sure are weird!

by mcgrue at March 18, 2011 03:01 PM

March 11, 2011

Benjamin McGraw

Menus, Fonts, Busy, Slow.

More menu work. Will post screenshots after SXSW. Zzz.

by mcgrue at March 11, 2011 10:08 AM

March 07, 2011

Chad Austin

In Defense of Language Democracy (Or: Why the Browser Needs a Virtual Machine)

Years ago, Mark Hammond did a bunch of work to get Python running inside Mozilla’s script tags. Parts of Mozilla are ostensibly designed to be language-independent, even. Unfortunately, even if Mozilla had succeeded at shipping multiple language implementations, it’s unlikely other browser vendors would have followed suit. It’s just not logistically feasible for every browser to host and care for the set of interesting languages on the client.

I can hear you asking “Why do I care about Python in the browser? Or C++? Or OCaml? JavaScript is a great language.” I agree! JavaScript is a great language. Given the extremely short timeframe and immense political pressure, I’m thrilled we ended up with something as capable as JavaScript.

Nonetheless, fair competition benefits everyone. Take a look at what’s happened in the web server space in the last few years: Ruby on Rails. Django. Node.js. nginx. Tornado. Twisted. AppEngine. MochiWeb. HipHop-PHP. ASP.NET MVC. A proliferation of interesting datastores: memcache, redis, riak, etc. That’s an incredible amount of innovation in a short period of time.

Now let’s go through the same exercise, but on the client. jQuery, YUI, fast JavaScript JITs, CSS3, CoffeeScript, proliferation of standards-compliant browsers, some amount of HTML5… Maybe ubiquitous usage of Flash video? These advancements are significant, but it’s clear the front-end stack is changing much more slowly than the back-end.

Why is the back-end evolving faster than the front-end?

When building an application backend, even atop a virtualized hosting provider such as EC2, you are given approximately raw access to a machine: x86 instruction set, sockets, virtual memory, operating system APIs, and all. Any software that runs on that machine competes at the same level. You can use Python or Ruby or C++ or some combination thereof. If Redis wants to innovate with new memory management schemes, nothing is stopping it. This ecosystem democratized – nay, meritocratized – innovation.

On the front-end, the problem boils down to the fact that JavaScript is built atop the underlying hardware but does not expose its capabilities, meaning browsers and JavaScript implementations are inherently more capable than anything built atop them.

Of course, any client-side technology is going to rev slower simply because it’s hard to get people to update their bits. Also, users decide which client bits they like best, whether they be Internet Explorer, Chrome, or Firefox. Now the technology-takes-time-to-gain-ubiquity problem has a new dimension: each browser vendor must also decide to implement this technology in a compatible way. It took years for even JavaScript to standardize across browsers.

However, if we could instead standardize the web on a performant and safe VM such as CLR, JVM, or LLVM, including explicit memory layout and allocation and access to extra cores and the GPU, JavaScript becomes a choice rather than a mandate.

This point of view depends on my prediction that JavaScript will not become competitive with native code, but not everyone agrees. If JavaScript does eventually match native code, then I’d expect the browser itself to be written in it. It’s impossible for me to claim that JavaScript will never match native code, but the sustained success of C++ in systems programming, games, and high-performance computing is a testament to the value of systems languages.

Native Client, however, gives web developers the opportunity to write code within 5-10% of native code performance, in whatever language they want, without losing the safety and convenience of the web. You can write web applications that leverage multiple cores, and with WebGL, you can harness dedicated graphics hardware as well. Native Client does restrict access to operating system APIs, but I expect APIs to evolve reasonably quickly.

Let’s take a particular example: the HTML5 video tag. Native Client could have sidestepped the entire which-video-codec-should-we-standardize spat between Mozilla, Google, Apple, and Microsoft by allowing each site to choose the codec it prefers. YouTube could safely deploy whatever codecs it wanted, and even evolve them over time.

With Native Client, we could share Python code between the front-end and the back-end. We could use languages that support weak references. We could implement IMVU’s asynchronous task system. We could embed new JavaScript interpreters in old browsers.

Native Client is not the only option here. The JVM and CLR are other portable and performant VMs that have seen considerable language innovation while approximating native code performance.

A standardized, performant, and safe VM in the browser would increase the strength of the open web versus native app stores and their arbitrary technology limitations.

Finally, I’d like to thank Alon Zakai (author of Emscripten), Mike Shaver, and Chris Saari for engaging in open, honest discussion. I hope this public discourse leads to a better web. Either way, I hope this is my last post on this topic. :)

by Chad Austin at March 07, 2011 08:17 PM

Native Client is Widely Misunderstood (And What Google Should Do About It)

Wow. My recent post about why Mozilla should adopt Native Client stirred up quite a storm. Some folks don’t believe the web needs high-performance applications. Some are happy with whatever APIs browsers expose. I disagree with these points, but I can respect them.

Most surprisingly, several respondents had simply untrue objections to Native Client, so I’d like to clear up their misconceptions. Then I will make recommendations to the Native Client team on how to fix their perception problems.

If you want to spend some minutes and learn about Native Client and LLVM from the horse’s mouth, watch this video.

Misconceptions about Native Client

Native Client implies x86

False. Originally, Native Client was positioned as an x86 sandbox technology, but now it has a clear LLVM story, with x86-32, x86-64, and partially-implemented ARM backends. Portability is a key benefit of the web, and Google understands this.

Native Client is complicated

True, it’s certainly not a trivial amount of code. But compare the amount of code in NativeClient vs. Mozilla’s JavaScript engine:

$ wc -l native_client/src/**/*.{c,h,cc}
...
155082 total
$ find mozilla-central/js/src -path '*tests*' -prune -o \( -iname '*.c' -o -iname '*.cc' -o -iname '*.h' -o -iname '*.cpp' \) -print0 | wc -l --files0-from=-
...
363471 total

NativeClient is at least on the same order of complexity as a modern JavaScript engine, and since it already provides performance within 5% of native code, I’d guess it’s less susceptible to change.

Native Client / LLVM is not an open standard

I empathize with this concern, but Flash isn’t an open standard and it sees wide adoption. The difference between Flash and Native Client is that Native Client / LLVM is open source and could easily become an open standard.

Native Client is insecure

Native Client was designed to be a secure x86 sandbox. Under the assumption that its basic security model is sound, the question then becomes “how large is the attack surface and how likely is it to be broken?” Given the amount of code in a modern web browser and JavaScript JIT, I don’t see how Native Client is any worse.

With a little more work, JavaScript will perform at the same level as native code

I’m not informed or involved enough to claim JavaScript can never be as fast as native code. However, I have my doubts. A friend was working on a Monte Carlo Go AI, and he initially wrote his algorithm in JavaScript. Monte Carlo requires simulating a large number of game states, and a naïve port of his JavaScript to C++ gave a 100x performance improvement.

Check out my skeletal animation benchmark, where the JavaScript JITs need another 10x to compete with native code.

Even if JITs can match native code in some benchmarks (and I hope they do), performance across browsers will depend on the particulars of the JIT implementation. Native Client, at least for pure computation, would perform the same in every browser.

We can simply compile languages like Haskell, Python, and C to optimized JavaScript and let the JIT sort it out.

There are some attempts to use JavaScript as a backend for other language implementations, but they rarely perform well. For example, CPython compiled to JavaScript via LLVM/Emscripten runs about 30x slower than a native build in Chrome, and 200x slower in Firefox 4 beta 8.

I’ve heard the argument for an RPython-like statically-analyzable subset of JavaScript that browsers could run very efficiently. This subset could operate as a de facto bytecode, and Emscripten could compile LLVM to it with minimal performance loss. It’s possible this could work, but directly exposing LLVM seems more fruitful.
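For the curious, here is a hypothetical sketch of what such a statically-analyzable subset might look like (the function and its coercion idioms are my illustration, not code from the post): every value is pinned to a machine type with `|0` (int32) or unary `+` (double), so an engine or compiler can infer types statically rather than speculate at runtime.

```javascript
// Hypothetical sketch of a statically-analyzable JavaScript subset.
// All values carry a machine type: `|0` coerces to int32, unary `+`
// coerces to double, and the heap is a typed array indexed by integers.
function dot3(h, pa, pb) {
  pa = pa | 0; // "pointers" are int32 indices into the heap
  pb = pb | 0;
  var x = 0.0, y = 0.0, z = 0.0;
  x = +h[pa] * +h[pb];                 // doubles, never boxed objects
  y = +h[(pa + 1) | 0] * +h[(pb + 1) | 0];
  z = +h[(pa + 2) | 0] * +h[(pb + 2) | 0];
  return +(x + y + z);
}
```

Because the types are recoverable by static analysis alone, a tool like Emscripten could target this style directly and a JIT could compile it without bailouts.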

Red Herring Arguments

JavaScript is easier to develop with than native languages

Sure, but that doesn’t mean native languages don’t have a purpose. My hypothesis is that there are problems for which JavaScript is not and will not be suited, and that exposing the native power of the machine is better for application developers, and thus the web.

Binaries are obscure

Minified JS isn’t human-readable either, but machines can reconstruct both. Drdaemon nails it in his comment.

“If you want native performance, just download software or install a plug-in!”

While this sentiment reflects today’s reality, it doesn’t reflect trends on the web. Web applications continue to supplant desktop applications. Google Docs, Creately, Pivotal Tracker, Gmail, Mockingbird, and all of the games on Facebook are examples where I would have used a desktop application in the past. It seems that, whenever browsers provide new capabilities, applications consume them. Why would that trend stop now?

Recommendations to the Native Client team

  1. Get a move on! Enable it by default! More flashy demos!
  2. Reposition Native Client as a portable technology and make sure it’s clear that LLVM is key to its strategy.

Finally, NativeClient is still new. I expect it will be some time before it’s solid enough to rely on for production use. That said, it has the potential to disrupt the desktop operating system and I’m excited for a future where all software is web-based.

by Chad Austin at March 07, 2011 08:17 PM

Digging Into JavaScript Performance

While JavaScript implementations have been improving by leaps and bounds, I predict that they still won’t meet the performance of native code within the next couple of years, even when plenty of memory is available and the algorithms are restricted to long, homogeneous loops. (Death-by-1000-cuts situations, where your profile is completely flat and function call overhead dominates, may be permanently relegated to statically compiled languages.)

Thus, I really want to see Native Client succeed, as it neatly jumps to a world where it’s possible to have code within 5-10% of the performance of native code, securely deployed on the web. I wrote a slightly inflammatory post about why the web should compete at the same level as native desktop applications, and why Native Client is important for getting us there.

Mike Shaver called me out. “Write a benchmark that’s important to you, submit it as a bug, and we’ll make it fast.” So I took the Cal3D skinning loop and wrote four versions: C++ with SSE intrinsics, C++ with scalar math, JavaScript, and JavaScript with typed arrays. I tested on a MacBook Pro, Core i5, 2.5 GHz, with gcc and Firefox 4.0 beta 8.

First, the code is on github.

The numbers:

Millions of vertices skinned per second (bigger is better)

It’s clear we’ve got a ways to go until JavaScript can match native code, but the Mozilla team is confident they can improve this benchmark. Even late on a Sunday night, Vlad took a look and found some suspiciously-inefficient code generation. If JavaScript grows SIMD intrinsics, that will help a lot.

From a coding style perspective, writing high-performance JavaScript is a challenge. In C++, it’s easy to express that a BoneTransform contains three four-float vectors, and they’re all stored contiguously in memory. In JavaScript, that involves using typed arrays and being very careful with your offsets. I would love to be able to specify memory layout without changing all property access to indices and offsets.
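To make the contrast concrete, here is a minimal, hypothetical sketch of the hand-simulated memory layout being described (the names `makeBones`/`setBone` and the 12-float-per-bone layout are my invention for illustration, not code from the benchmark):

```javascript
// A "BoneTransform" of three four-float vectors, stored contiguously
// in one Float32Array instead of as an object per bone.
const FLOATS_PER_BONE = 12; // 3 vectors x 4 floats

function makeBones(count) {
  return new Float32Array(count * FLOATS_PER_BONE);
}

// Writing the i-th bone means computing byte-offset-style indices by hand.
function setBone(bones, i, v0, v1, v2) {
  const base = i * FLOATS_PER_BONE;
  bones.set(v0, base);     // vector 0
  bones.set(v1, base + 4); // vector 1
  bones.set(v2, base + 8); // vector 2
}

// Reading vector v of bone i: more index arithmetic.
function getBoneVector(bones, i, v) {
  const base = i * FLOATS_PER_BONE + v * 4;
  return bones.subarray(base, base + 4);
}
```

Every field access becomes index arithmetic, which is exactly the ergonomic cost being lamented: the C++ version gets the same contiguous layout for free just by declaring a struct.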

Finally, if you want to track Mozilla’s investigation into this benchmark, here is the bug. I’m excited to see what they can do.

by Chad Austin at March 07, 2011 08:16 PM

Mozilla’s Rejection of NativeClient Hurts the Open Web

Update: To avoid potential confusion, I will plainly state my overall thesis. The primary benefit of the internet is its openness, connectedness, standardness. By not adopting a technology capable of competing with native apps on iOS, Android, Windows, and Mac, web vendors are preventing important classes of applications such as high-end games and simulations from moving to the open web.

Tom Forsyth writes that clock speeds have grown disproportionately relative to memory access, implying that dynamic languages such as Python or JavaScript, which perform more dependent memory reads, don’t reap the full benefits of Moore’s law. Tom then digs into Data-Oriented Design, whose proponents think primarily about how data is laid out in memory (physical structure) and secondarily about code’s syntax (logical structure). I would have loved to have seen Tom dig into empirical data about the performance of Python and JavaScript across a variety of architectures, especially now that memory subsystems are better and tracing JITs have caught on, but his point stands: memory analysis is critical for low-latency code on today’s architectures. Dynamic languages and virtual tables are at odds with predictable memory access patterns.

How does this apply to the web? Google has developed an x86 sandboxing technology called NativeClient which allows web pages to securely embed x86 code. NativeClient enables Data-Oriented Design on the web, bringing web applications to the same playing field as native applications, especially in domains such as 2D and 3D graphics, video encoding/decoding, audio processing, and simulation.

Mozilla publicly rejects NativeClient and its portable LLVM equivalent, PNaCl. Instead, Mozilla is choosing to invest in JavaScript improvements, predicting that JavaScript performance will come “close enough” to native code performance.

I argue that native code’s primary benefit lies in memory layout and access patterns, not instruction set benefits such as SIMD. With typed arrays, WebGL has brought some degree of explicit memory layout to JavaScript, but it’s still restrictive: typed arrays don’t provide pointers, structures, structure-of-arrays vs. array-of-structures, or variable-width records. These aren’t always easy to specify in C either, but at least NativeClient gives us the possibility to innovate on systems-level design, while preserving the convenience, security, and portability of web-based code.

Predictability is a further advantage of native code. In today’s browser climate, the JavaScript engines have sometimes wildly different performance characteristics. Even if each browser vendor implements its own x86 or LLVM sandbox, it’s unlikely that an application would run differently across browsers.

Beyond performance, NativeClient gives us the ability to target existing code written in C, C++, or even languages like Haskell, to the web. Emscripten and similar “translation taxes” are no longer necessary.

Finally, notice that web-based installation of native code is becoming more prevalent: iOS App Store, upcoming Mac App Store, Games for Windows Live, and Steam have shown it’s possible to make a seamless and compelling native code installation experience. However, these are all restrictive walled gardens! For the open web to compete, it needs a realistic answer to native code.

I believe that Mozilla’s insistence on pushing JavaScript over NativeClient hurts the open web by giving native applications an indefinite leg up. I want the web to support applications as rich as Supreme Commander, a game with thousands of units where each weapon trajectory is physically simulated. NativeClient would give us that capability.

Preemptive response: But NativeClient is x86! Basing the open web on a peculiar, old instruction set is a terrible idea! That’s why I point to LLVM and Portable NativeClient (PNaCl). It’s not a burden to target PNaCl by default, and cross-compile to x86 and ARM if they matter to you.

by Chad Austin at March 07, 2011 08:16 PM

March 04, 2011

Benjamin McGraw

HTML 5 Canvas Fonts

I took a small break from the menus to work on the fonts problem. Specifically, each browser renders the fonts in canvas differently. I attempted to solve the problem by using a pixel font (I grabbed 04b08Regular and made it into a font kit with FontSquirrel) but… apparently no dice.

It really looks like I’m going to have to do spritesheet font blitting. It’s a shame that the “HTML is awesome for UI!!!” solution doesn’t actually work for me in this case (it works when you can embed a single browser as a downloadable, not when you have to support them all ;( )

The menu work goes slowly but surely. I’m poking functions around and playing with their structure. Hopefully I stop pushing the peas around my plate sometime soon.

I met Kildorf in the flesh today. He had some excuse about a flooded house for not working on his cool mapeditor. Whatever, my Gruedorf win ratio is now 50.0%. Climbing my way back into the majority a fractional percentage point at a time.

by mcgrue at March 04, 2011 08:27 AM

March 01, 2011

Benjamin McGraw

More menu-ey stuff

Committed more menu system refactoring for spriteright to the repo. Things are going very slowly. I’ve been hella busy between being sick, going to GDC, getting ready for SXSWi, etc.

The important thing is that work, albeit slow, continues. I look forward to kicking some ass on plane rides soon.

by mcgrue at March 01, 2011 05:32 AM

February 22, 2011

Benjamin McGraw

Sully Menus

I’m in the process of converting from the silly non-functional menu (summoned and dismissed with ‘M’ on http://spriteright.com) to a fully formed RPG menu that actually “shows data” and “does stuff”. It’s relatively slow going this week, mainly refactoring and a minimum of actual time put in, sadly. I think I’ve only squeezed 2 hours into gruedorf this week. Sickness and work and life, etc… :(

At any rate, I’m looking forward to getting the bare bones of a party menu, status screen, and inventory screen in. I’ll probably be working on this aspect of the game for a few weeks at least. With luck, next week’s update will have more screenshotty goodness.

by mcgrue at February 22, 2011 10:25 AM

February 15, 2011

Benjamin McGraw

Mapswitches complete.

Today’s gruedorf update brings a full, completed mapswitch. You can leave the island and go to the undersea cavern (and back) once you complete your conversation with Crystal.

Progress is slow lately, but I’m hoping to have time over the weekend to do some impressive stuff. Map Parallax and single-layer lucency is next on my list. And finishing all of the undersea cavern scripting.
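For the parallax half, the math is just scaling the camera position per layer; lucency is a per-layer alpha at draw time. A rough sketch of what that might look like (the function names and draw plumbing are hypothetical, not the code in the repo):

```javascript
// Parallax scrolling (sketch): a background layer scrolls at a fraction
// of the camera's speed. A factor of 1.0 tracks the camera exactly;
// 0.0 is a fixed backdrop.
function parallaxOffset(cameraX, cameraY, factor) {
    return {
        x: Math.floor(cameraX * factor),
        y: Math.floor(cameraY * factor)
    };
}

// Single-layer lucency is just a per-layer alpha at draw time.
// ctx and layerCanvas are supplied by the caller.
function drawLayer(ctx, layerCanvas, cameraX, cameraY, factor, alpha) {
    var o = parallaxOffset(cameraX, cameraY, factor);
    ctx.save();
    ctx.globalAlpha = alpha;          // e.g. 0.5 for a half-lucent layer
    ctx.drawImage(layerCanvas, -o.x, -o.y);
    ctx.restore();
}
```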

by mcgrue at February 15, 2011 06:57 AM

February 14, 2011

Joshua McKenty

Better Integration of Jpype and Log4J

We’ve been using JPype at GEM to integrate some existing legacy Java libraries with the new Python code in OpenQuake. Until today, one of the ugliest parts of this integration has been logging – although we’re using the popular Log4J library to manage log output within the Java code, the Jpype JVM has a separate file descriptor for the console from the Python environment.

To deal with this, I whipped up a quick OutputStream and matching Proxy interface:

// PythonOutputStream.java
package org.gem;

import java.io.IOException;
import java.io.OutputStream;

public class PythonOutputStream extends OutputStream {
    private IPythonPipe thispipe;

    public void setPythonStdout(IPythonPipe mypipe) {
        thispipe = mypipe;
    }

    @Override
    public void write(int arg0) throws IOException {
        thispipe.write((char) arg0);
    }
}

// IPythonPipe.java
package org.gem;

public interface IPythonPipe {
    public void write(char output);
}
This interface matches the write method of the Python sys.stdout pipe, making the connection between Java and Python as simple as the following:
mystream = jpype.JProxy("org.gem.IPythonPipe", inst=sys.stdout)
errstream = jpype.JProxy("org.gem.IPythonPipe", inst=sys.stderr)
outputstream = jpype.JClass("org.gem.PythonOutputStream")()
err_stream = jpype.JClass("org.gem.PythonOutputStream")()
outputstream.setPythonStdout(mystream)
err_stream.setPythonStdout(errstream)
ps = jpype.JClass("java.io.PrintStream")
jpype.java.lang.System.setOut(ps(outputstream))
jpype.java.lang.System.setErr(ps(err_stream))
Up next time: how we managed to share the same JVM across many modules, plus some bonus notes on setting Java system properties in Jpype.

by admin at February 14, 2011 11:52 PM

Build something you care about

Many of you are probably expecting me to say something about my new venture, Piston. You might be expecting some discussion of the Rackspace acquisition of Anso Labs, or why I left Anso. In fact, you might (quite reasonably) be expecting me to talk about why I left NASA.

Too bad.

Have I lost my passion for open source, open data, open science or open government?

Certainly not.

I’m still on the OpenStack Project Oversight Committee, and I plan on kicking ass, now more than ever. At the same time, I have never seen a conflict between a capitalist agenda, and a social agenda – and I’ve now got a great occasion to prove that.

Pompeii Columns

While I was tripping around Italy over new years, I had a chance to tour Pompeii with my children. It’s a great place to pause for a moment and appreciate permanence, and the idea of building something to last.

What are *you* working on? If it was dug out of the ashes 1,000 years from now, would you feel proud of it?


by admin at February 14, 2011 11:49 PM

February 08, 2011

Benjamin McGraw

fades and mapswitches

Currently visible progress on www.spriteright.com: the first full cutscene is in place, fades and chats and art and all.

Currently readable progress in github only demonstrable if you check it out and run it locally: mapswitching. Still working a few bugs out there.

by mcgrue at February 08, 2011 05:23 PM

February 01, 2011

Benjamin McGraw

Disappearing girl

Today’s gruedorf entry brings us one step closer to finishing the first map of Sully. Now the Crystal sprite can be removed from the scene, “disappearing butler style”.

Next up is to add fading, and then mapswitching…

by mcgrue at February 01, 2011 11:49 AM

January 26, 2011

Benjamin McGraw

The problem with gtalk conversations

The problem with gtalk conversations.

by mcgrue at January 26, 2011 08:16 PM

January 25, 2011

Benjamin McGraw

Entities and Collisions

Not much new this week (been busy with family in town) but that will not cease my gruedorfian obligations!

For this week’s hourly minimum sacrifice, I present: entities that obstruct!

Next up is firing events/executing code conditionally upon that (onBump?) and maybe getting to some z-ordering with entities. Right now the player walks all over the entities. Literally.

by mcgrue at January 25, 2011 10:18 AM

January 18, 2011

Benjamin McGraw

Sliding around

There was a lot of failure earlier in this gruedorf-week attempting to import verge3’s obstruction and player movement system into spriteright. That failed a lot. The transliteration of C into javascript was largely mechanical, but the truth of the matter was there was far too much for me to convert at one time before I saw a visible return, so the bugs were legion.

I was impressed how easily I could convert C into javascript, though. And I learned that javascript had bitshifting operators!

I really didn’t expect that for some reason. Most likely because ints aren’t supported natively in the language.

At any rate I threw out most of the code and started again anew, and worked towards an obstruction system that I couldn’t walk through the walls of. And I got there tonight!

Then I started implementing an obstruction system that’d let me slide around on the diagonals. And I got there tonight!

But the hack I used to get the diagonals-sliding lets you get embedded in and/or walk through walls again. So that’s got some bugfixing in its future.
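One common way to get the slide without the embedding is to resolve the two movement axes independently. A minimal sketch of the idea (not the code in the repo — isObstructed is a stand-in for whatever hit test the engine actually uses):

```javascript
// Axis-separated movement (sketch): attempt the x and y components of a
// move independently, so a blocked diagonal still slides along the free
// axis. isObstructed(x, y) is a caller-supplied hit test; because each
// axis is checked against the map before it's applied, the mover can't
// end up embedded in a wall.
function tryMove(x, y, dx, dy, isObstructed) {
    if (dx !== 0 && !isObstructed(x + dx, y)) {
        x += dx;
    }
    if (dy !== 0 && !isObstructed(x, y + dy)) {
        y += dy;
    }
    return { x: x, y: y };
}
```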

The important part is: it feels fun again, and I made visible progress.

I also implemented an autoexec script for maps upon load (mapinit) and shoved a few setObstructionTile calls in there to make the map easier to navigate (specifically, to make it easier to get into and out of the hut).

So that’s neat.

Press ‘O’ to see the obstructions while you play!

Go see SpriteRight in action!

Or maybe check out its source at github.

by mcgrue at January 18, 2011 12:18 PM

January 17, 2011

Benjamin McGraw

The universal problem-solving technique

1. Find an error message.
2. google that error message in quotes.

If 2 fails, google substrings of the error message.

Only after several failed iterations of this process should you bug your cool programmer friends. You’ll be glad you did. They’ll be glad you did.

And, if you actually needed to ask your cool programmer friends for help (or discovered the answer yourself in spite of failed searches) you should make a public blog post with the error message itself in the title and the entire problem you experienced (with solution) in the body.

In this way, you are improving the internet and making the next guy’s life easier. Also, writing about the solution will help you remember it yourself next time, so it’s a double-win.

by mcgrue at January 17, 2011 03:05 AM

January 15, 2011

Chad Austin

Holiday Report

Things I learned over the holidays:

by Chad Austin at January 15, 2011 09:11 AM

January 13, 2011

Chuck Rector

Change

Sometimes you see something

And you know it's what you want.

And that's where you'll go.

That's where you'll be.

You know it.

You've decided.

In the same way, this extends

To experiences.

To have done

Or have had the taste of something,

To know what it's like

And that it's different from your norm,

That it's better in some way,

Even if only in its difference,

Can be inspiration enough to change.

And you decide your future,

As simple as that.

It's a beautiful thing.

by noreply@blogger.com (卡车 Chuck) at January 13, 2011 06:06 AM

Benjamin McGraw

The Firefox Debugger is a Den of Lies ( Canvas.getImageData )

I’ve become accustomed to my best friend in javascript development being Firebug. Even though Firefox has become a worse browsing choice than Chrome, I always develop in it because Firebug still makes me much happier than the chrome inspector.

But last night I lost an hour or so because I forgot one of the oldest rules of javascript development: Sometimes it lies.

I was modifying a friend’s loader to get the pixel data out of an image to accept elements that were already loaded into the DOM. This is what I had:

function getImageData(img) {
    var canvas = document.createElement('canvas');
    canvas.width = img.width;   // size the canvas itself, not the context
    canvas.height = img.height;
    var ctx = canvas.getContext('2d');
    ctx.drawImage(img, 0, 0);
    var res = ctx.getImageData(0, 0, img.width, img.height);
    // debugger;
    return res;
}

…but when I was testing it, res there was very much an empty object. Then started the gnashing of teeth and recoding and the step-by-step debugging.

After I threw in the Towel of Pride, Daniel provided a sanity check and found that res did indeed hold the specific object I was looking for.

So what gives?

Apparently, javascript ImageData objects have a defined toString() returning “Object { }”.

I find toString overloads of this type to be a terrible throwback to the dark ages of javascript. I should’ve realized that what it said wasn’t actually an empty object (which would’ve been represented as just “{}”) but in fact was a much more insidious manifestation of the old “[object Object]” which plagued javascript alert() messages for years.

Please, developers, if you make a toString() overload for your javascript objects, make them read like they contain data. :(
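Here’s a small self-contained illustration of the trap, using a fake object rather than a real ImageData:

```javascript
// How a toString() overload can make a populated object print like an
// empty one (illustrative stand-in, not the real ImageData):
var fakeImageData = {
    width: 16,
    height: 16,
    data: [255, 0, 0, 255],
    toString: function () { return "Object { }"; }
};

// String contexts consult toString(), so logging "" + obj hides the data:
var printed = "" + fakeImageData;       // "Object { }"

// A friendlier overload reads like it contains data:
fakeImageData.toString = function () {
    return "ImageData(" + this.width + "x" + this.height + ")";
};
var better = "" + fakeImageData;        // "ImageData(16x16)"
```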

by mcgrue at January 13, 2011 01:26 AM

January 11, 2011

Benjamin McGraw

Tuning Canvas Javascript in Firefox 4

My friend Kevin took an interest in why SpriteRight was performing so poorly in Firefox 4, since it has hardware acceleration.

So after cracking open his profile and bothering the firefox developers themselves a bit (Kevin takes his debugging seriously) he found that my tilesheet was stupidly formed: it was over 20,000 pixels long and so was much, much larger than a texture can be on a modern video card. This was forcing the renderer to always be in software rendering mode.

When I was dumping the data out of verge, this was not something I was thinking about.

So, at his suggestion, I made the tileset filmstrip into a more traditional atlas-style spritesheet, and after a few nips and tucks in the engine, rendering speed went from less than 10 FPS to over 70 fps on firefox 4! Yay!

(Firefox 3 gained nothing. Boo.)
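The difference between the filmstrip and the atlas is just how a tile index maps to a source rectangle. A rough sketch of the math (tile size and column count here are illustrative):

```javascript
// Tile index -> source rect (sketch). A "filmstrip" is the degenerate
// one-column case, which makes the sheet tileCount * tileSize pixels
// tall and can blow past a video card's maximum texture size. An atlas
// wraps the same tiles into a roughly square grid that fits.
function tileSourceRect(index, tileSize, columns) {
    return {
        x: (index % columns) * tileSize,
        y: Math.floor(index / columns) * tileSize,
        w: tileSize,
        h: tileSize
    };
}

// Filmstrip: every tile in column 0, so the sheet height grows linearly.
var strip = tileSourceRect(100, 16, 1);
// Atlas: the same tile wrapped into a 32-column sheet.
var atlas = tileSourceRect(100, 16, 32);
```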

Anyways, that was all well and good, until Kevin tried toggling the obstruction layer’s visibility. At which point FF4 dropped back to 10fps. This made no sense to me, because the marginal cost of rendering that layer would’ve been another 33% (it basically just added another layer to the renderer).

The trick was the obs layer rendered using the ‘lighter’ compositeOperation, which was causing the dramatic slowdown, even on accelerated cards. I’ve opted to turn that off for now.

Thanks for the help, Kevin!

by mcgrue at January 11, 2011 10:46 AM

January 09, 2011

Benjamin McGraw

Sully Chronicles in the Web!

Work continues apace over at spriteright.com on the html5 version of The Sully Chronicles. I’ve renamed the engine to remove confusion: it’s not java, it’s javascript, and it’s not verge, it’s verge-like.

Anyways.

There’s now event activation (both talking-to-things, like Sully and Sancho, and walking on things, like the door of the hut). I’m currently working on getting entities spawning and walking around at which point I’ll focus on making a more-better collision-detection system, better obstructions, and fix the music back up.
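For the curious, the talking-to-things half of event activation can be wired up with very little code. A hypothetical sketch — the entityAt lookup and onActivate names are stand-ins, not the real spriteright API:

```javascript
// "Talk" activation (sketch): when the action key is pressed, look at
// the tile the player is facing and fire that entity's handler if it
// has one. entityAt(x, y) is a caller-supplied lookup.
function facingTile(px, py, dir) {
    var d = { up: [0, -1], down: [0, 1], left: [-1, 0], right: [1, 0] }[dir];
    return { x: px + d[0], y: py + d[1] };
}

function activate(px, py, dir, entityAt) {
    var t = facingTile(px, py, dir);
    var ent = entityAt(t.x, t.y);
    if (ent && ent.onActivate) {
        ent.onActivate();
        return true;
    }
    return false;
}
```

Walking-on-things is the same dispatch keyed off the tile the player just stepped onto instead of the one being faced.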

Play the demo right now

by mcgrue at January 09, 2011 02:33 AM

January 02, 2011

Benjamin McGraw

Obstruction Handling

I’m pretty sure collision/obstruction handling exists in the venn overlap between engine and game.

How you do it colors a lot of a game’s “feel”, but in initial prototypes you want it provided for you without thinking about it.
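The “provided for you without thinking about it” default is usually just a tile-grid hit test. A minimal sketch of that baseline (tile size and grid layout are illustrative):

```javascript
// Baseline obstruction check (sketch): map pixel coordinates onto a
// boolean obstruction grid. TILE_SIZE and the grid shape are
// illustrative; a real engine layers its own "feel" on top of this.
var TILE_SIZE = 16;

function isObstructedAt(obsGrid, px, py) {
    var tx = Math.floor(px / TILE_SIZE);
    var ty = Math.floor(py / TILE_SIZE);
    if (ty < 0 || ty >= obsGrid.length || tx < 0 || tx >= obsGrid[0].length) {
        return true; // treat off-map as solid
    }
    return obsGrid[ty][tx] === 1;
}
```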

by mcgrue at January 02, 2011 08:23 AM

HTML5 and Canvas and Games

Hey look, a gruedorf entry without me working on a website!

What have I been working on this week?

Hey look: Sully in a web browser!.

I registered “javerge.com” on a whim because I have a terrible, terrible problem with purchasing domains. This is a WIP experiment that actually has little to do with VERGE structurally. It’s all javascript and html5 canvas, with a little flash for the music-playing capabilities. But it is verge-related and I’ve been doing asset conversion, so whatever.

I’m still looking for a name for the engine; right now I’m thinking something Sully-related, like Clam Engine or Lord Stan or Castle Heck.

Or “No Caves”.

Whatever. Anyways, it’s pretty early in and non-optimized, but I’m having fun banging it out. You can grab the code at https://github.com/mcgrue/js-verge.

The license will be BSD or MIT. Probably MIT because I need to show off my East Coast Cred in this here West Coast Silicon Valley. EST 4eva!!!1

by mcgrue at January 02, 2011 07:27 AM

December 29, 2010

Benjamin McGraw

“Package lua5.1 was not found in the pkg-config search path” on OSX

So I was trying to install Lua-GD on my macbook pro, but there was no OSX binary.

I figure, eh, I can build a package. So then I download the source and do the make dance. After a bit I get to:

Package lua5.1 was not found in the pkg-config search path.
Perhaps you should add the directory containing `lua5.1.pc'
to the PKG_CONFIG_PATH environment variable
No package 'lua5.1' found

And that’s where I got stuck after several googles. The internets were unkind to this problem.

So I stepped away for a bit, and when I got back to the computer I decided I was probably approaching this from the wrong angle. At which point I did a search on macports and found lua-lua-gd.

Ratholes aren’t fun. Sometimes it’s best to take a step back instead of pounding a round peg into a square hole.

by mcgrue at December 29, 2010 08:12 AM

December 26, 2010

Benjamin McGraw

Github Setup Commands

(This is mainly because I keep forgetting them and I don’t want to keep making new repositories to see these steps.)

Global setup:

Download and install Git, and then…

git config --global user.name "MY NAME"
git config --global user.email MY@EMAIL.COM

Next steps:

mkdir MY-NEW-REPOSITORY
cd MY-NEW-REPOSITORY
git init
touch README
git add README
git commit -m 'first commit'
git remote add origin git@github.com:GITHUB-USERNAME/MY-NEW-REPOSITORY.git
git push origin master

Existing Git Repo?

cd existing_git_repo
git remote add origin git@github.com:GITHUB-USERNAME/MY-NEW-REPOSITORY.git
git push origin master

by mcgrue at December 26, 2010 07:19 AM

verge-rpg fully converted from cakephp 1.1 to cakephp 1.3

It was ultimately a much less arduous task than I’d feared.

www.verge-rpg.com is currently live with the updated version of the codebase. r699 on that svn repo has been reached.

The last metroid is in captivity, the galaxy is at peace.

Oh, and I may be starting up a new project

by mcgrue at December 26, 2010 02:46 AM

December 21, 2010

Michael Rooney

Diving into EC2 and Auto Scaling

I'm rather new to EC2 (and as a result its CloudWatch / Auto Scaling features), so I figured I'd post about my current thoughts and see if anyone can tell me if I'm on the right track! I'm writing everything up as I go, so hopefully it will make a solid blog post for others in the future for going from zero to EC2 + Auto Scaling.

Based on my initial research, it looks like the stack will be comprised of:

Any yays or nays on this stack? One particular curiosity I had is if Mathiaz's guides would be simpler in Maverick, since they were written for Lucid and it seems like each release brings improvements for both EC2/UEC and Puppet. I'll also have to figure out how to put some of the roles behind a load balancer, but that's for later.

Thanks for any suggestions! Once I've got it all figured out I'll put up a somewhat comprehensive guide, unless one already exists that I've missed.

by Michael (noreply@blogger.com) at December 21, 2010 05:31 PM

Benjamin McGraw

.2 more cake!

In direct defiance of what I said I wouldn’t do a scant month ago, I’m currently converting the verge-rpg.com codebase from cakephp 1.1 to 1.3.

Why? Because I’m sick of dealing with the anachronisms from the original library. When I built pingpawn in 1.3 natively, every new feature effortlessly flowed into being. That’s the sort of thing I covet, especially after 2 weeks of chasing down bugfixes since the site re-launch.

I want this beastie maintainable, not to be in a de-facto feature-freeze for years like last time.

On r677 as of tonight. Whee!

by mcgrue at December 21, 2010 10:17 AM

December 15, 2010

Benjamin McGraw

Request exceeded the limit of 10 internal redirects due to probable configuration error.

In my experience, if you get the old apache default error page of :
Internal Server Error
The server encountered an internal error or misconfiguration and was unable to complete your request.

It tends to be a misconfigured .htaccess file somewhere.

www.johnweng.com was down today with that error message. JohnWeng.com is a website I host for Ustor which contains, among other things, the gruedorf competition homepage.

So the first thing I do is to shell into the server and change into the webroot directory for johnweng.com.

But there was no htaccess file.

The next most common problem I find is unix permissions-based, although you don’t tend to get 500s in that instance. Regardless, I started asserting that ustor owned everything, www-data was the group on everything, and the proper chmod was in place.

Still nothing. Still the same error.

The next thing I do is verify that the name “johnweng.com” was actually resolving to my server. It always helps to verify that you’re looking at the right level of the problem. One traceroute later and I’ve confirmed that DNS is working as expected: the name was going to the numerical IP of the host box.

So this got curious. And increasingly frustrating.

My next step was to go to my apache log file, like the 500 error page helpfully suggested. Whenever I hit the site, I’d get the following error:
Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace.

Which reinforced the initial “it has to be an .htaccess file, damnit!” reaction. Those wily mofos are always redirectin’ shit. But where could it be?

The answer is, of course, up a directory.

With the proper apache server settings, a directory can fall back to its parent folder’s .htaccess file if one isn’t present in the current directory. And so on, and so on. I didn’t think to look in the shared directory that all my hosted sites sit in because it’s not a webroot itself, and why would there be an htaccess file there anyway?

It turns out I accidentally checked out a copy of vrpg’s svn to that shared root directory two nights prior during some late-night hackery. This caused vrpg’s htaccess file to be one directory up from where johnweng.com’s webroot was. And apparently even though johnweng.com was “the top” directory in its structure, apache still looked to its parent directory and found a htaccess file to access the rules from.

Lesson learned: apache doesn’t care about its own webroots. It just cares about folders.

As a side note, I didn’t notice that this mis-checkout happened until johnweng.com was noticed to be down because every other site I deal with has its own htaccess file in its own webroot, defining exactly the rules that vrpg’s misplaced htaccess file was. So each of those child directories was overriding the rogue parent file, and so was acting as expected.

Yay systems administration!

by mcgrue at December 15, 2010 05:29 AM

December 12, 2010

Benjamin McGraw

Galleries finished.

This week brought editing of individual gallery pages, rss for newest screenshots uploaded, and a fancy random-image-rotating box on the front page of verge-rpg.com. Go over there to check it out.

by mcgrue at December 12, 2010 01:39 PM

December 06, 2010

Benjamin McGraw

Gruedorf: gallery uploading and reordering

I was hoping to be done with the galleries feature by the end of this week, but even though there was a Super Happy Dev House at the Hacker Dojo, I wasn’t able to get my code-dick up this weekend.

I hear it happens to everyone from time to time.

Charming turns of vulgar phrase aside, I have uploads, thumbnailings, and reorderings of the galleries in now. The last thing to do is editing titles/description for individual pages, and some quick api calls so I can build a frontpage widget that cycles through screenshots all sexy-like.

by mcgrue at December 06, 2010 07:10 AM

November 29, 2010

Chad Austin

Tracing Leaks in Python: Find the Nearest Root

Garbage Collection Doesn’t Mean You Can Ignore Memory Altogether…

This post is available on the IMVU Engineering Blog.

Garbage collection removes a great deal of burden from programming. In fact, garbage collection is a critical language feature for all languages where abstractions such as functional closures or coroutines are common, as they frequently create reference cycles.

IMVU is a mix of C++ and Python. The C++ code generally consists of small, cohesive objects with a clear ownership chain. An Avatar SceneObject owns a ModelInstance which owns a set of Meshes which own Materials which own Textures and so on… Since there are no cycles in this object graph, reference-counting with shared_ptr suffices.

The Python code, however, is full of messy object cycles. An asynchronous operation may hold a reference to a Room, while the Room may be holding a reference to the asynchronous operation. Often two related objects will be listening for events from the other. While Python’s garbage collector will happily take care of cycles, it’s still possible to leak objects.

Imagine these scenarios:

To detect these types of memory leaks, we use a LifeTimeMonitor utility:

a = SomeObject()
lm = LifeTimeMonitor(a)
del a
lm.assertDead() # succeeds

b = SomeObject()
lm = LifeTimeMonitor(b)
lm.assertDead() # raises ObjectNotDead

We use LifeTimeMonitor’s assertDead facility at key events, such as when a user closes a dialog box or 3D window. Take 3D windows as an example. Since they’re the root of an entire object subgraph, we would hate to inadvertently leak them. LifeTimeMonitor’s assertDead prevents us from introducing an object leak.

It’s good to know that an object leaked, but how can you determine why it can’t be collected?

Python’s Garbage Collection Algorithm

Let’s go over the basics of automatic garbage collection. In a garbage-collected system there are objects and objects can reference each other. Some objects are roots; that is, if an object is referenced by a root, it cannot be collected. Example roots are the stacks of live threads and the global module list. The graph formed by objects and their references is the object graph.

In SpiderMonkey, Mozilla’s JavaScript engine, the root set is explicitly-managed. SpiderMonkey’s GC traverses the object graph from the root set. If the GC does not reach an object, that object is destroyed. If C code creates a root object but fails to add it to the root set, it risks the GC deallocating the object while it’s still in use.

In Python however, the root set is implicit. All Python objects are ref-counted, and any that can refer to other objects — and potentially participate in an object cycle — are added to a global list upon construction. Each GC-tracked object can be queried for its referents. Python’s root set is implicit because anyone can create a root simply by incrementing an object’s refcount.

Since Python’s root set is implicit, its garbage collection algorithm differs slightly from SpiderMonkey’s. Python begins by setting GCRefs(o) to CurrentRefCount(o) for each GC-tracked PyObject o. Then it traverses all referents r of all GC-tracked PyObjects and subtracts 1 from GCRefs(r). Then, if GCRefs(o) is nonzero, o is an unknown reference, and thus a root. Python traverses the now-known root set and increments GCRefs(o) for any traversed objects. If any object o remains where GCRefs(o) == 0, that object is unreachable and thus collectible.

Finding a Path From the Nearest Root to the Leaked Object

Now that we know how Python’s garbage collector works, we can ask it for its set of roots by calculating GCRefs(o) for all objects o in gc.get_objects(). Then we perform a breadth-first-search from the root set to the leaked object. If the root set directly or indirectly refers to the leaked object, we return the path our search took.

Sounds simple, but there’s a catch! Imagine that the search function has signature:

PyObject* findPathToNearestRoot(PyObject* leakedObject);

leakedObject is a reference (incremented within Python’s function-call machinery itself) to the leaked object, making leakedObject a root!

To work around this, change findPathToNearestRoot so it accepts a singleton list containing a reference to the leaked object. findPathToNearestRoot can borrow that reference and clear the list, ensuring that leakedObject has no untracked references.

findPathToNearestRoot will find paths to expected Python roots like thread entry points and module objects. But, since it directly mirrors the behavior of Python’s GC, it will also find paths to leaked C references! Obviously, it can’t directly point you to the C code that leaked the reference, but the reference path should be enough of a clue to figure it out.

The Code

template<typename ArgType>
void traverse(PyObject* o, int (*visit)(PyObject* visitee, ArgType* arg), ArgType* arg) {
    if (Py_TYPE(o)->tp_traverse) {
        Py_TYPE(o)->tp_traverse(o, (visitproc)visit, arg);
    }
}

typedef std::map<PyObject*, int> GCRefs;

static int subtractKnownReferences(PyObject* visitee, GCRefs* gcrefs) {
    if (gcrefs->count(visitee)) {
        Assert(PyObject_IS_GC(visitee));
        --(*gcrefs)[visitee];
    }
    return 0;
}

typedef int Backlink; // -1 = none

typedef std::vector< std::pair<Backlink, PyObject*> > ReferenceList;
struct Referents {
    std::set<PyObject*>& seen;
    Backlink backlink;
    ReferenceList& referenceList;
};

static int addReferents(PyObject* visitee, Referents* referents) {
    if (!referents->seen.count(visitee) && PyObject_IS_GC(visitee)) {
        referents->referenceList.push_back(std::make_pair(referents->backlink, visitee));
    }
    return 0;
}

static Backlink findNextLevel(
    std::vector<PyObject*>& chain,
    const ReferenceList& roots,
    PyObject* goal,
    std::set<PyObject*>& seen
) {
    if (roots.empty()) {
        return -1;
    }

    for (size_t i = 0; i < roots.size(); ++i) {
        if (roots[i].first != -1) {
            if (goal == roots[i].second) {
                chain.push_back(goal);
                return roots[i].first;
            }
            seen.insert(roots[i].second);
        }
    }

    ReferenceList nextLevel;
    for (size_t i = 0; i < roots.size(); ++i) {
        Referents referents = {seen, static_cast<Backlink>(i), nextLevel};
        traverse(roots[i].second, &addReferents, &referents);
    }

    Backlink backlink = findNextLevel(chain, nextLevel, goal, seen);
    if (backlink == -1) {
        return -1;
    }

    chain.push_back(roots[backlink].second);
    return roots[backlink].first;
}

static std::vector<PyObject*> findReferenceChain(
    const std::vector<PyObject*>& roots,
    PyObject* goal
) {
    std::set<PyObject*> seen;
    ReferenceList unknownReferrer;
    for (size_t i = 0; i < roots.size(); ++i) {
        unknownReferrer.push_back(std::make_pair<Backlink>(-1, roots[i]));
    }
    std::vector<PyObject*> rv;
    // going to return -1 no matter what: no backlink from roots
    findNextLevel(rv, unknownReferrer, goal, seen);
    return rv;
}

static object findPathToNearestRoot(const object& o) {
    if (!PyList_Check(o.ptr()) || PyList_GET_SIZE(o.ptr()) != 1) {
        PyErr_SetString(PyExc_TypeError, "findNearestRoot must take a list of length 1");
        throw_error_already_set();
    }

    // target = o.pop()
    object target(handle<>(borrowed(PyList_GET_ITEM(o.ptr(), 0))));
    if (-1 == PyList_SetSlice(o.ptr(), 0, 1, 0)) {
        throw_error_already_set();
    }

    object gc_module(handle<>(PyImport_ImportModule("gc")));
    object tracked_objects_list = gc_module.attr("get_objects")();
    // allocating the returned list may have run a GC, but tracked_objects won't be in the list

    std::vector<PyObject*> tracked_objects(len(tracked_objects_list));
    for (size_t i = 0; i < tracked_objects.size(); ++i) {
        object to = tracked_objects_list[i];
        tracked_objects[i] = to.ptr();
    }
    tracked_objects_list = object();

    GCRefs gcrefs;
    
    // TODO: store allocation/gc count per generation

    for (size_t i = 0; i < tracked_objects.size(); ++i) {
        gcrefs[tracked_objects[i]] = tracked_objects[i]->ob_refcnt;
    }

    for (size_t i = 0; i < tracked_objects.size(); ++i) {
        traverse(tracked_objects[i], subtractKnownReferences, &gcrefs);
    }

    // BFS time
    
    std::vector<PyObject*> roots;
    for (GCRefs::const_iterator i = gcrefs.begin(); i != gcrefs.end(); ++i) {
        if (i->second && i->first != target.ptr()) { // Don't count the target as a root.
            roots.push_back(i->first);
        }
    }
    std::vector<PyObject*> chain = findReferenceChain(roots, target.ptr());

    // TODO: assert that allocation/gc count per generation didn't change

    list rv;
    for (size_t i = 0; i < chain.size(); ++i) {
        rv.append(object(handle<>(borrowed(chain[i]))));
    }

    return rv;
}

by Chad Austin at November 29, 2010 07:36 PM

Benjamin McGraw

Galleries (Viewing-only)

Well, about a 10 hour bender later and I have viewing galleries re-implemented. This would’ve gone much, much faster if I was using cakePHP1.3, but I’m not spending the time porting everything I’ve done to the new system at this time.

Examples:
http://verge-rpg.com/gallery/pistil_panik/
http://verge-rpg.com/gallery/verge3_2_release/sully_title_screen

The short story is: vrpg has galleries again, and now you can comment on them.

Next update will be about creating new galleries. >_>

I also have been slaying bugs and reporting them over at this thread: http://verge-rpg.com/forums/website-issues/the-website-bugfix-thread

Hopefully there’s only a few more full days of work in this project. The galleries are one of the biggest chunks left (that I’m aware of).

by mcgrue at November 29, 2010 10:06 AM

November 24, 2010

Chad Austin

How to Write an Interactive, 60 Hz Desktop Application

This post is available on the IMVU Engineering Blog.

IMVU’s client application doesn’t fit neatly into a single development paradigm:

Thus, let us clarify some specific requirements:

Naive Approach #1

Windows applications typically have a main loop that looks something like:

MSG msg;
while (GetMessage(&msg, 0, 0, 0) > 0) {
    TranslateMessage(&msg);
    DispatchMessage(&msg);
}

What went wrong

Using SetTimer/WM_TIMER sounds like a good idea for simulation and painting, but it’s way too imprecise for interactive applications.

Naive Approach #2

Games typically have a main loop that looks something like the following:

while (running) {
    // process input events
    MSG msg;
    while (PeekMessage(&msg, 0, 0, 0, PM_REMOVE)) {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }

    if (frame_interval_has_elapsed) {
        simulate_world();
        paint();
    }
}

What went wrong

The above loop never sleeps, draining the user’s battery and burning her legs.

Clever Approach #1: Standard Event Loop + timeSetEvent

void runMainLoop() {
    MSG msg;
    while (GetMessage(&msg, 0, 0, 0) > 0) {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
}

void customWindowProc(...) {
    if (message == timerMessage) {
        InterlockedExchange(&inFlight, 0);  // allow the next tick to post again
        simulate();
        // schedules paint with InvalidateRect
    }
}

static LONG inFlight = 0;  // guards against flooding the queue with timer messages

void CALLBACK TimerProc(UINT, UINT, DWORD_PTR, DWORD_PTR, DWORD_PTR) {
    if (0 == InterlockedExchange(&inFlight, 1)) {
        PostMessage(frameTimerWindow, timerMessage, 0, 0);
    }
}

void startFrameTimer() {
    RegisterClass(customWindowProc, ...);
    frameTimerWindow = CreateWindow(...);
    timeSetEvent(FRAME_INTERVAL, 0, &TimerProc, 0, TIME_PERIODIC);
}

What went wrong

The main loop’s GetMessage call always returns messages in a priority order. Slightly oversimplified, posted messages come first, then WM_PAINT messages, then WM_TIMER. Since timerMessage is a normal message, it will preempt any scheduled paints. This would be fine for us, since simulations are cheap, but the dealbreaker is that if we fail to maintain frame rate, WM_TIMER messages are entirely starved. This violates our graceful degradation requirement. When frame rate begins to degrade, code dependent on WM_TIMER shouldn’t stop entirely.

Even worse, the modal dialog loop has a freaky historic detail. It waits for the message queue to be empty before displaying modal dialogs. When painting can’t keep up, modal dialogs simply don’t appear.

We tried a bunch of variations, setting flags when stepping or painting, but they all had critical flaws. Some continued to starve timers and dialog boxes and some degraded by ping-ponging between 30 Hz and 15 Hz, which looked terrible.

Clever Approach #2: PostThreadMessage + WM_ENTERIDLE

A standard message loop didn’t seem to be getting us anywhere, so we changed our timeSetEvent callback to PostThreadMessage a custom message to the main loop, which knew how to handle it. Messages sent via PostThreadMessage don’t go to a window, so the event loop needs to process them directly. Since the DialogBox and TrackPopupMenu modal loops won’t understand this custom message, we fall back on a different mechanism there.

DialogBox and TrackPopupMenu send WM_ENTERIDLE to their owning windows. Any window in IMVU that can host a dialog box or popup menu handles WM_ENTERIDLE by notifying a global idle handler, which can decide to schedule a new frame immediately or in N milliseconds, depending on how much time has elapsed.

What went wrong

So close! In our testing under realistic workloads, timeSetEvent had horrible pauses and jitter. Sometimes the multimedia thread would go 250 ms between notifications. Otherwise, the custom event loop + WM_ENTERIDLE approach seemed sound. I tried timeSetEvent with several flags, but they all had accuracy and precision problems.

What Finally Worked

Finally, we settled on MsgWaitForMultipleObjects with a calculated timeout.

Assuming the existence of a FrameTimeoutCalculator object which returns the number of milliseconds until the next frame:

int runApp() {
    FrameTimeoutCalculator ftc;

    for (;;) {
        const DWORD timeout = ftc.getTimeout();
        DWORD result = (timeout
            ? MsgWaitForMultipleObjects(0, 0, TRUE, timeout, QS_ALLEVENTS)
            : WAIT_TIMEOUT);
        if (result == WAIT_TIMEOUT) {
            simulate();
            ftc.step();
        }

        MSG msg;
        while (PeekMessage(&msg, 0, 0, 0, PM_REMOVE)) {
            if (msg.message == WM_QUIT) {
                return msg.wParam;
            }

            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
    }
}
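The FrameTimeoutCalculator is assumed above but never shown. Here is a minimal sketch of the idea; the class name matches the post, but passing now_ms explicitly (for testability) and the catch-up policy are assumptions on my part. The loop above would instead use zero-argument getTimeout()/step() reading a monotonic clock such as timeGetTime() internally:

```cpp
#include <algorithm>
#include <cstdint>

// Hypothetical sketch of the FrameTimeoutCalculator assumed by runApp().
// now_ms is injected so the logic is testable in isolation.
class FrameTimeoutCalculator {
public:
    explicit FrameTimeoutCalculator(int64_t now_ms, int64_t frame_interval_ms = 16)
        : next_frame_ms(now_ms), interval_ms(frame_interval_ms) {}

    // Milliseconds until the next frame is due; 0 means "step now".
    int64_t getTimeout(int64_t now_ms) const {
        return std::max<int64_t>(0, next_frame_ms - now_ms);
    }

    // Called after simulate(). If we fell behind, reschedule relative
    // to now instead of firing a burst of catch-up frames, so the app
    // degrades gracefully under load.
    void step(int64_t now_ms) {
        next_frame_ms += interval_ms;
        if (next_frame_ms < now_ms) {
            next_frame_ms = now_ms + interval_ms;
        }
    }

private:
    int64_t next_frame_ms;
    int64_t interval_ms;
};
```

Clamping the timeout to zero is what makes the MsgWaitForMultipleObjects call above fall straight through to WAIT_TIMEOUT when a frame is overdue.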

Well, what about modal dialogs?

Since we rely on a custom message loop to animate 3D scenes, how do we handle standard message loops such as the modal DialogBox and TrackPopupMenu calls? Fortunately, DialogBox and TrackPopupMenu provide us with the hook required to implement frame updates: WM_ENTERIDLE.

When the standard DialogBox and TrackPopupMenu modal message loops go idle, they send their parent window a WM_ENTERIDLE message. Upon receiving WM_ENTERIDLE, the parent window determines whether it’s time to render a new frame. If so, we animate all visible 3D windows, which will trigger a WM_PAINT, which triggers a subsequent WM_ENTERIDLE.

On the other hand, if it’s not time to render a new frame, we call timeSetEvent with TIME_ONESHOT to schedule a frame update in the future.

As we saw previously, timeSetEvent isn’t as reliable as a custom loop using MsgWaitForMultipleObjects, but if a modal dialog or popup menu is visible, the user probably isn’t paying very close attention anyway. All that matters is that the UI remains responsive and animation continues while modal loops are open. Code follows:

static UINT idleMessage;           // registered lazily in onIdle()
static HWND frameListenerWindow;   // hidden window owned by the scheduler

LRESULT CALLBACK ModalFrameSchedulerWndProc(HWND hwnd, UINT message, WPARAM wparam, LPARAM lparam) {
    if (message == idleMessage) {
        stepFrame();
    }
    return DefWindowProc(hwnd, message, wparam, lparam);
}

struct AlmostMSG {
    HWND hwnd;
    UINT message;
    WPARAM wparam;
    LPARAM lparam;
};

void CALLBACK timeForPost(UINT, UINT, DWORD_PTR user_data, DWORD_PTR, DWORD_PTR) {
    AlmostMSG* msg = reinterpret_cast<AlmostMSG*>(user_data);
    PostMessage(msg->hwnd, msg->message, msg->wparam, msg->lparam);
    delete msg;
}

void PostMessageIn(DWORD timeout, HWND hwnd, UINT message, WPARAM wparam, LPARAM lparam) {
    if (timeout) {
        AlmostMSG* msg = new AlmostMSG;
        msg->hwnd = hwnd;
        msg->message = message;
        msg->wparam = wparam;
        msg->lparam = lparam;
        timeSetEvent(timeout, 1, timeForPost, reinterpret_cast<DWORD_PTR>(msg), TIME_ONESHOT | TIME_CALLBACK_FUNCTION);
    } else {
        PostMessage(hwnd, message, wparam, lparam);
    }
}

class ModalFrameScheduler : public IFrameListener {
public:
    ModalFrameScheduler() { stepping = false; }

    // Call when WM_ENTERIDLE is received.
    void onIdle() {
        if (!frameListenerWindow) {
            idleMessage = RegisterWindowMessageW(L"IMVU_ScheduleFrame");
            Assert(idleMessage);

            WNDCLASSW wc;
            ZeroMemory(&wc, sizeof(wc));
            wc.hInstance = GetModuleHandle(0);
            wc.lpfnWndProc = ModalFrameSchedulerWndProc;
            wc.lpszClassName = L"IMVUModalFrameScheduler";
            RegisterClassW(&wc);

            frameListenerWindow = CreateWindowW(
                L"IMVUModalFrameScheduler",
                L"IMVUModalFrameScheduler",
                0, 0, 0, 0, 0, 0, 0,
                GetModuleHandle(0), 0);
            Assert(frameListenerWindow);
        }

        if (!stepping) {
            const unsigned timeout = ftc.getTimeout();
            stepping = true;
            PostMessageIn(timeout, frameListenerWindow, idleMessage, 0, 0);
            ftc.step();
        }
    }
    void step() { stepping = false; }

private:
    bool stepping;
    FrameTimeoutCalculator ftc;
};

How has it worked out?

A custom message loop and WM_ENTERIDLE neatly solves all of the goals we laid out:

by Chad Austin at November 24, 2010 07:42 PM

November 22, 2010

Benjamin McGraw

VERGE Site Relaunch

In the past week I’ve re-launched verge-rpg.com. It’s now in a form that works more consistently across browsers, and you can actually log in and upload files reliably.

Hooray!

And then the bug reports started coming in. (Which is to be expected and respected.)

I’ve blown through 30 revisions since the launch, crushing bugs, and I still have a full coffer to go through for a few days yet. Hopefully I’ll have everything already reported dealt with by this time next week.

I’m… eager… to start the next iteration of verge dev tools.

by mcgrue at November 22, 2010 06:09 PM

November 16, 2010

Benjamin McGraw

Shakeyface

I went to Fantastic Fest this summer for vacation. It was fantastic.

It’s a film festival, and as such, they have photo badges.

They request that you make a shakeyface for this badge’s photo.

Sophia helped me take mine. And she just composed all of the reject shots into this gif:

by mcgrue at November 16, 2010 11:03 PM

PingPawn API

One of my side projects, a social quotefile named PingPawn, fuels an IRC bot named sexymans. Among other places, it sits in my IRC channel, #sancho (irc.lunarnet.org). The denizens of that channel liked to point out that sexymans was, quite often, broken.

And broken he should be, because he was a 30-minute phenny hack until the last straw broke tonight and I tore out the old guts, built a proper json-spewing, RESTful HTTP API on the website, and made the bot consume said API.

I did this mainly because I wanted to, but slightly because after a few Google searches I couldn’t find an elegant way to write prepared statements for MySQL in Python. Fuck the MySQLdb module.

Anyways, here’s the current API. (only GET requests as of this writing.)
http://pingpawn.com/api/rand – Get a random quote
http://pingpawn.com/api/rand/grue – Get a random quote from a specific quotefile (in this case the ‘grue’ quotefile (mine))
http://pingpawn.com/api/search?q=ATTEMPT+to+not+be+a+chump – Search for quotes that match a certain phrase (a random quote from the set of all that match).
http://pingpawn.com/api/search/grue?q=fire – Search for quotes that match a certain phrase from a certain quotefile (a random quote from the set of all that match).
http://pingpawn.com/api/search/grue/2?q=fire – Search for quotes that match a certain phrase from a certain quotefile, deterministically (this is the second quote in this quotefile that matches, and always will be.)
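The URL scheme above is regular enough that a client can build requests mechanically. A hypothetical helper sketching that (the function name and the naive space-to-‘+’ encoding are my assumptions, not part of the real bot; a real client should fully URL-encode the query):

```cpp
#include <string>

// Hypothetical helper mirroring the PingPawn GET URL scheme listed above.
std::string pingpawnUrl(const std::string& quotefile = "",
                        const std::string& query = "",
                        int index = 0) {
    const std::string root = "http://pingpawn.com/api";
    if (query.empty()) {
        // /api/rand or /api/rand/<quotefile>
        return root + "/rand" + (quotefile.empty() ? std::string() : "/" + quotefile);
    }
    // /api/search[/<quotefile>[/<index>]]?q=<query>
    std::string path = root + "/search";
    if (!quotefile.empty()) {
        path += "/" + quotefile;
        if (index > 0) {
            path += "/" + std::to_string(index);
        }
    }
    // Encode spaces as '+', as in the examples above (naive on purpose).
    std::string q;
    for (char c : query) {
        q += (c == ' ') ? '+' : c;
    }
    return path + "?q=" + q;
}
```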

So that’s that.

I also deployed a new version of verge-rpg.com last night, and fixed up a few of the bug reports this evening. Yay Gravatars and BBCode!

As far as GrueDorf goes, I think this is my third night in a row of posting and of doing far more than an hour’s worth of work. Meanwhile, my arch-nemesis Kildorf appears to have failed to do a single hour’s worth of project work all week…

by mcgrue at November 16, 2010 09:56 AM

November 15, 2010

Benjamin McGraw

Creating Functions

I got the adding/editing/deleting of functions into the documentation section.

Now the documentation system has full basic functionality.

The verge-rpg.com recode is now at revision 479. It’s taken far longer than I expected in calendar time, but the actual work time is probably less than a man-month.

I’ll be giving the site a good shakedown in the meantime and patching things up before I roll it over to being the main site. After that, the next big project starts!

(also: two gruedorf posts in two days, Kildorf. You like applos?)

by mcgrue at November 15, 2010 07:28 AM

November 14, 2010

Benjamin McGraw

Documentation Editing

Work continues apace on verge-rpg.com’s PHP.net-style documentation system.

Now we can add and edit sections, delete sections, and move sections to become sub-sections.

The whole process is audited, so you can view a document’s history and revert it to any specific point.

The last thing to do with the documentation system is add/edit/remove functions from sections. This should be largely mechanical, and I may get to it later tonight…

by mcgrue at November 14, 2010 09:31 AM


Powered by Planet!
Last updated: July 07, 2011 10:46 AM