Game Optimization: A FarmVille Sheep (Case) Study

A sheep stands in his field. Munching. Baa-ing. Wait, is that—spaghetti on his head? A bib around his neck? Aww, that’s the cutest sheep I ever did see! But why is that darn sucker moving so slow?

In fact, our poor wooly friend is an animation from Zynga’s game FarmVille. When we worked on this game, 140 million of you were coming back daily to plant and sow. Well, not YOU, of course! But if no one admits to it, where did all those players come from?

When optimizing art assets for Zynga’s FarmVille, we noticed that some graphics had huge performance problems. In fact, some of players’ favorite animals on the farm slowed gameplay to a crawl. And since Zynga depends on solid game performance to retain its user base, this was serious business.

Our first instinct was to build an art optimizer that simplified the artwork and sped up rendering by converting the vector art into bitmaps, a process called rasterization. And it worked! Great. Done. That’s all folks, the end.
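To give a feel for why rasterization helps, here is a minimal sketch in Python. This is illustrative only, not Zynga’s actual pipeline, and every name in it is made up: the trick is to pay the expensive vector render once, cache the resulting bitmap, and simply reuse that bitmap on every subsequent frame.

```python
# Illustrative sketch of the rasterization idea: pay the expensive
# vector render once, cache the result, and reuse the cached bitmap
# each frame. All names here are stand-ins, not Zynga's actual API.

def render_vector_art(strokes):
    """Stand-in for an expensive vector renderer; cost grows with stroke count."""
    return [f"pixel-data-for:{s}" for s in strokes]  # pretend bitmap

class RasterCache:
    def __init__(self):
        self._cache = {}

    def get_bitmap(self, asset_id, strokes):
        # Rasterize on first use only; every later frame is a cheap lookup.
        if asset_id not in self._cache:
            self._cache[asset_id] = render_vector_art(strokes)
        return self._cache[asset_id]

cache = RasterCache()
spaghetti = [f"noodle_{i}" for i in range(200)]  # 200 individually drawn noodles

for frame in range(60):  # one second of gameplay at 60 fps
    bitmap = cache.get_bitmap("spaghetti_bowl", spaghetti)
    # blit(bitmap): drawing a cached bitmap costs O(pixels), not O(strokes)
```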

BUT we all wanted to know more—why were some artists better than others at making cute little animals that would also show up on your screen with lightning speed? 

There were plenty of other sheep on the farm that looked great and didn’t slow gameplay, so why was our spaghetti guy causing all sorts of trouble? It turns out it wasn’t the sheep; it was the spaghetti.

Get out of the building moment

Suddenly, the problem the spaghetti bowl caused made so much sense—the artist had individually drawn each noodle in glorious pasta-riffic detail.

Displayed up close on a large-screen monitor, all that detail looked great. But in the game, the entire bowl of spaghetti was just a few pixels across, so the detail was superfluous.

Our optimizer tool allows us to easily spot animation performance problems and improve them.

At that moment, it occurred to us that the problem here wasn’t optimization, it was training.

I immediately scheduled a meeting with the FarmVille studio art director and a few of the artists on the team. They were really surprised at the impact a few extra pen strokes had on game performance.

To find performance bottlenecks in other artwork, we next built the Zynga Analyzer, a tool that integrated directly with the artist’s workflow. With each action in the authoring environment, an artist could see the impact on performance.   

The Analyzer tool helped give everyone a sense of how much drawing “budget” they had for a given character. Some drawing techniques, such as transparency and shading, had more impact on performance than others.
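To make the “budget” idea concrete, here is a minimal sketch of how such a check might score an asset. The operation names and cost weights below are invented for illustration; they are not the Analyzer’s real internals.

```python
# Illustrative sketch of a per-character "drawing budget" check.
# Operation names and cost weights are invented for this example;
# they are not the Zynga Analyzer's real numbers.

COST_WEIGHTS = {
    "stroke": 1.0,        # plain vector stroke
    "fill": 1.5,
    "gradient": 4.0,      # shading is pricier than flat fills
    "transparency": 6.0,  # alpha blending touches every pixel underneath
}

def drawing_cost(operations):
    """Sum the weighted cost of every drawing operation in an asset."""
    return sum(COST_WEIGHTS[op] for op in operations)

def check_budget(asset_name, operations, budget=100.0):
    cost = drawing_cost(operations)
    status = "OK" if cost <= budget else "OVER BUDGET"
    print(f"{asset_name}: cost {cost:.0f} / {budget:.0f} -> {status}")

# A sheep drawn economically vs. one with a hand-drawn bowl of spaghetti.
check_budget("plain_sheep", ["stroke"] * 40 + ["fill"] * 10)
check_budget("spaghetti_sheep", ["stroke"] * 200 + ["transparency"] * 20)
```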

Everyone wanted to install the Analyzer to find out what they were doing right and which art assets could be improved. Working within these constraints, Zynga artists proved incredibly clever at making cute illustrations that also performed well.

The Zynga Analyzer depicts how the artist has consumed their asset “budget.”

What started as an engineering optimization problem had morphed into an artist training program. The tool that exposed each asset’s total impact on game performance now instilled a bit of self-reinforcing competition within the art team. Artists were spurred on to create illustrations that looked great and performed well. Because character performance was visible during the creation process, artists felt they had skin in the game, so to speak.

And finally, for illustrations with no fat to cut, the Optimizer tool gave an animation a bit more pep in its step. Altogether the effort once again secured each special animal’s place on the farm, including our little sheep who can now munch spaghetti till the cows come home.

Nicole Kidman’s Prosthesis Fools Better-Than-Human Face Recognition Algorithm

Last month, in his keynote talk at the NVIDIA GPU Technology Conference, Andrew Ng (Baidu Chief Scientist, Stanford professor, and Google Brain team founder) described face comparison technology that he and his team at Baidu developed. If you haven’t taken his Coursera machine learning course, that’s OK; this post doesn’t assume technical knowledge about machine learning.

The challenge is to compare two pictures containing faces and decide whether they show the same person or two different people. In the experiment, 6000 pairs of images were examined, both by humans and by algorithms from teams around the world. Three teams (Baidu, Google, and the Chinese University of Hong Kong) achieved better-than-human recognition performance. The Baidu team made just 9 errors out of the 6000 pairs.
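For scale, a quick bit of arithmetic on that result:

```python
# 9 mistakes on 6000 pairs works out to a 0.15% error rate.
errors, pairs = 9, 6000
error_rate = errors / pairs   # 0.0015
accuracy = 1 - error_rate     # 0.9985
print(f"error rate: {error_rate:.2%}, accuracy: {accuracy:.2%}")
# -> error rate: 0.15%, accuracy: 99.85%
```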

Here is a slide from Professor Ng’s talk with images of the ones they got wrong:

Face Recognition

You may notice that the top-left image pair is of movie actress Nicole Kidman, which the Baidu system incorrectly classified as two different people. What may be less obvious is that the second image of Ms. Kidman was taken from her film “The Hours,” in which she is wearing a prosthetic nose.

Nicole Kidman in “The Hours”

Here are the overall results from implementations done by teams throughout the world. The dashed line indicates human performance at the same task. People typically think of recognizing faces as an innately human skill, but once again we can be surprised: machine learning algorithms are now capable of equaling or outperforming humans.

More Human than Human

In his talk, Professor Ng credits two major factors for the vast improvements in machine learning technology over the past five years. As an analogy, think of launching a rocket into orbit: we need both 1) a giant rocket engine and 2) lots of rocket fuel.

The rocket engine in his analogy is the incredible computational performance brought to us by GPU technology. The fuel, then, is access to huge amounts of data, coming from the prevalence of internet-connected sensors, online services, and our society’s march toward digitization.

To perform this astounding face comparison judgment, algorithms are trained on facial data. The flow chart in the image above shows that if the algorithm is not performing well on the training data, we need a more powerful rocket engine: more GPU compute to train a bigger model.

The Talon 2.0 high-performance computing system

The second part of the flow chart illustrates that if the algorithm learns the training data quite well but performs poorly when presented with new examples, then more “rocket fuel” is required: gathering more training data is the logical way to improve the system.
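Both branches of that flow chart fit in a few lines. The sketch below is my own simplification of the slide’s logic (the threshold and accuracy numbers are invented), not Baidu’s actual training recipe.

```python
# Simplified sketch of the "rocket engine vs. rocket fuel" decision loop
# from Professor Ng's flow chart. Thresholds are invented for illustration.

def next_step(train_accuracy, dev_accuracy, target=0.99):
    # Underfitting: the model can't even learn its own training data,
    # so throw a bigger "rocket engine" (more compute, bigger model) at it.
    if train_accuracy < target:
        return "bigger engine: train a larger model on more GPUs"
    # Overfitting: great on training data, poor on unseen examples,
    # so add "rocket fuel" (more training data).
    if dev_accuracy < target:
        return "more fuel: gather more training data"
    return "ship it"

print(next_step(train_accuracy=0.92, dev_accuracy=0.90))    # bigger engine
print(next_step(train_accuracy=0.995, dev_accuracy=0.93))   # more fuel
print(next_step(train_accuracy=0.995, dev_accuracy=0.992))  # ship it
```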

Professor Ng compares the benefits of high-performance computing with cloud computing and states that better training performance may be achievable with, say, 32 GPUs in a co-located rack than with a larger number of servers running in the cloud. His reasoning is that communication latency, server downtime, and other failures are more prevalent in cloud-based systems because they are spread out across more machines with more network connections.
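A back-of-the-envelope model makes the trade-off visible: if every synchronous training step ends with a gradient exchange, per-step time is roughly parallel compute time plus communication latency. All the numbers below are invented purely to show the shape of the argument, not measured from any real system.

```python
# Toy model of synchronous training throughput. All numbers are invented.

def steps_per_second(n_workers, compute_s, sync_latency_s, failure_overhead=0.0):
    # Each step: compute split across workers, then a gradient sync
    # that costs a fixed latency; failures shave off some throughput.
    step_time = compute_s / n_workers + sync_latency_s
    return (1.0 / step_time) * (1.0 - failure_overhead)

# 32 GPUs in one rack: fast interconnect, rare failures.
local = steps_per_second(32, compute_s=1.0, sync_latency_s=0.002,
                         failure_overhead=0.01)

# 128 cloud servers: more parallelism, but slower links and more failures.
cloud = steps_per_second(128, compute_s=1.0, sync_latency_s=0.050,
                         failure_overhead=0.05)

print(f"co-located rack: {local:.0f} steps/s, cloud: {cloud:.0f} steps/s")
# In this toy model the smaller co-located cluster wins (~30 vs ~16 steps/s)
# because the fixed sync latency swamps the extra parallelism.
```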

Machine learning has been demonstrated to be good at many things, and the recent improvements are not limited to face comparison: in this same talk, Professor Ng showed that improvements to Baidu’s speech recognition system let it perform well even in noisy environments.