By now, you've already heard about what was announced during this morning's Apple Event: Let's Rock. New iPod nanos, new iPod touches, price drops, iPhone/iPod touch 2.1, iTunes 8, oh my! And, if you've been paying attention to Infinite Loop, you've seen our hands-on photos of the new nanos and iPod touches too.

Unfortunately, there was no "One more thing…" at this event, but we did score a couple tidbits of info that weren't talked about during the press event. For one, the new iPod nanos now have a feature that speaks menu items and song info aloud, which is especially nice for users with impaired vision. The Apple rep who spoke to me said that it will tell you everything you need to know over the headphones, and if you have speakable items set up on your computer, the nano will inherit the voice you chose to use. It's unclear whether this feature will also come to the iPod touch and iPhone.

Additionally, the iPod shuffle line got a quiet update today as well. Nothing of substance is different; the storage sizes and pricing are unchanged, but the shuffles are now available in all the same colors as the iPod nano: silver, black, indigo/purple, blue, green, yellow, orange, red, and magenta.

Finally, we inquired about the new $79 headphones that will be available for the iPod nano and iPod touch. The headphones, which let you skip/pause/play with a click just like the iPhone headphones, also let you adjust volume up or down. I asked whether they would work with the iPhone, and the Apple rep said the company doesn't claim that they're supported. "It will work with skipping, pausing, etc. but I don't think the volume will change. That's not supported on the iPhone." He pointed out that iPhone headphones will work with the iPod touch and iPod nano, however.

Update: It appears that either the Apple rep misspoke or I misunderstood (or perhaps a combination of both) when it comes to the iPod shuffles. They did, indeed, get a color refresh today, but they're not available in every color the iPod nano comes in.

It's great to be surprised by a game, and these days that doesn't seem to happen very often. I had written off Pure a while ago, until I stumbled on the thread for the game in our forums. Let me give you some idea of how much people seem to have liked the demo…

"I just played the demo again, and I'm amazed at how much I'm looking
forward to this title. I also realized one other aspect of what really 'clicks'… it's reminiscent of my time with ExciteTruck in the way the jumps and airtime feel. A total sleeper out of deep left field!" "Downloaded and went through the demo today. It really does look amazing
and the controls are solid. Definitely grabbing this when it comes out.
I like the look and feel much more than MotorStorm.""I just popped in the 360 disc version I picked up over the weekend from
Gamestop… very cool demo (quick tutorial, then one race). The trick
trick/race ratio is very SSX. Definitely left me wanting more!""Played the demo, did not seem like that great of a game."

Okay, not everyone loved it, but the game got enough of a reaction out of the community that I went and downloaded the demo. I won't use exclamation points and invoke the names of other better-known racing titles, because I'm not trying to get on the damn box of the game as much as the forum apparently is, but I will say I had a great time with the one tutorial and lone race. The demo is available on both the PS3 and the 360, so go give it a try. Great graphics, solid controls, over-the-top tricks… this is a fun arcade racer, all the way.

I can see why everyone likens it to the SSX series, and that's high praise in my book. I'm looking forward to seeing whether the full game can hold my attention as long as the demo has; after pinging my source at Disney Interactive, I've confirmed that we'll be receiving a copy to review. Expect the review as soon as I get my hands on the full game.

Pure is coming to the PS3, 360, and PC on September 16.

SiSoftware Sandra 2009 has been released, and the application offers up a slew of new tests for benchmark-lovers to sink their teeth into. One of the main new additions to the application is the ability to benchmark graphics cards.

Traditionally, SiSoftware Sandra has been used for benchmarking things like CPU, hard drive, and flash drive performance. The new 2009 release still offers the storage tests the old version provided, and it now adds the ability to benchmark the GPGPU (General-Purpose Graphics Processing Unit), with two new benchmarks designed specifically for this purpose.

SiSoftware says that its new 2009 version will tell users how fast their GPGPU can do raw calculations, using the same sort of format the application uses for benchmarking CPUs. The GPGPU computational performance benchmark uses a Mandelbrot set calculation as its workload, which allows direct comparison of CPU and GPGPU performance on the same work.
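To see why a Mandelbrot calculation makes a convenient workload for this kind of CPU-versus-GPGPU comparison, here's a minimal sketch in Python (my own illustration, not Sandra's actual code). Every point is computed independently of every other point, so the same arithmetic parallelizes trivially across GPU threads:

```python
# Illustrative Mandelbrot workload (not Sandra's implementation):
# each point is independent, so the work parallelizes trivially.

def mandelbrot_iterations(c_re: float, c_im: float, max_iter: int = 256) -> int:
    """Count iterations before the point c escapes |z| > 2."""
    z_re, z_im = 0.0, 0.0
    for i in range(max_iter):
        # z = z^2 + c, expanded into real arithmetic
        z_re, z_im = z_re * z_re - z_im * z_im + c_re, 2.0 * z_re * z_im + c_im
        if z_re * z_re + z_im * z_im > 4.0:
            return i
    return max_iter

# A tiny 64x64 grid over the classic viewing window; a real benchmark
# would use a much larger grid and time the whole computation.
total = sum(
    mandelbrot_iterations(-2.0 + 3.0 * x / 64, -1.5 + 3.0 * y / 64)
    for y in range(64)
    for x in range(64)
)
print("total iterations:", total)
```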

The second GPGPU benchmark covers memory performance: it analyzes how fast data can be transferred to and from the GPGPU. SiSoftware says key features of the new 2009 release include support for four architectures (x86, x64/AMD64/EM64T, IA64 in the enterprise version only, and ARM) and for six languages, including English and German. Support has also been added for the AMD Stream Computing GPGPU engine 2.0 and later, and up to eight GPGPUs are supported.

The application also has a video graphics rendering benchmark that tests video decoding and encoding performance as well as game performance, plus a video graphics memory benchmark to test bus bandwidth. Another new benchmark in the 2009 release measures cryptographic performance and supports AES and SHA.

Some reports are saying that the new SiSoftware Sandra 2009 benchmark will replace 3DMark for testing video cards. I don't really see that happening, at least not right away. The key to video card reviews is being able to make direct comparisons from card to card with the same application, and I don't see those who have long tested with 3DMark simply tossing it out in favor of Sandra 2009. What I see is the two applications being used side by side. It's possible that Sandra 2009 could eventually replace 3DMark for video card benchmarking, but it's much more likely that additional benchmarks will simply become common among writers and benchmarking enthusiasts.

If all had gone according to plan, AMD would have already launched its dual-core Phenom processors (codenamed Kuma), but Phenom hasn't gone according to plan since day one. Tidbits on Kuma are slowly making their way to the press, however, and while AMD won't specifically comment on unannounced parts or launches, sources at the company have quietly admitted to Ars that AMD will be launching new products between now and Shanghai's expected debut in December of this year.

New pictures—and SuperPi results—are now available, courtesy of Chinese site Expreview.com. According to them, the Athlon 6500+ will run at 2.3GHz and feature the same 2MB L3 cache as Phenom. Expreview provides CPUID information for both the Athlon 64 X2 5000+ Black Edition (underclocked to 2.3GHz) and the 6500+. Both processors carry 512K of L2 cache per core; the Kuma carries 2MB of additional L3 as previously mentioned.

Unfortunately, the author chose to run SuperPi—possibly one of the most limited benchmarks ever—but we do, at least, see a notable difference between the two chips. The older Athlon X2 core takes 39.37s to finish calculating one million digits of Pi, while the dual-core Phenom finishes the exercise in just 33.43s. That's a single-core speedup of 15 percent, which, as I recall, is roughly what reviewers found when Phenom launched almost a year ago.

We can, however, glean a bit more information from CPUID. Voltage on the 2.3GHz part (at least on this sample) is 1.24v, which falls within the 1.0v-1.25v range AMD has set for its X3 8650, which also runs at 2.3GHz. Power consumption on these parts will obviously be lower than what we've seen from Phenom X4 or X3 chips. When we reviewed Toliman earlier this year, we found that whacking off a core (and 100MHz) cut max load power consumption from 265W to 198W, and while some of that is obviously due to the clockspeed drop, there's plenty of evidence that a dual-core Kuma will draw significantly less power than a triple-core or quad-core part.

If we switch gears a bit (and are willing to examine some early, possibly dubious, Deneb benchmarks at 3.4GHz), we can compare Kuma and Deneb (45nm quad-core) in SuperPi. Remember, the core count difference between the two is meaningless; SuperPi is a single-threaded test. Also remember to take the following with a mountain of salt.

So, if a 2.3GHz Kuma finishes SuperPi in 33.43s, and a 3.4GHz Deneb finishes in 20.51s, how much of Deneb's faster performance is a function of its faster clock? Deneb is clocked 48 percent faster than Kuma and finishes the test in 61.4 percent of the time it took the older 65nm processor. Scaling, therefore, isn't quite linear, but it's close. If all of these results are actually accurate, SuperPi doesn't pick up much additional performance from whatever special sauce AMD built into the Shanghai core. Then again, only about six people in the universe actually care about SuperPi, so this is no great loss.
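For the curious, the back-of-the-envelope arithmetic behind those percentages looks like this (plain Python, just reproducing the figures quoted above):

```python
# Sanity-checking the SuperPi figures quoted in this article.
x2_time = 39.37      # Athlon 64 X2 at 2.3GHz, 1M digits (seconds)
kuma_time = 33.43    # dual-core Phenom (Kuma) at 2.3GHz
deneb_time = 20.51   # Deneb at 3.4GHz (early, possibly dubious numbers)
kuma_clock, deneb_clock = 2.3, 3.4   # GHz

# Kuma vs. the old X2 core at the same clock: ~15 percent less time.
print(f"Kuma time saved vs. X2: {1 - kuma_time / x2_time:.1%}")

# Deneb is clocked ~48 percent faster than Kuma...
print(f"Deneb clock advantage: {deneb_clock / kuma_clock - 1:.0%}")

# ...and finishes in ~61.4 percent of Kuma's time.
print(f"Deneb time as a fraction of Kuma's: {deneb_time / kuma_time:.1%}")

# Pure clock scaling alone would predict a finish time of:
print(f"Predicted time from clock alone: {kuma_time * kuma_clock / deneb_clock:.2f}s")
```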

As for Kuma, don't expect AMD to push clockspeeds on these parts much, even if the parts themselves are capable of it. Cores count for a lot in AMD's pricing structure, and the company won't want its dual-core parts stealing sales away from higher-margin triple-core or quad-core chips. As for actual overclocking potential, if any, we'll have to see what happens when the parts hit market. At the very least, Kuma should provide an upgrade step for lower-end AM2 users who don't really want to pay a premium for a quad-core part they know they won't make much use of.

It has only been a few months since NVIDIA announced both its Tegra ARM11-based media processors and its plan to be the Next Big Thing in both mobile phones and mobile internet devices (MIDs). Today NVIDIA announced that it and Opera were teaming up to bring a full desktop-like web browsing experience to the mobile phone and MID market.

The two firms say that the full desktop browsing experience will include support for JavaScript, vector graphics, and video content. NVIDIA says that it will offer a fully optimized version of the Opera 9.5 browser in the suite of preintegrated applications included with its Tegra line of computer-on-a-chip Windows Mobile and Windows CE solutions.

The new Tegra feature will support full desktop web content, with hardware acceleration of rich media, images, and in-page video playback. GPU acceleration with Tegra will provide smooth panning and zooming during browsing and significantly reduce battery consumption. NVIDIA also says that hardware-accelerated 3D touch browsing comes courtesy of Opera's integration into NVIDIA's OpenKODE composition framework using OpenGL ES 2.0.

Jon von Tetzchner, CEO of Opera Software, said in a statement, "Full Web browsing is a key feature for mobile phones and mobile Internet devices today. End users are demanding a mobile-browsing experience that mirrors that of their home and work computers. Opera 9.5 combined with NVIDIA Tegra will deliver a powerful and visual browsing experience for the next generation of smartphones and mobile Internet devices."

NVIDIA says that devices using the NVIDIA Tegra hardware and using the Opera browser are expected to ship in 2009.

Zope 2 was a Python-based web development framework that built all of its components in-house; it has since been superseded by Zope 3, which took the project in a different direction. TurboGears is a newer Python-based web development framework that seeks out best-in-breed projects rather than rolling its own. Mark Ramm, a core developer on the TurboGears project, is bothered by the fact that the Django community seems to be heading down the Zope 2 path under the auspices of the "batteries included" philosophy.

In the Python community, many see the Django community as separate from the other web development projects due to this difference in philosophy, Ramm said. He went on to claim that Django is in some ways harming, or at least doing a disservice to, the Python community as a whole, because many of its internal components cannot easily be used outside of the rest of the Django project. By the same token, Ramm believes that Django core developers could be spending valuable time and resources improving and integrating existing projects, Beaker and SQLAlchemy for example, instead of reinventing their functionality. Furthermore, many of these established 3rd-party projects long ago addressed and solved perceived pitfalls of Django.
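To make the reuse argument concrete, here's a minimal sketch (my illustration, not an example from Ramm's talk) of what best-in-breed reuse looks like: SQLAlchemy works in any Python script with no framework around it, while Django's ORM has historically assumed a configured Django project:

```python
# Standalone SQLAlchemy: a full SQL toolkit usable from any script,
# with no web framework in sight.
from sqlalchemy import create_engine, text

engine = create_engine("sqlite:///:memory:")
with engine.connect() as conn:
    conn.execute(text("CREATE TABLE pages (id INTEGER, title TEXT)"))
    conn.execute(text("INSERT INTO pages VALUES (1, 'hello')"))
    print(conn.execute(text("SELECT title FROM pages")).fetchall())

# Django's ORM, by contrast, expects project-level settings to be
# configured before models can even be defined:
#
#   from django.conf import settings
#   settings.configure(...)   # boilerplate required outside a project
#
# Workable, but it illustrates Ramm's point about components that
# aren't trivially reusable outside a full Django project.
```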

Mark Ramm takes exception to this oft-repeated Django-catchphrase

Ramm's central complaint is, in fact, a balancing act that the framework deliberately maintains. The project attempts to package as much functionality as possible (barring database drivers and the like) into a simple-to-install package that's friendly to newcomers. Contrast that with TurboGears, which can require a handful (or two) of dependencies to install. Which audience does Django want to serve: individuals who want a tool to get things done, or bright programmers willing to trudge through several hours of package maintenance for a slightly more robust solution? Django seems to be pushing for the former, and so far that decision has benefited the project.

Another of Ramm's fears is that programmers pulled into the Django world will, in effect, become Django developers first and Python or web developers second. To a certain extent this is true, but during the Q&A portion, Django developer James Bennett countered that developers rarely arrive at a framework decision with SQLAlchemy, a stack of WSGI middleware, and Beaker in hand; they just want to launch a product with as little friction as possible.

The pronouncements of doom and gloom do have a basis in some people's realities. It's true that Django could put in the work to gain the advanced database features offered by SQLAlchemy, but the gains would really only apply to a small subset of its potential audience. Django's core developers are much more supportive of making Django pluggable enough that 3rd-party developers can incorporate these packages themselves. Recent changes deep inside the project (the queryset-refactor branch, for example) now allow developers to write or adapt just about any datastore backend; Jacob Kaplan-Moss mentioned candidly that he has Django running on top of Aperture's Core Data store and on a non-relational database, CouchDB.

The general feeling of the day was that Django can do some things to improve its position in the Python ecosphere. Removing barriers in Django's core to enable a wider range of extensibility looks to be a definite eventuality, and on a few occasions core developers mentioned they'd be looking more seriously into relying on 3rd-party libraries and packages for new functionality.

According to Hexus, AMD has launched new triple-core X3 processors. One of the new CPUs is AMD's first Black Edition triple-core, and it's called the Phenom X3 8750 Black Edition. The Black Edition CPUs traditionally differ from the non-Black parts by offering unlocked multipliers for overclocking. AMD has yet to announce the 8750 BE, but the new chip will likely drop straight into the company's current lineup, though it might carry a small premium over the standard 8750.

Two other new CPUs were introduced as well: the Phenom X3 8450e and the Phenom X3 8250e. As the "e" suffix designates, both are energy-efficient, low-power CPUs.

Pricing for the 8750 Black Edition is $134, a mere $5 more than the non-Black Edition part, according to Hexus. Pricing isn’t available on the other two CPUs, but as Hexus points out, the standard 8450 retails for $104. AMD has traditionally charged a premium for its power-efficient parts, so it's anyone's guess where this particular chip will end up.

In addition to the new processors, AMD has also announced "Optimum Platform Virtualization" for Opteron server CPUs, which allows them to run with Microsoft virtualization enabled. Opterons designed to run Microsoft's Windows Server 2008 Hyper-V virtualization software are now marked as AMD-V.

AMD's Kevin Knox, VP of worldwide commercial business, said, "Now, through our continued partnership with Microsoft, AMD is expanding virtualization's reach and the benefits of resource consolidation to companies that might not have taken advantage of virtualization in the past."

It is interesting to note that the supposed AMD roadmap that I reported on earlier this week didn’t show these three X3 processors.

The fallibility of human memory was one of the first things covered in my undergraduate psychology class (or at least it was to the best of my recollection). However, the brain is a mysterious thing, and there's still much we don't fully understand when it comes to figuring out how it stores and processes information.

The current issue of PNAS1 features a paper from a team of scientists at MIT who have been probing the limits of visual memory. Previous studies have demonstrated that visual memory has an impressive storage capacity; in studies where volunteers were shown 10,000 images (each for a few seconds), they were able to determine which of two images they had previously seen with a rather high level of accuracy. However, it had been thought that this visual memory was light on details, instead providing just the gist of the image.

In the PNAS study, the volunteers were shown 2,500 images, each for three seconds. In contrast to prior research, the images were stripped of any background details. The subjects were then shown a pair of images, one of which they had seen previously and one of which was new. The pairs were constructed in three ways: novel, where the old image was paired with an image of something from a completely different category (for example, false teeth and a DNA double helix); exemplar, where it was paired with a different but similar image (two slightly different starfish, for example); and state, where both images showed exactly the same object, but in different conditions (such as a telephone on and off the hook).

The results of all three tests showed that visual memory is surprisingly detailed. In the novel test, subjects identified the correct image 93 percent of the time. The exemplar and state conditions were handled with slightly less accuracy, but at 87 percent and 88 percent, respectively, the margin wasn't large. The test subjects were also very accurate in detecting repeated images, identifying 96 percent of repeats with only a 1.3 percent false positive rate.

This work comes on the heels of some other studies on the limits of human memory, published within the last month in Nature2 and Science3. Those studies focus on the depth of visual memory when it comes to remembering data from images.

These studies suggest that, when it comes to remembering several details about an image, visual memory is dynamically allocated, in contrast to prior dogma, which held that humans were limited to remembering only between two and five details at a glance. As pointed out in a prior post, this form of visual memory is much like RAM: although we can take in details and store them, that process is happening constantly as we process vision.

What these studies all add up to is the realization that the limits of visual memory lie much further out than we previously thought.

1: PNAS, 2008. DOI: 10.1073/pnas.0803390105
2: Nature, 2008. DOI: 10.1038/nature06860
3: Science, 2008. DOI: 10.1126/science.1158023

On Monday, antimalware developer McAfee released details on its new cloud-based defensive system, codenamed Artemis. As we've noted several times in the past, antimalware companies don't have an easy job, and the sheer number of virus variants that now spawn from even a single base infection threatens to overwhelm any company's ability to keep up. According to McAfee, the number of attacks observed in 2008 thus far (with 3.5 months to go) is larger than the total number of attacks in 2006 and 2007 combined. Given the financial incentives and the corporate business model that has become prevalent in the malware industry, this number isn't likely to start heading downwards, either.

One of the security industry's greatest weaknesses is that it is inherently reactive, and while this won't change anytime soon, McAfee believes Artemis will drastically reduce the current time-to-patch cycle, as illustrated in the diagram below:

McAfee states that problems are typically solved and patched 24-72 hours after the malware is initially spotted, and while that figure seems a bit optimistic, we'll go with it, given that the company says that even 24 hours is too long. When a major worm like Storm hits, the steps on this diagram actually go into a loop, as each new variant arrives, is tagged, and then blocked. Each time the loop occurs, there's a fresh window of opportunity/profit, which only encourages malware authors to crank out variants as quickly as possible.

The Artemis system theoretically accelerates the time-to-patch cycle by communicating directly with McAfee's online service whenever it encounters a suspicious file. Files are then scanned against the entire McAfee Avert Labs database for any similarities to preexisting behaviors or file signatures. If Avert Labs detects any sort of malware, the user then receives instructions on how to block or quarantine the file, just seconds after having received it. The on-site database (i.e., the program installed on the user's computer) is also updated to detect this malware variant if it shows up again. Presumably, the system has some way of recognizing if dozens of computers all start requesting data on the same suspicious bit of malware, and would trip some sort of built-in alarm to notify McAfee that a concerted attack was underway from a previously unknown source.
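A rough sketch of how that client-side flow might work is below. This is purely hypothetical; McAfee hasn't published Artemis internals, and every name in the snippet is invented for illustration:

```python
import hashlib

# Hypothetical on-site signature cache; in a real product this would be
# the locally installed signature database.
LOCAL_SIGNATURES: dict[str, str] = {}

def cloud_lookup(fingerprint: str) -> str | None:
    """Stand-in for a real-time query against a cloud database
    (such as Avert Labs); returns a verdict or None for unknown files."""
    # A real client would make a network call here.
    return None

def scan_file(path: str) -> str:
    with open(path, "rb") as f:
        fingerprint = hashlib.sha256(f.read()).hexdigest()

    # 1. Check the on-site database first (the traditional model).
    if fingerprint in LOCAL_SIGNATURES:
        return LOCAL_SIGNATURES[fingerprint]

    # 2. On a miss, ask the cloud service for a verdict in real time.
    verdict = cloud_lookup(fingerprint)
    if verdict is not None:
        # 3. Cache the verdict so the same variant is caught locally
        #    if it shows up again.
        LOCAL_SIGNATURES[fingerprint] = verdict
        return verdict

    # 4. No match anywhere: allow, but a real system might also report
    #    telemetry so repeated sightings can trip an alarm server-side.
    return "allow"
```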

If it works as advertised, Artemis has the potential to substantially reduce the gap between the time malware is detected and the time a system is patched. Patching systems this quickly would all but close the profit window (defined here as the time any system spends under botnet control) and, if (really) widely deployed, might even negatively impact malware writers' profit margins. Such projections, however, assume that Artemis can deliver what it promises, and that is, by no means, guaranteed.

In order to prove itself, Artemis needs to demonstrate that it can appropriately distinguish between suspicious and unsuspicious files and retrieve the necessary (and correct) information from the Avert Labs database, and that the solutions it returns actually fix the problem in question (or appropriately prevent the problem from occurring). This is a tall order, given that AV programs still return false positives during any number of installation routines or other OS functions.

On the other hand, an antimalware product need not be perfect in order to be useful; if Artemis is right just half the time, only half the McAfee customers who would otherwise have been infected actually "catch" the bug in question for a meaningful amount of time. It's also hard to turn down free, and Artemis, branded "Active Protection" in consumer products, will be free in all versions of McAfee software. The service has already been incorporated into McAfee's Total Protection Service for small and medium businesses, and will be available later in the month for both McAfee VirusScan Enterprise and McAfee consumer products.

I'm not 100 percent sold on the program, and won't be until I see evidence that it's genuinely effective at stopping infections. But I definitely applaud McAfee for developing a new approach to virus scanning and identification beyond the "check local database" model, and for making the results of that effort available to all current customers. Hopefully, the end result will be a noticeable drop in infections among McAfee customers, which would then spur the development of similar approaches across the antimalware industry.

Shades of gray

The gaming industry is in the midst of a very interesting and turbulent time. With the advent of downloadable games on consoles, game makers are opening up new frontiers of technology and design. But technology shifts, and art is a strange constant in an otherwise ever-changing medium. No matter how far the industry has come or how much things change, art forever remains an integral part of gaming, and good art is still a rare and valuable commodity.

Pete Hayes, an artist working for Epic Games, knows this all too well. His work on Gears of War helped turn a brand new property into a gaming blockbuster. The first game in the series sold in huge numbers, for both the Xbox 360 and the PC, and Epic is poised to repeat the same success with Gears of War 2.

The work of Hayes and his colleagues is at the artistic vanguard of this entire console generation: the art style of Gears was largely the source of the "next-gen color scheme," a scheme exemplified in the gritty and dark design of Epic's original Xbox 360 killer app. But how has that often-imitated design changed going into the second game? And what of the art in the game industry in general? Ars sat down with Hayes to talk about his new game, his work in the industry, and what it takes to become a game artist.

The genesis of next-gen color

Ars Technica: Let me start with an easy question. Gears was the game that kicked off the so-called "next-gen" color scheme, with browns and grays and that gritty look. Talk to me about that. Was that a conscious decision? Did you expect it to take off like it did? And how has that changed going into Gears of War 2?

Pete Hayes: Ah, yes. We get that a lot. As far as the comments regarding Gears and the color scheme go, a lot of that is relevant for certain parts of the game, but we definitely thought there was diversity in the palette. As with everything in Gears 2, the theme is continuing with more of that. With the environments, we've continued to diversify the color palette, the types of environments, things along those lines. I definitely think it's a much more colorful game.

But it wasn't something that we set out as a conscious decision to counteract what some people thought about Gears 1: we made Gears 1 the way we wanted to make it, and with Gears 2 we continued to refine and polish and add to that formula. We've got these huge open vistas and beautiful sunsets and skies and different colors. There's tons of very vibrant fire and colors going on. There's definitely a much broader color range and we've tried to improve the visuals in order to make it even more beautiful.

Ars: And what of the notion that every Unreal game has a specific "Unreal" look? How do you feel about that?

PH: Frankly, I disagree [with that notion]. I don't think it's true. You look at BioShock: there's a ton of UE3 games that have a distinct look. As far as some games looking "UT-ish" or "Gears-ish," there's definitely something within the industry and, well, everybody is inspired by everybody, artistically—especially if something is very successful. People look to that and try to capture it and bottle it and reproduce it. I think it's very flattering. But I don't think it has anything to do with UE3; it's just people's artistic styles and what they want to pursue. So that probably won't change, especially given the new enhancements for Gears 2.

Color will play a more prominent role in Gears of War 2.

Ars: Those visual enhancements have really started to take form in the multiplayer levels shown.

PH: Yeah, for sure.

Ars: One level that caught my eye in multiplayer was "Avalanche." There's a blizzard going on, and there's an avalanche that comes through the level; it's very unlike what we saw with the Gears maps until "Hidden Fronts." Are all the multiplayer maps like that, with a thematic overtone like "the snowy level," "the fire level," and so forth?

PH: Yeah. Each level has its own look and feel, its own vibe, its own uniqueness. One of the key things we wanted to focus on was giving each level a distinctive feel, both gameplay-wise and in terms of color palette and theme. Whether it's the time of day or the season, we wanted them each to feel very unique and stand alone as individual levels.

Ars: The scale also caught my eye. Avalanche is a modestly-sized, symmetrical level, but the bigger levels and some of the single-player stuff we've seen are significantly more massive than anything in the first game. How do you work with that increase in scale, artistically? How do you take a somewhat more directed, linear experience and open it up?

PH: Well, it starts first and foremost with the environments. Much, much larger scale environments. In Gears 1, you feel like you were part of this small squad with these little insurgent kind of skirmishes. In Gears 2, you feel like you're in a full-scale war, this huge battle. The biggest thing is that you've gotta open it up: you've got to have bigger environments, you've got to have more characters. We've also implemented more weapons, including the mortar we're showing and the Mulcher, which is the Gatling gun that you can use to mow down swarms of enemies. Artistically, it started with the environments, then how we filled the environments with more enemies, and then introducing new weapons (including the heavy weapon class) to make it possible to deal with all those enemies.

Ars: Working with the Unreal Engine all the time must make art production easier. You have that framework that lets you do what you do best. But does working with UE3 make you ever feel confined? Have you ever had to scale back on exercising your artistic freedom to make it work?

PH: There's always a constant struggle between the realities of shipping a game—independent of which platform you're developing for or what tool you're using. Obviously, as an artist, you're always wanting to put in another thousand more polys, or you're wanting that texture to be the next size bigger so your art is as perfect as it can possibly be. It's a constant balance between that fidelity and a game that runs on a disc and does all those things.