The Enforcement of Intellectual Property Rights Act of 2008, which was blasted by consumer groups and library associations this week as an "enormous gift" to the content industry, won the approval of the Senate Judiciary Committee this afternoon by a 14-4 vote. As first reported by Ars this morning, a series of amendments were added during committee mark-up, providing privacy safeguards for records seized under the law and stripping away several controversial provisions—though not the hotly contested section empowering the Justice Department to litigate civil infringement suits on behalf of IP owners.

One significant change to the proposed legislation addressed, at least in some small measure, a concern broached by Public Knowledge and other consumer groups in a letter to the Judiciary Committee yesterday. Though the amended bill still creates expanded provisions for civil forfeiture of property implicated in an IP infringement case—potentially including servers or storage devices containing the personal data of large numbers of innocent persons—lawmakers altered the bill's language to affirmatively require a court to issue a protective order "with respect to discovery and use of any records or information that has been impounded," establishing "procedures to ensure that confidential, private, proprietary, or privileged information contained in such records is not improperly disclosed or used." They did not, however, go so far as to immunize the data of "virtual bystanders" from seizure, as the letter had requested.

The forfeiture section was also modified to exclude, as grounds for seizure, the violation of the "anticircumvention" provisions of the Digital Millennium Copyright Act. The old language would have allowed for forfeiture of tools that could be used to circumvent digital rights management software.

Excised, as well, was language that would have barred the "transshipment" through the United States of IP infringing goods. Since different countries have different IP rules, this language would potentially have defined goods that were legal in both their country of origin and their final destination—because, for instance, differences in copyright terms allowed works to fall into the public domain overseas while still under copyright in the US—as contraband.

The amendments also added a seat for a representative of the Food and Drug Administration, as well as any "such other agencies as the President determines to be substantially involved in the efforts of the Federal Government to combat counterfeiting and piracy" on the "interagency intellectual property enforcement advisory committee" that the bill would create.

Two new provisions were tacked on to the end of the law. The first directs the Comptroller General to conduct a study of the impact of piracy on domestic manufacturers and develop recommendations for improving the protection of IP in manufactured goods. (Wouldn't it be better to do this sort of thing before enacting enforcement legislation?)

The second is a nonbinding "sense of Congress" resolution stipulating that, while "effective criminal enforcement of the intellectual property laws against such violations in all categories of works should be among the highest priorities of the Attorney General," the AG should give priority, in cases of software piracy, to cases of "willful theft of intellectual property for purposes of commercial advantage or private financial gain," especially those "where the enterprise involved in the theft of intellectual property is owned or controlled by a foreign enterprise or other foreign entity." Which is to say, that copy of Photoshop you pulled off BitTorrent last week isn't on the top of the Justice Department's docket… yet.

Remaining intact was language that would give the Justice Department authority to pursue civil suits against IP infringers, awarding any damages won to the patent, copyright, or trademark holders. Critics have blasted this provision as a gift of free, taxpayer-funded legal services to content owners. The bill now goes to the full Senate, and must still be reconciled with its counterpart legislation in the House, which lacks the language deputizing the DoJ to bring suit on behalf of IP owners.

It's easy to develop a confusing picture of what goes on inside of multiuser virtual worlds, such as Second Life and World of Warcraft. Some reports suggest that the virtual reality enables people to escape from social interactions they otherwise find difficult; others highlight how users of virtual worlds find them satisfying because of the rich social interactions they enable. Some researchers at Northwestern University looked into just how well real-life social influences translate to the virtual realm and discovered one that does: racism.

The authors used two different instances of social manipulation that are known to work well in the real world. The first is the "foot in the door" (FITD) approach, in which a small, easily accomplished favor is asked. These tend to make the person who granted the favor happy about their cooperation, and more likely to agree to further requests, even if they require more effort.

The second method, called "door in the face" (DITF), accomplishes the same thing using a different approach. The initial request, instead of being easy to handle, involves an extensive effort on the part of the person asked. Usually, that request is declined, but it makes people more likely to agree to a further, less time-intensive request. Instead of being inwardly-focused, the DITF method depends largely on a person's perception of the individual or organization making the request; the more responsible and credible they seem, the more likely the second request will be agreed to.

The researchers added a second layer on top of these two methods of manipulation by using avatars with skin tones set at the two extremes of light and dark that the environment, There.com, allows. This let them check whether another pervasive social influence, racism, holds sway in the virtual world.

The tests involved the ability of There.com users to instantly teleport to any location in the game. The control condition, and the second request for both the FITD and DITF approaches, was a teleport to a specific location to take part in a screenshot. For FITD, the first, easy request was a screenshot in place. For DITF, the initial request involved a series of screenshots around the virtual world that might take as much as two hours.

A total of 416 There.com users were approached at random. Somewhat amusingly, about 20 of those approached for each test did something unexpected. For FITD, they simply teleported away before the question could be completed. Even more oddly, over 20 people agreed to spend a few hours taking screenshots with random strangers.

It turns out that social manipulation works just as well in virtual worlds as it does in the real one, with one very significant caveat. The FITD approach, which depends on people feeling good about themselves, increased cooperation on the second request from roughly 55 percent to 75 percent. DITF did even better, boosting the fraction of those who agreed to the second request to over 80 percent—but only if the avatar making the request was white. If that avatar was black, the response dropped to 60 percent, which was statistically indistinguishable from the control.

Since the DITF method depends on subjects' perception of the one doing the asking, the obvious conclusion is that black avatars are viewed as less appealing than white ones. The virtual world not only recapitulates social manipulation, but also social problems. The judgment directed towards the avatar's color is even more surprising, given that There.com allows its users to change their avatar's appearance instantly.

The authors don't seem to know whether to celebrate the finding, since it opens up new avenues for pursuing social research, or to condemn the fact that racism has been dragged from the real to virtual worlds. They recognize that there is an alternate interpretation—namely, that people judge users for having chosen to use a black avatar, rather than for being black—but don't find that alternative any more appealing.

Social Influence, 2008. DOI: 10.1080/15534510802254087

The huggable bunch at Greenpeace have given a thumbs up to Apple's announcement this week of a refreshed line of iPods that are much more environmentally friendly than past versions. In the same breath, the organization also took the chance to add to its recyclable Christmas wish list.

On Greenpeace's official blog, the organization patted Apple on the back by proclaiming "It's great to see Apple dropping toxic chemicals like PVC, BFRs and mercury in their latest products." As we reported in our live coverage from Apple's "Let's Rock" event, Jobs touted the new iPods as having arsenic-free glass, as well as being BFR-free, mercury-free, PVC-free, and "highly recyclable." Jobs announced last year that Apple was working towards boosting efforts in recycling by 2010.

In the same post, though, Greenpeace went on to point out that greener iPods aren't actually all that special, since companies like Nokia, Sony Ericsson, and Samsung have achieved this with small iPod-sized gadgets. "While these iPods may rock what would really shake up the computer industry is if Apple sticks to its promise and becomes the first company to make personal computers free of toxic PVC and BFR's."

In a surprisingly candid article posted to Apple's "Hot News" section last May, Steve Jobs outlined his company's plans for "A Greener Apple." Among the environmental manufacturing challenges tackled in the piece, Jobs said that Apple plans to "completely eliminate the use of PVC and BFRs in its products," as well as eliminate the use of arsenic in all of its displays, by the end of 2008. We'll keep an eye out for new prose from Jobs, or criticism from Greenpeace, on whether these goals are met.

Zope 2 was a Python-based web development framework that built all of its components in-house; it has since been superseded by Zope 3, which takes the project in a different direction. TurboGears is a newer Python-based web development framework that seeks out best-in-breed projects rather than rolling its own. Mark Ramm is a core developer on the TurboGears project, and he is bothered by the fact that the Django community seems to be heading down the Zope 2 path under the banner of the "batteries included" philosophy.

In the Python community, many see the Django community as separate from the other web development projects due to these differences in philosophy, Ramm said. He went on to claim that Django is in some ways harming, or at least doing a disservice to, the Python community as a whole, because many of its internal components cannot easily be used outside of the rest of the Django project. By the same token, Ramm believes that Django core developers could be spending valuable time and resources improving and integrating many existing projects (Beaker and SQLAlchemy, for example) instead of reinventing their functionality. Furthermore, many of these established third-party projects addressed and solved many of Django's perceived pitfalls long ago.
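To make the reuse argument concrete, here is a minimal sketch (not from Ramm's talk, and assuming a reasonably recent SQLAlchemy install) of the kind of standalone use he has in mind: the ORM runs entirely on its own, with no web framework, settings module, or project scaffolding around it.

# SQLAlchemy used as a freestanding "best-in-breed" component; nothing here
# depends on any web framework being installed or configured.
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Entry(Base):
    __tablename__ = "entries"
    id = Column(Integer, primary_key=True)
    title = Column(String(100))

engine = create_engine("sqlite:///:memory:")  # any supported backend would do
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

session = Session()
entry = Entry(title="standalone ORM use")
session.add(entry)
session.commit()
print(entry.id)  # 1; no settings module or framework wiring was involved

Django's ORM can be coaxed into similar standalone use, but it expects project settings to be configured first, and that coupling is the sort of thing Ramm objects to.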

Mark Ramm takes exception to this oft-repeated Django-catchphrase

Ramm's central complaint really comes down to a balancing act that the framework seeks to maintain. The project attempts to package as much functionality as possible (barring database drivers and the like) into a simple-to-install bundle that's friendly to newcomers. Contrast that with TurboGears, for example, which can require a handful (or two) of dependencies to install. Which audience does Django want to serve: individuals who want a tool to get things done, or bright programmers who are willing to trudge through several hours of package maintenance for a slightly more robust solution? Django seems to be pushing for the former, and so far that decision has benefited the project.

Another of Ramm's fears is that programmers pulled into the Django world will in effect become Django developers first and Python developers or web developers second. To a certain extent this is true, but during the Q&A portion, Django developer James Bennett countered that developers rarely arrive at a framework decision with SQLAlchemy, a stack of WSGI middleware, and Beaker in hand; they just want to launch a product with as little friction as possible.

The pronouncements of doom and gloom do have a basis in some people's realities. It's true that Django could be putting in work to gain the advanced database features offered by SQLAlchemy, but the gains would really only apply to a small subset of its potential audience. Django core developers are much more supportive of making Django pluggable enough that third-party developers can incorporate these packages themselves. Recent changes deep inside the project (i.e., queryset-refactor) now allow developers to write or adapt just about any datastore backend (Jacob Kaplan-Moss mentions candidly that he has Django running on top of Aperture's Core Data store and on a non-relational database with CouchDB).

The general feeling of the day was that Django can do some things to improve its position in the Python ecosphere. Removing barriers in the core of Django to enable a wider range of extensibility looks to be a definite eventuality, and on a few occasions core developers mentioned they'd be looking more seriously into relying on third-party libraries and packages for new functionality.

According to Hexus, AMD has launched new triple-core X3 processors. One of the new CPUs is AMD's first Black Edition triple-core, and it's called the Phenom X3 8750 Black Edition. The Black Edition CPUs traditionally differ from the non-Black parts by offering unlocked multipliers for overclocking. AMD has yet to announce the 8750 BE, but the new chip will likely drop straight into the company's current lineup, though it might carry a small premium over the standard 8750.

Two other new CPUs were introduced as well: the Phenom X3 8450e and the Phenom X3 8250e. As the "e" suffix indicates, both are energy-efficient, low-power parts.

Pricing for the 8750 Black Edition is $134, a mere $5 more than the non-Black Edition part, according to Hexus. Pricing isn’t available on the other two CPUs, but as Hexus points out, the standard 8450 retails for $104. AMD has traditionally charged a premium for its power-efficient parts, so it's anyone's guess where this particular chip will end up.

In addition to the new processors, AMD has also announced "Optimum Platform Virtualization" for Opteron Server CPUs that allows them to run with Microsoft virtualization enabled. Opterons designed to run Microsoft Windows Server 2008 Hyper-V virtualization software are now marked as AMD-V.

AMD's Kevin Knox, VP of worldwide commercial business, said, "Now, through our continued partnership with Microsoft, AMD is expanding virtualization's reach and the benefits of resource consolidation to companies that might not have taken advantage of virtualization in the past."

It is interesting to note that the supposed AMD roadmap that I reported on earlier this week didn’t show these three X3 processors.

The fallibility of human memory was one of the first things covered in my undergraduate psychology class (or at least it was to the best of my recollection). However, the brain is a mysterious thing, and there's still much we don't fully understand when it comes to figuring out how it stores and processes information.

The current PNAS1 features a paper from a team of scientists at MIT who have been probing the limits of visual memory. Previous studies have demonstrated that visual memory has an impressive capacity for storage; in studies where volunteers were shown 10,000 images (each for a few seconds), they were able to determine which of two images they had previously seen at a rather high level of accuracy. However, it had been thought that this visual memory was light on details, instead providing just the gist of the image.

In the PNAS study, the volunteers were shown 2,500 images, each for 3 seconds. In contrast to prior research, the images were stripped of any background details. The subjects were then shown a pair of images, one of which was previously seen and one that was new. The paired images were shown in three ways: novel, where the image was paired with an image of something from a completely different category (for example, false teeth and a DNA double helix); exemplar, where the image was paired with a different but similar image (two slightly different starfish, for example); or state, where both pictures showed exactly the same object, but in different conditions (such as a telephone on and off the hook).

The results of all three tests showed that visual memory is surprisingly detailed. In the novel test, subjects correctly identified the correct image 93 percent of the time. The exemplar and state test conditions were handled with slightly less accuracy but, at 87 percent and 88 percent, respectively, the margin wasn't large. The test subjects were also very accurate in their ability to detect repeated images, with 96 percent of repeat images being identified, and only a 1.3 percent false positive rate.

This work comes on the heels of some other studies on the limits of human memory, published within the last month in Nature2 and Science3. Those studies focus on the depth of visual memory when it comes to remembering data from images.

These studies suggest that, when it comes to remembering several details about an image, visual memory is dynamically allocated, in contrast to prior dogma that suggested that humans were limited to remembering only between two and five details at a glance. This form of visual memory is much like RAM though, as pointed out in a prior post. Although we can take in details and store them, that process is happening constantly as we process vision.

What these studies all add up to is the realization that the limits of visual memory extend much further than we previously thought.

1: PNAS, 2008. DOI: 10.1073/pnas.0803390105
2: Nature, 2008. DOI: 10.1038/nature06860
3: Science, 2008. DOI: 10.1126/science.1158023

On Monday, antimalware developer McAfee released details on its new cloud-based defensive system, codenamed Artemis. As we've noted several times in the past, antimalware companies don't have an easy job, and the sheer number of virus variants that now spawn from even a single base infection threatens to overwhelm any company's ability to keep up. According to McAfee, the number of attacks observed in 2008 thus far (with 3.5 months to go) is larger than the total number of attacks in 2006 and 2007 combined. Given the financial incentives and corporate business model that has become prevalent in the malware industry, this number isn't likely to start heading downwards, either.

One of the security industry's greatest weaknesses is that it is inherently reactive, and while this won't change anytime soon, McAfee believes Artemis will drastically reduce the current time-to-patch cycle, as illustrated in the diagram below:

McAfee states that problems are typically solved and patched 24-72 hours after the malware is initially spotted, and while that figure seems a bit optimistic, we'll go with it, given that the company says that even 24 hours is too long. When a major worm like Storm hits, the steps on this diagram actually go into a loop, as each new variant arrives, is tagged, and then blocked. Each time the loop occurs, there's a fresh window of opportunity/profit, which only encourages malware authors to crank out variants as quickly as possible.

The Artemis system theoretically accelerates the time-to-patch cycle by communicating directly with McAfee's online service whenever it encounters a suspicious file. Files are then scanned against the entire McAfee Avert Labs database for any similarities to preexisting behaviors or file signatures. If Avert Labs detects any sort of malware, the user then receives instructions on how to block or quarantine the file, just seconds after having received it. The on-site database (i.e., the program installed on the user's computer) is also updated to detect this malware variant if it shows up again. Presumably, the system has some way of recognizing if dozens of computers all start requesting data on the same suspicious bit of malware, and would trip some sort of built-in alarm to notify McAfee that a concerted attack was underway from a previously unknown source.
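As a rough illustration of that client-to-cloud flow (and nothing more), here is a minimal sketch in Python. The lookup URL, request format, and response fields are hypothetical stand-ins; McAfee has not published Artemis' actual protocol.

import hashlib
import json
import urllib.request

LOOKUP_URL = "https://cloud-av.example.com/lookup"  # hypothetical service endpoint

def fingerprint(path):
    # Hash the suspicious file locally so only a small fingerprint leaves the machine.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def check_file(path):
    # Ask the cloud service whether this fingerprint matches known malware.
    payload = json.dumps({"sha256": fingerprint(path)}).encode()
    request = urllib.request.Request(
        LOOKUP_URL, data=payload, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request, timeout=5) as response:
        # Hypothetical response, e.g. {"malicious": true, "action": "quarantine"}
        return json.load(response)

if __name__ == "__main__":
    verdict = check_file("suspicious_download.exe")
    if verdict.get("malicious"):
        print("Block or quarantine per instructions:", verdict.get("action"))

A production client would also fold any verdict back into the local signature database, as described above.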

If it works as advertised, Artemis has the potential to substantially reduce the gap between the time malware is detected and the time a system is patched. Patching systems this quickly would all but close the profit window (defined here as the time any system spends under botnet control) and, if (really) widely deployed, might even negatively impact malware writers' profit margins. Such projections, however, assume that Artemis can deliver what it promises, and that is, by no means, guaranteed.

In order to prove itself, Artemis needs to demonstrate that it can appropriately distinguish between suspicious and unsuspicious files and retrieve the necessary (and correct) information from the Avert Labs database, and that the solutions it returns actually fix the problem in question (or appropriately prevent the problem from occurring). This is a tall order, given that AV programs still return false positives during any number of installation routines or other OS functions.

On the other hand, an antimalware product need not be perfect in order to be useful; if Artemis is right just half the time, only half the McAfee customers who would otherwise have been infected actually "catch" the bug in question for any meaningful amount of time. It's also hard to turn down free, and Artemis, or "Active Protection," will be free in all versions of McAfee's consumer software. The service has already been incorporated into McAfee's Total Protection Service for small and medium businesses, and will be available later in the month for both McAfee VirusScan Enterprise and McAfee consumer products.

I'm not 100 percent sold on the program, and won't be until I see evidence that it's genuinely effective at stopping infections. I definitely applaud McAfee for developing a new approach to virus scanning and identification beyond the "check local database" model, and then making the results of that effort available to all current customers. Hopefully, the end result will be a noticeable drop in infections among McAfee customers, which would then spur the development of similar approaches across the antimalware industry.

Shades of gray

The gaming industry is in the midst of a very interesting and turbulent time. With the advent of downloadable games on consoles, game makers are opening up new frontiers of technology and design. But technology shifts, and art is a strange constant in an otherwise ever-changing medium. No matter how far the industry has come or how much things change, art forever remains an integral part of gaming, and good art is still a rare and valuable commodity.

Pete Hayes, an artist working for Epic Games, knows this all too well. His work on Gears of War helped turn a brand new property into a gaming blockbuster. The first game in the series sold in huge numbers, for both the Xbox 360 and the PC, and Epic is poised to repeat the same success with Gears of War 2.

The work of Hayes and his colleagues is at the artistic vanguard of this entire console generation: the art style of Gears was largely the source of the "next-gen color scheme," a scheme exemplified in the gritty and dark design of Epic's original Xbox 360 killer app. But how has that often-imitated design changed going into the second game? And what of the art in the game industry in general? Ars sat down with Hayes to talk about his new game, his work in the industry, and what it takes to become a game artist.

The genesis of next-gen color

Ars Technica: Let me start with an easy question. Gears was the game that kicked off the so called "next-gen" color scheme, with browns and grays and that gritty look. Talk to me about that. Was that a conscious decision? Did you expect it to take off like it did? And how has that changed going into Gears of War 2?

Pete Hayes: Ah, yes. We get that a lot. As far as the comments regarding Gears and the color scheme, a lot of that is relevant for certain parts of the game, but we definitely thought there was diversity of the palette. As with everything in Gears 2, the theme is continuing with more of that. With the environments, we've continued to diversify the color palette, the types of environments, things along those lines. I definitely think it's a much more colorful game.

But it wasn't something that we set out as a conscious decision to counteract what some people thought about Gears 1: we made Gears 1 the way we wanted to make it, and with Gears 2 we continued to refine and polish and add to that formula. We've got these huge open vistas and beautiful sunsets and skies and different colors. There's tons of very vibrant fire and colors going on. There's definitely a much broader color range and we've tried to improve the visuals in order to make it even more beautiful.

Ars: And what of the notion that every Unreal game has a specific "Unreal" look? How do you feel about that?

PH: Frankly, I disagree [with that notion]. I don't think it's true. You look at Bioshock: there's a ton of UE3 games that have a distinct look. As far as some games looking "UT-ish" or "Gear-ish," there's definitely something within the industry and, well, everybody is inspired by everybody, artistically—especially if something is very successful. People look to that and try to capture that and bottle it and reproduce it. I think it's very flattering. But I don't think it has anything to do with UE3; it's just people's artistic styles of what they want to pursue. So that probably won't change, especially given the new enhancements for Gears 2.

Color will play a more prominent role in Gears of War 2.

Ars: Those visual enhancements have really started to take form in the multiplayer levels shown.

PH: Yea, for sure.

Ars: One level that caught my eye in multiplayer was "Avalanche." There's a blizzard going on, and there's an avalanche that comes through the level; it's very unlike what we saw with the Gears maps until "Hidden Fronts." Are all the multiplayer maps like that, with a thematic overtone like "the snowy level," "the fire level," and so forth?

PH: Yea. Each level has its own look and feel, its own vibe, its own uniqueness. That's one of the key things that we wanted to focus on was to give each one a distinctive feel, both gameplay-wise and also the color palette and the theme. Whether it's the time of day or the season, we wanted them each to feel very unique and stand alone as very individual levels.

Ars: The scale also caught my eye. Avalanche is a modestly-sized, symmetrical level, but the bigger levels and some of the single-player stuff that we've seen are significantly more massive than anything in the first game. How do you work with that increase in scale, artistically? How do you go from a more directed and linear experience to opening things up like that?

PH: Well, it starts first and foremost with the environments. Much, much larger scale environments. In Gears 1, you feel like you were part of this small squad with these little insurgent kind of skirmishes. In Gears 2, you feel like you're in a full-scale war, this huge battle. The biggest thing is that you've gotta open it up: you've got to have bigger environments, you've got to have more characters. We've also implemented more weapons, including the mortar we're showing and the Mulcher, which is the Gatling gun that you can use to mow down swarms of enemies. Artistically, it started with the environments, then how we filled the environments with more enemies, and then introducing new weapons (including the heavy weapon class) to make it possible to deal with all those enemies.

Ars: Working with the Unreal Engine all the time must make art production easier. You have that framework that lets you do what you do best. But does working with UE3 make you ever feel confined? Have you ever had to scale back on exercising your artistic freedom to make it work?

PH: There's always a constant struggle between the realities of shipping a game—independent of which platform you're developing for or what tool you're using. Obviously, as an artist, you're always wanting to put in another thousand more polys, or you're wanting that texture to be the next size bigger so your art is as perfect as it can possibly be. It's a constant balance between that fidelity and a game that runs on a disc and does all those things.

Anyone who has followed patent laws and the intellectual property system is undoubtedly aware that they have combined to create major headaches for the content and software industries, leading to widespread calls for reform. But the problems affect other industries, and a new report by a group that has studied the role of IP in the biotech industry reveals that many of the same issues (along with a few unique ones) are causing problems in that field as well. The report provides a series of recommendations, many of which could just as easily have arisen from a study of a different industry.

Although weighing in at just over 40 pages, the report was seven years in the making. It was prepared by a panel organized by the Centre for Intellectual Property Policy at the Faculty of Law of Montreal's McGill University. In addition to the 15 members of the panel proper, over 50 individuals are credited with research and other input, suggesting that the examination of the biotech industry was fairly comprehensive.

For all that research, the panel found that hard information on the impact of the IP system is sorely lacking; they were unable to even determine if changes to patent law correlated with changes in money spent on R&D. As such, one of their recommendations is that patent authorities actually try to track the impact of changes in IP policy so that there can be an evidential basis for future analysis. In the absence of such data, the report concludes that some of the more extreme claims—that strong IP protections are either essential for innovation or have prevented AIDS treatments from reaching the developing world—simply have no rational support.

IP as an end unto itself, not to foster innovation

What the panel does conclude is that most people have lost track about what IP is supposed to be all about. Innovations and new developments—especially those under the biotech umbrella such as medicine, agriculture, and energy—foster the greater public good. As such, IP and patent laws were, at least initially, intended to foster that innovation. Now, interested parties frequently present IP as an end unto itself. "People put IP on a pedestal," the report reads, "saying it is the reason why companies invest in innovation or the reason that people do not get needed drugs—rather than seeing it for what it is: a cog in a large system of innovation."

According to the authors, this IP model is dead as far as the biotechnology industry is concerned, although it "has not yet fully left the stage." They date its demise to a famous legal case, in which pharmaceutical companies sued after the government of South Africa imported unlicensed AIDS medication. The backlash was widespread and caused the reevaluation of the patent system by many countries and organizations; all that, and the companies ultimately dropped the suit.

If the old model is dying, the authors see a valuable use for it: identifying its failures in order to reform the system. These failures pervade almost every level, and have contributed to a complete lack of trust among the major players. Industry, as noted, views IP as its own end and fiercely protects it. "But such thinking has proved counterproductive to industry," the report states, "which in health fields has seen declining levels of innovation despite increasing stakes in intellectual property."

Patent laws as a barrier to research

That happens, in part, because IP barriers tend to prevent further research on the topic by the publicly-funded research community. That community tends to reside at universities that are making the problem worse by licensing developments at their institutions to industry without any conditions regarding their future use. Despite these licensing agreements, the authors cite a study that determined universities as a whole have lost money when pursuing the licensing of IP.

Governments of developed nations take a hit for their efforts to force their own IP system on developing economies, where the model may not work, and for doing so despite lack of any clear data on whether their own system is working. The governments of developing economies get taken to task for a sometimes mindlessly antagonistic approach to IP; a study presented in concert with the release of this report describes how a Brazilian program meant to protect indigenous knowledge of medicinal plants provided a mechanism for suing pharma companies, but none for negotiating development agreements with them.

Even the press get criticized for their failure to accurately present all of the fallout from these problems to the public.

The report's recommendations include a number that are designed simply to reestablish some measure of trust among these interested parties. Beyond that, it sees the future of biotech IP as residing in public-private partnerships. These include university-managed licensing agreements and research partnerships that emphasize the ability of biologists to engage in open research, even on licensed materials. They also highlight the role of NGOs such as the Gates Foundation, which has helped fund patent licensing pools that will provide the benefits of new medicines to the developing world.

Both governments and universities are called on to foster research centers in developing nations through these partnerships. This will not only give these nations a vested interest in IP, but it will help provide both academics and industry with access to the source of indigenous medicines and populations in which to pursue relevant clinical trials. This effort is already something the scientific community is engaged in.

Overall, some of the recommendations seem to be idealistic, and it's not clear how well these public-private partnerships can scale and expand. Nevertheless, the report is valuable reading for anyone interested in IP in general, as many of its salient points apply to fields beyond biotech.

Further reading:

The report is being made available through a Creative Commons License via The Innovation Partnership.

As the mobile phone and digital music distribution industries lace up their boxing gloves, manufacturers like Nokia and carriers like Verizon are unveiling new music plans tied to mobile phones. While Nokia is the latest with its soon-to-be-released Comes With Music service, troubled handset maker Sony Ericsson is drafting its own plans to join the fray with an unlimited music download service.

Renewed competition in the mobile phone industry, coupled with economic instability around the world, has been hitting some handset makers hard. Motorola is one of the most publicly visible handset makers to take a hit, but Sony Ericsson reported its own hard luck this past summer with a continuing trend of declining sales, net income, and even the average selling price of its phones.

The fact that Sony Ericsson is getting hit from all angles, especially in markets like the US where it doesn't have a strong presence, can't be helping. Apple has a hit with its iPhone, Google has gained quite a lot of steam with its open-source mobile OS Android, BlackBerry is stepping up its product line, and even Windows Mobile saw a modest worldwide market share gain from 11 to 13 percent over the last fiscal year. With mobile music services and subscriptions—such as Nokia's (mostly) unlimited Comes With Music that is set to debut in October—being hailed as one of the new frontiers of the mobile phone industry, Sony Ericsson probably feels it doesn't have much of a choice but to compete in that space.

Despite once denouncing unlimited music subscription services as "devaluing" the music trade, Financial Times (subscription required) reports that Sony Ericsson is in talks with major and independent record labels to offer its own unlimited music downloading service on select mobile phones. "If everybody is launching 'all you can eat' services which make handsets more attractive to end users and to operators, they don't have much choice," Dan Cryan, an analyst at Screen Digest, told FT. "Especially when so much of their brand value is built around the Walkman."

Indeed, it has always seemed strange for Sony to bring the Walkman brand to its Sony Ericsson mobile phone partnership, but not leverage it for better integration and some sort of a music service. Details are slim for now as far as how Sony's plan will work, what phones will be compatible, or how much it will cost, and Sony Ericsson didn't return our request for comment on this story.

The company already offers a PlayNow arena service in Denmark, Finland, Norway, and Sweden that sells ringtones, games, and DRM-free MP3s for $0.99 and €0.99, but, like most other subscription services (including Comes With Music), the new unlimited offering will probably put some kind of DRM on its songs.

The Justice Department is signaling that it may seek to block a proposed search-advertising deal between industry titans Google and Yahoo. The government has hired veteran antitrust lawyer Sanford Litvack, who headed DoJ's Carter-era antitrust division, to review the planned partnership, which would give the firms a combined share of 80 percent of the domestic search-ad market.

The Wall Street Journal broke the news of the hire late yesterday, but the markets may have had advance notice: Shares in Google closed down nearly 5.5 percent Tuesday, shedding another third of a percent at the end of trading today. That makes for a one-two punch this week, coming on the heels of a decision by the influential Association of National Advertisers to publicly oppose the deal. In a letter to DoJ's top antitrust attorney, ANA President Bob Liodice argues that the partnership would "diminish competition, increase concentration of market power, limit choices currently available and potentially raise prices to advertisers for high quality, affordable search advertising." Individual major advertisers, including Coca-Cola and Procter & Gamble, have sent their own letters in opposition, according to press reports.

Both companies sought to minimize the news. A statement from Yahoo indicated that the firm had been informed "that the Justice Department, as they sometimes do, is seeking advice from an outside consultant, but that we should read nothing into that fact." (Read what you will into the fact that "consultant" Litvack last week resigned from the Los Angeles- and New York-based law firm at which he was a partner.)

Google spokesman Adam Kovacevich noted that the company had voluntarily delayed enactment of the partnership—first floated in June, after the companies tested the waters in April—to give federal regulators time to review the arrangement. "While there has been a lot of speculation about this agreement's potential impact on advertisers or ad prices, we think it would be premature for regulators to halt the agreement before we implement it and everyone can judge the actual impact," said Kovacevich. "We are confident that the arrangement is beneficial to competition, but we are not going to discuss the details of the regulatory process."

While advertisers have expressed concern over the potential market power the formidable pair might wield, both firms are quick to note that the proposed arrangement would be a nonexclusive revenue sharing deal in which Google gained rights to run ads on Yahoo sites. Yahoo and Google would continue to compete for advertising dollars, and prices would be set by auction.

Many observers see the hand of perennial search-wars Bronze medalist Microsoft looming in the background here. Redmond made its own abortive bid to acquire Yahoo outright, arguing that the union would create a counterweight to Google's dominance. During Senate hearings in July, top Microsoft attorneys argued that the search-ad competition would be crushed beneath the lumbering wheels of the Yahoogle behemoth.

Still, several analysts have indicated that any legal challenge by Justice would face an uphill battle, with courts hesitant to enjoin the partnership in the absence of any demonstrable competitive harm. Both companies have signalled that they still intend to follow through with the deal as planned, beginning next month.

You've heard the claim: the US is behind when it comes to broadband in every area that counts. In price, speed, and availability, the US is getting whooped by Japan, Korea, and 13 other countries, according to widely-quoted OECD numbers. And what's worse is the fact that the US has been dropping in the rankings over the last five years. A new report out today from the Progress & Freedom Foundation suggests that everything you know is wrong, that US broadband is (or should be) the envy of the world, and that infrastructure competition always trumps line-sharing. Also, it uses the phrase "regulatory teat."

This isn't the sort of report worth reading if you're only interested in the conclusion, since all PFF reports have the same conclusion: regulation bad, free market good, scare quotes around things like "public interest groups" (and, occasionally, the suggestion that Lawrence Lessig is a communist). This report is true to form.

But it is a clear summation of the other side of the argument, pulling together, in 17 pages, all the data that suggests that the US broadband situation isn't as dire as some critics make it out to be. For instance, the US satisfaction score with broadband service is actually one of the highest in the world.

The basic claim is that the FCC's agenda of pushing facilities-based competition (such as that between cable companies and telcos, for instance) is the right one, and that line-sharing is only a measure of last resort to be used where facilities-based competition is hard to come by. There's some interesting data presented to support this claim, and it certainly seems true that strong infrastructure competition spurs more investment than line-sharing rules, which can dampen it in some cases.

But one of the main examples of this in action is the current round of cable upgrades to DOCSIS 3.0, which are approvingly mentioned as reactions to Verizon's FiOS (fiber) deployment. And that's certainly true, but it also illustrates the basic problem with the "infrastructure competition solves all our problems" approach: FiOS is only available in very limited areas, and Comcast is the main cable company making aggressive DOCSIS 3.0 moves.

Facilities-based competition is great when it exists, but the PFF piece is too quickly dismissive of the idea that a local telco and a cable company might find it easier not to truly challenge each other in many markets. In fact, if it weren't for FiOS—an $18 billion project that aroused tremendous analyst resistance, remember—there would be far less competitive incentive for cable operators anywhere in the US to invest in major upgrades. This is demonstrated by Time Warner Cable's decision to only deploy DOCSIS 3.0 in areas where it faces direct competition from FiOS for the time being.

The report rightly stresses the importance of wireless, though, as a force that needs to be considered in these infrastructure debates. Unfortunately, even as someone who uses and loves a Verizon EV-DO 3G wireless modem, it's clear that wireless broadband isn't ready to replace wireline links for a number of years. The new Clearwire, which has joined with Sprint's XOHM WiMAX unit, will push this business model, and we have seen encouraging signs that the cell carriers are opening up their networks to any device and any use.

These things will happen, but they will take time to become mainstream. When that day finally does come, though, robust infrastructure competition should become a reality. Wireless, after all, isn't just a new method of connection, it's many new methods, and it's far more competitive than most wireline markets (owing to no last-mile issues).

When wireless comes into its own and is explicitly marketed as a replacement for wired Internet service at home, we'll have three, four, or five competitors in each market. Wireless operators may not bring much speed pressure to bear on the wireline incumbents, but they will certainly depress prices.

Further reading:
A recent report from the nonpartisan ITIF on the same subject
Higher ed IT managers think what's needed is fiber to every home
The PFF report concentrates on debunking the OECD numbers, but the US isn't doing all that hot by many different metrics

The dynamic and global nature of the web is analogous to many of the features of human society; the web evolves, allows different levels of interactivity, and enables commerce. The transparency and quantifiable nature of the web, however, enable researchers to study phenomena that might be difficult to assess in wider society. Joseph Kong, Nima Sarshar, and Vwani Roychowdhury took advantage of the World Wide Web to study a significant social question: does experience trump talent?

At numerous points in our lives, we have to choose between experience and new talent. Would you choose to hire people with a full résumé, or those who are promising newcomers? Are you more likely to trust news from a seasoned reporter, or fresh recruits? What about elected officials? In all of these decisions, we must balance the needs for stability and innovation. The same issue plays an important role on the web. Is it easy for new websites to thrive in cyber-society, and how often does that success come at the expense of established sites?

Kong, Sarshar, and Roychowdhury decided that, to answer those questions, it is simplest to look at the number of links web pages receive over the course of a year. They defined "experienced" web pages as those that start out with a large number of links, while startups only begin with a few links. For them, web pages that obtain over 1,000 links over the period of a year are winners.
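A toy sketch in Python of that bookkeeping may make the definitions clearer. The link counts below are invented, and the cutoff used to call a page "experienced" is an assumption of the sketch, not a figure from the paper; only the 1,000-link "winner" threshold comes from the study as described above.

# Classify surviving pages by their starting in-link count and see which
# become "winners" (over 1,000 in-links a year later). Toy data only.
EXPERIENCED_CUTOFF = 100   # assumed boundary between "experienced" and "startup" pages
WINNER_THRESHOLD = 1000    # from the study: >1,000 links marks a winner

# page name -> (in-links at the start, in-links 13 months later); invented values
pages = {
    "established_news_page": (2400, 3100),
    "popular_blog_post": (180, 1500),
    "fresh_startup_hit": (4, 1300),
    "fresh_startup_dud": (6, 9),
    "fading_veteran": (900, 850),
}

winners = {name: counts for name, counts in pages.items()
           if counts[1] > WINNER_THRESHOLD}
fresh_talent = [name for name in winners
                if pages[name][0] < EXPERIENCED_CUTOFF]

print("winners:", sorted(winners))
print("share that started out little-known: "
      f"{len(fresh_talent) / len(winners):.0%}")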

From June 2006 to June 2007, they found around 10 million web pages that made it through 13 months without being deleted. That's actually quite an accomplishment; they estimate that websites vanish with a minimum turnover rate of nearly 80 percent. For most of the survivors, nothing changed; the number of links to the page remained relatively static.

Entrenched web pages are not unshakable—it is not that unusual for established web pages to start off well and then fail to gain much further attention. The authors also determined that it is extremely rare for unknown web pages to reach over 1,000 links, but the probability rises steadily as the number of initial incoming links increases.

The high turnover rate also meant that there were many new sites appearing, which partially offset the rarity of success. As a result, out of all the pages that attracted over 1,000 links, 48 percent represented fresh talents.

The authors suggest there are comparisons to be made between the web and one type of idealized society. "In this regard, it is much like what we observe in high-mobility and meritocratic societies: People with entitlement continue to have access to the best resources," they write, "but there is just enough screening for fitness that allows for talented winners to emerge and join the ranks of the leaders and gain higher revenue through online advertisements."

Although the scientists created a methodology that numerically gauges the competition between experience and talent for the World Wide Web, their methodology could easily be expanded to other systems. For instance, the network of scientific publications is worth studying, as one expects established scientists to have a significant advantage.

Proceedings of the National Academy of Sciences, 2008. DOI: 10.1073/pnas.0805921105