With headlines constantly touting the adoption of social networking, blogging, and text messaging among US teenagers, it comes as no surprise that Internet use among that demographic is statistically still on the rise. Teens are blogging, using social networks, posting photos, and sharing videos in record numbers. Examining the details in a new Teens and Social Media report (PDF) from the Pew Internet & American Life Project, however, reveals that with all this online socializing, teens are actually more conscious than ever of how and with whom they share their content.
In summary, 93 percent of US teens now use the Internet in some way. Many of the general numbers are up from Pew's 2004 study: 39 percent of these teens share their own artistic creations like artwork, photos, stories, or videos (up from 33 percent in 2004), 28 percent have created their own online journal or blog (up from 19 percent), and 26 percent remix content they find online into their own creations (also up from 19 percent). Altogether, that adds up to 64 percent of online teens, or 59 percent of all teens, creating something on the web.
Contributing the most to these numbers are significant increases in blogging and video sharing. Though numbers are up across the board, girls dominate the teen blogging boom, with 35 percent of online teen girls blogging versus 20 percent of boys. Conversely, video sharing at sites like YouTube and Facebook is dominated nearly two to one by boys, with 19 percent of online teen boys posting videos and only 10 percent of online teen girls doing the same. Pew's numbers also note that teen girls far outrank boys in sharing pictures, though comparative numbers are not available.
As the growth of social sites and sharing tools exploded over the last half a decade, so too did a growing concern among parents and government officials over their potential dangers, even if studies show this concern to be overblown. For some time now, educational campaigns directed at this demographic have more or less warned that sharing too much on the Web is like shouting your personal details out to the entire school body and the rest of the world. While there is likely still a lot more work to do to raise teens' consciousness of what and how much to share, Pew's numbers reveal that notable progress is being made. About two-thirds (66 percent) of teens with some kind of online profile use the site's privacy features to restrict access in some way. Over half post false information, with just 11 percent sharing both a first and last name, and only 5 percent sharing a full name, photo, city, or state.
The numbers are also strong for restricting access to photos, but not nearly as strong for videos. While 39 percent of teens say they restrict access to their photos "most of the time," only 19 percent restrict their videos with the same frequency. Teen girls, especially older ones aged 15-17, are more likely to restrict access to their photos: 44 percent versus 33 percent of boys.
Pew's study offers far more details as to which teens are doing what on (and off) the Web, but their increased awareness in an age where anything can be shared almost anywhere should offer some relief to those watching the industry.
In May of last year, SPEC began soliciting feedback from the industry on a forthcoming slate of power efficiency benchmarks. The first of those benchmarks, SPECpower_ssj2008, is now out, but a recent report from Google indicates that the big villains in datacenter power consumption are now memory, disk, and networking components that don't provide much in the way of power optimization. In other words, no matter how power efficient an Intel or an IBM makes its chips, the likes of Seagate and Western Digital will increasingly dominate the results of SPEC's new benchmark.
In this short analysis piece, I'll take a look at both the SPEC announcement and the Google report, and at what the two of them say about the future of the datacenter.
SPEC's new power benchmark
SPECpower_ssj2008's UNIX-like name is easily parsed: it's the 2008 release of a SPEC power benchmark that simulates typical server-side Java business applications. The workload's performance is steadily dialed down from 100 percent to idle in 10 percent increments, while a power analyzer measures how much power the server draws under each load condition.
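As I understand SPEC's run rules, the benchmark's headline figure, overall ssj_ops/watt, sums throughput across all of the load points and divides by the summed power readings. The bookkeeping can be sketched in a few lines; the server numbers below are invented for illustration, since the real benchmark drives a Java workload and reads a hardware power analyzer:

```python
# Sketch of SPECpower_ssj2008-style bookkeeping with invented sample
# numbers; the real benchmark measures a Java workload with a power
# analyzer rather than using a toy model like this one.

# Target load levels: 100 percent down to idle in 10 percent steps.
load_levels = [i / 10 for i in range(10, -1, -1)]  # 1.0, 0.9, ..., 0.0

def measure(load):
    """Pretend measurement: throughput (ssj_ops) and power (watts)."""
    peak_ops = 100_000                 # hypothetical peak throughput
    idle_watts, peak_watts = 120, 260  # hypothetical server power curve
    ops = peak_ops * load
    watts = idle_watts + (peak_watts - idle_watts) * load
    return ops, watts

results = [measure(load) for load in load_levels]
total_ops = sum(ops for ops, _ in results)
total_watts = sum(w for _, w in results)

# Headline metric: overall ssj_ops/watt across all load points
# (the idle point contributes power but zero throughput).
overall = total_ops / total_watts
print(round(overall, 1))  # → 263.2
```

Note that because the idle interval adds watts but no ops, a server with a high idle draw gets punished by the metric even if it is efficient at full load.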
SPEC describes the benchmark as "scalable, multi-threaded, portable across a wide range of operating environments, and economical to run." "It exercises CPUs, caches, memory hierarchy, and the scalability of shared memory processors (SMPs)," SPEC claims, "as well as implementations of the Java Virtual Machine (JVM), JIT (just in time) compiler, garbage collection, threads, and some aspects of the operating system."
The fact that the new SPEC benchmark quite appropriately measures total system power at the wall, thereby accounting for the contributions of every part of the system to the total power draw, means that system integrators will begin putting increased pressure on component makers to offer more meaningful power optimization options. Specifically, a recent report published by two Google engineers in the IEEE's Computer magazine (see below) indicates that 2006 was the year that the CPU stopped being the majority contributor to server power draw in the search giant's massive datacenters. So as this benchmark gains traction, we'll see more and more makers of hard drives and networking equipment touting new and innovative power optimization features.
Google: What works for mobile and embedded doesn't necessarily work for servers
The authors of the Google report took a broad look at power usage in the datacenter and came away with a few key conclusions. First is the aforementioned fact that the share of total server power consumption for which the CPU is responsible dropped from over 55 percent in 2005 to under 45 percent in 2007. This means that the CPU, even when running at peak performance, may no longer be responsible for the majority of a server's power draw.
The other parts of the server, especially the hard drive and RAM, are now major power sinks. Moving off of the individual server box, networking switches are also a large contributor to datacenter power draw.
The problem with these non-CPU parts isn't so much that they draw a lot of power, but that their power draw doesn't really scale well with their usage level. In other words, a hard drive running at full tilt draws almost the same amount of power as it does running at 50 percent. Unlike modern CPUs, which use voltage and frequency scaling to offer a range of power/performance states, other system components have a much more limited ability to scale power with usage.
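One way to put a number on that limited ability is a component's "dynamic range": the fraction of its peak power it sheds when idle. A quick sketch with hypothetical (not measured) power curves makes the contrast plain:

```python
# Hypothetical component power curves: watts drawn at a given
# utilization (0.0 to 1.0). The numbers are illustrative, not measured.

def cpu_watts(u):
    # Voltage/frequency scaling lets a CPU shed most of its power at low load.
    return 15 + 85 * u   # 15 W idle, 100 W peak

def disk_watts(u):
    # A spinning hard drive draws nearly constant power while awake.
    return 9 + 1 * u     # 9 W idle, 10 W peak

def dynamic_range(curve):
    """Fraction of peak power shed when dropping from full load to idle."""
    return 1 - curve(0.0) / curve(1.0)

print(f"CPU sheds  {dynamic_range(cpu_watts):.0%} of its peak power at idle")
print(f"Disk sheds {dynamic_range(disk_watts):.0%} of its peak power at idle")
```

Under these made-up curves, the CPU sheds 85 percent of its peak draw at idle while the disk sheds only 10 percent, which is the asymmetry Google is complaining about.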
Most non-CPU devices approach power optimization by offering a sleep state, in which the device is inactive but can be awakened. While sleep states are fine for embedded and mobile devices, which can find plenty of opportunities to spin down the hard drive without seriously affecting the user experience, Google's profiles of server utilization indicate that these sorts of sleep-based power optimizations don't work well in the datacenter.
Google's data indicates that servers spend most of their time operating at between 10 and 50 percent utilization. So the components in these servers rarely have an opportunity to idle in an inactive sleep state. And even if they were idle, the latency penalty for waking a hard drive from sleep is too large to make sleep a viable option.
What's needed, the authors argue, are more active low-power states in hard drives, RAM, and networking equipment. Instead of just the typical "awake" and "asleep" modes, which are fine for mobile and embedded, component makers and networking equipment vendors must figure out how to offer a variety of intermediate power states that let servers dial back performance in exchange for less power.
Google estimates that if servers could achieve power efficiencies from 60 to 90 percent in the typical operating range (i.e., 10 to 20 percent utilization), then overall datacenter power usage (including secondary usage like cooling) could be cut in half.
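That "cut in half" figure is easy to sanity-check with a back-of-envelope model. The numbers below are invented (a 200W server that idles at 65 percent of peak, a flat cooling multiplier, a uniform spread over the 10-50 percent utilization range), and a perfectly energy-proportional server is an idealized upper bound rather than anything Google proposes, but the savings land in the same ballpark:

```python
# Back-of-envelope illustration of Google's "cut in half" estimate,
# using invented numbers. Servers are assumed to spend their time
# spread evenly between 10 and 50 percent utilization.

utilization_samples = [0.1, 0.2, 0.3, 0.4, 0.5]  # typical operating range

def today_watts(u):
    # Hypothetical current server: draws 65% of its 200 W peak even at idle.
    return 200 * (0.65 + 0.35 * u)

def proportional_watts(u):
    # Idealized energy-proportional server: power tracks utilization exactly.
    return 200 * u

def avg_power(curve):
    return sum(curve(u) for u in utilization_samples) / len(utilization_samples)

cooling_factor = 1.5  # hypothetical: every IT watt costs 0.5 W of cooling
today = avg_power(today_watts) * cooling_factor
ideal = avg_power(proportional_watts) * cooling_factor
print(f"savings: {1 - ideal / today:.0%}")  # → savings: 60%
```

The cooling multiplier cancels out of the ratio, which is the point: make the servers proportional and the secondary usage shrinks with them.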
I suspect that the advent of the SPECpower family of benchmarks will help push the industry toward this goal, because server makers like IBM, HP, and Dell who want to be able to tout their products' power efficiency in the 10-50 percent utilization range will start demanding that non-CPU components do the kind of dynamic power optimization that we've all come to expect from microprocessors.
It shouldn't come as a surprise to anyone familiar with Apple's history of high margins that Apple has another very profitable product in the iPod touch. According to iSuppli's latest cost-analysis teardown, Apple stands to make a profit of just under 100 percent (92.9 percent) on its highest-tech iPod. The cost, $155.04, is a cumulative total of manufacturing, assembly, and test expenses, plus parts costs. It does not, however, take into account:
[…] software, intellectual property, accessories and packaging. The BOM figure also does not include research and development costs, because such data cannot be derived from a teardown and component analysis.
Component costs fluctuate, so iSuppli's estimates are accurate as of the time of research (and not necessarily this very moment), but I can't imagine fluctuation would cause that much of a discrepancy right now.
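For the curious, the 92.9 percent figure falls out of simple arithmetic, assuming it's measured against the 8GB model's $299 retail price (my assumption; the retail price isn't stated in iSuppli's note):

```python
# Reconstructing iSuppli's margin figure: hardware cost vs. retail price.
# The $299 retail price for the 8GB iPod touch is an assumption here;
# the $155.04 cost excludes software, IP, packaging, and R&D.

retail_price = 299.00
hardware_cost = 155.04   # manufacturing, assembly, test, and parts

margin = (retail_price - hardware_cost) / hardware_cost
print(f"{margin:.1%}")   # profit relative to hardware cost → 92.9%
```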
As expected, the majority of the expenses involved in the creation of an iPod touch come from the flash memory (8GB, $40.00) and the display ($21.99). The other big-ticket item is the "Touchscreen Assembly and Integration" ($21.70). The battery is an 800mAh unit that costs a mere $2.35—something that will certainly annoy you when it comes time to replace it for $45.
iSuppli estimates that, like human and chimp DNA, the iPhone and iPod touch are only 90 percent similar:
However, the iPod touch’s design differs from the iPhone in that it is uniquely optimized to meet its form-factor and cost requirements. To cut space usage, the iPod touch makes use of some advanced packaging for its components not seen in the iPhone, including 0201 diodes and passive components in 01005 enclosures on the touch’s WLAN module.
Other differences include a single PCB in the touch versus dual in the iPhone, and different WLAN components. Add these to what we already knew—the iPod touch has no Bluetooth and no microphone—and it is almost a surprise that they are only 10 percent different.
A handful of disgruntled (and, in a couple of cases, soon-to-be-ex-) Blockbuster customers wrote in this morning to point out that the video rental giant has hiked the prices for its Total Access mail-order rental plan. The biggest change comes to the highest tier, the Total Access Premium plan, which allows customers to have three movies out at a time and get unlimited in-store exchanges. That plan is going from $24.99 to a whopping $34.99 per month. In addition, the Total Access Premium plan is no longer listed as an option on the company's Total Access signup page.
The company announced the changes in an e-mail sent to subscribers yesterday. Telling customers that the hikes are necessary to "continue to bring you the unmatched convenience of both online and in-store DVD rentals," the company plans to make the price increase effective at the next billing cycle.
The fee hikes for the other rental tiers aren't as dramatic, but are still substantial. Two-at-a-time Total Access Premium customers will see the cost of their service go from $21.99 to $29.99 per month. Those with plans that allow limited in-store exchanges will see a $2.00 per month price hike.
Blockbuster's rate hikes, especially those directed at the Total Access Premium tier, are likely to send some of its customers running to the arms of Netflix—and that's fine with Blockbuster. The rental chain's disappointing earnings last quarter prompted it to reexamine Total Access (among other things), concluding that it was time to shed some of the more unprofitable customers. As a result, the company is more than happy to lose customers like the man who managed to get his rental price down to 36¢ per DVD by returning 200 mail rentals to his local store for a free movie.
Blockbuster began making changes to Total Access during the third quarter. Customers who wanted to continue unlimited in-store rentals were forced to pay an additional $7 per month for the Total Access Premium plan, meaning that the heaviest users have seen the price of the service nearly double in the last six months. In addition, the company cut advertising for the service and stopped promoting it in store.
Total Access was created as a response to Netflix, and Blockbuster aggressively pursued Netflix customers by pricing its plans competitively and offering what Netflix can't: in-store rentals. The payoff—in terms of raw subscriber numbers—was good. By last spring, Blockbuster had managed to make up significant ground on its rival. But the strategy was arguably a financial disaster, with mail-order growth cannibalizing in-store traffic.
The latest round of price hikes are another indication that Blockbuster is willing to cede much of the mail order market to Netflix and concentrate on other efforts, including downloads and retail. The company is playing a significant part in Jackass 2.5's efforts to do an end-run around the box office, streaming Johnny Knoxville and the gang's latest antics for free online through the end of the year. And for Total Access Premium customers so upset by the price hikes that they cancel their plans, Blockbuster's unspoken reply is short, but sweet: "good riddance."
Rather than playing nice with Hollywood, China is taking its ball and going home, according to several execs in the motion picture industry. China has allegedly stopped granting permission for films to be shown in the country starting early next year, although the government has not acknowledged the ban. If true, the move could be a huge step backwards for both western filmmakers and China, as both stand to lose money from it.
"It is increasingly clear that China may have instituted a block on the import of American films into their country," said MPAA CEO Dan Glickman in a statement. "Although we have not received official confirmation of such a ban from the Chinese Government or China Film, the indicators are strong that our information is correct. If such action has been taken, or is in the process of being taken, it would represent an enormous step backwards in terms of China's efforts to develop a strong and most importantly, legitimate film exhibition and distribution market."
"My understanding is that there is a suspension, which has happened in the past," US Commerce Secretary Carlos Gutierrez told the Hollywood Reporter.
China already has strict rules on which movies can be shown in Chinese cinemas, and as with the Internet there, each film must go through rigorous censorship before making its way to the masses. Only a couple dozen foreign films are let through every year—naturally, foreign filmmakers would prefer that number to be higher, especially considering that their handful of films tends to account for almost half of China's box office revenues every year.
Some speculate that the decision to ban American films is a reaction to US complaints to the World Trade Organization about China. US Trade Representative Susan Schwab first brought up China's "inadequate protection of intellectual property rights" in April. In August, the USTR acknowledged that China had made considerable progress in strengthening its IP laws, but that there was still rampant commercial piracy within the country.
The Chinese government has never been fond of criticism, so it's not outside the realm of possibility that the alleged ban was instituted as retaliation against the US for lodging the complaint. A quote from China's National Copyright Administration seems to support this: "I don't deny that IPR infringement and piracy occurs in the Chinese market, but that doesn't mean the United States is founded to file complaints against China in the WTO," spokesperson Wang Ziqiang told the New York Times.
To its credit, it does seem like China has been making a bigger effort to squash copyright infringement, as evidenced by today's ruling by the Beijing court against Yahoo! China. The court found that the search engine—owned in part by the US-based Yahoo! Inc.—was facilitating mass copyright infringement by deep-linking directly to copyrighted music files. The International Federation of the Phonographic Industry (IFPI) claims that 99 percent of all music downloaded in China is pirated, and applauded the move by the court. "The ruling against Yahoo China is extremely significant in clarifying copyright rules for internet music services in China," said IFPI CEO John Kennedy in a statement. "By confirming that Yahoo China's service violates copyright under new Chinese laws, the Beijing Court has effectively set the standard for internet companies throughout the country."
There is no doubt that this holiday season, one of the hottest gifts is the Nintendo Wii. Nintendo of America's president, Reggie Fils-Aime, warned gamers months ago that supplies would be short and tried to alleviate the problem with a voucher program through GameStop. It's clear this won't be enough to meet demand, causing Nintendo to strongly urge retailers not to take advantage of the shortage by forcing consumers to buy bundles of software and accessories. At Opposable Thumbs, we ran a post about how to deal with these bundles and asked readers to contact us with their horror stories. One worker followed up quickly with his own twist on holiday price gouging: instead of selling the systems with bundles, a chain of Illinois/Missouri gaming stores called Slackers is simply dumping its stock onto eBay for the Buy It Now price of $399.99, an almost $150 markup.
"In the past year, none of the 12 [Slackers locations] have sold any Wiis except for a one-time promotional deal, where we did force customers to buy a game with it," the employee told Ars Technica. "The real crime is that we get Wii shipments regularly. In fact, right now we have about 20, but none of them make it to the store front. They all get put on the store's eBay site at a minimum $499.99 buying price."
Our source then told us that the price has since been lowered to $399.99 (they weren't moving at $499), and sure enough, there are three Wiis available through Slackers' eBay storefront at $399.99. Looking back in the store's history, one can find other Wii sales in its feedback, with the auctions advertising "NEW WITH GAME." The game, of course, being the bundled Wii Sports.
Ars Technica contacted the St. Louis Slackers location for confirmation of the practice. When asked if the allegations were true, there was a long silence. "That is something you'll have to speak with the owner about," we were told. We have since attempted to contact Slackers' owner multiple times, but have been unsuccessful. Nintendo has also not responded to our requests for comment on this story.
There are a couple of reasons Nintendo—and every other console manufacturer—is so strict about keeping a single price point. Raising the price in this way hurts Nintendo's ability to position the Wii as the low-cost system, and it also cuts Nintendo out of a share in the higher profits. At the same time, Nintendo keeps retailers from offering the systems as low-priced loss leaders, dropping the console below the suggested retail price to get customers into the store. Nintendo wants to be sure it controls the pricing, and Fils-Aime has talked in the past about the power of the Wii's low price.
While dropping systems on eBay might seem like a quick and easy way for retailers like Slackers to make money, raising the price on Nintendo's system for one's own profit is a surefire way to get cut off from future shipments of games, systems, and accessories. "We don't have to remind retailers of the strength we have right now," Fils-Aime said in a recent interview with Reuters. "We are simply making an observation and that reinforces our point quite nicely with retailers."
Nintendo does indeed have the strength right now, and our source also told us he has reported his employer's practices to Nintendo. Unfortunately, it's unlikely that Slackers' customers hungry for the system are aware that Wiis are apparently being stockpiled in the back room for eBay buyers willing to pay $400, rather than sitting on store shelves at Nintendo's MSRP of $249.
The holidays: they can be stressful for everyone, even local TV news producers who need to fill that two-minute gap between the waterskiing squirrel story and the house fire in the next state that injured no one. You could assign "reporters" to dig up some local "news" of actual community value, but that takes time and money, and frankly, who wants to watch anything that might make them think at 10 PM? Much easier just to let industry send the news in premade packets. This Christmas season, the RIAA has a present for local news divisions: a video news release about music piracy, complete with exhortations to buy iTunes gift cards and cell phone ringtones.
An anonymous reader, who claims to work for the company that distributed the video package, has posted the alleged video news release online. The video is shockingly bad—the narrator talks too slowly, the pacing is poor, and the "fly-in" bullet points look like they were produced in Windows Movie Maker.
Still, for the first half of the clip, it's generally accurate information about recent busts at duplication facilities. And then come the bullet points. "Watch for compilation CDs that could only exist in the dreams of a music fan," viewers are warned, a statement that only serves to highlight the fact that pirates do a better job of providing what music lovers want than the industry does. Whoops.
Beware the bullets
Then there's this gem: "Audio quality on pirated CDs is usually atrocious." Someone alert the RIAA to how digital copying actually works, please.
From there, the clip moves into straight-ahead advertising. "Make sure the music you buy is legitimate," says the narrator. How? Simple! Just use the "cool, innovative ways to get your favorite music" that the industry offers. The video then shows iTunes digital album gift cards and a cell phone, for which you can buy Christmas-themed ringtones.
The production values of the video initially led us to suspect it of being a fake, but the leaker has provided Ars with a copy of an alleged press advisory that went out promoting the clip. It's directed to "news assignment desk/consumer reporters," who are more likely to use the footage and basic "storyline" themselves than to simply run the unedited report. The RIAA has not yet responded to our request for authentication of the video.
Lending credence to the video, though, is the fact that it follows a recent RIAA press release almost exactly. Though that release says nothing about a video news feed, it does mention that the RIAA is launching a "holiday anti-piracy campaign" that "offers shoppers innovative gift ideas and tips for avoiding pirate product." The campaign is set to focus on 15 cities with "exceptionally high piracy rates" (every major US city, apparently).
For an industry already the target of so much consumer suspicion, feeding misleading claims and self-serving footage to ostensibly objective "news" outlets just doesn't seem like a great idea. Yes, piracy is bad; yes, we should shut down illegal commercial stamping operations. But trying to turn the news into such an explicit commercial? Unhelpful.
There are so many major, seismic shifts in the computing industry happening at the 32nm process node that it's hard for me to get my mind around it all. I've been covering the story of x86's journey into the ultramobile and embedded space, a journey that starts at 45nm and really gets interesting at 32nm, but that tale is only one thread in a much larger epic that's emerging bit by bit in one press release and news story after another.
For instance, take this week's announcement that Toshiba is now joining the parade of semiconductor companies who've looked at the $4 billion or higher cost of a 32nm fab and decided against going it alone. The Japanese semiconductor giant will be joining IBM's fab alliance at the 32nm node, bumping the number of alliance members (excluding IBM) up to six. IBM and Toshiba had previously been cooperating (along with Sony) on research for the 32nm node, so the pair's newly announced agreement to join forces on 32nm bulk CMOS fabrication is really just an extension of their previous research partnership. Nonetheless, it's an agreement that takes one more major party out of the running at 32nm.
For the real scoop on this IBM-Toshiba announcement and what it means for the semiconductor industry, there's no way that I can top Dave Manners's blog entry on the topic, so I won't even try. I do, however, want to zoom in on one fascinating part of the post, which explains quite a bit of what's driving Intel and others into commercial competition with the OLPC Project.
Finally, for everyone with a 32nm fab there's going to be a new problem. If 450mm wafers are adopted, and the companies which buy most of the world's manufacturing equipment are pushing hard for 450mm manufacturing equipment to be developed, then there's the problem that only seven fabs will be needed to make the world's total demand for transistors.
Manners develops this point in terms of its implications for fab equipment buyers, but I want to take it in a different direction and dwell for a moment on what it means for an Intel or an IBM if only seven fabs can meet the world's (presumably current) demand for transistors. (I'm not sure where Manners got this number, but it sounds feasible and I trust that it's legit.)
If the combination of a 32nm feature size and a 450mm wafer size increases fab output to the point that only seven fabs are needed to meet the total world demand for transistors, there's only one way for the semi industry to see growth in such a scenario: increase demand. This is why Intel would like to see every school-age child, farmer, factory worker, day laborer, and so on from San Francisco to Siberia suddenly discover a pressing need for lots and lots of transistors.
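The arithmetic behind that shrinking fab count is straightforward, at least under idealized scaling assumptions (perfect shrinks, no yield or wafer-edge losses, which real fabs never achieve):

```python
# Rough arithmetic on why 32nm-plus-450mm fab output balloons.
# Assumes ideal scaling; real yields and edge losses reduce the gain.

wafer_area_ratio = (450 / 300) ** 2   # 450mm vs. today's 300mm wafers
density_ratio = (45 / 32) ** 2        # ideal 45nm -> 32nm feature shrink

transistors_per_wafer_gain = wafer_area_ratio * density_ratio
print(round(wafer_area_ratio, 2))            # → 2.25 (wafer area)
print(round(density_ratio, 2))               # → 1.98 (transistor density)
print(round(transistors_per_wafer_gain, 1))  # → 4.4 (transistors per wafer)
```

Under those assumptions, each wafer carries roughly four and a half times as many transistors as a 300mm wafer at 45nm, so the same worldwide transistor demand needs far fewer fabs to satisfy it.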
The vast bulk of the history of computing up until the present day has been about moving semiconductors from the server room to the business desktop, then from the business desktop to the first class cabin, and then from first class into coach. At this point, everyone who can afford a cheap plane ticket or a pair of Nikes is already wired to the gills with transistors, and the major bottlenecks in getting those folks to buy even more of them are mostly out of Intel's control (i.e., screen size/quality, battery life, connectivity, usability).
To see real growth in the coming decades, Intel, AMD, and the rest of the semi industry must focus on markets where an iPhone would cost a month's income, and then on markets where it would cost a year's income. This is the reality behind Intel's and AMD's interest in the device category that OLPC represents. It's the reason why Intel has teams of anthropologists running around rural China, and why AMD launched its 50×15 plan.
So when you're trying to imagine how the computing industry will look in ten to fifteen years, you have to forget about the BlackBerry set almost entirely. The bulk of the market will shift away from those people, who will remain a very profitable niche, toward people who aren't currently all that wired.
Or, to put it another way, when I was out in rural San Salvador two years ago, every cinderblock home had a transistor radio and a color TV. If I make that trip again in 10 years, I'll find that that color TV has been replaced by a device that has at least the horsepower, connectivity, and functionality of the MacBook Pro from which I'm filing this report. And depending on how "green" that ubiquitous, post-32nm computer is, we're either headed for a networked Nirvana or an ecological nightmare. Hence the focus from semi companies on "green technology," a focus that goes hand in hand with selling transistors to the great, unwired masses.
Valve, feeling Team Fortress 2 just wasn't good enough, has decided to push out an update for The Orange Box's multiplayer title. This is likely the patch we heard about a week ago.
As the patch contains a total of 34 bullet-point fixes, the full details are far too long to reproduce here, but the major fixes include:
- Sudden Death mode is now a server option (a convar) and defaults to OFF
- The Medic's Medigun now charges at an increased rate during Setup time, to remove the need for self-damage grinding
- Fixed exploit where the Medigun UberCharge wouldn't drain if you switched weapons
- Added effects to players when they earn an achievement, visible to other players nearby
- On Dustbowl, fixed gaps in stage gates that allowed snipers to kill defenders during setup
- On Dustbowl, prevented Demomen from launching grenades into the stage three alleys while standing at the final cap point
After trolling the ArsClan message boards last night to take the temperature of one segment of the Team Fortress 2 community, it would seem that Sudden Death defaulting to off is making a lot of people unhappy, while a minority of gamers are quite pleased with it. The Medigun charge buff seems to rank #2 on the community's list, though I predict that nobody will miss the obnoxious self-exploding during setup.
The Dustbowl exploits won't be missed, and I have to give it to Valve for getting those fixed so quickly.
I haven't seen the graphics for someone else earning an achievement, but I imagine "gratz" and "woot" are going to become a part of the Team Fortress 2 lexicon very soon, if they aren't already.
The Protocol Freedom Information Foundation (PFIF), a nonprofit organization that is affiliated with the Software Freedom Law Center, inked an agreement with Microsoft to obtain protocol documentation under the terms established by the European Union's 2004 antitrust ruling. The documentation will benefit projects like Samba that seek to implement support for Microsoft's protocols in order to bring Windows interoperability to open source operating systems.
Microsoft finally achieved full compliance with the antitrust ruling earlier this year after a court rejected the company's appeal. In addition to significant fines, the ruling required Microsoft to make protocol documentation available to competitors.
The terms of the protocol documentation availability and the quality of the documentation were contentious issues. After much bickering and discussion, Microsoft eventually proposed terms that were deemed acceptable by the EC. Developers can obtain the full documentation by paying a one-time licensing fee of €10,000.
The PFIF has paid the fee to Microsoft, ensuring that the Samba developers will have access to the protocol specifications. The developers will have to sign nondisclosure agreements in order to gain access to the documentation.
Although Microsoft will also supply a complete list of patents held by the company which pertain to the protocol, the company will not provide licenses for those patents. Under the terms of the agreement, Microsoft is barred from asserting patents that aren't on the list against any implementation that is based on the purchased documentation. The availability of the patent list will enable the developers to create an implementation that does not infringe on Microsoft's intellectual property. The end result is that Samba source code will be unencumbered and entirely suitable for downstream distribution. This differs significantly from the agreement established between Microsoft and Novell, which only provided patent protection to a select group of downstream entities.
"We are very pleased to be able to get access to the technical information necessary to continue to develop Samba as a Free Software project," said Samba creator Andrew Tridgell in a statement. "Although we were disappointed the decision did not address the issue of patent claims over the protocols, it was a great achievement for the European Commission and for enforcement of antitrust laws in Europe. The agreement allows us to keep Samba up to date with recent changes in Microsoft Windows, and also helps other Free Software projects that need to interoperate with Windows."
The only aspect of the deal that could potentially incite controversy is the use of NDAs. Although NDAs are not unknown in certain kinds of open-source software development (particularly hardware driver programming), a small handful of vocal extremists vehemently condemn developers for signing such agreements. The deal will also likely receive some criticism from factions that categorically oppose creating open source implementations of Microsoft technologies.
Despite the minor concerns that some will likely express about the NDAs, this deal between the PFIF and Microsoft is of significant value to the Samba community and will lead to tangible improvements in network interoperability between Windows and open source operating systems. The agreement also indicates that at least one of the antitrust remedies imposed on Microsoft by the EC has had its intended result.