One of the incredibly useful things about Mac OS X in general is the potential for integration between applications. Type the name of an Address Book contact in Gus Mueller's VoodooPad and it'll link the name, offering a useful contextual menu. Collect icons in CandyBar? Right-click one and you can set it as your iChat avatar, with the option of applying any of Leopard's new image effects. And let's not even get started on the power that AppleScript and its mortal-friendly Automator enable for moving and manipulating data between applications.
It is with this integration in mind that some new features in a couple of Mac OS X e-mail clients deserve a highlight, as they're fairly game-changing developments for those who have to work with mail on a regular basis. First is the discovery of Leopard Mail's support for message URLs, explored in-depth by John Gruber at Daring Fireball. Though the new feature is strangely undocumented by Apple, users have discovered that Mail now supports a system of URLs (yes, URLs can do more than point to porn) that let you link to specific messages from other applications. For example, you could include links to a couple of Mail messages from coworkers alongside notes, pictures, and web links in OmniOutliner or Yojimbo documents. This opens up a whole new productivity world, allowing you to bring your e-mail into other applications that aren't specifically designed to integrate with Leopard's Mail.
To help make it easy for users to harvest these message links (as of 10.5.1, Mail doesn't provide an option, and not all applications create the proper Mail message URL from a simple drag and drop yet), Gruber includes the code for a simple AppleScript at the end of his post. Save that script with Script Editor (found in /Applications/AppleScript/) and call it via any number of methods, such as Mac OS X's own AppleScript menubar item, Red Sweater's FastScripts, or launchers like Quicksilver and LaunchBar. The newest Leopard version of indev software's MailTags plug-in for Mail also provides a dedicated menu option for copying a message URL.
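For applications that don't yet generate these links on a drag and drop, the scheme can also be reproduced by hand. Below is a minimal Python sketch of the idea, assuming (since Apple hasn't documented the scheme) that a Mail message URL is simply `message://` followed by the percent-encoded Message-ID header of the e-mail, angle brackets included; the function name and the sample Message-ID are made up for illustration.

```python
import urllib.parse

def mail_message_url(message_id: str) -> str:
    """Build a Leopard Mail message:// URL from an RFC 2822 Message-ID.

    Assumption: the URL is "message://" plus the percent-encoded
    Message-ID, wrapped in its original angle brackets, as observed
    in links that Mail itself produces.
    """
    return "message://" + urllib.parse.quote("<" + message_id + ">", safe="")

# Hypothetical Message-ID for illustration:
print(mail_message_url("ABC123@example.com"))
# → message://%3CABC123%40example.com%3E
```

Pasting the resulting URL into any Leopard application that renders links should, assuming the encoding above matches what Mail expects, open the corresponding message in Mail when clicked.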
If this integration has your productivity gears turning, but Gmail is your client of choice, Mailplane could offer a nice compromise. As a desktop Gmail browser that allows for things like drag-and-drop file attachment and even an iPhoto plug-in for e-mailing photos, Mailplane is more or less a bridge between the convenience of webmail and the integrated power of desktop clients.
New in the most recent private betas of Mailplane (1.55.4 and above) is a similar URL system for Gmail messages, which appears to work on both Leopard and Tiger. Accessible via an Edit > Copy as Mailplane URL command, the feature lets users paste custom mailplane:// URLs into other applications to bring mail out of Gmail and into their productivity workflows. Remember, though, that Mailplane is still a browser for Gmail, albeit with the aforementioned modifications and other useful things like Growl notifications and support for multiple accounts (including Google Apps). Since it isn't an offline mail client, you'll still need to be online for a Mailplane URL to connect to its corresponding Gmail message.
Still, these new message URL features in two useful Mac e-mail clients will likely see some official integration love from other third-party apps in the near future. Aside from DIY AppleScripts, apps like Yojimbo and TextMate can only benefit from being able to include e-mail in the productivity mix. Knock knock, third parties—how's about it?
Google's Chinese business has been consistently questioned and criticized, but now a Chinese company has taken issue simply with its name.
Beijing Guge Sci-Tech is suing Guge, or Google China, claiming that the Internet search giant is trampling on its good and, perhaps most important, registered Chinese Mandarin business name. Guge Sci-Tech registered its name in 2006, a few months before Google did. Now the tech company wants Google to change the name of its Chinese branch and pay an undisclosed sum to cover all its legal fees.
Beijing Guge Sci-Tech registered its name at the Beijing Municipal Industrial and Commercial Bureau on April 19, 2006, and Google followed with registering "Guge" on November 24 that same year. This similarity in names, Beijing Guge Sci-Tech argues, has confused the public and damaged its business.
However, if Google was considering the use of the word before Beijing Guge Sci-Tech registered it, that could work in Google's favor, as it would show the company registered its name in good faith. And, for all we know, the China-based Guge may be little more than a trademark registration or a cybersquat, as information on the company is extremely hard to come by. Google China suggests that Guge Sci-Tech is indeed looking for an easy payout, perhaps having picked up on Google's plans by paying attention to Western media. Everyone knew Google would be changing its name in China, as "Goo-Gol" means "old" or "lame dog."
Also at issue between the companies is the definition of guge, which is not a normal Chinese word. Google says it's a combination of Chinese characters that mean "valley" and "song"—a reference to Google's Silicon Valley ties. Beijing Guge Sci-Tech disagrees, stating the word means "a cuckoo singing in the spring, or the sound of grain singing during the autumn harvest time."
At FOSSCamp in October, skilled eye-candy expert Mirco Müller (also known as MacSlow) hosted a session about using OpenGL in GTK to bring richer user interfaces to desktop Linux applications. Building on the technologies that he presented at FOSSCamp, Müller recently published a blog entry that demonstrates his latest impressive experiments with OpenGL, GTK, and offscreen rendering.
Müller is currently developing the GDM Face Browser, a new login manager for Ubuntu that will include enhanced visual effects and smoothly animated transitions. To implement the face browser, he will need to be able to seamlessly combine OpenGL and conventional GTK widgets. Existing canvas libraries like Pigment and Clutter are certainly robust options for OpenGL development, but they do not offer support for the kind of integration that he envisions.
The solution, says Müller, is to use offscreen rendering and the texture_from_pixmap OpenGL extension. In his experimental demo programs, he loads GTK widgets from Glade user interface files, renders them into offscreen pixmaps, and then uses texture_from_pixmap to display the rendered widgets in a GTK/GLExt drawing area, where they can be transformed and manipulated with OpenGL. Müller has created several demo videos that show how this technique can be used to apply animated transitions and add reflections to widgets. The visual effects implemented by Müller with GTK and OpenGL do not require the presence of a compositing window manager.
We talked to Müller to get some additional insight into the challenges and benefits of incorporating OpenGL into desktop applications. One of the major deficiencies of the current approach, he says, is that users will not be able to interact with GTK widgets while they are being animated with OpenGL—a limitation that stems from a lack of support for input redirection at the level of the toolkit.
"Interaction will only be possible at the final/original position of the widget," Müller told us, "since gtk+ has no knowledge of the animation/transformation taking place. I consider it to be much work to get clean input-redirection working in gtk+. There might be some ways to achieve it using hacks or work-arounds, but that should be avoided."
Eye candy or UI improvement?
Although some critics might be inclined to prematurely deride Müller's work as indulgent eye-candy, he primarily envisions developers adopting OpenGL integration to tangibly improve the user experience by increasing usability. "I would like to see [OpenGL] being used in applications for transition effects," he says. "We can right now improve the visual clues for users. By that I mean the UI could better inform them what's going on. Widgets [shouldn't] just pop up or vanish in an instant, but gradually slide or fade in. These transitions don't have to take a lot of time. As a rule of thumb half a second would be sufficient."
In particular, Müller would like to experiment with adding animated transition effects to the GTK notebook and expander widgets. He also has some creative ideas for applying animations to the widget focus rectangle in order to make its movement more visible to the user. Müller also discusses some applications that would benefit from OpenGL-based transitions. In programs like the Totem video player, he says, the playback controls in fullscreen mode could slide in and out rather than just appearing and disappearing. Alluding to the webcam demo that he made for FOSSCamp, he also points out the potential for incorporating OpenGL effects into video chat programs like Ekiga. Müller has also long advocated using visual effects to create richer and more intuitive user interfaces for file and photo management software—ideas that he has put into practice with his brilliant LowFat image viewer.
"The kind of effects possible if you can render everything into a texture, map it to a rectangle or mesh and then do further manipulations with GL (or shaders) are next to limitless," says Müller. "Just look at what Compiz allows you to do to windows now. Imagine that on the widget-level."
We also asked Müller to explain the performance implications of using OpenGL in GTK applications. "The memory-overhead is directly linked to the size of the window, since all the rendering has to happen in an OpenGL-context filling the whole window. The bigger the window, the bigger the needed amount of video memory," Müller explains. "The load caused on the system is directly linked to the animation-refresh one chooses. 60Hz would be super smooth and very slick. But that's a bit of an overkill in most cases. One still gets good results from only 20Hz."
There are still some performance issues with texture_from_pixmap that are actively being resolved. "Due to some issues in Xorg (or rather DRI) there are still too many copy-operations going on behind the scenes for GLX_EXT_texture_from_pixmap," says Müller. "There are also locking issues and a couple of other things. At the moment I cannot name them all with exact details. But more importantly is the fact that there's currently work being done—in the form of DRI2—by Kristian Hoegsberg (Red Hat) to fix these very issues on Xorg. I cannot stress enough how important this DRI2 work is!"
Although using OpenGL incurs additional memory consumption and system load, Müller says that the impact is small enough to make his current approach viable for projects like the Ubuntu Face Browser.
Although individual developers can use Müller's boilerplate code to add OpenGL integration to their own GTK programs, Müller suspects that support for this technique will not be included directly in GTK any time in the near future, which will make it harder for developers to adopt. "Right now it is all happening in application-level code and not inside gtk+," Müller explains. "Since I use GLX_EXT_texture_from_pixmap to achieve efficient texture-mapping out of the widgets' pixmap it is currently a X11-only solution. Therefore I imagine they might want to see a more platform-independent solution to this first." The GTK offscreen rendering feature that Müller uses also currently lacks cross-platform support.
Despite the challenges and limitations, Müller's creative work opens the door for some profoundly intriguing user interface enhancements in GTK and GNOME applications. "There are more things possible than you think," says Müller in some concluding remarks for our readers. "Don't have doubts, embrace what's available. X11 and gtk+ are pretty flexible. People who just don't seem motivated to explore ideas and test their limits (or the limits of the framework), should remember that this is Open Source. It lives and thrives only when people step up and get involved. Just f*****g do it!"
The NPD Group has released its sales figures for November, and once again the sheer number of games and systems consumers are buying this year is notable. November saw a 52 percent gain in gaming sales over last year, to $2.63 billion. "If the year had ended on December 1st, 2007 would be up 5 percent versus last year," said NPD analyst Anita Frazier in a statement. "With the biggest month of the year yet to go, total industry sales are on track to achieve between $18 billion and $19 billion in the US."
Let's take a look at what people were buying in November.
The Nintendo DS had an amazing November, with 1.53 million units sold. That makes the Wii's sales of 981,000 units look almost quaint in comparison. To put this in perspective, the Nintendo DS outsold all three of Sony's systems combined in November.
The Wii did move a good amount of software in November; Super Mario Galaxy took the number two slot with 1.12 million units sold, and the always popular Wii Play came in fifth place with 564,000 units sold. The Wii version of Guitar Hero 3 was a popular title, coming in at number eight with 426,000 units sold. The extra controllers are also a large profit center for Nintendo. "4 of the 5 best-selling accessories for the month were Wii controllers," Frazier noted. "The Wii Zapper, which debuted in November, sold 232,000 units. The second-best selling accessory for the month was the PS3 wireless controller at 282,000 units."
The Xbox 360 had its second best month on record, trailing only last December in terms of sales. The system, with its myriad configurations, sold a respectable 770,000 units. Microsoft has proven yet again that 360 owners are a hungry lot; four of the top ten best-selling games were on Microsoft's platform.
Call of Duty 4 took the number one slot overall this month, with 1.57 million units sold on the 360. The 360 version of Assassin's Creed came in at number three with 980,000 units, Mass Effect was number six with 473,000 units, with Halo 3 at number nine with 387,000 units. Multiplatform games perform incredibly well on the 360: Call of Duty 4 on the 360 outsold its PlayStation 3 counterpart by 1.1 million units, and the 360 version of Assassin's Creed outsold the PS3 version by 603,000 units.
"The combination of the price cut and seasonal lift gave the PS3 the biggest October to November sales increase of any hardware platform," Frazier said of the PS3. Unfortunately, it is one of the few positive things to be said about Sony's performance in November. Sony's three systems took the rear of the sales charts, with the PlayStation 3 selling 466,000 units, the PlayStation 2 continuing to sell well with 496,000 units, and the PSP selling 567,000 units of hardware.
Software was a mixed bag. The PS3 versions of Call of Duty 4 and Assassin's Creed came in at the number seven and number ten slots, respectively, but sold far fewer copies on the PS3 than they did on the Xbox 360. Uncharted: Drake's Fortune failed to chart at all, despite being one of the best games of the year.
A few notes about November: Assassin's Creed was the best-selling new IP in gaming history, with Gears of War coming in at number two. Despite mixed reviews, Assassin's Creed did very well at retail. Call of Duty 4 was another monster hit; the game has only been available for one month, but it's already the number four title of the year. The $170, four-player Rock Band sold 382,000 units; the NPD Group notes that sales may pick up as word of mouth spreads.
November was a huge month for both hardware and software sales; December sales should be positively insane.
Geez, when Apple finally gives in to users' demands, it sure can be quiet about it. Yesterday's QuickTime and GarageBand updates brought more than just a plug for that RTSP security flaw that Symantec found. GarageBand 4.1.1 can now create custom ringtones for the iPhone too. Officially-sanctioned ringtones.
It's not like Apple released a PR for this or anything, though; the company quietly slipped in a support doc on the topic after the update went out. The new capability is evidenced by a "Send Ringtone to iTunes" option in GarageBand's Share menu, and about the only thing missing here is support for turning iTunes Store-purchased tracks into ringtones (though iTunes Plus tracks should work just fine). Unfortunately, GarageBand will still refuse to import said protected tracks, presenting a warning about stealing food from the dinner plates of wealthy executives' children. While the iTunes Store ringtone process is definitely slick and convenient, it looks like you'll have to keep paying double the price just to use a song you already own as a ringtone instead of as an iPod track.
Naturally, the process of creating a ringtone from non-iTunes Store songs is exceedingly simple, but the aforementioned support doc elaborates, just in case. Simply select a cycle region of a GarageBand project that's 40 seconds or less in length (which means you don't actually need to chop the project down), then hit the magic button. Prepare for an onslaught of Snow and Heat Miser ringtones, boys and girls.
For those who aren't carrying Apple's smudge-loving phone, the company even linked another support doc on using GarageBand to create custom ringtones for your third-party mobile phone.
On a conference call today, Nintendo of America President Reggie Fils-Aime expressed regret at the hard-to-find nature of the Nintendo Wii, noting that it has been sold out since launch. Agreeing that shortages help "no one," he then detailed some of Nintendo's plans to fight the effects of such high demand.
The first news is that there will be a large push of Nintendo Wiis to store shelves this weekend, with advertising in the circulars of big box retailers. While Wal-Mart doesn't publish a weekly circular, Fils-Aime did say that the retailer would have large amounts of stock in its stores. Get ready for the lines, in other words.
The second plan is to introduce rain check vouchers through GameStop. These vouchers will guarantee you a system before January 29, but to get one, you'll have to prepay for the system in full. Reggie noted there will be "tens of thousands" of these rain checks available, and that we could expect a press release from GameStop explaining the program in greater detail. The vouchers will be sold on December 20 and 21.
During the call Reggie mostly repeated things we knew before: production is at 1.8 million units a month, the shortages won't be over any time soon, and consumers need to be patient—trying to hold out instead of buying at inflated prices from resellers. With Nintendo hardware selling in vast numbers, and the holiday crush just beginning for parents and grandparents looking for the elusive Wii, a voucher program and sales numbers may not be enough to soothe irate consumers. It's been estimated that Nintendo may be losing over $1 billion by not being able to meet the current demand for the system.
J. Craig Venter is famous within the biological community for both his development of the "shotgun" method of genome sequencing and behavior that some view as egomaniacal. In a number of cases, he's partially sequenced his own genome (or his pet's), declared victory over public sequencing efforts, and moved on to other projects, leaving others to finish off the work. In recent years, his focus has shifted to synthetic life—cells directed by simplified genomes engineered to perform useful tasks such as fuel production or drug synthesis. But, in pursuing that goal, he's accumulating a patent portfolio that may inhibit biotech research in general.
The concept of synthetic life may be a bit of a misnomer. Venter's approach has involved characterizing bacteria that have radically reduced genome sizes in order to determine the minimum set of genes necessary to support life in a favorable environment. With that list of genes in hand, researchers could then create DNA encoding them from scratch, using machines that can chemically manufacture short stretches of DNA. Those stretches can be assembled into longer pieces (Venter's coworkers have patented an assembly method), eventually producing a genome that's never actually seen a living organism.
DNA on its own can't accomplish much; it needs to be processed by proteins, supported by a variety of chemical compounds, and separated from the environment by a membrane. All of those could possibly be arranged, but it's far simpler to just take an existing cell, wipe out its genome, and replace it with the synthesized one. Whether this represents synthetic life or an engineered variation of existing life is largely a matter of philosophy and public relations. In either case, rumors have circulated that Venter's group has already achieved this, but are awaiting the publication of a scientific paper before formally announcing it.
If there were any doubts regarding these rumors, patent applications by Venter and his associates should put them to rest. In November, applications were filed, entitled Synthetic genomes and Installation of genomes or partial genomes into cells or cell-like systems, that describe this process in detail. But, in typical patent fashion, they also generalize the process in such a way that it applies to a wide variety of other potentially useful processes. The latter claims a patent on "obtaining a genome that is not within a cell; and introducing the genome into a cell or cell-like system." A further 17 clauses expand that claim to nearly every imaginable form of DNA or membrane-contained space. The former spends five clauses just expanding the claim about DNA to a huge range of differently sized DNA molecules, including just about any size that's convenient for biologists to work with.
The ETC Group, which has been agitating against Venter, suggests that this is an attempt to dominate synthetic life, calling it an attempt to create a "Microbesoft" like monopoly. "It appears that Craig Venter's lawyers have constructed a legal rats' nest of monopoly claims that may entangle the entire field of synthetic biology," ETC's Jim Thomas said in a statement.
The key question will be whether these new applications will survive the more stringent test of obviousness that resulted from a recent Supreme Court decision. Although the patents suggest a few clever twists on work that's already being done in many labs, most of what's described appears to be an extension of commonly utilized lab techniques to organismal genomes (researchers commonly do this work with viral genomes already). In fact, due to the vague phrasing, the patents would actually cover somatic cell nuclear transplant, a technique for creating embryonic stem cells that's been in the news recently. Given the overlap with existing work, it seems doubtful that these patents would survive the barrage of lawyers that universities and the biotech industry could subject them to if those groups run afoul of licensing issues.
This week Harmonix claimed that the reason there is no patch to allow Guitar Hero 3 guitars to work with the PlayStation 3 version of Rock Band is Activision's meddling; now Activision has fired back at the Rock Band developer. "The recent announcement by MTV Games/Viacom's Harmonix division that Activision is blocking Sony from releasing a patch and their plea to enable Rock Band software to work with Guitar Hero hardware paints a very misleading picture," Activision states.
But Activision never says that the allegations of patch-blocking are false, either. "In fact, Harmonix and its parent company MTV Games/Viacom recently declined Activision's offer to reach an agreement that would allow the use of Guitar Hero guitar controllers with Rock Band," the statement continues. "We have been and remain open to discussions with Harmonix and MTV Games/Viacom about the use of our technology in Rock Band. Unfortunately for Rock Band users, in this case Harmonix and MTV Games/Viacom are unwilling to discuss an agreement with Activision."
Reading between the lines, it sounds like Activision asked for a sack of cash and Harmonix said no. While the companies bicker amongst themselves, it's the gamers who suffer; with no first- or third-party Rock Band guitars on store shelves for the PlayStation 3, rhythm gamers were looking forward to using their existing guitars with the game. Unfortunately, it doesn't look like we're any closer to that being a reality.
In the meantime, this may be costing Activision business. "Simply put: GH3 is off my Christmas list," one reader commented yesterday. Another reader noted that Rock Band would have made him more likely to buy Guitar Hero 3: "Activision has eliminated me as a potential customer. I was otherwise unlikely to buy the game, but because of Rock Band and the extra guitar, I would have."
As if there needed to be another reason to be wary of chat rooms geared toward meeting people and having flirtatious, cyber-relations with them, doing so can now put you at increased risk of identity theft. CyberLover.ru, a new site out of Russia, boasts that buyers of its software will be able to trick unsuspecting marks into handing over their personal information. CyberLover.ru's sexy bot can allegedly drum up salacious conversations using 10 different personalities that are so life-like that the victims will hand over their photos, phone numbers, and more at the drop of a negligee. The program can also be tailored towards either gender, and be used to obtain other forms of data, says the company.
Security software company PC Tools warns that the bot can easily be used for malicious purposes. The company said that the program's ability to mimic human behavior to dupe chatters is worrisome, and could readily be used to collect all manner of information. "As a tool that can be used by hackers to conduct identity fraud, CyberLover demonstrates an unprecedented level of social engineering," said PC Tools senior malware analyst Sergei Shevchenko in a statement. "CyberLover has been designed as a bot [robot] that lures victims automatically, without human intervention. If it's spawned in multiple instances on multiple servers, the number of potential victims could be very substantial."
The bot is able to simulate a number of different personalities, ranging from "romantic lover" to—this is not a joke—"sexual predator" (Mmm, I know that one really gets me going on those cold, lonely Friday nights). Once it collects the personal data of whoever it is chatting with, the bot then stores it all and sends it to its owner. PC Tools warns that the bot also lures lonely users to visit a website or blog, "which could in fact be a fake page used to automatically infect visitors with malware."
The creators of CyberLover, however, deny that the bot is intended for anything other than providing lonely chatters with a few thrills. "The program can find no more information than the user is prepared to provide," an employee identified as Alexander told Reuters. "If you have someone who is ready to hand over secret information to the person they are chatting to after having known them for all of five minutes, then in that case a leak of information is possible."
Alexander may have a point, but that doesn't make the data collection any less disconcerting. Regular chatroom-goers should protect themselves by using aliases online and avoiding giving out too much personal information to strangers—no matter how dirty they talk to you.
Google has surely noticed that much of its search traffic is directed to Wikipedia, which regularly has an entry in the top five search results for any particular term. If Google could steer all that traffic toward its own properties instead, and if those properties contained Google ads, and if Google split its revenue with the article creators… well, it's not hard to see why this would start to look pretty good to both Google and content creators, and why such an initiative could ramp up quickly.
Udi Manber, Google's VP of Engineering, announced just such a plan last night, a program that (in his words) will make it easier for those with knowledge to share it with the world. The system is called "Knol"—which refers to a "knowledge unit"—and it will let anyone create, edit, and profit from creating a page packed with information on a specific topic. In other words, Google doesn't just want to link to Wikipedia, it wants to be Wikipedia.
For a company that got its start by bowing at the Altar of the Algorithm, bringing human-created content in-house is the most recent manifestation of a paradigm shift that has been in the works for the last few years now, one that hasn't been happening without controversy. With the announcement of Knol, Google is already inviting questions about whether its reach has now extended too far.
Land of the knols
The basic point behind the knol system is to highlight (and provide incentives for) authors—a direct shot at the anonymity of Wikipedia and other Web 2.0 systems that don't allow experts to stress their own credentials when posting.
Each knol (it's the name of both the pages and the service) is just a web page hosted by Google. It has a special layout, one generated by Google-supplied tools, that includes content, links, and an author biography.
A sample knol page
Each knol is controlled by the author who creates it. While strong community tools for suggesting changes, making comments, and ranking knols will exist, it's up to each knol's author to control the contents of the page.
Google will host the content but will not attempt to edit or verify it, instead trusting that the best knols will naturally rise to the top (a single topic can have multiple knols, each competing for higher placement in Google's search results).
Essentially, Google is offering to let people rebuild Wikipedia, and it seems to be targeting two classes of users: 1) experts who may not all feel welcome in Wikipedia, where their actions carry no special weight, and 2) those who aren't keen on spending their free time contributing to Wikipedia without compensation. While Wikipedia itself is diverse enough to survive, smaller projects like Citizendium could find the going much tougher.
You say you want a revolution? Well…
The Knol project is, in one sense, as nonrevolutionary as they come. Making information pages simple to develop? Ranking those pages? Monetizing those pages? Google itself does all three things already on the web through tools like Blogger, Google Search, and AdSense. Essentially, Google is just rolling out a new set of web page creation tools with a single template to work on.
Google's professed interest in making it easy for people to put information on this thing called "the Internet" might have rung true in 1998, but that simply can't be the reason for Knol in 2007. It's already too easy. Wikipedia makes it simple. So do blogging tools.
Instead, Google wants to mount a direct challenge to various social knowledge sites. Although it won't have an exclusive license to the content created for Knol, and though it will offer Knol pages to be indexed by all search engines, it's clear that Google really wants to be in control of a vast, Wikipedia/Citizendium knowledge store. And it can offer something that Wikipedia, et al., cannot: cash.
AdSense and its discontents
The revenue sharing bit is one of the keys to the whole project. Google is going to let authors choose if they want to include Google ads on their knols. The truly altruistic might say no. Most people will say yes.
And that's where things could get ugly. The lure of filthy lucre is likely to force several changes on the community model of current social knowledge projects. For one, it will break the community-oriented, we're-all-working-on-this-together spirit of sites like Wikipedia. With Knol, we're not in this together; we're in competition. Writing a knol on a popular topic could become a cash cow, as Google promises to split ad revenue with the author.
Many different authors can take a shot at creating a knol on the same topic, which should allow the best pages to claw their way to the top in a sort of survival of the fittest. But the thing about intellectual Darwinism is that it can be vicious, and we expect the same to be true of competition for the top knol spots.
Will Google be the one to police the inevitable claims of plagiarism? Will it do anything when a knol rips off pictures from another knol? What happens when Wikipedia gets ripped off or rewritten? Google is famously loath to intervene manually, but when the company is creating an ecosystem that rewards individuals and puts so much cash on the table, problems are sure to result.
Maybe Google can be evil
The blogosphere reaction has already been electric. Even those likely to give Google the benefit of the doubt when it comes to not being evil are having second thoughts. What possible reason does the company have for moving beyond indexing and into the hosting and control of this sort of content?
Actually, Google has been making these moves for years. Google Book Search, Google Video, and YouTube are only the highest-profile examples of the way that Google has moved far beyond its roots in pointing people to other places on the 'Net.
Social knowledge, as exemplified by the high search placement of Wikipedia articles and the growth of sites like Mahalo, has been high-profile for long enough to earn a spot on the Google strategic radar screen. Despite the idealistic sentiments about ease of knowledge production, Knol looks more like an attempt to kneecap various sites that now command a good chunk of Google's outgoing search result links.
With Google having a vested interest in knols, but also being the main search engine that will index and rank those links, many people already suspect a conflict of interest. While we suspect Google will be careful not to give a special boost to knol results (at the risk of ruining user confidence in its results), others aren't so sure. At the very least, it will create suspicion.
Om Malik argues that this is just "Google using its page rank system to its own benefit. Think of it this way: Google's mysterious Page Rank system is what Internet Explorer was to Microsoft in the late 1990s: a way to control the destiny of others."
TechCrunch wonders if this is "a step too far." Knol "brings the power of Google into a marketplace that is already rich with competition," writes Duncan Riley, "and a marketplace where Google can use its might to crush that competition by favoring pages from Knol over others, on what is the world's most popular search engine."
And Danny Sullivan of Search Engine Land says, "It begins to feel like the knowledge aggregators are going to push out anyone publishing knowledge outside such aggregation systems."
This can't be the reaction that Google was hoping for with its announcement, but it may not matter. The naysayers can do their naysaying, but we suspect that the prospect of cash, combined with the competition for top spots in the Knol hierarchy, will lead to plenty of quality content at a rapid clip. Whether that's a positive development for the web is another question.
Pulse~LINK, one of the many entrants in the wireless HD technology race, has announced a new, ultra-wideband-based chipset that it claims can outdo the competition. According to an independent performance comparison (PDF) conducted by the EE Times and released by Pulse~LINK, the company's UWB implementation, called CWave, delivers sustained close-range performance that's more than 20x higher than its next-closest competitor. Specifically, Pulse~LINK promises between 480Mbps and 890Mbps, depending on transmission range.
Pulse~LINK offers a partial explanation of this performance gap between its own CWave product and its WiMedia-based competitors, claiming that while other manufacturers chose to use the PC-centric wireless USB (W-USB) protocol and moved away from the goal of offering HD content over a wireless connection, Pulse~LINK stayed focused exclusively on wireless HD. If successful, the CWave chipset could find a home in consumer electronics, enabling an HD DVD or Blu-ray player to wirelessly transmit a full-HD signal to a television or display located in another room of the home.
According to the performance comparison, the CWave is capable of delivering up to 890Mbps of throughput at very close range, dropping to around 480Mbps at a distance of eight feet. That's still much faster than the study's reference wired USB transfer rate of ~160Mbps, but the CWave's speed continued to drop rapidly as range increased until the two devices were approximately 12 feet apart. At that point, Pulse~LINK's chipset maintained a stable transfer rate just below 120Mbps all the way out to 35 feet. Performance began to dip at the 40-foot mark, but the study's authors say they ran out of room at that point and were unable to test greater ranges. By comparison, the top-rated devices from CWave's competitors topped out at 50Mbps and saw even that transfer rate fall as devices approached the 30-foot mark.
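To put those throughput figures in perspective, a quick back-of-the-envelope calculation shows how many HD streams each link could carry. The ~25Mbps figure for a single 1080p stream is our own assumption for illustration, not a number from the Pulse~LINK study:

```python
# Rough headroom arithmetic for the throughput figures quoted above.
# The 25 Mbps bitrate for a single 1080p stream is an assumed round
# number for illustration, not a value from the Pulse~LINK study.
HD_STREAM_MBPS = 25  # assumed bitrate of one 1080p stream

def simultaneous_streams(link_mbps, stream_mbps=HD_STREAM_MBPS):
    """How many full-rate HD streams a link of this speed could carry."""
    return link_mbps // stream_mbps

# Figures from the study: CWave at 890 Mbps close up, ~480 Mbps at
# 8 feet, ~120 Mbps out to 35 feet; competitors peaked near 50 Mbps.
for label, rate in [("CWave, close range", 890), ("CWave at 8 ft", 480),
                    ("CWave at 35 ft", 120), ("competitor peak", 50)]:
    print(f"{label}: {rate} Mbps -> {simultaneous_streams(rate)} streams")
```

Even at its 35-foot floor, CWave would have room for a handful of such streams, while a 50Mbps competitor would be limited to one or two.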
Pulse~LINK has yet to say when we might see shipping products based on CWave, how much they might cost, or how difficult it would be for an average home user to deploy the company's UWB technology throughout a home. Building a standard that can interface over existing wire eliminates some of the range concerns surrounding UWB implementations, but devices will have to be priced competitively against 802.11n products in order to catch the eye of the mass market.
The number of companies interested in HD-over-wireless is significantly larger than Pulse~LINK implies, and the study the company quotes doesn't actually compare CWave against any products from these other companies. TZero and Analog Devices partnered to launch a 480Mbps-capable wireless HDMI system in September 2006, Samsung is developing a range of 50" and 58" wireless HDTVs that broadcast at 1080p using 802.11n, and a number of industry players including LG, Sony, Samsung, and Toshiba formed the WirelessHD Consortium back in October of 2006 with the goal of developing and marketing a wireless HD solution. Pulse~LINK's CWave may actually be a superior solution, but the field is considerably more crowded than the company's comparison study indicates.
The release candidate of Vista SP1 was released to the general public just a few days ago, and many fans are still in the process of downloading and installing it on their systems. As expected, there are plenty of bug fixes and general improvements bundled into Vista SP1, but several interesting features have been lost in the shuffle. One such feature is the enabling of support for what Microsoft terms "hotpatching".
Hotpatching is a process in which Windows components are updated while still in use by a running process. As you can imagine, this handy feature eliminates the need to reboot, maximizing system uptime and minimizing user headaches. According to Microsoft, hotpatch-enabled update packages are installed in a similar manner to standard update packages. The company's description of the feature seems to suggest that this ability is inherent to Vista, but was previously disabled or incomplete.
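The core idea can be illustrated with a loose user-space analogy: callers reach code through a stable indirection point, and the implementation behind it is swapped while the process keeps running. To be clear, this sketch is our own analogy in Python; Windows actually patches the machine code of components in place, and none of the names below come from Microsoft's implementation:

```python
# A loose user-space analogy for hotpatching: callers dispatch through a
# stable slot, and the implementation behind the slot is replaced while
# the "process" keeps running. Windows really patches component machine
# code in place; this only illustrates updating code without stopping
# its callers. All names here are hypothetical.

class HotPatchable:
    """Dispatch through one slot so the implementation can be replaced."""
    def __init__(self, impl):
        self._impl = impl

    def __call__(self, *args):
        return self._impl(*args)

    def patch(self, new_impl):
        # Rebinding one attribute is atomic in CPython, so in-flight
        # callers see either the old or the new version, never a mix.
        self._impl = new_impl

def checksum_v1(data):
    return sum(data) % 255          # buggy: should be mod 256

def checksum_v2(data):
    return sum(data) % 256          # fixed version

checksum = HotPatchable(checksum_v1)
before = checksum([250, 10])        # old behavior, process still running
checksum.patch(checksum_v2)         # "hotpatch" applied while in use
after = checksum([250, 10])         # new behavior, same call site, no reboot
```

The call sites never change and the program never restarts; only the code behind the indirection point does, which is the property hotpatching delivers at the OS level.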
Hotpatching is something that system administrators will love when faced with a slate of PCs in need of reformatting and restoration to a usable state. Not everyone has the capability or the inclination to create a custom slipstreamed image, and we know that the number of reboots required to get a basic Windows system properly patched up with some productivity software can be frustrating at times.
There are a host of other improvements in SP1, all of which should help make the Windows Update experience more robust:
- Improved patch deployment by retrying failed updates in cases where multiple updates are pending and the failure of one update causes others to fail as well.
- Optimized OS installers so that they run only when required during patch installation; fewer installers operating results in a more robust and reliable installation.
- Improved robustness during patch installation by tolerating transient errors such as sharing violations or access violations.
- Improved robustness against transient failures during the disk cleanup of old OS files after install.
- Improved overall install time for updates by optimizing the query for installed OS updates.
- Improved the uninstallation experience for OS updates by improving the uninstallation routines in custom OS installation code.
- Improved reliability of OS updates by making them more resilient to unexpected interruptions, such as power failure.
TechNet has a handy list of the notable changes in the SP1 release candidate if you're looking for more dirt on what Vista SP1 will bring to the table.
To understand this post, you're going to need to know a little bit about gene regulation. The location on the DNA where a gene's messenger RNA starts is called a promoter—it contains binding sites for proteins that help start the RNA-making process. For many genes, that's all that's needed. But for those with complex regulatory control—and that includes most of the genes involved in embryonic development—other sequences are needed to ensure that a promoter is only active in the right tissues at the right time. These sequences, called enhancers, can be more or less anywhere, even hundreds of kilobases distant from the promoter they regulate.
This raises a couple of obvious questions: how do the enhancers ever find the promoter? And if they can work at such large distances, why don't they just activate all the genes nearby? To give you a sense of the scale of the problem, I'll draw an analogy based on the Bithorax gene complex that's the subject of a paper from the most recent issue of Development. Two genes and an enhancer (among other things) reside in a region that's the DNA equivalent of over 100 miles long. The promoters of the genes are only about 100 feet long. Somehow, the enhancer not only finds a promoter to regulate, but it finds the right one.
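The arithmetic behind that analogy is easy to check. The 100-mile and 100-foot figures come from the comparison above; the genomic lengths below are assumed round numbers for illustration (a regulatory region spanning hundreds of kilobases, a core promoter on the order of tens of base pairs), not values taken from the paper:

```python
# Sanity-checking the scale of the miles-vs-feet analogy. The genomic
# lengths are assumed round numbers for illustration, not figures from
# the Development paper.
FEET_PER_MILE = 5280

# The analogy: a 100-mile region, a 100-foot promoter.
analogy_ratio = (100 * FEET_PER_MILE) / 100

# Assumed genomic scale: ~300 kb regulatory region, ~60 bp promoter.
region_bp = 300_000
promoter_bp = 60
genomic_ratio = region_bp / promoter_bp

# Both ratios land in the thousands-to-one range, which is the point:
# the enhancer must pick out a target occupying only a few-thousandths
# of its search space.
```

Either way you run the numbers, the promoter is a needle in a haystack thousands of times its own size, which is why a targeting mechanism like the tether described below matters.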
The new paper helps describe how this happens. It turns out that the promoter has a sequence just next to it that helps specifically attract the enhancer to that promoter—the researchers called it a tether. Delete the tether, and the enhancer will regulate whichever gene is closest. Add the tether to an unrelated gene, and the enhancer will regulate that. You can even stick other genes or DNA insulating sequences between the enhancer and its tether, and the enhancer will ignore them all and work with the promoter next to the tether. To draw another analogy, the tether acts like a postal code to help the enhancer find the right neighborhood.
The authors cite an earlier paper that found something similar in a completely different complex of genes and regulatory elements, suggesting that tethering represents a general mechanism for gene regulation. With dozens of fly genomes now available, it's possible that a few more examples of this will be enough to let bioinformatics gurus fish out tethering sequences, so that the biochemists can tell us how they work.
Development, 2007. DOI: 10.1242/dev.010744