As if there needed to be another reason to be wary of chat rooms geared toward meeting people and having flirtatious, cyber-relations with them, doing so can now put you at increased risk of identity theft. CyberLover.ru, a new site out of Russia, boasts that buyers of its software will be able to trick unsuspecting marks into handing over their personal information. CyberLover.ru's sexy bot can allegedly drum up salacious conversations using 10 different personalities that are so life-like that the victims will hand over their photos, phone numbers, and more at the drop of a negligee. The program can also be tailored towards either gender, and be used to obtain other forms of data, says the company.
Security software company PC Tools warns that the bot can easily be used for malicious purposes. The company said that the program's ability to mimic human behavior to dupe chatters is worrisome, and could readily be used to collect all manner of information. "As a tool that can be used by hackers to conduct identity fraud, CyberLover demonstrates an unprecedented level of social engineering," said PC Tools senior malware analyst Sergei Shevchenko in a statement. "CyberLover has been designed as a bot [robot] that lures victims automatically, without human intervention. If it's spawned in multiple instances on multiple servers, the number of potential victims could be very substantial."
The bot is able to simulate a number of different personalities, ranging from "romantic lover" to—this is not a joke—"sexual predator" (Mmm, I know that one really gets me going on those cold, lonely Friday nights). Once it collects the personal data of whoever it is chatting with, the bot then stores it all and sends it to its owner. PC Tools warns that the bot also lures lonely users to visit a website or blog, "which could in fact be a fake page used to automatically infect visitors with malware."
The creators of CyberLover, however, deny that the bot is intended for anything other than providing lonely chatters with a few thrills. "The program can find no more information than the user is prepared to provide," an employee identified as Alexander told Reuters. "If you have someone who is ready to hand over secret information to the person they are chatting to after having known them for all of five minutes, then in that case a leak of information is possible."
Alexander may have a point, but that doesn't make the data collection any less disconcerting. Regular chatroom-goers should protect themselves by using aliases online and avoiding giving out too much personal information to strangers—no matter how dirty they talk to you.
Google has surely noticed that much of its search traffic is directed to Wikipedia, which regularly has an entry in the top five search results for any particular term. If Google could steer all that traffic toward its own properties instead, and if those properties contained Google ads, and if Google split its revenue with the article creators… well, it's not hard to see why this would start to look pretty good to both Google and content creators, and why such an initiative could ramp up quickly.
Udi Manber, Google's VP of Engineering, announced just such a plan last night, a program that (in his words) will make it easier for those with knowledge to share it with the world. The system is called "Knol"—which refers to a "knowledge unit"—and it will let anyone create, edit, and profit from creating a page packed with information on a specific topic. In other words, Google doesn't just want to link Wikipedia, it wants to be Wikipedia.
For a company that got its start by bowing at the Altar of the Algorithm, bringing human-created content in-house is the most recent manifestation of a paradigm shift that has been in the works for several years now, and one that hasn't come without controversy. With the announcement of Knol, Google is already inviting questions about whether its reach has extended too far.
Land of the knols
The basic point behind the knol system is to highlight (and provide incentives for) authors—a direct shot at the anonymity of Wikipedia and other Web 2.0 systems that don't allow experts to stress their own credentials when posting.
Each knol (it's the name of both the pages and the service) is just a web page hosted by Google. It has a special layout, one generated by Google-supplied tools, that includes content, links, and an author biography.
A sample knol page
Each knol is controlled by the author who creates it. While strong community tools for suggesting changes, making comments, and ranking knols will exist, it's up to each knol's author to control the contents of the page.
Google will host the content but will not attempt to edit or verify it, instead trusting that the best knols will naturally rise to the top (a single topic can have multiple knols, each competing for higher placement in Google's search results).
Essentially, Google is offering to let people rebuild Wikipedia, and it seems to be targeting two classes of users: 1) experts who may not all feel welcome in Wikipedia, where their actions carry no special weight, and 2) those who aren't keen on spending their free time contributing to Wikipedia without compensation. While Wikipedia itself is diverse enough to survive, smaller projects like Citizendium could find the going much tougher.
You say you want a revolution? Well…
The Knol project is, in one sense, as nonrevolutionary as they come. Making information pages simple to develop? Ranking those pages? Monetizing those pages? Google itself does all three things already on the web through tools like Blogger, Google Search, and AdSense. Essentially, Google is just rolling out a new set of web page creation tools with a single template to work on.
Google's professed interest in making it easy for people to put information on this thing called "the Internet" might have rung true in 1998, but that simply can't be the reason for Knol in 2007. It's already too easy. Wikipedia makes it simple. So do blogging tools.
Instead, Google wants to mount a direct challenge to various social knowledge sites. Although it won't have an exclusive license to the content created for Knol, and though it will offer Knol pages to be indexed by all search engines, it's clear that Google really wants to be in control of a vast, Wikipedia/Citizendium knowledge store. And it can offer something that Wikipedia, et al., cannot: cash.
AdSense and its discontents
The revenue sharing bit is one of the keys to the whole project. Google is going to let authors choose if they want to include Google ads on their knols. The truly altruistic might say no. Most people will say yes.
And that's where things could get ugly. The lure of filthy lucre is likely to force several changes on the community model of current social knowledge projects. For one, it will break the community-oriented, we're-all-working-on-this-together spirit of sites like Wikipedia. With Knol, we're not in this together; we're in competition. Writing a knol on a popular topic could become a cash cow, as Google promises to split ad revenue with the author.
Many different authors can take a shot at creating a knol on the same topic, which should allow the best pages to claw their way to the top in a sort of survival of the fittest. But the thing about intellectual Darwinism is that it can be vicious, and we expect the same to be true of competition for the top knol spots.
Will Google be the one to police the inevitable claims of plagiarism? Will it do anything when a knol rips off pictures from another knol? What happens when Wikipedia gets ripped off or rewritten? Google is famously loath to intervene manually, but when the company is creating an ecosystem that rewards individuals and puts so much cash on the table, problems are sure to result.
Maybe Google can be evil
The blogosphere reaction has already been electric. Even those likely to give Google the benefit of the doubt when it comes to not being evil are having second thoughts. What possible reason does the company have for moving beyond indexing and into the hosting and control of this sort of content?
Actually, Google has been making these moves for years. Google Book Search, Google Video, and YouTube are only the highest-profile examples of the way that Google has moved far beyond its roots in pointing people to other places on the 'Net.
Social knowledge, as exemplified by the high search placement of Wikipedia articles and the growth of sites like Mahalo, has been high-profile for long enough to earn a spot on the Google strategic radar screen. Despite the idealistic sentiments about ease of knowledge production, Knol looks more like an attempt to kneecap various sites that now command a good chunk of Google's outgoing search result links.
With Google having a vested interest in knols while also being the main search engine that will index and rank them, many people already suspect a conflict of interest. While we suspect Google will be careful not to give a special boost to knol results (at the risk of ruining user confidence in its results), others aren't so sure. At the very least, the arrangement will create suspicion.
Om Malik argues that this is just "Google using its page rank system to its own benefit. Think of it this way: Google's mysterious Page Rank system is what Internet Explorer was to Microsoft in the late 1990s: a way to control the destiny of others."
TechCrunch wonders if this is "a step too far." Knol "brings the power of Google into a marketplace that is already rich with competition," writes Duncan Riley, "and a marketplace where Google can use its might to crush that competition by favoring pages from Knol over others, on what is the world's most popular search engine."
And Danny Sullivan of Search Engine Land says, "It begins to feel like the knowledge aggregators are going to push out anyone publishing knowledge outside such aggregation systems."
This can't be the reaction that Google was hoping for with its announcement, but it may not matter. The naysayers can do their naysaying, but we suspect that the prospect of cash, combined with the competition for top spots in the Knol hierarchy, will lead to plenty of quality content at a rapid clip. Whether that's a positive development for the web is another question.
Pulse~LINK, one of the many entrants in the wireless HD technology race, has announced a new, ultra-wideband-based chipset that it claims can outdo the competition. According to an independent performance comparison (PDF) conducted by the EE Times and released by Pulse~LINK, the company's UWB implementation, called CWave, delivers sustained close-range performance that's more than 20x higher than its next-closest competitor. Specifically, Pulse~LINK promises between 480Mbps and 890Mbps, depending on transmission range.
Pulse~LINK offers a partial explanation of this performance gap between its own CWave product and its WiMedia-based competitors, claiming that while other manufacturers chose to use the PC-centric wireless USB (W-USB) protocol and moved away from the goal of offering HD content over a wireless connection, Pulse~LINK stayed focused exclusively on wireless HD. If successful, the CWave chipset could find a home in consumer electronics, enabling an HD DVD or Blu-ray player to wirelessly transmit an HD-resolution signal to a television or display located elsewhere in the home.
According to the performance comparison, the CWave is capable of delivering up to 890Mbps of throughput at very close range, dropping to around 480Mbps at a distance of eight feet. That's still much faster than the study's reference wired USB transfer rate of ~160Mbps, but the CWave's speed continued to drop rapidly as range increased until the two devices were approximately 12 feet apart. At that point, Pulse~LINK's chipset maintained a stable transfer rate just below 120Mbps all the way out to 35 feet. Performance began to dip at the 40-foot mark, but the study's authors say they were out of room at that point and were unable to test greater ranges. By comparison, the top-rated devices from CWave's competitors topped out at 50Mbps and saw even that transfer rate fall as devices approached the 30-foot mark.
Pulse~LINK has yet to say when we might see shipping products based on CWave, how much they might cost, or how difficult it would be for a typical home user to deploy the company's UWB technology across a home. Building a standard that can interface over existing wiring actually eliminates some of the range concerns surrounding UWB implementations, but devices will have to be priced competitively against 802.11n products in order to catch the eye of the mass market.
The number of companies interested in HD-over-wireless is significantly larger than Pulse~LINK implies, and the study the company quotes doesn't actually compare CWave against any products from these other companies. TZero and Analog Devices partnered to launch a 480Mbps-capable wireless HDMI system in September 2006, Samsung is developing a range of 50" and 58" wireless HDTVs that broadcast at 1080p using 802.11n, and a number of industry players including LG, Sony, Samsung, and Toshiba formed the WirelessHD Consortium back in October 2006 with the goal of developing and marketing a wireless HD solution. Pulse~LINK's CWave may actually be a superior solution, but the field is considerably more crowded than the company's comparison study indicates.
The release candidate of Vista SP1 was released to the general public just a few days ago, and many fans are still in the process of downloading and installing it on their systems. As expected, there are plenty of bug fixes and general improvements bundled into Vista SP1, but several interesting features have been lost in the shuffle. One such feature is the enabling of support for what Microsoft terms "hotpatching".
Hotpatching is a process in which Windows components are updated while still in use by a running process. As you can imagine, this handy feature eliminates the need to reboot, maximizing system uptime and minimizing user headaches. According to Microsoft, hotpatch-enabled update packages are installed in a similar manner to standard update packages. The company's description of the feature seems to suggest that this ability is inherent to Vista, but was previously disabled or incomplete.
Hotpatching is something that system administrators will love when faced with a slate of PCs in need of reformatting and restoration to a usable state. Not everyone has the capability or the inclination to create a custom slipstreamed image, and we know that the number of reboots required to get a basic Windows system properly patched up with some productivity software can be frustrating at times.
There are a host of other improvements with SP1, all of which should also help make your Windows Update experience more rock-solid than it already is:
- Improved patch deployment by retrying failed updates in cases where multiple updates are pending and the failure of one update causes other updates to fail as well.
- Optimized OS installers so that they run only when required during patch installation; having fewer installers operating results in a more robust and reliable installation.
- Improved robustness during patch installation by tolerating transient errors such as sharing violations or access violations.
- Improved robustness against transient failures during the disk cleanup of old OS files after installation.
- Improved overall install time for updates by optimizing the query for installed OS updates.
- Improved the uninstallation experience for OS updates by improving the uninstallation routines in custom OS installation code.
- Improved reliability of OS updates by making them more resilient to unexpected interruptions, such as power failure.
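The retry behavior described above, where each update is attempted independently so that one failure cannot cascade to the rest of the queue, might look something like this sketch. The update names and the TransientError type are invented for illustration:

```python
class TransientError(Exception):
    """Stand-in for transient servicing errors (e.g. sharing violations)."""

def apply_updates(updates, install, max_retries=2):
    """Attempt each update independently, retrying transient failures,
    so that one failing update does not block the rest of the queue."""
    failed = []
    for update in updates:
        for _ in range(1 + max_retries):
            try:
                install(update)
                break  # this update succeeded; move on
            except TransientError:
                continue  # transient failure: retry this update
        else:
            failed.append(update)  # gave up, but the queue keeps going
    return failed

# Demo installer: "KB2" fails once transiently, "KB3" always fails.
attempts = {}
def install(update):
    attempts[update] = attempts.get(update, 0) + 1
    if update == "KB2" and attempts[update] == 1:
        raise TransientError(update)
    if update == "KB3":
        raise TransientError(update)

failed = apply_updates(["KB1", "KB2", "KB3"], install)
print(failed)  # ['KB3']: KB2 recovered on retry, KB1 was unaffected
```

The point of the design is isolation: a permanently broken update ends up reported as failed, but it never prevents the other pending updates from installing.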
TechNet has a handy list of the notable changes in the SP1 release candidate if you're looking for more dirt on what Vista SP1 will bring to the table.
To understand this post, you're going to need to know a little bit about gene regulation. The location on the DNA where a gene's messenger RNA starts is called a promoter—it contains binding sites for proteins that help start the RNA-making process. For many genes, that's all that's needed. But for those with complex regulatory control—and that includes most of the genes involved in embryonic development—other sequences are needed to ensure that a promoter is only active in the right tissues at the right time. These sequences, called enhancers, can be more or less anywhere, even hundreds of kilobases distant from the promoter they regulate.
This raises a couple of obvious questions: how do enhancers ever find the promoter, and if they can work at such large distances, why don't they just activate all the genes nearby? To give you a sense of the scale of the problem, I'll draw an analogy based on the Bithorax gene complex that's the subject of a paper from the most recent issue of Development. Two genes and an enhancer (among other things) reside in a region that's the DNA equivalent of over 100 miles long. The promoters of the genes are only about 100 feet long. Somehow, the enhancer not only finds a promoter to regulate, but it finds the right one.
The new paper helps describe how this happens. It turns out that the promoter has a sequence just next to it that helps specifically attract the enhancer to that promoter—the researchers called it a tether. Delete the tether, and the enhancer will regulate whichever gene is closest. Add the tether to an unrelated gene, and the enhancer will regulate that. You can even stick other genes or DNA insulating sequences between the enhancer and its tether, and the enhancer will ignore them all and work with the promoter next to the tether. To draw another analogy, the tether acts like a postal code to help the enhancer find the right neighborhood.
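A toy model of the tethering logic might treat an enhancer as preferring a tether match over simple proximity. The gene names, coordinates, and tag values below are invented for illustration, not taken from the paper:

```python
# Toy model: a gene may carry a tether tag next to its promoter; an enhancer
# with a matching tag regulates that gene regardless of distance, and falls
# back to plain proximity when no tether matches (as when the tether is deleted).

def target_gene(enhancer_pos, enhancer_tag, genes):
    """genes: list of (name, position, tether_tag_or_None) tuples."""
    if enhancer_tag is not None:
        tethered = [g for g in genes if g[2] == enhancer_tag]
        if tethered:
            return tethered[0][0]  # the tether wins, even over closer genes
    # No tether match: the enhancer regulates whichever gene is closest.
    return min(genes, key=lambda g: abs(g[1] - enhancer_pos))[0]

genes = [("geneA", 10_000, None), ("geneB", 150_000, "tether1")]
print(target_gene(55_000, "tether1", genes))  # geneB: the tether is found
print(target_gene(55_000, None, genes))       # geneA: no tether, nearest wins
```

This captures the paper's deletion and swap experiments in miniature: remove the tag and regulation reverts to the nearest gene; move the tag to another gene and the enhancer follows it.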
The authors cite an earlier paper that found something similar in a completely different complex of genes and regulatory elements, suggesting that tethering represents a general mechanism for gene regulation. With dozens of fly genomes now available, it's possible that a few more examples of this will be enough to let bioinformatics gurus fish out tethering sequences, so that the biochemists can tell us how they work.
Development, 2007. DOI: 10.1242/dev.010744