
Universal, XM settle suit over receiver’s ability to record


Universal Music Group and XM Radio have settled a lawsuit filed last year over the XM Inno, a portable XM Radio receiver capable of recording up to 50 hours of programming. The Big Four record labels sued XM Radio in May 2006, accusing the satellite radio company of copyright infringement.

The problem, from the standpoint of the record labels, is that the Inno can play back the programming it records in a different order. Consumers call that convenient, but to an industry hounded by shrinking revenues, that feature equates to a free version of iTunes. "XM wants to offer listeners what is essentially a free version of iTunes without paying the music companies for the right to sell their songs," said RIAA CEO Mitch Bainwol when the suit was first filed (Bainwol and the RIAA declined to comment on the settlement).

Sirius, which makes a similar device, got a pass from the record labels because it negotiated a fee structure before launching the product. That's essentially what XM Radio has now done with Universal, albeit after the fact. Although the two companies have not disclosed the terms of the deal, XM Radio is joining Sirius in paying Universal a licensing fee—one that covers all future XM devices with similar functionality.

"We are pleased to have resolved this situation in an amicable manner," said Universal CEO Doug Morris. "We pride ourselves on empowering new technology and expanding consumer choice. And XM is providing a new and exciting opportunity for music lovers around the world to discover and enjoy our content, while at the same time recognizing the intrinsic value of music to their business and the need to respect the rights of content owners."

To its credit, Universal has done an about-face on the DRM issue, and some of its other moves show that the label is serious about exploring uncharted territory in its attempts to fix what ails the music industry. In particular, its plans for what it calls "Total Music," a service that would give buyers of mobile devices a free subscription to much of the UMG library, have the potential to shake up the online music market.

For the time being, the other three major labels will press on with their lawsuit against XM Radio. It wouldn't be surprising if the agreement between XM and Universal serves as the basis for a pact with the remaining plaintiffs; indeed, Reuters is already reporting that Warner Music Group and XM are in settlement talks.

Feds readying for 2008 IPv6 deadline, but not for actual use


The US government is facing a June 2008 deadline to comply with an Office of Management and Budget requirement to turn IPv6 on in its networks. However, the directive only specifies that IPv6 has to be available on routers. It doesn't say anything about actually using the new protocol. And apparently, this is exactly what the various agencies are now doing: they're upgrading their routers to support the new protocol, pinging their ISP's routers over IPv6, and calling it a day. Actual network usage is firmly sticking to IPv4.

In 2003, the Department of Defense started an effort to adopt IPv6 by 2008. Check out the transcript of the announcement briefing: "If the commercial world's going to go to IP 6, we're not going to stay on IP 4. That would be silly," said John Stenbit of the Networks and Information Integration office.

And when would that be? "Well, my best guess is it's going to happen commercially before 2008." It looks like the Office of Management and Budget copied this deadline when it mandated the adoption of IPv6 by the federal government by 2008. At the time, in 2005, the thinking was along the lines of "Once the network backbones are ready, the applications and other elements will follow."

On the commercial Internet, the adoption of IPv6 seems to be happening in the opposite direction: starting at the edges and moving inward. Obviously, there is a great deal of software out there that only works over IPv4, but everyone who has bought a computer or installed a new OS in the last four years is probably using an IPv6-capable web browser such as Internet Explorer, Safari, or Firefox. Software like Windows Media Player, QuickTime and iTunes can also work over IPv6. If you run Windows Vista or Mac OS X with Apple's Airport Extreme 802.11n base station, you are automatically connected to the IPv6 Internet through 6to4 tunneling. Cooperation from ISPs isn't required for this.
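
As a quick aside on how 6to4 works: the host's public IPv4 address is embedded in an IPv6 prefix under 2002::/16, which is why no ISP cooperation is needed. A small illustrative sketch in Java (192.0.2.1 is a documentation-only address):

```java
import java.net.Inet4Address;
import java.net.InetAddress;

public class SixToFourPrefix {
    public static void main(String[] args) throws Exception {
        // 6to4 wraps the host's public IPv4 address in an IPv6 prefix
        // under 2002::/16; a 6to4 gateway tunnels the packets over IPv4.
        byte[] v4 = ((Inet4Address) InetAddress.getByName("192.0.2.1"))
                .getAddress();
        System.out.printf("6to4 prefix: 2002:%02x%02x:%02x%02x::/48%n",
                v4[0], v4[1], v4[2], v4[3]);
        // Prints: 6to4 prefix: 2002:c000:0201::/48
    }
}
```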

For application writers—especially those who use high-level tools—adding IPv6 to an application is almost a no-brainer: some customers are already asking for it (hey, Apple—where's my IPv6-enabled iChat?), and certainly more customers are going to ask for it in the future. Unless the protocol needs to do some deep IP-related magic, the required changes are quite minor. Make them once, and every user can run the application over IPv6.
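
To make that concrete, here's a minimal sketch of a protocol-agnostic client in Java; the hostname is a placeholder. Because the standard name-resolution API returns IPv6 (AAAA) and IPv4 (A) addresses through the same interface, nothing in the connect loop cares which protocol is in use:

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.Socket;

public class DualStackClient {
    public static void main(String[] args) throws IOException {
        // getAllByName() returns AAAA (IPv6) records alongside A (IPv4)
        // records, so the same code path works over either protocol.
        for (InetAddress addr : InetAddress.getAllByName("www.example.com")) {
            Socket sock = new Socket();
            try {
                sock.connect(new InetSocketAddress(addr, 80), 5000);
                System.out.println("Connected via " + addr);
                return; // first address that answers wins
            } catch (IOException e) {
                System.out.println("No luck via " + addr);
            } finally {
                sock.close();
            }
        }
    }
}
```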

It's much more difficult to add IPv6 to a service, however. Not only do you need to add IPv6 connectivity to the local network and the machines running the service, you also need to upgrade and/or reconfigure firewalls and load balancers. This needs to happen for each individual service. So "other elements will follow" is probably a tad on the optimistic side.

That said, it's a bit soon to accuse the feds of "dropping the ball on IPv6," as Network World does in a long article about the matter. It's not as if these federal agencies need to move to IPv6 because of a lack of IPv4 addresses. The US government is the single largest holder of IPv4 address space in the world: a quick glance at the master list of IPv4 address blocks shows that various government agencies together hold at least 10 blocks of 16.8 million IPv4 addresses each. For comparison, the top three IPv4-address-holding countries in the world are the US with 1.4 billion addresses, Japan with 141 million, and China with 135 million.
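
For context, those 16.8-million-address blocks are legacy /8 allocations, and the figure falls straight out of the math:

$$2^{32-8} = 2^{24} = 16{,}777{,}216 \approx 16.8 \text{ million addresses per /8 block}$$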

But there are other reasons to switch to IPv6 wholesale, and one of those—enhanced security—should be of interest to the Department of Defense in light of recent attacks. Traditionally, encryption of communication over the (IPv4) Internet happens with SSL/TLS: the encryption layer that puts the S in HTTPS. However, SSL/TLS runs on top of TCP, which makes it vulnerable to all attacks on both TCP and IP. Along with IPv6, the IETF developed IPsec, a mechanism that can encrypt and authenticate each individual IP packet. This makes it impossible to attack the TCP segments in IPsec-protected packets successfully.

For a variety of reasons, including IPv6's autoconfiguration feature—which can be made to run securely—and the fact that IPsec "protects" your packets from changes by Network Address Translators by not working through them, it's easier to run IPsec end-to-end between two random IPv6 hosts than between the same hosts running IPv4. That's something the DoD finds very interesting, but on the commercial Internet, where most types of communication are client/server and attacks happen at the browser level rather than the packet level, these considerations don't carry as much weight.

Analysts exuberant about Macworld Expo


Does anyone else remember a time when the days leading up to Macworld would be filled with web sites and message boards gossiping about fantastic new products from Apple? It's still that time, but now instead of some guy sitting in his underwear and posting from his parents' basement, the lead rumor mongers are CNBC reporters with slick hair and six-figure financial analysts in corner offices.

AppleInsider has the latest research note from Piper Jaffray analyst Gene Munster, who is now predicting… well, everything.

"We believe the timing is right for Apple to update most of its Macs. Some models may only see minor specification upgrades to newer, faster Intel processors.

The Mac Pro will likely get Penryn CPUs from Intel and maybe BTO Blu-ray drives, while the laptops will likely get more storage and a CPU bump. Don't count on Apple giving up its industry-trailing position as the only OEM to sell a $1099 laptop without a DVD writer anytime soon, though. Speaking of the MacBook, Munster believes the pseudo-subnotebook will be an expansion of the consumer line.

"That said, we continue to expect the 'ultra-portable' MacBook to be Apple's thinnest and lightest ever. While it may not be as small as we originally expected, we believe this could be the most consumer-friendly way to expand its current lineup of MacBook portables," Munster told clients. "It will likely be priced between the $1,099 consumer level MacBook and the $1,999 MacBook Pro."

Setting aside the hilarity of the pricing guesstimate, one wonders how Munster squares "consumer" with a laptop sporting NAND flash-based drives and LED-backlit displays. That's not to mention the possibility of "a unique touchpad, possibly using the same multitouch technology used in the iPhone and iPod touch."

If all that weren't enough, Munster thinks the iTunes Store may finally begin renting movies, assuming some kind of deal has been made with the studios. Maybe NBC Universal is just playing hard to get.

Amazingly, Munster is not predicting a 3G iPhone or iTablet, but there may be new games for the iPhone—and is anyone else wondering how Steve can possibly meet the expectations for the Keynote next month? And what happens if he doesn't? Here's one possibility.

AAPL between Keynotes: 2005 – 2006

Starting in 2005, we saw a run-up in the perceived value of the company that peaked around Macworld Expo 2006 at approximately $85 per share.

AAPL between Keynotes: 2006 – 2007

This was immediately followed by a decline in stock price that reached a low of around $50 in June. It took nearly a year for the stock to recover to its Macworld 2006 levels.

AAPL between Keynotes: 2007 – 2008

In contrast, 2007 did not see a drop, but a steady—some would say freaking amazing—climb, no doubt at least in part due to the iPhone. AAPL will likely break $200 per share by Macworld 2008, and the question then becomes one of what it takes to keep moving higher in 2008. Let's hope the analysts and anonymous nerds on the Internet got their fantasy predictions right.

.Mac browser hole: web coders collectively slap forehead


While I was a fan of .Mac back when it was iTools, these days I'm less enamored of its pay-for services. There was a stint when I did use .Mac: I received a free account for completing something or other through the Apple Sales Web back when I worked for an Apple authorized retailer. It was handy (particularly the iDisk feature), but it wasn't worth the premium to me.

Fast forward a few years and .Mac has matured; you are now hard-pressed to use Apple's operating system without seeing some mention of the service. I dare say that the rise in popularity is most likely due to the presence of Apple retail stores and sales people asking just about anyone who makes a purchase if they would like .Mac with that.

One of the more useful features of .Mac is the ability to access an iDisk from a browser. This means that users can share 10GB worth of information with the public, or they can choose to password-protect their iDisks and possibly still share 10GB worth of information with the public. Nope, that isn't a typo; your iDisk might not be as secure as you'd like to think, and for a pretty stupid reason.

According to one Slashdot reader, there is no way to log out of an iDisk in a browser, meaning that another user can access everything on your iDisk using the browser's history feature. The individual is then apparently free to view and/or delete your files. Not good. Not good at all.

Thus far, bug reports have been unanswered:

This seems like a minor security flaw via bad interface design, and podcaster Klaatu (of thebadapples.info) posted this on the discussion.apple.com site, only to have his post removed by Apple. Furthermore, feedback at apple.com/feedback has gone unanswered.

There are precautions that can be taken, however, if you are using a public computer and have to access your iDisk. First off, you can always delete your browser's history, making it that much harder for prowling eyes to accidentally come across your open disk. If that isn't enough for you, and it shouldn't be, you can delete your browser's cookies and the browser history. If that still isn't enough, we recommend that you delete your browser history, delete your cookies, delete your browser cache, and then stop using iDisk on public terminals.

Intel introduces tiny flash drive for mobile devices


Intel launched its new Z-P140 PATA drive today as part of the company's push to advance mobile Internet device technology. The Z-P140 SSD (solid state disk) is impressive, with a total area smaller than a fingertip and available capacities of 2GB and 4GB. At 0.6 grams, the Z-P140 is 75x lighter than a standard 1.8" drive, while occupying only 1/400 of the volume. Intel claims that the Z-P140 class of drives can be expanded up to 16GB in future iterations.

It might be tiny, but Intel claims its new SSD is no slouch in the performance department. The Z-P140 is rated at a 40MBps read speed and a 30MBps write speed. Power draw (or lack thereof) is also impressive; the Z-P140 draws 300mW under load and just 1.1mW in sleep mode. The current drive only supports the PATA (Parallel ATA) interface, but future models will be SATA compliant. The Z-P140 is meant to supersede the USB-based Z-P130, though it's not clear whether the P140 will replace the P130 entirely at this point.

Intel's decision to focus on this type of SSD design only makes sense when the company's Menlow platform is considered as a whole. Intel has been talking up Menlow (and MIDs in general) since it launched the concept back in April of this year. Whereas the first generation of UMPC/MID devices was based on a platform code-named McCaslin and built around the Intel A100/A110, the 945GU Express chipset, and the ICH7U southbridge, Menlow will incorporate Intel's new Silverthorne CPU and will run on the Poulsbo chipset. Intel hasn't published exact specifications on Poulsbo as of yet, but the chipset is expected to debut with support for 802.11n and WiMAX.

Ars discussed Intel's Silverthorne design last week, while our senior CPU editor, Jon Stokes, shared some thoughts back in September on how the x86 ISA might be expected to perform in the mobile segment. Intel's new PATA SSD launch might seem unrelated to its activities in the CPU sector, but for a company dedicated to building platforms (and that's the business Intel is pushing these days), all of these products tie directly together. Intel has previously stated that Menlow will draw far less power than previous UMPC/MID devices, and the company is obviously designing all of its platform components to hit those power targets without compromising on performance.

Update: I previously referred to Menlow as Merom (another Intel mobile microarchitecture) at two points within this story. The error has been corrected.

NVIDIA launches its nForce 700 chipset


NVIDIA's new nForce 700 series chipset hit the market today with several new features and improved Penryn support. The 780i is meant to go head-to-head with Intel's X38, and matches up well against that chipset on paper; both chipsets offer 32 lanes of PCIe 2.0 support, eight-channel HD audio, and a huge number of USB 2 ports (12 for the X38, 10 for the 780i). Although the 780i only officially supports DDR2-800, Tech Report's chipset review states that NVIDIA's memory controller is actually capable of running at DDR2-1200 speeds. As an added bonus, the 780i supports 3-way SLI configurations, though there may be a few caveats attached to performance scaling, as we'll discuss.

In terms of its actual design, however, the nForce 700 series looks like a bit of a kludge. The chipset's PCIe 2.0 support is delivered by the new nForce 200 bridge chip, which sits between the 780i SPP and the PCIe 2.0 slots. While the nForce 200 chip provides a full 32GBps of bandwidth to its two PCI-Express 2.0 x16 links, it's connected to the 780i SPP via a single x16 link with a total of 14.4GBps of bandwidth. According to NVIDIA, this bandwidth discrepancy is not an issue, thanks to the nForce 200's ability to handle GPU-to-GPU communication without transferring data back to the north bridge.
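
The arithmetic behind those bandwidth figures is worth spelling out. PCIe 2.0 moves roughly 500MB/s per lane in each direction, so for the nForce 200's two x16 links:

$$2 \text{ links} \times 16 \text{ lanes} \times 500\,\mathrm{MB/s} \times 2 \text{ directions} = 32\,\mathrm{GB/s}$$

By the same accounting, the 14.4GBps upstream link works out to roughly 450MB/s per lane per direction across its 16 lanes, which suggests a first-generation-style link clocked well above the stock 250MB/s per-lane rate.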

Drop in a third video card, and things get more complicated. The third PCIe slot on the 780i chipset only supports a standard PCI Express interconnect, and hangs off the 780i MCP rather than the nForce 200. This means that the third card in a 780i SLI rig is restricted to half the bandwidth of the other two cards. The third card will also take a latency hit compared to the first two, given that all communication must be passed across the MCP, SPP, and nForce 200 before returning via the same path. At this point, NVIDIA's tri-SLI looks less like a genuine feature and more like a capability NVIDIA included to please the marketing department.

According to Tech Report's review, the 780i SLI SPP may actually be a 680i SLI SPP validated to maintain a higher interconnect speed, while the 780i SLI MCP is actually the same nForce 570 south bridge that NVIDIA's been relying on for the past 18 months or so. I recommend you read TR's coverage for more information, but from my perspective, the 780i SLI is a solution that gets the job done but doesn't offer anything compelling against the X38 (with the obvious exception of SLI support). NVIDIA's Penryn compatibility issues were obviously reason enough for the company to push a chipset refresh out the door, but at this point I'm more interested in what the Next Big Thing from NVIDIA might be—the 780i seems to be an attempt to shoehorn certain capabilities into an older platform, rather than a genuine attempt to launch something new.

Court: Privacy no defense when Circuit City finds child porn


When you drop your PC off at Circuit City for a hardware upgrade (and you do use Circuit City for all your hardware upgrades, don't you?), you probably don't expect the techs to rummage around your hard drive, dredging up "questionable" files and showing them to law enforcement. But that's exactly what happened to Kenneth Sodomsky in 2004 at a Pennsylvania Circuit City. Now, the Superior Court of Pennsylvania has concluded that the child porn found on Sodomsky's computer by Circuit City techs could in fact be used against him at trial.

The video in question, a clip of "the lower torso of an unclothed male," was discovered on Sodomsky's computer when he took it in for the installation of a new DVD burner. After installing the drive, Circuit City techs routinely install any included software and then test the whole arrangement to see if it works.

When doing said testing, the tech in question pulled up Windows XP's search dialog and scanned the whole hard drive for video files. As the list populated, the tech noted that "some of the files appeared to be pornographic in nature" due to their filenames, with some names indicating that they showed 13- or 14-year-old boys. The tech clicked the first one, and immediately stopped the video when a hand appeared in the frame, moving toward the unclothed boy.

Cops were called, an arrest was made, and Sodomsky was charged. His "life was over," he said. But in the court case, he argued that the techs had illegally violated his privacy by looking at the files and that the evidence could not be used against him. A court initially agreed, but a recent appeal to the state Superior Court has now overturned that decision.

The Superior Court opinion, a copy of which was seen by Ars (and first spotted by Cnet), decrees that Sodomsky didn't have an expectation of privacy in this particular case because he had granted the techs permission to install the drive and test it. Because they were only attempting to test the drive and ran only a general search for video files, what they found was permissible.

"We also find it critical to our analysis that when the child pornography was discovered," said the court, "the Circuit City employees were testing the DVD drive's operability in a commercially-accepted manner rather than conducting a search for illicit terms."

The case has been sent back to a lower court, which will now consider the evidence in the case against Sodomsky.

Unanswered by the opinion is the question of what, exactly, the Circuit City techs thought they were testing. According to the court, "the playing of videos already in the computer was a manner of ensuring that the burner was functioning properly." Note that the techs weren't burning a disc, nor reading from one; they were playing a video file from the hard drive. How this proves that a DVD burner is installed correctly is unclear.

The opinion does provide some guidelines for thinking about search-and-seizure issues and privacy concerns, though. Not only did Sodomsky surrender at least a bit of his privacy when he asked the techs to work on his machine, but the police who arrived and viewed the video in question didn't violate any rules. Because Circuit City employees invited them into the repair area and showed them the clip, the police didn't violate the Fourth Amendment.

Bali summit deal reached; tears and recriminations begin


This weekend saw the conclusion of the United Nations Framework Convention on Climate Change summit in Bali, Indonesia, a day later than planned. The summit, designed to put in place a global plan to tackle climate change once the Kyoto Protocol expires in 2012, was highly fractious, and intransigence by a number of nations threatened to scuttle any agreement. At the last minute, however, a measure of consensus was achieved, and an action plan was approved by the member nations.

As we've reported before, the summit's aim was to craft a successor to the Kyoto Protocol, which expires in 2012, in light of the reams of data contained within the Fourth Assessment Report (4AR), the recent four-part study concluded by the Intergovernmental Panel on Climate Change (IPCC). The 4AR paints a much bleaker picture of the world's climate than before, with all signs pointing to a greater degree of disruption than previously predicted. Some of these changes are already underway, but there is still a window for action, albeit a short one, to prevent some of the worst effects.

Running a day over schedule, the Bali conference agreed on a roadmap on Saturday that puts in place a two-year process to negotiate widespread reductions in anthropogenic emissions. Two days earlier, the European Union had been highly critical of the US' continued intransigence on the issue. The mood could be summed up by an impassioned plea from Papua New Guinea's representative, who told the US, "If you're not willing to lead, get out of the way." The plea was effective, as the US agreed to support the roadmap.

However, although an agreement was reached, one has to question its worth. The US opposition centered on the EU and China's proposal for developed nations to cut emissions to 25-40 percent below 1990 levels, and on the lack of any concrete demands on the developing world. As a result, the EU's targets have been dropped in favor of a vaguer commitment to "deep cuts," and the US is already seen by many to be backtracking on the plan.

The failure to address the developing world's emissions in Kyoto has been used by politicians of all flavors in the US over the past decade to stall any meaningful action on the national level, and it's hard to envision the response to Bali being much different. New York's mayor, Michael Bloomberg, spoke to a fringe meeting and pointed to the problem: "…Congress. They're unwilling to face any issue that has costs or antagonises any group of voters," and this surely will.

In light of this, one ought to greet the news of a signed agreement with rather cautious optimism. The need to act, and act quickly, is paramount, and in the immortal words of 24's Jack Bauer, "We're running out of time." But actual implementation will not be easy. Although the current crop of Democratic Presidential hopefuls all see tackling climate change as a priority, unless Congress can be persuaded to take action, those promises will be meaningless.

NFL calls on Ticketmaster to stiff-arm scalpers


Ticketmaster, whose onerous fees are all but impossible to avoid if you want to see a live event, has inked an agreement with the National Football League to handle a new ticket resale program for the league. Dubbed the "NFL Ticket Exchange," the service will be operated and hosted by Ticketmaster, and will also be accessible from NFL.com. Ticketmaster and the NFL say that the service, which will launch in time for the 2008 season, will allow ticket holders to unload their tickets in a "secure and reliable way."

Ticketmaster currently runs exchange sites for 18 of the NFL's 32 teams. On team sites, tickets are sold at face value, with the company taking a cut at each end of the transaction (i.e., convenience fee, processing fee, printing fee, fee calculation fee). It's an attractive alternative to scalpers for those looking to get seats to an enticing match-up, but those who are looking to sell their tickets will certainly be tempted by the higher prices they can get from a site like StubHub, eBay, or Craigslist.

It's not known whether season ticket holders, who purchase the vast majority of tickets in the NFL, will be forced to use the NFL Ticket Exchange. Some of the teams that currently run similar sites prohibit season ticket holders from reselling their ducats—much to the chagrin of those who have paid thousands of dollars for personal seat licenses on top of the tickets.

One team has even gone to court in an attempt to hunt down and punish season ticket holders who have resold their tickets at a profit. This past October, the still-perfect New England Patriots obtained a court order forcing StubHub to cough up the names of the team's ticket holders who have sold tickets on the site. Superior Court Judge Allan van Gestel wrote that the team's desire to be good corporate citizens and report "to authorities those customers that they deem to be in violation of the Massachusetts antiscalping law" was a factor in his ruling.

That said, it's disingenuous to suggest that the Patriots—and other teams—are trying to take the high moral ground in the fight against scalpers. Yes, teams have an interest in making sure fans can afford to attend a game without taking out a second mortgage (although said mortgage will certainly come in handy when it's time to buy beer, hot dogs, a program, and a souvenir jersey).

What the teams are really after is complete control of the ticketing pie. Teams haven't historically profited from ticket resales, and in a market where teams are looking to extract every last bit of revenue possible, selling the same tickets twice looks awfully attractive. The Chicago Cubs have been doing it since 2002, allowing ticket holders to resell their passes on the team's site in exchange for a cut of the profits.

A decade ago, services like the NFL Ticket Exchange and StubHub were not possible, but the Internet and the way it breaks down market barriers have made ticket resale a profitable business. Depending on the ticketing technology used, it's theoretically possible to track a single ticket from the point of issue to the stadium gate—especially in the case of e-tickets. That presents an opportunity for sports leagues and Ticketmaster—and a threat to resellers like StubHub.

How well the NFL Ticket Exchange is received will depend on a single factor: whether or not the league tries to make it the sole outlet for resale. With some teams having waiting lists in the tens of thousands for season tickets, the threat of losing the coveted tix will certainly steer many season ticket owners toward the league-sanctioned site. On the other hand, the dreaded Ticketmaster Tax might not make scalpers look that bad.

Developing apps for Google Android: it’s a mixed bag


The software development kit for Google's Linux-based Android mobile phone operating system has been out in the wild for over a month now, plenty of time for developers to form opinions of the platform and assess the capabilities of the API. The verdict from seasoned mobile software programmers is somewhat mixed; some are even expressing serious frustration.

I put Android to the test myself in an attempt to see how bad the situation really is. What I discovered is a highly promising foundation that is plagued by transitional challenges and a development process that needs more work. Android has many bugs, some of which are impeding development. Unfortunately, Google's QA infrastructure for the platform is completely inadequate for a project of Android's scope and magnitude.

There is no public issue-tracking system for Android. Instead, users post information about the bugs they encounter in the Android Developer Google group and hope that one of Google's programmers sees it so that it can be added to Google's private, internal issue-tracking system. Users have no way to track the status of bugs that they have reported, and they never know whether the issue is being addressed at all until after it is resolved, at which point it is mentioned in the release notes for a new SDK release.

"Unfortunately there is currently no externally-accessible issue-tracking system," wrote Google developer Dan Morrill in response to complaints about the bug reporting situation. "We are considering how we might implement such a system, but we don't have an answer yet. The biggest snag is simply keeping our internal issue tracker in sync with an external one. So, it's a process problem, rather than a technical problem."

Companies like Skype, Nokia, and Trolltech all have public issue tracking systems for their software, so one has to wonder why Google, with all of its resources, can't do the same for Android. This is a pretty clear symptom of a dysfunctional development process. In an effort to minimize the frustration of not having centralized issue tracking, users have started to independently catalog known bugs at an unofficial Android wiki.

Another major problem with Android is its lack of documentation. The API reference material doesn't provide enough information, and one sometimes has to experiment (that is, guess) to figure out what the parameters for various methods actually do. In many cases, I found the developer discussion group to be far more informative than the API documentation. I also grew frustrated with some of the inconsistencies in the API naming conventions, an issue that other developers have complained about as well.

Working with the layout model for the Android user interface can also be frustrating. The code samples mostly emphasize the XML-based user interface description language, so there aren't enough examples that demonstrate programmatic layout techniques.
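
To show what the programmatic route looks like, here's a minimal sketch of building a layout in code rather than XML. Treat it as illustrative: it reflects my reading of the current SDK, and exact signatures (onCreate's parameter in particular) have shifted between SDK releases:

```java
import android.app.Activity;
import android.os.Bundle;
import android.view.ViewGroup.LayoutParams;
import android.widget.LinearLayout;
import android.widget.TextView;

public class ProgrammaticLayoutDemo extends Activity {
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // Build the view hierarchy entirely in code -- no XML involved.
        LinearLayout root = new LinearLayout(this);
        root.setOrientation(LinearLayout.VERTICAL);

        TextView label = new TextView(this);
        label.setText("Laid out programmatically");

        // Width fills the screen; height wraps the text.
        root.addView(label, new LinearLayout.LayoutParams(
                LayoutParams.FILL_PARENT, LayoutParams.WRAP_CONTENT));

        setContentView(root);
    }
}
```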

A recent article in the Wall Street Journal illuminates problems encountered by other developers who are attempting to build applications for Android. "Functionality is not there, is poorly documented or just doesn't work," MergeLab mobile startup founder Adam MacBeth told the WSJ. "It's clearly not ready for prime time."

The strength of an android

Although Android has its share of problems, there are some places where it really shines. The Eclipse plug-in is quite effective and provides excellent integration with the Android emulator. Every time you initiate the debugging process in Eclipse, it will start up your program inside the emulator and connect it to the Eclipse debugger automatically. You don't even have to close the emulator between tests; you can just modify the code and run the debug process again, and it will restart your program in the currently running emulator instance. The seamless support for breakpoint debugging is so effective that it feels like developing a regular desktop application.

The setup is also surprisingly easy if you already have Eclipse installed. You just unzip the SDK, install the Eclipse plug-in, tell it where you put the SDK, and you are good to go. Going from downloading the SDK to compiling my first Hello World program took me less than ten minutes. In that respect, Android provides a much better experience than Maemo, for instance, which is a real pain to set up.
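
For reference, a Hello World of the sort I compiled is just a few lines; this is a representative sketch rather than my exact file:

```java
import android.app.Activity;
import android.os.Bundle;
import android.widget.TextView;

public class HelloAndroid extends Activity {
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // The entire UI is a single TextView; Eclipse deploys this to the
        // emulator and attaches the debugger automatically.
        TextView tv = new TextView(this);
        tv.setText("Hello, Android");
        setContentView(tv);
    }
}
```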

There are a few other places where Android surprised me with goodness. The ScrollView widget appears to support kinetic scrolling right out of the box, which means that developers won't have to implement that at the application level like they do with Maemo. I was also impressed by Android's seamless support for screen rotation and multiple resolutions. The emulator comes with several skins that you can use to test your application at various screen sizes and in different orientations. My applications worked fine in both horizontal and vertical orientations without requiring any custom programming. The user interface changed to accommodate horizontal orientation in much the same way a regular desktop application changes when it is resized.

To test out the API, I wrote a few experimental Android programs, including a Twitter client. The API is moderately conducive to rapid application development, but there are still some gaps. Although the API offers a lot of really nice functionality for animated transitions, alpha transparency, and other similar visual effects, it doesn't make it easy to create applications that have a really polished look and feel. For my Twitter application, for instance, I wanted to put a nice picture in the background and have a transparent, rounded rectangle with a border behind each tweet, but those kinds of embellishments end up being way more trouble than they are worth in Android. By comparison, getting the same effect with XUL only requires a few trivial lines of CSS.
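
For what it's worth, here's roughly the effect I was after for each tweet, sketched as a programmatically built drawable. This is a hypothetical helper rather than code from my client, it assumes GradientDrawable behaves as documented, and the colors and radii are placeholders:

```java
import android.graphics.drawable.GradientDrawable;
import android.view.View;

public class TweetStyling {
    // Apply a translucent, rounded, bordered background to a tweet view.
    static void styleTweet(View tweetView) {
        GradientDrawable bubble = new GradientDrawable();
        bubble.setColor(0x80FFFFFF);      // white at roughly 50 percent alpha
        bubble.setCornerRadius(8.0f);     // corner radius in pixels
        bubble.setStroke(2, 0xFF444444);  // 2px dark gray border
        tweetView.setBackgroundDrawable(bubble);
    }
}
```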

I had a hard enough time getting the basic layout to look right even without thinking about embellishments. Android would benefit greatly from a drag-and-drop design utility that provides an interactive approach to layout and exposes all of the widget attributes in a clear and expressive way.

Despite some of the bugs and limitations in the API, Android is definitely a viable and effective platform for application development. My Twitter client was only about 130 lines of code, which is impressive. That said, I could still crank out applications faster with JavaFX Script. In general, I think that JavaFX Mobile is probably the competing platform that is most analogous to Android.

The inevitable comparisons between Android and the iPhone platform seem a bit misguided now that I've really worked with Android. The iPhone platform seems to be tailored to a very specific kind of user experience that is particular to the hardware. Apple has always been good at leveraging the tight coupling between its hardware and software, and the iPhone is no exception. With the iPhone, Apple has sacrificed the potential for hardware diversity but gained in the process the ability to make innovative technologies like multitouch a ubiquitous part of the user experience. Android, on the other hand, has to be designed from the ground up to support an extremely diverse range of hardware devices with vastly different capabilities.

Android's design seems to be heavily focused on making as few assumptions as possible about the kinds of devices on which the software will run. And seriously, some of those devices are going to be monstrously ugly clunkers compared to the iPhone.

It's important to remember that Android is still in early stages of development and that its present weaknesses aren't indicative of failure. Devices with Android won't even start to hit the market until later next year, so this is like a pre-release aimed at spurring early development so that a healthy ecosystem of third-party software applications is available at launch.

Despite its pre-release status, some of Android's weaknesses are indefensible. Google's Android team needs to get its act together and figure out how to interact with a rapidly growing community of professional and enthusiast developers. The "release early and often" strategy is generally a good thing, but it utterly fails when the infrastructure isn't in place to facilitate proper handling of user feedback. Google has a habit of embracing the early-release philosophy with a little too much enthusiasm, and the current situation with Android is emblematic of that approach.