
New URL features can make your e-mail productive again

Posted on 27/01/2019 by

One of the incredibly useful things about Mac OS X in general is the potential for integration between applications. Type the name of an Address Book contact in Gus Mueller's VoodooPad and it'll link the name, offering a useful contextual menu. Collect icons in CandyBar? Right click one and you can set it as your iChat avatar with the option of applying any of Leopard's new image effects. And let's not even get started on the power that AppleScript and its mortal-friendly Automator enable for moving and manipulating data between applications.

It is with this integration in mind that some new features in a couple of Mac OS X e-mail clients deserve a highlight, as they're fairly game-changing developments for those who have to work with mail on a regular basis. First is the discovery of Leopard Mail's support of message URLs, explored in-depth by John Gruber at Daring Fireball. Though the new feature is strangely undocumented by Apple, users have discovered that Mail now supports a system of URLs (yes, URLs can do more than point to porn) that allow you to link specific messages in other applications. For example, you could include links to a couple Mail messages from coworkers alongside notes, pictures, and web links in OmniOutliner or Yojimbo documents. This opens up a whole new productivity world, allowing you to bring your e-mail into other applications that aren't specifically designed to integrate with Leopard's Mail.

To help make it easy for users to harvest these message links (as of 10.5.1, Mail doesn't provide an option, and not all applications create the proper Mail message URL from a simple drag and drop yet), Gruber includes the code for a simple AppleScript at the end of his post. Save that script with Script Editor (found in /Applications/AppleScript/) and call it via any number of methods, such as Mac OS X's own AppleScript menubar item, Red Sweater's FastScripts, or launchers like Quicksilver and LaunchBar. The newest Leopard version of indev software's MailTags plug-in for Mail also provides a dedicated menu option for copying a message URL.
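For the curious, the URL scheme itself is simple. Based on the format Gruber documents, a Mail message URL is just the message's RFC 2822 Message-ID wrapped in percent-encoded angle brackets. Here's a minimal Python sketch of that construction (our own illustration, not Gruber's AppleScript; exactly which characters Mail requires to be percent-encoded is an assumption on our part):

```python
from urllib.parse import quote

def mail_message_url(message_id: str) -> str:
    """Build a Leopard Mail message:// URL from an RFC 2822 Message-ID.

    Assumes the message://%3C...%3E form described in Gruber's post,
    with the angle brackets percent-encoded.
    """
    # Strip any angle brackets the raw header may already carry.
    bare_id = message_id.strip().lstrip("<").rstrip(">")
    return "message://%3C" + quote(bare_id) + "%3E"

print(mail_message_url("<B1946AC9@example.com>"))
# -> message://%3CB1946AC9%40example.com%3E
```

Paste a URL like that into any application that supports clickable links, and clicking it should bounce you straight to the corresponding message in Mail.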

If this integration has your productivity gears turning, but Gmail is your client of choice, Mailplane could offer a nice compromise. As a desktop Gmail browser that allows for things like drag-and-drop file attachment and even an iPhoto plug-in for e-mailing photos, Mailplane is more or less a bridge between the convenience of webmail and the integrated power of desktop clients.

New in the most recent private betas of Mailplane (1.55.4 and above) is a similar URL system for Gmail messages, which appears to work on both Leopard and Tiger. Complete with an Edit > Copy as Mailplane URL command, the feature allows users to paste custom mailplane:// URLs into other applications to bring mail out of Gmail and into their productivity workflows. Remember, though, that Mailplane is still a browser for Gmail, albeit with the aforementioned modifications and other useful things like Growl notifications and support for multiple accounts (including Google Apps). Since it isn't an offline mail client, you'll still need to be online for a Mailplane URL to connect to its corresponding Gmail message.

Still, these new message URL features in two useful Mac e-mail clients will likely see some official integration love from other third-party apps in the near future. Aside from DIY AppleScripts, apps like Yojimbo and TextMate can only benefit from being able to include e-mail in the productivity mix. Knock knock third parties—how's about it?

GOOG vs. Guge: Chinese company sues Google over name

Posted on 27/01/2019 by

Google's Chinese business has been consistently questioned and criticized, but now a Chinese company has taken issue simply with its name.

Beijing Guge Sci-Tech is suing Guge, or Google China, claiming that the Internet search giant is trampling on its good and, perhaps most important, registered Chinese Mandarin business name. Guge Sci-Tech registered its name in 2006, a few months before Google did. Now the tech company wants Google to change the name of its Chinese branch and pay an undisclosed sum to cover all its legal fees.

Beijing Guge Sci-Tech registered its name at the Beijing Municipal Industrial and Commercial Bureau on April 19, 2006, and Google followed with registering "Guge" on November 24 that same year. This similarity in names, Beijing Guge Sci-Tech argues, has confused the public and damaged its business.

However, if Google was already considering the name before Beijing Guge Sci-Tech registered it, that could work in Google's favor, since it would suggest that Google registered its name in good faith. And, for all we know, the Chinese-based Guge may be little more than a trademark registration or a cybersquat, as information on the company is extremely hard to come by. Google China suggests that Guge Sci-Tech is indeed looking for an easy payout, perhaps having picked up on Google's plans by paying attention to Western media. Everyone knew Google would be changing its name in China, as "Goo-Gol" means "old" or "lame dog."

Also at issue between the companies is the definition of guge, which is not a normal Chinese word. Google says it's a combination of Chinese characters that mean "valley" and "song"—a reference to Google's Silicon Valley ties. Beijing Guge Sci-Tech disagrees, stating the word means "a cuckoo singing in the spring, or the sound of grain singing during the harvest autumn time."

Doing it with style: bringing more bling to GTK with OpenGL

Posted on 27/01/2019 by

At FOSSCamp in October, skilled eye-candy expert Mirco Müller (also known as MacSlow) hosted a session about using OpenGL in GTK to bring richer user interfaces to desktop Linux applications. Building on the technologies that he presented at FOSSCamp, Müller recently published a blog entry that demonstrates his latest impressive experiments with OpenGL, GTK, and offscreen rendering.

Müller is currently developing the GDM Face Browser, a new login manager for Ubuntu that will include enhanced visual effects and smoothly animated transitions. To implement the face browser, he will need to be able to seamlessly combine OpenGL and conventional GTK widgets. Existing canvas libraries like Pigment and Clutter are certainly robust options for OpenGL development, but they do not offer support for the kind of integration that he envisions.

The solution, says Müller, is to use offscreen rendering and the texture_from_pixmap OpenGL extension. In his experimental demo programs, he loads GTK widgets from Glade user interface files, renders them into offscreen pixmaps, and then uses texture_from_pixmap to display the rendered widgets in a GTK/GLExt drawing area, where they can be transformed and manipulated with OpenGL. Müller has created several demo videos that show this technique can be used to apply animated transitions and add reflections to widgets. The visual effects implemented by Müller with GTK and OpenGL do not require the presence of a compositing window manager.
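Müller's actual demos lean on GTK internals and the GLX extension, but the core pipeline (render into an offscreen buffer, then treat the result as a texture you can transform during compositing) is easy to sketch in miniature. Here's a rough, software-only analogue in pycairo (our own illustration, not Müller's code; it substitutes cairo compositing for the OpenGL step):

```python
import math
import cairo

SIZE = 200

# 1. Render "widget" content offscreen (stands in for the
#    GTK-widget-to-pixmap step in Müller's demos).
offscreen = cairo.ImageSurface(cairo.FORMAT_ARGB32, SIZE, SIZE)
ctx = cairo.Context(offscreen)
ctx.set_source_rgb(0.2, 0.4, 0.8)
ctx.rectangle(20, 20, SIZE - 40, SIZE - 40)
ctx.fill()

# 2. Composite the offscreen result with a transform applied
#    (stands in for mapping the pixmap onto GL geometry).
final = cairo.ImageSurface(cairo.FORMAT_ARGB32, SIZE, SIZE)
out = cairo.Context(final)
out.translate(SIZE / 2, SIZE / 2)
out.rotate(math.radians(15))   # the "manipulation" step
out.translate(-SIZE / 2, -SIZE / 2)
out.set_source_surface(offscreen, 0, 0)
out.paint_with_alpha(0.8)      # a cheap fade; reflections work similarly
final.write_to_png("composited.png")
```

In Müller's version, step two happens on the GPU instead: the widget's pixmap is bound as an OpenGL texture with glXBindTexImageEXT and mapped onto geometry, which is what makes the transformations essentially free.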

We talked to Müller to get some additional insight into the challenges and benefits of incorporating OpenGL into desktop applications. One of the major deficiencies of the current approach, he says, is that users will not be able to interact with GTK widgets while they are being animated with OpenGL—a limitation that stems from lack of support for input redirection at the level of the toolkit.

"Interaction will only be possible at the final/original position of the widget," Müller told us, "since gtk+ has no knowledge of the animation/transformation taking place. I consider it to be much work to get clean input-redirection working in gtk+. There might be some ways to achieve it using hacks or work-arounds, but that should be avoided."

Eye candy or UI improvement?

Although some critics might be inclined to prematurely deride Müller's work as indulgent eye-candy, he primarily envisions developers adopting OpenGL integration to tangibly improve the user experience by increasing usability. "I would like to see [OpenGL] being used in applications for transition effects," he says. "We can right now improve the visual clues for users. By that I mean the UI could better inform them what's going on. Widgets [shouldn't] just pop up or vanish in an instant, but gradually slide or fade in. These transitions don't have to take a lot of time. As a rule of thumb half a second would be sufficient."

In particular, Müller would like to experiment with adding animated transition effects to the GTK notebook and expander widgets. He also has some creative ideas for applying animations to the widget focus rectangle in order to make its movement more visible to the user. Müller also discusses some applications that would benefit from OpenGL-based transitions. In programs like the Totem video player, he says, the playback controls in fullscreen mode could slide in and out rather than just appearing and disappearing. Alluding to the webcam demo that he made for FOSSCamp, he also points out the potential for incorporating OpenGL effects into video chat programs like Ekiga. Müller has also long advocated using visual effects to create richer and more intuitive user interfaces for file and photo management software—ideas that he has put into practice with his brilliant LowFat image viewer.

"The kind of effects possible if you can render everything into a texture, map it to a rectangle or mesh and then do further manipulations with GL (or shaders) are next to limitless," says Müller. "Just look at what Compiz allows you to do to windows now. Imagine that on the widget-level."

We also asked Müller to explain the performance implications of using OpenGL in GTK applications. "The memory-overhead is directly linked to the size of the window, since all the rendering has to happen in an OpenGL-context filling the whole window. The bigger the window, the bigger the needed amount of video memory," Müller explains. "The load caused on the system is directly linked to the animation-refresh one chooses. 60Hz would be super smooth and very slick. But that's a bit of an overkill in most cases. One still gets good results from only 20Hz."
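The arithmetic behind those numbers is worth spelling out: for the half-second transitions Müller recommends, the refresh rate determines both how many frames get rendered and how much time each frame has. A quick back-of-the-envelope calculation (ours, not Müller's):

```python
# Frame count and per-frame budget for a 0.5 s transition
# at the two refresh rates Müller mentions.
DURATION_S = 0.5

for hz in (20, 60):
    frames = int(DURATION_S * hz)   # frames rendered over the transition
    budget_ms = 1000.0 / hz         # time available to render each frame
    print(f"{hz}Hz: {frames} frames, {budget_ms:.1f}ms per frame")

# 20Hz: 10 frames, 50.0ms per frame
# 60Hz: 30 frames, 16.7ms per frame
```

Tripling the refresh rate triples the rendering work for the same half-second effect, which is why Müller calls 60Hz overkill in most cases.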

There are still some performance issues with texture_from_pixmap that are actively being resolved. "Due to some issues in Xorg (or rather DRI) there are still too many copy-operations going on behind the scenes for GLX_EXT_texture_from_pixmap," says Müller. "There are also locking issues and a couple of other things. At the moment I cannot name them all with exact details. But more importantly is the fact that there's currently work being done—in the form of DRI2—by Kristian Hoegsberg (Red Hat) to fix these very issues on Xorg. I cannot stress enough how important this DRI2 work is!"

Although using OpenGL incurs additional memory consumption and system load, Müller says that the impact is small enough to make his current approach viable for projects like the Ubuntu Face Browser.

Although individual developers can use Müller's boilerplate code to incorporate OpenGL into their own GTK programs, Müller suspects that support for this technique will not be included directly in GTK at any time in the near future, which will make it harder for developers to adopt. "Right now it is all happening in application-level code and not inside gtk+," Müller explains. "Since I use GLX_EXT_texture_from_pixmap to achieve efficient texture-mapping out of the widgets' pixmap it is currently an X11-only solution. Therefore I imagine they might want to see a more platform-independent solution to this first." The GTK offscreen rendering feature that Müller uses also currently lacks cross-platform support.

Despite the challenges and limitations, Müller's creative work opens the door for some profoundly intriguing user interface enhancements in GTK and GNOME applications. "There are more things possible than you think," says Müller in some concluding remarks for our readers. "Don't have doubts, embrace what's available. X11 and gtk+ are pretty flexible. People who just don't seem motivated to explore ideas and test their limits (or the limits of the framework), should remember that this is Open Source. It lives and thrives only when people step up and get involved. Just f*****g do it!"

When consoles and PCs collide: Unreal Tournament 3 reviewed

Posted on 08/08/2019 by

When men were men and mice had balls

Unreal Tournament 3
Developer: Epic
Publisher: Midway
Platform: PC, PlayStation 3 (Xbox 360 in the future)
Price: $49.99 PC (Shop.Ars) $59.99 PS3 (Shop.Ars)
Rating: M (Mature)

Unreal Tournament 3 feels like an anachronism. Back in the glory days of competitive first-person shooters, it was all about speed, reflexes, and having the best route through each map. Get the health, get the best weapons, and be able to nail a moving target with a headshot. The days of Quake and Unreal were great for those of us with gaming ADD; I used to put on headphones and blast my favorite metal songs while blowing away my friends. I could get out all my aggression in about half an hour of playing those games.

Then something happened. Things slowed down. Vehicles were put in. People were expected to play in large teams, hang back, use their "classes." The change that games like Battlefield 1942 brought to multiplayer games was an interesting shake-up, but then everything went that way. It's been a while since we've had a game that was all reflex, that was a mad dash to your favorite weapon, where you could download a new map or gametype every day if you wanted to.

I'm too awesome to notice that Darkwalker behind me.

Unreal Tournament 3, even with its odd name (it comes after Unreal Tournament and apparently combines Unreal Tournament 2003 and Unreal Tournament 2004 as Unreal Tournament 2, while leaving the single-player games aside), doesn't really have to do much as a game for it to get a hearty "buy" recommendation. With any of the Unreal games you're basically buying the latest iteration of the Unreal engine, with a few examples of what it can do. Sure, the out-of-the-box experience usually delivers some amazing thrills, but the real value is what amateur coders and enthusiasts will do with the game using the robust editors and tools. You'll be playing free, high-quality Unreal Tournament 3 content for at least the next two years; if the game came with an empty box and an engine license, it would still be a good deal.

But we can't review based on potential, even if amazing stuff is already online. We're going to look at what the game is like now, as well as how it is blazing its own trail on consoles. On the PC, this is an incredibly strong, almost old-school shooter. On the console, it's breaking down barriers.

Everyone moves faster, everything is slower

Ironically enough, those of us who spent way too much time playing Unreal Tournament 2004 will have a harder time getting a feel for UT3 than newcomers. The flak cannon isn't nearly as powerful or as fast. Neither is the rocket launcher. The minigun feels more powerful. Watching how easily nimble opponents can double-tap the A or D button to jump out of the way of the slower rockets while pumping you full of enforcer rounds is a humbling experience. It's still a twitch fest, but spamming rockets isn't the easy road to success.

You'll have plenty of time to figure all this out in the campaign mode, a single-player experience that takes you through the new game modes and maps while explaining what's going on. While this is a good way for rookies and veterans alike to get proficient in the game, the dogged determination to explain why a war is being fought like a first-person shooter is campy at first, then just mildly annoying. The "Field Lattice Generator" that's so important? Yeah, it's the flag. FLaG. Get it? I don't know why the game can't just calm down and be a game, but it's at least amusing in its earnestness.

Still, it's worth it to go through at least a few of the missions in the campaign mode just to figure out what's up. To make the process a little bit more tolerable, you can play with others against the PC-controlled bots in a kind of single-player co-op mode. This can be a good way to learn some tactics while not at the mercy of another human team, and it's a very welcome feature, especially considering how the teammate AI falls apart in objective-based maps.

At first, Unreal Tournament 3 might seem just a cosmetic upgrade to an already old formula, but once you start to really dig in, you get a sense for how much things have changed and how different the playing field now is.

AMD’s Barcelona, Phenom suffer early setbacks

Posted on 08/08/2019 by

It has been over four months since AMD launched the quad-core Barcelona server processor that was to turn the company around, and a little over three weeks since the launch of Phenom, the desktop derivative of Barcelona. So how is the new quad-core part doing? Not so well, thanks to an odd error that has limited the clockspeeds of the new chips at precisely the moment when AMD needs all the GHz it can get to keep up with Intel.

Those of you who've been following the TLB erratum saga will know much of the following story, but if you haven't been keeping track, here's a recap to bring you up to speed.

Slow launch speeds point to design defect

Phenom came out of the gate last month with an underwhelming launch performance that left it looking like an also-ran against Intel's Kentsfield-based 2.4GHz Q6600. With Yorkfield-based quad-core parts already available, AMD badly needed to establish its quad-core part as superior to the Q6600 and failed to do so with Phenom. At 2.3GHz, Phenom still trails the Q6600, and while bumping the chip up to 2.4GHz does shrink Intel's lead, it does not eliminate it.

Shortly after launch day, it emerged that Phenom's slow launch speed was merely a symptom of a deeper problem, and one that also affects Barcelona. AMD announced the existence of an error in Barcelona's and Phenom's TLB (translation lookaside buffer), and that revelation has tainted the current version of the core. Despite AMD's promise that the error is extremely rare, consumers have historically reacted poorly to hardware that they perceive as defective. In this case, the defect might never appear over the entire lifetime of a desktop chip like Phenom—but it's still enough to push away potential buyers.

Though the problem may be rare for Phenom customers, the problem is apparently more acute for Barcelona customers. Server chips generally see more intense cache usage, so Barcelona users are more likely to see TLB-related problems than Phenom users. This being the case, AMD has allegedly stopped shipping Barcelona to customers, although there's some dispute over whether or not there has really been a change in the company's shipping patterns.

Fixing the problem

AMD has created a solution to the flaw that Phenom board manufacturers are required to include, but the fix itself knocks anywhere from 5 percent to 20 percent of performance off the chip. That's not good news for a processor that's already lagging behind its closest Intel counterpart, and it will turn off potential Phenom customers even further.
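To put that in perspective, here's a crude back-of-the-envelope estimate (our own, and it assumes performance scales linearly with the penalty, which real workloads won't match exactly) of what the fix costs a 2.3GHz Phenom in clock-equivalent terms:

```python
# Rough "effective clock" of the Phenom X4 9600 with the TLB patch
# enabled, assuming the 5-20 percent hit scales linearly.
phenom_ghz = 2.3

for penalty in (0.05, 0.20):
    effective = phenom_ghz * (1 - penalty)
    print(f"{penalty:.0%} penalty: roughly {effective:.2f}GHz-equivalent")

# 5% penalty: roughly 2.18GHz-equivalent
# 20% penalty: roughly 1.84GHz-equivalent
```

In the worst case, that leaves a patched 2.3GHz Phenom performing like a sub-2GHz part against a 2.4GHz Q6600 that already beats it clock-for-clock.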

AMD fixed the TLB erratum in the upcoming "B3" revision of Phenom and Barcelona. Unfortunately, that core revision isn't expected until mid-to-late Q1, and it's not clear how willing AMD will be to scale the K10 architecture before the new core arrives. Currently, AMD's roadmap projects the arrival of a 3GHz Phenom in the second quarter of 2008, but that's a very slow ramp considering that Intel's quad-core Yorkfield is already available at 3GHz.

AMD did the right thing by being upfront and honest about K10's TLB erratum, and that may help the company's sales in the longer term by generating consumer goodwill. Regardless of whether or not this occurs, however, Sunnyvale's next few quarters aren't going to be pretty. Even once the processor's TLB issue is corrected, the new quad-core line's projected scaling isn't aggressive enough to close the gap between itself and Intel's Penryn offerings.

Some good news on the desktop front

While Barcelona's future is still looking murky, Phenom has a few bright spots amidst the current clouds. The immediate good news is that chips and boards are available in the retail channel. NewEgg currently carries both the Phenom X4 9500 (2.2GHz) and the Phenom X4 9600 (2.3GHz) at prices of $239.99 and $275.99, respectively. Current Phenom parts obviously aren't going to break any sales records, but AMD is at least shipping the part. The company is also confident that end-users will never experience the TLB bug—so much so, in fact, that it intends to offer them the chance to turn the required BIOS fix on and off at will. Future versions of AMD Overdrive will include such a toggle, allowing enthusiasts to choose for themselves which mode to run in.

AMD also plans to announce a multiplier-unlocked "Black Edition" Phenom in the near future. The chip will run at the same 2.3GHz as the current Phenom X4 9600 and will retail for the same $275.99. There's no word on what kind of ceiling these chips might have—launch processors tended to top out at 2.8GHz or so, but AMD has stated that its current 65nm K8 production is running very well. That's possibly indicative of a general improvement across all AMD product lines, but it may also apply specifically to K8 production.

With Phenom unlikely to catch Core 2 Duo in terms of raw performance, AMD will have to find other areas in which to compete with Intel. Strong execution from ATI, hopefully culminating in the company's return to the high-end market, would certainly help, as would further refinements of the 790 chipset. The company has also given itself some room to cut prices on Phenom and could potentially squeeze further performance or lower power consumption out of K8. There's also the introduction of tri-core and native dual-core products yet to come, both of which could potentially improve AMD's outlook.

Bali climate meeting update: limits out, mitigation aid in (updated)

Posted on 08/08/2019 by

The Bali negotiations, meant to lay the groundwork for a successor to the Kyoto Protocols, are nearing their end, and the picture of what will emerge has become more clear. Despite an impassioned plea by the Secretary-General of the UN and strong backing from the nations of the European Union, the US and China have succeeded in preventing any agreements on hard targets for future carbon reductions. Progress has been made, however, in addressing some of the concerns of the developing world.

Although the Bali negotiations began last week, many high-level officials are arriving now, after the preliminary work has been completed. Ban Ki-moon, Secretary-General of the UN, spoke to the gathered delegates earlier today, calling climate change "the moral challenge of our generation." He presented inaction as a threat to future inhabitants of the planet: "Succeeding generations depend on us," he said. "We cannot rob our children of their future."

To a large extent, however, chances for any definitive action had already passed by the time he spoke. The US delegation entered these negotiations with the intention of blocking any hard limits on carbon emissions, and they carried the day over the objections of the European Union and most developing nations. This position is in keeping with the Bush administration's promotion of long-term aspirational goals, rather than strict limits.

When talking to reporters after his speech, Ban apparently accepted that setting limits at this stage "may be too ambitious," according to the Associated Press. "Practically speaking," he said, "this will have to be negotiated down the road."

With no movement on that front, attention has shifted towards a second major goal of the Bali meeting, aid to the developing world. The New York Times is reporting that more significant progress has been made on that front. Kyoto had set up a mechanism for funding adaptation work in the developing world, but the effort was chronically underfunded and developing nations have found it difficult to get projects approved. The new agreement streamlines the approval process and levies a tax on carbon trading markets to fund the program. Although this is still unlikely to cover anywhere close to the predicted costs the developing world will face due to climate change, the Times suggests that this agreement is being viewed by those nations as a sign that their concerns are at least being treated seriously.

Given the stated positions of major participants going into these talks, these results are fairly predictable. Much of the hard work involved in forging a binding agreement has been put off to the future. Perhaps the most significant consequence of the Bali negotiations is that any agreement on binding emissions limits is unlikely to be completed prior to the end of the Bush administration. Many of the presidential candidates appear far more willing to consider mandated limits on carbon emissions, raising the possibility that the negotiations that have been put off to the future will be less contentious when that future arrives.

UPDATE: The BBC is reporting that the European Union is threatening to retaliate for the US' intransigence at Bali by boycotting future climate talks hosted by the White House. These talks are intended to set the aspirational goals for emissions curbs favored by the US; the absence of major emitters would make them largely pointless. The report suggests that this threat may simply be an EU negotiating ploy, but many have interpreted the Washington talks as little more than an attempt to undermine Bali in the first place.

A first look at KDE 4.0 release candidate 2

Posted on 08/08/2019 by


The KDE development community has been working on KDE 4, a major KDE revision that brings in numerous new technologies, and the official release is scheduled for next month. My colleague Troy has provided in-depth overviews of previous betas, but today we're going to take a look at the second KDE 4 release candidate, which was made available for download earlier this week.

For my tests, I used the KDE 4 RC 2 Live CD image provided by the KDE community. The Live CD is based on OpenSUSE and provides easy access to a complete KDE 4 environment.

Although the release candidate has many rough edges and is significantly lacking in polish, the underlying technologies are all in place and deliver some very impressive functionality. KDE 4 offers some unique architectural advantages over its predecessor, including a completely redesigned desktop and panel system called Plasma, an improved version of the KWin window manager with advanced compositing features, a new modular multimedia framework called Phonon, and a sophisticated hardware API called Solid.

KDE 4 uses a completely new visual style and icon set called Oxygen. The Oxygen widget theme uses a lot of subtle gradients, light shadows, round edges, and background highlighting. Generally, the Oxygen style manages to be attractive without becoming a distraction, but there are still some places where additional work is needed. This is most evident in a handful of preference dialogs where elements are cut off or not rendered correctly. In most of the major desktop applications, Oxygen looks nice. I was also impressed with the flexibility of Oxygen. The appearance preferences dialog gives users extensive control over the colors of specific elements. Oxygen looks great with both dark and light color schemes.

In current versions of KDE, Konqueror is both a web browser and a file manager. In KDE 4, the focus for Konqueror has shifted towards browsing, while Dolphin has become the default file manager. When I first looked at Dolphin back in April, I really liked what I saw. The version of Dolphin included in the release candidate still serves up all the same great features and adds a few more nice ones, like a new column view that is similar to the Mac OS X Finder. Dolphin is powerful, intuitive, and a pleasure to use.

I tested several other applications as well, like Amarok 2 and KWord 2, both of which are impressive despite some rough edges. KWord 2's somewhat unusual user interface is intriguing, but clearly requires more polish to be truly useful. KWord 2 pushes most of the formatting features into a vertical dock that runs along the side of the window, which seems practical for users with widescreen monitors. The Amarok 2 user interface has a lot of nice little aesthetic flourishes and is easily the most polished of the KDE 4 applications.

Currently, the most significant weaknesses in KDE 4 are in the Plasma desktop layer. The Plasma components used for the desktop and the panel are undercooked. One of the primary features of Plasma is support for desktop-embedded widgets—called plasmoids—that are similar in nature to SuperKaramba widgets, Windows Sidebar gadgets, or Mac OS X Dashboard widgets. I tested several of the plasmoids that ship with the release candidate, including a clock, battery meter, calculator, and dictionary. On two occasions, adding a plasmoid to the desktop caused the screen to go black and the computer to become unresponsive, requiring me to forcibly kill and restart Xorg in order to continue with my tests. The plasmoids themselves are also somewhat buggy.

The panel at the bottom of the screen, which is also provided by Plasma, includes a menu button, task list, and notification area. I was unable to find any way to configure the panel, which is significantly less functional than the panel in the current version of KDE.

The second KDE 4 release candidate illuminates the extent to which KDE 4 has matured since the earlier betas, but a massive infusion of debugging and polish is needed before the release next month. Heavy development on KDE 4 will obviously continue after the KDE 4.0 release, so whatever pieces are still missing are sure to befilled in eventually. Some critics point to the deficiencies of KDE 4 and argue that drastic reinvention of basic desktop components might not have been a good idea. After experiencing KDE 4 myself, I have to disagree.

Transitions are always hard, but when the dust settles, a clean break between versions and an opportunity to introduce some innovative new ideas should lead to a stronger user experience. After years of development, unnecessary cruft builds up and things tend to get disorganized. The KDE 4 transition, though it will definitely be rocky at first, gives developers the ability to cut away the cruft and reorganize code in a manner that makes the whole environment more future-proof and easier to maintain.

Evidence of the advantages might not be readily apparent to end users yet, but there are plenty of sweet improvements under the hood in KDE 4. Developer-oriented changes like the much-needed migration from Autotools to CMake, the shift from Qt 3 to Qt 4, or the move from DCOP to D-Bus, for instance, all offer very real advantages.

My experience with KDE 4 revealed a set of hairy, nasty warts that need to be resolved, but it also shone some light on some impressive improvements. Although I'm skeptical that all of the problems with the release candidate can be fixed in only one month, a rough first release seems a small price to pay for the significant long-term advantages offered by the transition.

Microsoft expands XNA development platform with Live functionality

Posted on 08/08/2019 by

Running with the success of XNA, Microsoft's open-ended development platform for use with PC and Xbox 360 game development, the company has moved the suite to the next phase with the launch of the XNA Game Studio 2.0. The new version allows developers to leverage the proprietary technologies of Xbox Live and Games for Windows Live in their creations. The new software is now available for download from the XNA Creator's Club web site.

The headlining feature of the latest build is the addition of networking support to the XNA Framework API for both Xbox Live and Games for Windows Live. The cross-platform networking protocol enables "developers to create multiplayer games in which players on separate machines can compete with each other or play cooperatively."

Also included in the update are a variety of equally significant features, including a new high-level "Game" application model and the opening of the "Guide" class to developers. As Xbox 360 users will know, the Guide is part of the underlying Xbox 360 operating system which allows manipulation of the device, including text entry, storage drive selection, and more. The updated API also includes new methods for detecting alternative input mediums, including the Xbox 360 chatpad.

The XNA platform launched publicly in March of 2006 and was an unusual step forward for the company. The typically closed-platform nature of console development was opened slightly for independent and hobbyist developers who were interested in developing for the increasingly popular platform in a cost-effective and relatively simple way, and cross-platform development was made easier for companies looking to release Xbox 360 and PC software as cost-effectively as possible.

The platform itself leverages a variety of other Microsoft technologies, including Visual Studio and, by extension, C#. In fact, the only real downside (from an end-user perspective, anyway) is the $99 annual fee required to have access to the developer network: the one stipulation that prevents the service from being truly open to the masses.

Nevertheless, the platform has been enjoying success in both real-world and academic applications, and this latest advancement will only serve to further the proliferation of Microsoft's development platform.

Sun to give $1 million to open-source software developers

Posted on 08/07/2019 by

Sun Microsystems has announced a new program in which the company intends to donate a million dollars to independent software developers who contribute to Sun's open source software projects. The Open Source Community Innovation Awards Program was announced by Sun's chief open-source officer, Simon Phipps, at FOSS.IN, an open-source software summit that took place in Bangalore last week.

Although some are comparing Sun's award program to Google's Summer of Code initiative, there are some significant differences. Unlike Google's Summer of Code program—which focused on increasing student participation in open-source software development—Sun's program is open to everyone.

Another crucial difference is that Sun isn't establishing the terms under which the resources are distributed. Instead of creating specific guidelines for individual contributors, Sun will permit participating open-source projects to determine individually how they plan to distribute the award funds allocated to them by Sun. This approach is highly advantageous, because it will allow individual projects to award contributors in a manner that is most appropriate for their development model and goals. For instance, the open-source communities that receive resources from Sun could potentially choose to use the money to set up bounties on specific bugs and features, reward notable longtime contributors, or set up paid internships.

Another major difference between Sun's program and Google's Summer of Code initiative is the nature of the participating projects. Google's Summer of Code program involved a highly diverse assortment of independent open-source projects that aren't directly affiliated with Google. Sun's program, on the other hand, is currently limited only to Sun's own open-source products, including OpenSolaris, GlassFish, OpenJDK, OpenSPARC, NetBeans, and OpenOffice.org. Google's initiative is clearly focused more on giving back to the broader open source community, whereas Sun's program is specifically about rewarding contributors who participate in Sun's communities.

In recent years, Sun has been working hard to build robust open-source software communities around many of its flagship products. For instance, the company has attempted to build mindshare around OpenSolaris by turning it into a more practical platform for day-to-day use with Project Indiana. Sun's new award program isn't charity in any sense of the word, but it does seem like another pragmatic way for Sun to invest in its own community.

Cinema Displays to receive Macworld update? We’re not so sure

Posted on 08/07/2019 by

As one of Apple's products that has gone the longest without an update of any kind, the Cinema Displays are definitely beginning to look a bit long in the… cable. Introduced in June 2004, the Cinemas gained DVI inputs and a new aluminum design to match Apple's Pro machines. The lineup has more or less remained dormant since then.

However, while Apple added iSight cameras to its other machines, it quietly discontinued the standalone version last year. This strange retirement refreshed rumors of a change in the iWind for Cinema Displays, but alas, plenty of announcement opportunities and events have come and gone with nary a word from Apple.

Now, ZDNet is shaking the rumor tree again with an observation that Apple has moved the Cinema Displays from its online store's main page during a supposed minor design shuffle. You can, however, still get to the displays by clicking Mac Accessories > Displays in the left-hand navigation, and they still ship within 24 hours. Typically, if a product in Apple's online store is on the verge of a revision, shipping times shoot through the roof. This reorganization of Apple's online store is likely more about optimizing for holiday shopping than preempting a new display.

If the Cinema Display is, in fact, about to receive a redesign, I doubt that it'll happen at Macworld '08. Like we've said before, outside of a few exceptions (like the new Intel-based MacBook Pros accompanying iMacs at Macworld '06), pro hardware usually doesn't have much of a place at Macworld. That stuff is mostly reserved for other more pro-friendly events like WWDC, NAB, or simply Apple's own press events scattered throughout the year.

Still, there's no arguing that the Cinema Displays need a makeover—particularly one that includes an iSight camera and HDMI support. That said, I think the overall design is looking just fine, and it still perfectly complements the aesthetic of Apple's pro lineup. If anything is actually in store for Apple's displays, we probably still have to wait a bit longer.

The Kohnke affair: PR company admits to pushing positive reviews

Posted on 08/07/2019 by

It's been just a few short weeks since the Gamespot-Gerstmann debacle passed and, though the fallout was beginning to fade a bit, it has kicked back up again in full force. Its revival comes thanks to a lawsuit involving the large game industry public relations company Kohnke Communications. The full suit includes explicit confirmation that the company admits to convincing "reviewers to write positive reviews" about games.

The suit itself is between Kohnke, which prides itself as being the "premier public relations agency for interactive entertainment companies," and developer Perpetual Entertainment. The dispute centers on various issues related to the developer's recently canned project Gods & Heroes as a result of interest shifting towards the company's current project, the MMO Star Trek Online. Kohnke claims to be owed a balance of $10,675 in "outstanding invoices" for the PR services rendered unto Perpetual to promote the game.

What's more interesting than the simple suit, though, is the information that has been revealed about the business practices of Kohnke Communications in its wake. In a copy of the filing seen by Ars, the company admits to a few interesting "PR-related activities" that seem pretty questionable:

Kohnke's public relations campaign was successful in creating pre-release 'buzz' around Gods & Heroes, and in convincing reviewers to write positive reviews about the game. In addition, on information and belief, Perpetual had signed up more than 100,000 beta testers for Gods & Heroes, a large number for an unreleased MMO. These early, pre-launch successes indicated that Gods & Heroes would be a great success upon launch, and that Kohnke would receive an incentive compensation payment of up to $280,000 after the launch of Gods & Heroes.

While working to promote a game is certainly the expected practice of a PR firm, the admission of successfully "convincing reviewers to write positive reviews about the game"—and the subsequent call for reward—is certainly suspect, especially at such a sensitive time in the world of game journalism. Kohnke Communications is a significant player in the game industry, working with many of the biggest players and acting as the face of countless companies at the big trade shows across the country.

We'll be keeping our ear to the ground for more details about the original case, and the reaction that this revelation will likely produce.

Congressman Hollywood: It’s time to revisit the DMCA

Posted on 08/07/2019 by

Rep. Howard Berman (D-CA), also known as Congressman Hollywood, is one of the most powerful members of the House when it comes to intellectual property issues, so when he muses aloud about "revisiting" the DMCA, people listen. Unfortunately, Berman wants to reform the DMCA because it doesn't go far enough, and his ideas sound like they're ripped right from the pages of the Big Content playbook.

Berman chairs the House Subcommittee on Courts, the Internet, and Intellectual Property, and this morning oversaw a hearing on the PRO-IP Act, a bill that could boost statutory damages for copyright infringement and create a special IP enforcement office in the executive branch as well as a new IP division at the Department of Justice. Before witness testimony got underway, Berman mused aloud about things the bill did not contain but which he would like to revisit in the future.

Berman believes that the DMCA, in particular, needs reforming, but not in the ways that consumers have clamored for. Instead, the congressman wants to look again at the issue of "safe harbor" provisions currently extended to ISPs for infringing content flowing across their networks. He wants to examine the "effectiveness of takedown notices" under the DMCA, and he'd like to take another look at whether filtering technology has advanced to the point where Congress ought to mandate it in certain situations.

The ideas could not be more pleasing to companies like Viacom, which is currently suing YouTube over the issue of takedown notices, claiming that simply adhering to the DMCA takedown notice system is not good enough. The MPAA, which has been pushing for ISPs to adopt video filtering on their networks, should also be thrilled.

Big Content has been touting fingerprinting and filtering technologies as the solution to the problem of having their copyrighted content posted online. In October, Viacom and a handful of other companies issued a set of principles governing how user-generated video content should be handled. Signatories to the manifesto would be forced to go beyond the boundaries of the DMCA—in the same direction Rep. Berman wants the DMCA to go, in fact.

Gutting the Safe Harbor provision of the DMCA, as Berman appears to be advocating, would also provide a massive boost to the rights-holders. Conversely, it would have a chilling effect, not only on the likes of YouTube, but on any site that hosts any sort of user-generated content. The Safe Harbor is arguably one of the very few worthwhile provisions of the DMCA. Rewriting it to favor the interests of Big Content would be a gigantic mistake.

Eric Bangeman contributed to this story.

Jackass 2.5’s biggest stunt: skipping the box office

Posted on 08/07/2019 by

Reinventing the term "straight to DVD," Paramount and MTV have decided to try a new concept: "straight to the Internet." With the upcoming Internet-only premiere of Jackass 2.5, the two have also managed to remove the stigma associated with skipping the movie theater and going straight to the small-screen by making a big deal of the venture. The 64-minute feature-length film's debut will be the first of its kind from big studios. If it proves successful, Jackass 2.5 could open the door to more 'Net-only debuts, or, more importantly, simultaneous releases of movies that won't be as monumentally bad as this one is sure to be.

Most of you are probably already familiar with Jackass, the show from MTV that highlighted the injury-prone, dangerous, and otherwise stupid antics of Johnny Knoxville and friends. The empire has grown since its humble beginnings, and box offices have already welcomed the original and sequel Jackass movies to the tune of $64 million and $73 million, respectively. But Jackass 2.5, which only cost $2 million to create (I mean, how much do some skateboards, 80 dozen eggs, a broom, three razors, and a live pig cost anyway?), is apparently just the thing to test out Internet-only distribution with—at least at first.

The plan goes like this: on Wednesday, December 19, Blockbuster will present the movie at http://www.blockbuster.jackassworld.com, which Internet users will be able to stream for free. Ads will be placed before and after the film, and presumably on the web page surrounding the embedded video. The ad-supported, streaming version of the movie will be available through the end of the year, but on December 26, those crazy enough to want to pay for a copy of it will be able to purchase Jackass 2.5 through some of their favorite digital video stores (which includes iTunes and Amazon), as well as on DVD. The movie will go for between $10 and $15 online, and a whopping $30 on DVD (which will include an additional 45 minutes of extras).

This scene from Jackass 2 tells us what to expect from Jackass 2.5.

On January 1, ad-supported services like Joost will begin offering the movie (or clips thereof) for free, and later in the year, customers will be able to watch it through on-demand services. At that time, the website (jackassworld.com) will be turned into a portal of stupidity… er, jackassery… make that "all things jackass," including things like interviews and blog posts.

This isn't the first time a movie has gone 'Net-only. Independent filmmaker Edward Burns decided to debut his $4 million film Purple Violets exclusively through the iTunes Store last month, and larger studios like Fox and Sony have made their own short, promotional films available online. But this will be the first full-length movie, backed by a major studio, that will be introduced online before being sold on physical media. Paramount appears to be confident that the venture will easily pay for itself—it's not as if $2 million is a particularly far-reaching goal. "If this works, it could open up and really change the game about additional content that studios can create," Paramount president Thomas Lesinski told the New York Times.

Lesinski is apparently referring to the type of game-changing content that contains "more vomiting, nudity and defecation," according to one exec speaking anonymously to the New York Times. That's, um, nice and all, but what we would really like to see are movies that would otherwise be fit for the box office, but distributed online first (or at the same time as they hit theaters). The concept of simultaneous releases has been tossed around for some time now, with some adventurous moviemakers even experimenting with it. But overall, the big studios are still slow on the uptake, worrying that they'll never be able to make the kind of revenue they do at the box office through advertising, licensing, and download fees.

But perhaps Jackass 2.5, with its low overhead and easy-to-reach goals, can change that. We're rooting for you and your nut-crushing antics, Johnny.
