Roy Firestein

Security Feeds

Archive for May, 2009

Saved By Junk DNA: Vital Role In The Evolution Of Human Genome

May 31st, 2009. Published under My Recent Reads. No Comments.

pulled from Google Reader (click on title for original post)
Stretches of DNA previously believed to be useless ‘junk’ DNA play a vital role in the evolution of our genome, researchers have now shown. They found that unstable pieces of junk DNA help tune gene activity and enable organisms to adapt quickly to changes in their environment.

Omega Fatty Acid Balance Can Alter Immunity And Gene Expression

May 30th, 2009. Published under My Recent Reads. No Comments.

pulled from Google Reader (click on title for original post)
Using a controlled diet study with human volunteers, researchers may have teased out a biological basis for the increased inflammation observed as humans have shifted the balance of omega fatty acids in their diet.

The Top 5 Cyber Security Myths

May 29th, 2009. Published under My Recent Reads. No Comments.

pulled from Google Reader (click on title for original post)

Given the media hype around the Conficker worm (and now Gumblar), and the constant barrage of alarming disclosure announcements, I thought it would be a good time to take a calmer look at some of the security myths, misconceptions and mistruths that plague the industry.

Many of these cyber security myths have been around for close to a decade. They have driven marketing campaigns and have sold a lot of traditional newspapers. But for the most part these threats have proven much less dangerous than ballyhooed. Worse, they distract us from the routine problems whose solutions actually lead to a more secure global IT environment. Until we can address everyday vulnerabilities and threats, how can we justify focusing on exotic edge cases?

5. China is the Leading Exporter of Cybercrime

China has become the favorite security bad-guy country. If you believe the media hype, half of Beijing is dotted with malware-manufacturing sweatshops turning out some of the most devilishly clever digital pathogens since the Black Death.

There is no doubt that the Chinese military is experimenting with cyberwarfare techniques, and there have been several highly publicized security incidents involving Chinese citizens. But in terms of organized cybercrime, China is not nearly as involved as the pundits say. On the contrary, China has been quite cooperative in working with the international community to address security incidents; it was instrumental in identifying and shutting down the command and control servers for the Conficker worm. China has also implemented tough cybercrime legislation and has worked with international law enforcement to apprehend and prosecute cybercriminals.

4. Insider Threats Trump Outside Attacks

Most recognize that the main impetus for cybercrime has shifted from hobby-based cyber-vandalism to financially motivated theft of data and services. This shift has caused many to question the loyalty of internal employees. But as scary as the image of the bent accountant absconding with millions of confidential records, or the misguided IT consultant destroying decades of intellectual property, may be, the reality remains that external parties commit the majority of security incidents.

Should organizations implement controls to properly manage user access to sensitive information? Yes. Should IT continue to define usage policies and monitor activity for violations? Absolutely. But let’s not allow our attention to drift from the outsiders who initiate the majority of security incidents.

3. Advanced Hacking Techniques Render Conventional Security Pointless

Some 90 percent of all external attacks take advantage of poorly administered, misconfigured, or inadequately managed systems that any moderately competent hacker can exploit. Sure, there are some real artists out there, but when you can take candy from a baby 90 percent of the time, you rarely need expert safecrackers.

It still stands that the majority of external attacks exploit most organizations’ astonishing inability to implement the most basic security controls. Why would criminals go to the trouble of creating elegant methods to bypass advanced controls when they can easily find poorly administered servers in the DMZ running vulnerable versions of BIND, or Windows servers behind firewalls configured “full monty”, i.e. with all ports and protocols open?

2. Mobile Malware Equals Apocalypse Now

There is nothing that would make the anti-virus companies happier than for mobile malware to bring their performance-degrading, signature-based shakedown business to a smartphone near you. The boardroom would be abuzz with talk of record growth and skyrocketing profits. But alas, the onslaught of mobile malware has yet to become the epidemic anti-virus company shareholders so hope for.

Mobile malware will become a reality one day, but that day has not yet come. For the time being, it’s better to focus on improving assets that are actively under threat, such as endpoints, servers, and databases.

1. The End of the Internet is Nigh

The “Warhol” worm is defined as an extremely rapidly propagating computer worm that spreads as fast as physically possible, infecting all vulnerable machines on the entire Internet in 15 minutes or less. The concept emerged shortly after the Y2K hysteria subsided, and has captured headlines ever since. The reality is that the Internet is far more resilient than we give it credit for; short of a world-war level of effort, the Internet will remain just that—a net that may suffer its share of tears and gaps, but will remain functionally intact because people want it that way.
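To see why the 15-minute figure is at least arithmetically plausible, here is a toy growth model in TypeScript. The numbers are illustrative assumptions, not measurements: a population of one million vulnerable hosts and one successful infection per infected host per second, with no bandwidth limits or defenses.

```typescript
// Toy Warhol-worm arithmetic: idealized exponential growth with none of the
// real-world friction (bandwidth, NAT, patching), so it only bounds how fast
// a worm could spread, not how fast one actually would.
const vulnerableHosts = 1_000_000;    // assumed vulnerable population
const infectionsPerHostPerSecond = 1; // assumed conversion rate per host

let infected = 1;
let seconds = 0;
while (infected < vulnerableHosts) {
  infected += infected * infectionsPerHostPerSecond; // population doubles each second
  seconds += 1;
}
console.log(`${seconds} seconds to saturate ${vulnerableHosts} hosts`);
// Prints 20: under these wildly optimistic assumptions, saturation takes
// seconds, which is why the 15-minute definition is not pure fantasy.
```

The arithmetic cuts both ways: the exponential math is real, but so is the friction it ignores, which is exactly why no actual worm has ever hit the doomsday timeline.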

Finally, we must realize that myths often have a grain of truth in them that motivated parties can exaggerate into imminent threats to civilization. This is not to say that some of them are not real or shouldn’t be taken seriously.

China (like a number of nations) does have a thriving cybercrime underground. Insider threats can be devastating to a business. Some ingenious hackers have developed extremely advanced methods to infiltrate networks. The Internet may one day go supernova, and someone, somewhere is probably developing an iPhone worm. But as the old saying goes, let’s change the things we can, endure (but watch carefully) the ones we can’t, and have the wisdom to know the difference.

Researchers make breakthrough in the quantum control of light

May 29th, 2009. Published under My Recent Reads. No Comments.

pulled from Google Reader (click on title for original post)
Researchers at UC Santa Barbara have recently demonstrated a breakthrough in the quantum control of photons, the energy quanta of light. This is a significant result in quantum computation, and could eventually have implications in banking, drug design, and other applications.

Obama Says New Cyberczar Won’t Spy on the Net

May 29th, 2009. Published under My Recent Reads. No Comments.

pulled from Google Reader (click on title for original post)
The administration’s internet overseer will secure government networks and protect critical U.S. infrastructures, but will not spy on private networks.

It’s Time for the FTC to Investigate Mac Security

May 28th, 2009. Published under My Recent Reads. No Comments.

pulled from Google Reader (click on title for original post)
Via MacWorld.com -

When I read the headline about a security researcher who had published proof-of-concept code for a vulnerability, I was upset. To disseminate proof-of-concept code is to basically say, “Here is a way to attack computers for those of you who can’t figure out how to do it yourselves.” The analogy that comes to mind is to throw a gun on a playground and let kids figure out how to load it.

By the time I had finished reading the article, though, my attitude had changed.

The purpose of stunts such as this one is to embarrass a vendor into fixing problems and writing better software. The problem with that scheme is that even when it works exactly as planned, it is users who get hurt, not the vendor. A significant number of users just do not implement fixes when they are available. These people are the ones who suffer (along with all those innocent third parties who pay the price when the PCs belonging to inattentive users are compromised and added to a botnet).

What influenced my change of heart in this case was the fact that the vendor in question was Apple, which has been feckless on the topic of security for a long time. Apple gives people the false impression that they don’t have to worry about security if they use a Mac. And perhaps because the company is invested in fostering that impression, Apple is grossly negligent in fixing problems. The proof-of-concept code in this case is proof that Apple has not provided a fix for a vulnerability that was identified six months ago. There is no excuse for that.

Apple has exuberantly criticized Microsoft for the security vulnerabilities of its products. The fact is, though, that that criticism is grossly misplaced. For its part, Microsoft has been extremely disciplined in ignoring Apple’s advertisements.

The current Mac commercials specifically imply that Windows PCs are vulnerable to viruses and Macs are not. I can’t disagree that PCs are frequent victims of viruses and other attacks, but so are Macs. In fact, the first viruses targeted Macs. Apple itself recommended in December 2008 that users buy antivirus software. It quickly recanted that statement, though, presumably for marketing purposes.

It certainly could not have been for real security reasons. A ZDNet summary of 2007 vulnerabilities showed that there were five times more vulnerabilities for Mac OS than for all types of Windows PC operating systems.

How can Apple get away with this blatant disregard for security? Its advertising claims seem comparable to an automobile manufacturer implying that its cars are completely safe and its competitors’ cars are death traps, when we all know that all cars are inherently unsafe. Claims like those would surely draw the wrath of the Federal Trade Commission. Well, guess what: All commercial software has security vulnerabilities.

Why then is there no investigation of Apple’s security claims and inferences? Where is the FTC? The company’s turn-about on antivirus software should be a red flag to federal regulators. Here’s a company that was telling people that its products were secure, then briefly said they were not secure, and then said it had misspoken, and subsequently used the “Macs are safe” stance as a selling point, when in truth the only way they are safer is that Macs are less attractive to virus writers because there are so few of them. That is security through obscurity, which is always short-lived and a truly terrible security practice. Should Apple be allowed to make such claims? Billions of dollars are at stake, not to mention the public’s computing safety.

And so, much as I hate the concept of releasing proof-of-concept code, I have to wonder whether this is what we need to make the public see how much they are at risk. The mainstream press really doesn’t cover Mac vulnerabilities, and Apple’s “it’s all good” talk seems to be winning the day. When I made a TV appearance to talk about the Conficker worm, I mentioned that there were five new Mac vulnerabilities announced the day before. Several people e-mailed the station to say that I was lying, since they had never heard of Macs having any problems. (By the way, the technical press isn’t much better in covering Mac vulnerabilities.)

I have come to the conclusion that either the FTC must investigate Apple’s advertising claims with regard to security, or people must begin releasing proof-of-concept code on a regular basis. European Union and Canadian regulators can certainly step in as well. With Apple selling more Macs, its attitude is putting more people at risk. And just to be clear, it is not that Apple’s software has security vulnerabilities that is the problem; all commercial software does. The problem is that Apple is grossly misleading people to believe otherwise.

Hospital CIO is a Jedi — Really…. (Stupidsecurity)

May 27th, 2009. Published under My Recent Reads. No Comments.

pulled from Google Reader (click on title for original post)

Linux Kernel Security (SELinux vs AppArmor vs Grsecurity)

May 27th, 2009. Published under My Recent Reads. No Comments.

pulled from Google Reader (click on title for original post)

The Linux kernel is the central component of a Linux operating system. It is responsible for managing the system’s resources, the communication between hardware and software, and security. The kernel plays a critical role in supporting security at higher levels. Unfortunately, the stock kernel is not secure out of the box, and there are several important Linux kernel patches that can harden your box. They differ significantly in how they are administered and how they integrate into the system. They also allow for easy control of access between processes and objects, processes and other processes, and objects and other objects. The following pros and cons list is based upon my personal experience.

Read more: Linux Kernel Security (SELinux vs AppArmor vs Grsecurity)


.NET 4.0 Security

May 27th, 2009. Published under My Recent Reads. No Comments.

pulled from Google Reader (click on title for original post)

The first beta of the v4.0 .NET Framework is now available, and with it comes a lot of changes to the CLR's security system.  We've updated both the policy and enforcement portions of the runtime in a lot of ways that I'm pretty excited to finally see available.  Since there are a lot of security changes, I'll spend the next month or so taking a deeper look at each of them.  At a high level, the major areas that are seeing updates with the v4 CLR are: 

  • Security policy
  • Security transparency
  • APTCA
  • Evidence
  • AppDomain Managers

Like I did when we shipped the v2.0 CLR, I’ll come back and update this post with links to the details about each of the features we added as I write more detailed blog posts about each of them.

Tomorrow, I’ll start by looking at probably the most visible change of the group – the update to the CLR’s security policy system.

Sandboxing in .NET 4.0

May 27th, 2009. Published under My Recent Reads. No Comments.

pulled from Google Reader (click on title for original post)

Yesterday I talked about the changes in security policy for managed applications, namely that managed applications will run with full trust – the same as native applications – when you execute them directly.

That change doesn’t mean that managed code can no longer be sandboxed however – far from it.  Hosts such as ASP.NET and ClickOnce continue to use the CLR to sandbox untrusted code.  Additionally, any application can continue to create AppDomains to sandbox code in.

As part of our overhaul of security policy in v4, we made some interesting changes to how that sandboxing should be accomplished as well.  In previous releases, the CLR provided a variety of ways to sandbox code – but many of them were problematic to use correctly.  In the v4 framework, we made it a goal to simplify and standardize how sandboxing should be done in managed code.

One of the key observations we made about sandboxing is that there really isn’t a good reason for the CLR to be involved in the decision as to what grant set should be given to partial trust code.   If your application says “I want to run this code with ReflectionPermission/RestrictedMemberAccess and SecurityPermission/Execution”, that’s exactly the set of permissions that the code should run with.   After all, your application knows much better than the CLR what operations the sandboxed code can be safely allowed to undertake.

The problem is, sandboxing by providing an AppDomain policy level doesn’t provide total control to the application doing the sandboxing.  For instance, imagine you wanted to provide the sandbox grant set of RestrictedMemberAccess + Execution permission.  You might set up a policy level that grants AllCode this grant set and assign it to the AppDomain.  However, if the code you place in that AppDomain has evidence that says it came from the Internet, the CLR will instead produce a grant set that doesn’t include RestrictedMemberAccess for the sandbox.  Rather than allowing safe partial trust reflection as you wanted, your sandbox just became execute-only.

This really doesn’t make sense – what right does the CLR have to tell your application what should and should not be allowed in its sandboxes?  In the v1.x release of the runtime, developers had to go to great lengths in order to ensure they got the grant set they wanted.  (Eric Lippert’s CAS policy acrobatics to get VSTO working correctly is the stuff of legends around the security team – fabulous adventures in coding indeed!).

As many a frustrated application developer found out, intersecting with the application-supplied grant set was only the tip of the iceberg when it came to the difficulties of coding with CAS policy.  You would also run into a slew of other problems – such as each version of the CLR having an entirely independent security policy to deal with.

In v2.0, we introduced the simple sandboxing API as a way for applications to say “This is the grant set I want my application to have.  Please don’t mess with it.”  This went a long way toward making writing an application which sandboxes code an easier task.

Beginning with v4.0, the CLR is getting out of the policy business completely.  By default, the CLR is not going to supply a CAS policy level that interferes with the wishes of the application that is trying to do sandboxing.

Effectively, we’ve simplified the multiple ways that you could have sandboxed code in v3.5 into one easy option – all sandboxes in v4 must be setup with the simple sandboxing API.

This means that the days of wrangling with complicated policy trees with arbitrary decision nodes in them are thankfully a thing of the past.  All that’s needed from here on out is a simple statement: “here is my sandboxed grant set, and here are the assemblies I wish to trust.”

Next time, I’ll look at the implications of this on code that interacts with policy, looking at what you used to write, and what it would be replaced with in v4.0 of the CLR.

Kismet Newcore RC1 Released

May 27th, 2009. Published under My Recent Reads. No Comments.

pulled from Google Reader (click on title for original post)
http://www.kismetwireless.net/

After 5+ years of development, this staging release is to work out any final minor issues before a full release. Kismet-2009-05-RC1 is expected to be fully functional, so please report problems on the forums or via email. Please read the new README and replace your configuration files, as just about everything about configuring Kismet has changed (for the better!). The old Kismet tree also sees a new release as Kismet-old-2009-05-R1, which incorporates minor fixes and support for some of the newer Intel and Ralink cards/driver names. Both are available from the download page.

Defending Against Movie-Plot Threats with Movie Characters

May 27th, 2009. Published under My Recent Reads. No Comments.

pulled from Google Reader (click on title for original post)
Excellent: Seeking to quell fears of terrorists somehow breaking out of America’s top-security prisons and wreaking havoc on the defenseless heartland, President Barack Obama moved quickly to announce an Anti-Terrorist Strike Force headed by veteran counterterrorism agent Jack Bauer and mutant superhero Wolverine. Already dubbed a “dream team,” their appointment is seen by experts as a crucial step in reducing…

Time to Cash Out: Why Paper Money Hurts the Economy

May 27th, 2009. Published under My Recent Reads. No Comments.

pulled from Google Reader (click on title for original post)

Two years ago, Hasbro came out with an electronic version of Monopoly. Want to buy a house? Just put your debit card into the mag-stripe reader. Bing! No more pastel-colored cash tucked under the board. Turns out it wasn’t Lehman Brothers but Parker Brothers that could smell the future. At least, that’s what participants at this year’s Digital Money Forum believe. In March, after a long day of talks with titles like "Currency 2.0" and "Going Live With Voice Payments," forum attendees at London's plush Charing Cross Hotel gathered for drinks—and, yes, a few rounds of Monopoly Electronic Banking Edition.

Unfortunately, the world’s governments remain stuck in the past. To maintain our stock of hard currency, the US Treasury creates hundreds of billions of dollars worth of new bills and coins each year. And that ain’t money for nothing: The cost to taxpayers in 2008 alone was $848 million, more than two-thirds of which was spent minting coins that many people regard as a nuisance. (The process also used up more than 14,823 tons of zinc, 23,879 tons of copper, and 2,514 tons of nickel.) In an era when books, movies, music, and newsprint are transmuting from atoms to bits, money remains irritatingly analog. Physical currency is a bulky, germ-smeared, carbon-intensive, expensive medium of exchange. Let’s dump it.

Markets are already moving that way. Between 2003 and 2006, noncash payments in the US increased 4.6 percent annually, while the percentage of payments made using checks dropped 13.2 percent. Two years ago, card-based payments exceeded paper-based ones—cash, checks, food stamps—for the first time. Nearly 15 percent of all US online commerce goes through PayPal. Smartcard technologies like EagleCash and FreedomPay allow military personnel and college students to ignore paper money, and the institutions that run dining halls and PXs save a bundle by not having to manage bills and coins or pay transaction fees for credit cards. Small communities from British Columbia to the British Isles are experimenting with alternative currencies that allow residents to swap work hours, food, or other assets of value.

But walled-garden economies are a long way from a fully cashless society. As Wired first noted 15 years ago, to rely exclusively on an emoney system, we need a ubiquitous and secure network of places where people can transact electronically, and that system has to be as convenient as—and more efficient than—cash. The infrastructure didn't exist back then. But today that network is in place. In fact, it's already in your pocket. "The cell phone is the best point-of-sale terminal ever," says Mark Pickens, a microfinance analyst with the Consultative Group to Assist the Poor. Mobile phone penetration is 50 percent worldwide, and mobile money programs already enable millions of people to receive money from or “flash” it to other people, banks, and merchants. An added convenience is that cell phones can easily calculate exchange rates among the myriad currencies at play in our world. Imagine someday paying for a beer with frequent flier miles.

Opponents used to argue that killing cash would hurt low-income workers—for instance, by eliminating cash tips. But a modest increase in the minimum wage would offset that loss; government savings from not printing money could go toward lower taxes for employers. And let’s not forget the transaction costs of paper currency, especially for the poor. If you’re less well off, check-cashing fees and 10-mile bus rides to make payments or purchases are not trivial. Yes, panhandlers will be out of luck, but to use that as a reason for preserving a costly, outdated technology would be a sad admission, as if tossing spare change is the best we can do for the homeless.

Killing currency wouldn’t be a trauma; it’d be euthanasia. We have the technology to move to a more efficient, convenient, freely flowing medium of exchange. Emoney is no longer just a matter of geeks playing games.

Contributing editor David Wolman (david@david-wolman.com) wrote about Dutch climate engineering in issue 17.01.

Backtrack 4 USB How-to Updated for Nessus 4.0.1

May 27th, 2009. Published under My Recent Reads. No Comments.

pulled from Google Reader (click on title for original post)

Just a quick note to let you know that the Backtrack 4 USB How-to with Persistent Changes and Nessus has been updated for Nessus 4.0.1.

That is all.

-Kevin

FBI Announces Annual Can-You-Crack-the-Code Challenge

May 27th, 2009. Published under My Recent Reads. No Comments.

pulled from Google Reader (click on title for original post)

[Image: the FBI’s “Can You Crack the Code” challenge]

Over the long Memorial Day Weekend, the Department of Justice’s (DOJ) Federal Bureau of Investigation (FBI) released their annual Can You Crack the Code Crypto Challenge, the image of which is published above. More information appears after the jump.

FBI

CAN YOU CRACK A CODE?
Try Your Hand at Cryptanalysis

We’ve challenged you twice before—in November 2007 and December 2008—to unravel a code and reveal its secret message, just like the “cryptanalysts” in our FBI Laboratory.

This time we’ve used a different set of characters entirely—ancient runes that are sometimes used by criminals to code their communications. Give it a try!


Once again: if you want a primer on basic cipher systems and how to break them, see the article “Analysis of Criminal Codes and Ciphers.”

And if you’re a youngster, we suggest you start with the code on our Kids’ page.
Note: sorry, but cracking this code doesn’t guarantee you a job with the FBI! But do check out careers with us at FBIJobs.gov.

To learn more about code-breaking in the FBI:
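The primer mentioned above starts with simple substitution ciphers, and it is easy to get a feel for how fragile those are. Below is a toy TypeScript sketch (a Caesar shift and a brute-force break), purely for illustration; it has nothing to do with the FBI’s actual puzzle, which uses a different system entirely.

```typescript
// Toy Caesar cipher: the kind of system a cryptanalysis primer starts with.
const A = "A".charCodeAt(0);

function caesarEncrypt(plain: string, shift: number): string {
  return plain
    .toUpperCase()
    .replace(/[A-Z]/g, (c) =>
      String.fromCharCode(((c.charCodeAt(0) - A + shift) % 26) + A)
    );
}

// With only 26 possible keys, "breaking" the cipher is just trying every
// shift and picking the candidate that reads as language.
function bruteForce(cipher: string): string[] {
  const candidates: string[] = [];
  for (let shift = 1; shift < 26; shift++) {
    candidates.push(caesarEncrypt(cipher, 26 - shift)); // decrypt = shift back
  }
  return candidates;
}

const secret = caesarEncrypt("ATTACK AT DAWN", 3); // "DWWDFN DW GDZQ"
console.log(bruteForce(secret).find((s) => s.includes("ATTACK"))); // recovered
```

Real criminal codes, runic or otherwise, fall to the same basic tools of frequency analysis and pattern matching, just with more work.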


Ann Cavoukian reappointed as Ontario Privacy Commissioner

May 27th, 2009. Published under My Recent Reads. No Comments.

pulled from Google Reader (click on title for original post)

The Information and Privacy Commissioner of Ontario has been reappointed for a third term. Here’s the media release:

IPC – Office of the Information and Privacy Commissioner/Ontario Dr. Cavoukian reappointed as Information and Privacy Commissioner for an unprecedented third term: Blazes the trail with new priorities

TORONTO – Dr. Ann Cavoukian was reappointed today by the Ontario Legislature for an unprecedented third term as the province’s Information and Privacy Commissioner.

“I would like to sincerely thank members of the Legislature for their strong support,” said the Commissioner. “I feel very honoured to serve as Ontario’s Information and Privacy Commissioner.”

“Five years ago, when I was reappointed to my second term, I said we were in the midst of profound change in the areas of privacy protection and access to government information. But now the pace has grown even faster. Technology – which has resulted in many challenges – can also be tapped for innovative solutions, particularly on the privacy front. I will continue to emphasize the need to embed privacy directly into IT, at the earliest developmental stage. I look forward to the challenge – I have so many new ideas that I wish to pursue.”

We might want to cut the kid some slack

May 27th, 2009. Published under My Recent Reads. No Comments.

pulled from Google Reader (click on title for original post)

Every opportunity I get, I question young Canadians on why they share so much information so freely and so widely when using online sites and services. Being an aged adult, I often frame my questioning by citing the negative consequences that oversharing can produce: job loss, identity theft, even physical risk.

What if we, as a society, simply have to get used to a greater frequency of inappropriate behaviour by people who, frankly, are still learning how to relate to other humans in the wild?

Here’s a dose of reality and frank observation from Clay Shirky, writing in the New York Times as part of an exchange on privacy in an online world:

“ … Society has always carved out space for young people to misbehave. We used to do this by making a distinction between behavior we couldn’t see, because it was hidden, and behavior we could see, because it was public. That bargain is now broken, because social life increasingly includes a gray area that is publicly available, but not for public consumption.

Given this change, we need to find new ways to cut young people some slack. Privacy used to be enforced by inconvenience; you couldn’t just spy on anyone you wanted. Increasingly, though, privacy will have to be enforced by us grownups simply choosing not to look, since it’s none of our business.

This discipline isn’t just to protect them, it’s to protect us. If you’re considering a job applicant, and he has some louche photos on the Web, he has a problem. But if one applicant in 10 has similar pictures online, then you’ve got a problem, because you’ll be at a competitive disadvantage for talent, relative to firms that don’t spy.

People my age tut-tut at kids, telling them that we wouldn’t have put those photos up when we were young, but we’re lying. We’d have done it in a heartbeat, but no one ever offered us the chance … ”

As privacy advocates and concerned parents (or uncles, aunts, grandparents, even nosy and irritating siblings), we emphasize the physical or monetary risk that can build by revealing too much about yourself online. As Shirky points out, their behaviour isn’t any different from that of previous generations – they’re just given more opportunities to shout their strengths and weaknesses from the rooftops.

Maybe the answer is to identify issues that have an immediate effect on the life of young Canadians – like their dating habits.

Researchers at the University of Guelph have discovered that the more time young Canadians spend on a social network, the more likely they are to become jealous of their dating partner.

“It becomes a feedback loop,” [University of Guelph psychology grad student Emily] Christofides said. “Jealousy leads to increased surveillance of a partner’s Facebook page, which results in further exposure to jealousy-provoking information.”

“It fosters a vicious cycle,” Christofides said. “If one partner in a relationship discloses personal information, it increases the likelihood that the other person will do the same, which increases the likelihood of jealousy.”

Now THERE’S a consequence young Canadians will understand.

Google Bets Big on HTML 5: News from Google I/O

May 27th, 2009. Published under My Recent Reads. No Comments.

pulled from Google Reader (click on title for original post)

“Never underestimate the web,” says Google VP of Engineering Vic Gundotra in his keynote at Google I/O this morning. He goes on to tell the story of a meeting he remembers when he was VP of Platform Evangelism at Microsoft five years ago. “We believed that web apps would never rival desktop apps. There was this small company called Keyhole, which made this most fantastic geo-visualization software for Windows. This was the kind of software we always used to prove to ourselves that there were things that could never be done on the web.” A few months later, Google acquired Keyhole, and shortly thereafter released Google Maps with satellite view.

“We knew then that the web had won,” he said. “What was once thought impossible is now commonplace.”

Google doesn’t want to repeat that mistake, and as a result, he said, “we’re betting big on HTML 5.”

Vic pointed out that the rate of browser innovation is accelerating, with new browser releases nearly every other month. The slide below, from early in Vic’s talk, shows the progress towards the level of UI functionality found in desktop apps through adoption of HTML 5 features in browsers. This looks like one of Clayton Christensen’s classic “disruptive innovation vs sustaining innovation” graphs. It’s also fascinating to see how mobile browsers are in the forefront of the innovation.

[Slide: browser_innovation.png – HTML 5 feature adoption in browsers approaching desktop-app UI functionality]

While the entire HTML 5 standard is years away from adoption, there are many powerful features available in browsers today. In fact, five key next-generation features are already available in the latest (sometimes experimental) browser builds from Firefox, Opera, Safari, and Google Chrome. (Microsoft has announced that it will support HTML 5, and as Vic noted, “We eagerly await evidence of that.”) Here’s Vic’s HTML 5 scorecard:

[Slide: html5.png – Vic’s HTML 5 scorecard across browsers]

  1. The canvas element provides a straightforward and powerful way to draw arbitrary graphics on a web page using Javascript. Sample applications demoed at the show include a simple drawing area and a simple game. But to see the real power of the canvas element, take a look at Mozilla’s Bespin. Bespin is an extensible code editor with an interface so rich that it’s hard to believe it was written entirely in Javascript and HTML.
  2. The video element aims to make it as easy to embed video on a web page as it is to embed images today. No plugins, no mismatched codecs. See for example, this simple video editor running in Safari. And check out the page source for this YouTube demo. (As a special bonus, the video is demonstrating the power of O3D, an open source 3D rendering API for the browser.)
  3. The geolocation APIs make location, whether generated via GPS, cell-tower triangulation or wi-fi databases (what Skyhook calls hybrid positioning), available to any HTML 5-compatible browser-based app. At the conference, Google shows off adding your current location to any Google map, and announces the availability of Google Latitude for the iPhone. (It will be available shortly after Apple releases OS 3.) What’s really impressive about Latitude on the phone is that it’s a web app, with all the platform independence that implies, not a platform-dependent phone application.
  4. AppCache and Database make it easy to build offline apps. The killer demo is one that Vic first showed at Web 2.0 Expo San Francisco a few months ago: offline gmail on an Android phone. But Vic also shows off a simple “stickies” app running in Safari. (I love the language that Vic uses: “You can even store the application itself offline and rehydrate it on demand.”)
  5. Web workers is a mechanism for spinning off background threads to do processing that would otherwise slow the browser to a crawl. For a convincing demo, take a look at a web page calculating primes without web workers. As the demo says, “Click ‘Go!’ to hose your browser.” Then check out the version with web workers. Primes start appearing, with no hit to browser performance. Even more impressive is a demo of video motion tracking, using Javascript in the browser. (A minimal worker sketch follows this list.)
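That primes demo is easy to approximate. Here is a minimal TypeScript sketch, assuming a page with a #result element and a browser that allows workers to be created from Blob URLs; the worker body is inlined as a string purely to keep the example self-contained.

```typescript
// Spawn a background thread so a CPU-bound prime hunt never blocks the UI.
const workerSource = `
  // Deliberately naive trial division: the point is that even slow,
  // CPU-bound code runs without freezing the page.
  let n = 1;
  while (true) {
    n += 1;
    let isPrime = true;
    for (let d = 2; d * d <= n; d++) {
      if (n % d === 0) { isPrime = false; break; }
    }
    if (isPrime) postMessage(n);
  }
`;

const worker = new Worker(
  URL.createObjectURL(new Blob([workerSource], { type: "text/javascript" }))
);

// The page stays responsive; primes simply stream in as messages.
worker.onmessage = (event: MessageEvent<number>) => {
  const el = document.getElementById("result");
  if (el) el.textContent = String(event.data);
};

// Call worker.terminate() when you have seen enough primes.
```

Run the same loop on the main thread instead and the tab locks up, which is exactly the “Click ‘Go!’ to hose your browser” contrast the demo makes.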

During his keynote, Vic was joined on stage by Jay Sullivan, VP of Mobile at Mozilla, and Michael Abbot, the SVP in charge of application software and services at Palm. Both showed their own commitment to working with HTML 5. Jay expressed Mozilla’s commitment to keeping the web open: “Anything should be hackable; anything should be scriptable. We need to get out of plugin prison.” Javascript rendering in Firefox 3.5 is 10x faster than in Firefox 2, with support for video, offline storage, web workers, and geolocation.

Michael showed how Palm’s WebOS relies on HTML 5. “You as a developer don’t need to leave your prior knowledge at the door to develop for the phone.” He demonstrates the power of CSS transformations to provide UI effects; he shows how the calendar app is drawn with Canvas, how bookmarks and history are kept in an HTML 5 database. Michael emphasized the importance of standardization, but also suggested that we need new extensions to HTML 5, for example, to support events from the accelerometer in the phone. Palm has had to run out ahead of the standards in this area.

If you’re like me, you had no idea there was so much HTML 5 already in play. When I checked in with my editors at O’Reilly, the general consensus was that HTML 5 isn’t going to be ready till 2010. Sitepoint, another leading publisher on web technology, recently sent out a poll to their experts and came to the same conclusion. Yet Google, Mozilla, and Palm gave us all a big whack upside the head this morning. As Shakespeare said, “The hot blood leaps over the cold decree.” The technology is here even if the standards committees haven’t caught up. Developers are taking notice of these new features, and aren’t waiting for formal approval. That’s as it should be. As Dave Clark described the philosophy of the IETF with regard to internet standardization, “We reject: kings, presidents, and voting. We believe in: rough consensus and running code.”

Support by four major browsers adds up to “rough consensus” in my book. We’re seeing running code at Google I/O, and I’d imagine the 4000 developers in attendance will soon be producing a lot more. So I think we’re off to the races. As Vic said to me in an interview yesterday morning, “The web has not seen this level of transformation, this level of acceleration, in the past ten years.”

Vic ends the HTML 5 portion of his keynote with hints of an announcement tomorrow: “Don’t be late for the keynote tomorrow morning.”

Additional Resources

Here is a convenient list of the HTML 5 demo apps shown in the keynote this morning. Be sure to look at the page source for each of the applications.

New developer features in Firefox 3.5

To learn more about these HTML 5 features, check out these tutorials from the Opera, Mozilla, Palm, and Google teams (plus a few others):

Canvas:
  • HTML 5 Canvas: The Basics
  • Painting with HTML 5 Canvas

Video:
  • A Call for Video on the Web
  • HTML 5 Video Examples

Geolocation:
  • Track User Geolocation with Javascript

Web cache and database:
  • Palm WebOS HTML 5 DataBase Storage
  • HTML 5 Features in Latest iPhone Applications
  • Gmail for Mobile: Using AppCache to Launch Offline

Web workers:
  • Using DOM Workers

BSA Admits Canadian Software Piracy Rates Estimated; Canada Viewed as Low Piracy Country

May 27th, 2009. Published under My Recent Reads. No Comments.

pulled from Google Reader (click on title for original post)
As part of the attempt to characterize Canada as a "piracy haven," the Business Software Alliance's annual Global Piracy Report plays a lead role.  The Conference Board of Canada references the findings, as do their funders in their reports on the state of Canadian intellectual property laws (Chamber of Commerce, CACN).  Moreover, the report always generates considerable media interest, with coverage this year in the Globe and Mail and Canwest papers.  For example, the Globe cited the data directly in the Download Decade series stating that "about 32 per cent of the computer software in Canada is pirated, contributing to losses of $1.2-billion (U.S.) in 2008 alone, according to a report from the Business Software Alliance."

This year the BSA reported that Canada declined from 33 to 32 percent.  Michael Murphy, chair of the BSA Canada Committee claimed that "despite the slight decline, Canada’s software piracy rate is nowhere near where it should be compared to other advanced economy countries. We stand a better chance of reducing it significantly with stronger copyright legislation that strikes the appropriate balance between the rights of consumers and copyright holders."

Yet what the BSA did not disclose is that the 2009 figures for Canada were guesses, since Canadian firms and users were not surveyed.  While the study makes seemingly authoritative claims about the state of Canadian piracy, the reality is that IDC, which conducts the study for the BSA, did not bother to survey in Canada.  After learning that Sweden was also not surveyed, I asked the Canadian BSA media contact about the approach in Canada.  They replied that Canada was not included in the survey portion of the study, explaining that:

"Countries that are included in the survey portion are chosen to represent the more volatile economies. IDC has found from past research that low piracy countries, generally mature markets, have stable software loads by segment, with yearly variations driven more by segment dynamics (e.g. consumer shipment versus business shipments of PCs) than by load-by-load segment. IDC believes that in mature markets, piracy rates are driven less by changes in software load than other market conditions, such as shipment rates and volume licensing errors. Canada is also a country that IDC studies regularly using confidential, proprietary methodology to examine PC deployment, software revenues and distribution channel dynamics, all of which help determine both software load and piracy rates."

This is a very revealing response.  First, it is an express acknowledgement that the Canadian data this year is a guess.  The data is never publicly presented in this way – the BSA cites specific numbers, the newspapers report it, and groups like the Conference Board of Canada and the Chamber of Commerce extrapolate these guesses into specific claims about job losses and economic harm.  Second, contrary to the claims of the U.S. government and the copyright lobby groups, Canada is characterized as a low piracy market.  The notion that Canada is the piracy equivalent to China or Russia has always been unsupportable and it now appears that the BSA's own research partner agrees.  Third, the response acknowledges that it is not copyright laws that alter piracy rates in countries like Canada, but rather "market conditions such as shipment rates and volume licensing errors."

The Conference Board of Canada's plagiarized, deceptive report, completed with funding from copyright lobby groups and with the rejection of its own independently commissioned research, opened the door to how public policy may be manipulated through inaccurate data masquerading as authoritative.  The revelations about the BSA's software piracy data further demonstrate that the rhetoric simply does not square with the reality.

Service Pack 2 for Vista and Server 2008 finally arrives

May 26th, 2009. Published under My Recent Reads. No Comments.

pulled from Google Reader (click on title for original post)
companion photo for Service Pack 2 for Vista and Server 2008 finally arrives

After a lengthy development cycle that included delays and furious testing, Microsoft has finally given the public Service Pack 2 for Windows Vista and Windows Server 2008 (final build is 6.0.6002.18005). You can download the installer from the Microsoft Download Center: 32-bit (348.3MB), 64-bit (577.4MB), and IA64 (450.4MB). There’s also an ISO image (1376.8MB) that contains these installers. The installers will work on English, French, German, Japanese, and Spanish versions of either Vista or Server 2008. Other language versions will arrive later. Those interested in slipstreamed versions of Vista and Server 2008 with SP2 will need to get an MSDN or TechNet subscription.


10 Strange Species Discovered Last Year

May 26th, 2009. Published under My Recent Reads. No Comments.

pulled from Google Reader (click on title for original post)
There are so many forms of life on Earth that, more than 125 years after Darwin’s death, we’re still discovering new fish, slugs, seahorses and bacteria. Here are 10 strange species as selected by the International Institute for Species Exploration at Arizona State University.

In Hot Pursuit of Fusion

May 26th, 2009. Published under My Recent Reads. No Comments.

pulled from Google Reader (click on title for original post)

A look at the work of the National Ignition Facility, a U.S. government fusion research facility that if successful, “would help keep the nation’s nuclear arms reliable without underground testing, would reveal the hidden life of stars and would prepare the way for radically new kinds of power plants.”

Napoleon Bonaparte

May 26th, 2009. Published under My Recent Reads. No Comments.

pulled from Google Reader (click on title for original post)
“History is the version of past events that people have decided to agree upon.”

Legalize it? Medical evidence on marijuana blows both ways

May 26th, 2009. Published under My Recent Reads. No Comments.

pulled from Google Reader (click on title for original post)
Sparked anew by Gov. Arnold Schwarzenegger’s call for the state to study the legalization of marijuana, both sides in the smoldering pot debate point to research to bolster their positions.

Investigators Replicate Nokia 1100 Online Banking Hack

May 23rd, 2009. Published under My Recent Reads. No Comments.

pulled from Google Reader (click on title for original post)
Via PC World -

An old candy-bar style Nokia 1100 mobile phone has been used to break into someone’s online bank account, affirming why criminals are willing to pay thousands of euros for the device.

Using special software written by hackers, certain models of the 1100 can be reprogrammed to use someone else’s phone number and receive their SMS (Short Message Service) messages, said Max Becker, CTO of Ultrascan Knowledge Process Outsourcing, a subsidiary of fraud investigation firm Ultrascan.

The Nokia 1100 hack is powerful since it undermines a key technology relied on by banks to secure transactions done over the Internet.

Banks in countries such as Germany and Holland send a one-time password called an mTAN (mobile Transaction Authentication Number) to a person’s phone in order to allow, for example, the transfer of money to another account.

Since the Nokia 1100 can be reprogrammed to respond to someone else’s number, it means cybercriminals can also obtain the mTAN by SMS. Cybercriminals must already have a person’s login and password for a banking site, but those are easy to come by, since millions of computers worldwide contain malicious software that can record keystrokes.
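To make the attack surface concrete, here is a schematic of the bank-side mTAN flow in TypeScript. Everything here is a hypothetical stand-in (the function names, the SMS gateway stub, the six-digit format); the point is only that the whole scheme rests on the assumption that the registered phone number reaches exactly one device.

```typescript
import { randomInt } from "crypto";

// Hypothetical bank-side mTAN flow: a one-time code is generated per
// transaction and delivered out-of-band by SMS to the registered number.
const pendingTans = new Map<string, string>(); // transactionId -> mTAN

function sendSms(phoneNumber: string, text: string): void {
  console.log(`SMS to ${phoneNumber}: ${text}`); // stand-in for a real gateway
}

function startTransfer(transactionId: string, phoneNumber: string): void {
  const mTan = String(randomInt(0, 1_000_000)).padStart(6, "0");
  pendingTans.set(transactionId, mTan);
  // The core security assumption: only the account holder's handset receives
  // this message. A reprogrammed Nokia 1100 impersonating that number breaks
  // the assumption without touching the bank's servers at all.
  sendSms(phoneNumber, `Your mTAN for ${transactionId} is ${mTan}`);
}

function confirmTransfer(transactionId: string, submittedTan: string): boolean {
  const expected = pendingTans.get(transactionId);
  pendingTans.delete(transactionId); // one-time: a single attempt, then void
  return expected !== undefined && expected === submittedTan;
}

// Usage: the code travels over SMS, so whoever receives the SMS wins.
startTransfer("tx42", "+31600000000");
console.log(confirmTransfer("tx42", "123456")); // almost certainly false
```

Nothing in this flow is broken cryptographically; the phone hack simply moves the attacker inside the out-of-band channel the design trusts.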

Ultrascan obtained Nokia 1100 phones made in Bochum, Germany. Phones made around 2003 in that now-closed factory have the firmware version that can be hacked, Becker said. Nokia has sold more than 200 million of the 1100 and its successors, although it’s unknown how many devices have the particular sought-after firmware.

Ultrascan was able to successfully reprogram an 1100 and intercept an mTAN, but just one time. Becker said they are undertaking further tests to see if the attack can be executed repeatedly.

“We’ve done it once,” Becker said. “It looks like we know how to do it.”

Ultrascan experts obtained the hacker software to reprogram the phone through its network of informants, said Frank Engelsman, a fraud and security specialist with the company.

That application allows a hacker to decrypt the Nokia 1100’s firmware, Becker said. Then, the firmware can be modified and information such as the IMEI (International Mobile Equipment Identity) number can be changed, as well as the IMSI (International Mobile Subscriber Identity) number, which allows a phone to register itself with an operator.

The modified firmware is then uploaded to the Nokia 1100. Certain models of the 1100 used erasable ROM, which allows data to be read and written to the chip, Becker said. For the final step, the hacker must also clone a SIM (Subscriber Identity Module) card, which Becker said is technically trivial.

Nokia, which was closed on Thursday due to a holiday, could not be contacted. However, the company has said it does not believe there is a vulnerability in the 1100’s software.

Becker said that may be semantically true; however, it’s possible that the encryption keys used to encrypt the firmware have somehow slipped into the public domain. “We would really like to speak with Nokia,” Becker said.

Ultrascan was also able to confirm that criminals are willing to pay a lot of money for the right Nokia 1100. An Ultrascan informant sold one of the devices recently in Tangiers, Morocco, for €5,500 (US$7,567), Engelsman said. Earlier this year, Ultrascan confirmed that one Nokia 1100 sold for €25,000.

Ultrascan, which specializes in tracking criminals involved in Internet and electronic fraud, is trying to trace criminals who are using Nokia 1100s in online banking frauds.

“We keep trying to infiltrate these groups,” Engelsman said.

Attack of the Zombie Photos

May 22nd, 2009. Published under My Recent Reads. No Comments.

pulled from Google Reader (click on title for original post)

One of the defining features of Web 2.0 is user-uploaded content, specifically photos. I believe that photo-sharing has quietly been the killer application which has driven the mass adoption of social networks. Facebook alone hosts over 40 billion photos, over 200 per user, and receives over 25 million new photos each day. Hosting such a huge number of photos is an interesting engineering challenge. The dominant paradigm which has emerged is to host the main website from one server which handles user log-in and navigation, and host the images on separate special-purpose photo servers, usually on an external content-delivery network. The advantage is that the photo server is freed from maintaining any state. It simply serves its photos to any requester who knows the photo’s URL.

This setup combines the two classic forms of enforcing file permissions, access control lists and capabilities. The main website checks each request for a photo against an ACL, it then grants a capability to view a photo in the form of an obfuscated URL which can be sent to the photo-server. We wrote earlier about how it was possible to forge Facebook’s capability-URLs and gain unauthorised access to photos. Fortunately, this has been fixed and it appears that most sites use capability-URLs with enough randomness to be unforgeable. There’s another traditional problem with capability systems though: revocation. My colleagues Jonathan Anderson, Andrew Lewis, Frank Stajano and I ran a small experiment on 16 social-networking, blogging, and photo-sharing web sites and found that most failed to remove image files from their photo servers after they were deleted from the main web site. It’s often feared that once data is uploaded into “the cloud,” it’s impossible to tell how many backup copies may exist and where, and this provides clear proof that content delivery networks are a major problem for data remanence.
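The ACL-plus-capability split is easy to sketch. The following TypeScript is a minimal model of the scheme described above, with hypothetical names throughout (cdn.example.com, the in-memory ACL); real sites obviously persist this state and serve the bytes from the CDN itself.

```typescript
import { randomBytes } from "crypto";

// Hypothetical in-memory ACL: photoId -> set of userIds allowed to view it.
const acl = new Map<string, Set<string>>([
  ["photo123", new Set(["alice", "bob"])],
]);

// The photo server knows nothing about users; possession of the URL is the
// permission, which is what makes the URL a capability.
function mintCapabilityUrl(photoId: string): string {
  // 16 random bytes = 128 bits of entropy: unguessable in practice.
  const token = randomBytes(16).toString("hex");
  return `https://cdn.example.com/photos/${photoId}/${token}.jpg`;
}

// The main site enforces the ACL exactly once, at URL-granting time...
function requestPhotoUrl(userId: string, photoId: string): string | null {
  const allowed = acl.get(photoId);
  if (!allowed || !allowed.has(userId)) return null; // ACL check
  return mintCapabilityUrl(photoId);                 // capability grant
}

console.log(requestPhotoUrl("alice", "photo123"));   // a working CDN URL
console.log(requestPhotoUrl("mallory", "photo123")); // null

// ...after which anyone holding the URL can fetch the photo. That is exactly
// why revocation is the weak point: marking the photo deleted on the main
// site does nothing to URLs already handed out unless the CDN copy itself
// is removed.
```

The stateless photo server is what makes this architecture scale, and it is also precisely what makes deletion an afterthought.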

For our experiment, we uploaded a test image onto 16 chosen sites with default permissions, then noted the URL of the uploaded image. Every site served the test image given knowledge of its URL except for Windows Live Spaces, whose photo servers required session cookies (a refreshing congratulations to Microsoft for beating the competition in security). We ran our initial study for 30 days, and posted the results below. A dismal 5 of the 16 sites failed to revoke photos after 30 days:

| Site | Type | CDN Operator | Revocation |
|------|------|--------------|------------|
| Bebo | Social Networking | Bebo | Unrevoked |
| Blogger | Blogging | Google | 36 hours |
| Facebook | Social Networking | Akamai | Unrevoked |
| Flickr | Photo Sharing | Yahoo | Immediate |
| Fotki | Photo Sharing | Fotki | < 1 hour |
| Friendster | Social Networking | Panther Express | 6 days |
| hi5 | Social Networking | Akamai | Unrevoked |
| LiveJournal | Blogging | LiveJournal | Immediate* |
| MySpace | Social Networking | Akamai | Unrevoked |
| Orkut | Social Networking | Google | Immediate |
| Photobucket | Photo Sharing | Photobucket | Immediate |
| Picasa | Photo Sharing | Google | 5 hours |
| SkyRock | Blogging | Téléfun | Unrevoked |
| Tagged | Social Networking | Limelight | 14 days |
| Windows Live Spaces | Social Networking | Microsoft | N/A (cookies) |
| Xanga | Blogging | Xanga | 6 hours* |

Just for fun, we’ve also re-started the experiment to allow live viewing.

Most likely, the sites with revocation longer than a few hours aren’t actively revoking at all, but relying on the photos eventually falling out of the photo-server’s cache. This memory-management strategy makes sense technically, as photos are deleted from these types of sites too infrequently to justify the overhead and complexity of removing them from the content delivery network. This paradigm is usually reflected in sites’ Terms of Service, which often give leeway to retain copies for a ‘reasonable period of time.’ Facebook is actually quite explicit about this, stating that ‘when you delete IP content, it is deleted in a manner similar to emptying the recycle bin on a computer.’

This architecture is not only fundamentally wrong from a privacy standpoint, but likely illegal under the EU Data Protection Directive of 1995 and its UK implementation, the Data Protection Act of 1998, both of which clearly ban keeping personally identifiable data for longer than necessary given the data’s purpose. In the social web case, the purpose of keeping a photo is to share it; since this is no longer possible after the photo is marked ‘deleted’, all copies of the photo must be removed. There’s also an interesting violation of the provision that a user should have access to all data stored about her: after marking a photo ‘deleted’, the user no longer has access to it, and there is no way to see which of her content is still cached.

Architecture matters, and though it may be more complicated, sensitive personal data must be stored and cached using reference counts to ensure it can be fully deleted, and not simply left to be garbage collected down the road. Unfortunately, as is common with social networking sites, privacy is viewed as a legal add-on and not a design constraint. In the terminology of Larry Lessig, privacy is still considered a matter of law and not of code. As a result, a user can have no assurance about where their photos may be floating around in the cloud.
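In code terms, the fix the authors are asking for amounts to tracking every copy so that delete means delete. A minimal TypeScript sketch of such a copy-tracking index follows; all names are hypothetical, and a production system would persist the index and retry failed purges rather than fire and forget.

```typescript
// Hypothetical copy-tracking store: every origin, cache, and CDN copy of a
// photo is registered, so deletion can remove all of them deterministically
// instead of waiting for caches to evict the leftovers.
class PhotoStore {
  private copies = new Map<string, Set<string>>(); // photoId -> copy locations

  addCopy(photoId: string, location: string): void {
    if (!this.copies.has(photoId)) this.copies.set(photoId, new Set());
    this.copies.get(photoId)!.add(location);
  }

  // Deleting purges every known copy, then drops the entry itself. Contrast
  // with cache expiry, where copies linger until they happen to fall out.
  delete(photoId: string, purge: (location: string) => void): void {
    for (const location of this.copies.get(photoId) ?? []) purge(location);
    this.copies.delete(photoId);
  }

  isFullyDeleted(photoId: string): boolean {
    return !this.copies.has(photoId);
  }
}

// Usage: register the origin copy and two CDN replicas, then delete all three.
const store = new PhotoStore();
store.addCopy("photo123", "origin/bucket/photo123.jpg");
store.addCopy("photo123", "cdn-eu/photo123.jpg");
store.addCopy("photo123", "cdn-us/photo123.jpg");
store.delete("photo123", (loc) => console.log(`purging ${loc}`));
console.log(store.isFullyDeleted("photo123")); // true
```

The bookkeeping is not hard; the authors’ point is that nobody has been required to do it.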

EDIT 22/05/2009: We originally reported that Xanga and LiveJournal left photos unrevoked. After corresponding with developers from both sites, this was revealed to be a UI problem and not a CDN problem in both cases. When a photo is included in a blog post which is deleted, the photo itself is not considered deleted but becomes one of the user’s photos. Unfortunately, neither site’s normal photo interface made this clear, and deleting photos which were included in blog posts requires a separate interface; in LiveJournal’s case, that separate interface itself incorrectly stated that I had “no galleries.” Due to this UI confusion, I thought the photos were deleted when they weren’t, and thus they weren’t revoked. Apologies for the confusion; I re-tested both and printed updated results. This has led both sites to reconsider their UIs, which were admittedly confusing and, in LiveJournal’s case, outright buggy.

No Warrant Required for GPS Tracking

May 16th, 2009. Published under My Recent Reads. No Comments.

pulled from Google Reader (click on title for original post)
At least, according to a Wisconsin appeals court ruling: As the law currently stands, the court said police can mount GPS on cars to track people without violating their constitutional rights — even if the drivers aren’t suspects. Officers do not need to get warrants beforehand because GPS tracking does not involve a search or a seizure, Judge Paul Lundsten…


Ontario Commissioner releases 2008 annual report and prepares for battle with Victoria University

May 14th, 2009. Published under My Recent Reads. No Comments.

pulled from Google Reader (click on title for original post)

The Information and Privacy Commissioner of Ontario has released her 2008 Annual Report, which makes broad recommendations for changes to the laws in Ontario and calls for the adoption of better practices:

IPC – Office of the Information and Privacy Commissioner/Ontario: Commissioner Cavoukian lays out path for increased privacy protection & accountability – doing battle with Victoria University

TORONTO – Ontario’s Information and Privacy Commissioner, Dr. Ann Cavoukian, is urging the provincial government to make specific legislative changes and take additional steps to protect privacy and ensure greater accountability.

In her 2008 Annual Report, released today, the Commissioner cites how her sweeping recommendations from her seminal investigation into a privacy complaint against the video surveillance program of Toronto’s mass transit system have been hailed in the United States as a model that cities around the world can build upon, and in Canada as “a road map for the most privacy-protective approach to CCTV.”

Among the recommendations she is making in her 2008 Annual Report are:

Amend the law to make it clear that all Ontario universities fall under FIPPA

The Commissioner is calling on the government to fix a potential omission in the Freedom of Information and Protection of Privacy Act related to which organizations are covered under the Act.

Under amendments that came into force in mid-2006, publicly funded universities were brought under the Act. Due to the wording of an amended regulation, the University of Toronto, in response to a freedom of information request received under the Act, argued that Victoria University, an affiliated university, was not covered under the Act.

“An IPC adjudicator determined that, based on the financial and academic relationship between the two, Victoria was part of the University of Toronto for the purposes of FIPPA,” said Commissioner Cavoukian. “The University of Toronto has not accepted our ruling and is now appealing it – having it ‘judicially reviewed.’ They have chosen to fight openness and transparency, expending valuable public resources in the process. We find this completely unacceptable, which is why we are prepared to go to battle on this issue, in our effort to defend public sector accountability. We should add that this is contrary to our normal process of working co-operatively with organizations to mediate appeals and resolve complaints informally. In this case, however, the university, having thrown down the gauntlet, left us no choice but to respond in kind and aggressively defend our Order in the courts.”

There are more than 20 other affiliated universities in Ontario that may have a different relationship with the university they are affiliated with, says Commissioner Cavoukian. “I am calling on the government to ensure that all affiliated universities are covered by the Act. There is no rationale for these publicly funded institutions to fall outside of the law.”

The government needs to set specific fees for requests for patients’ health records under PHIPA

The IPC has received a number of inquiries and formal complaints from the public regarding the fees charged by some health information custodians when patients ask for copies of their own medical records.

Ontario’s Personal Health Information Protection Act (PHIPA) provides that when an individual seeks copies of his or her own personal health information, the fee charged by a health information custodian shall not exceed the amount set out in the regulation under the Act or the amount of reasonable cost recovery, if no amount is provided in the regulation. No such regulation has been passed.

Commissioner Cavoukian, in her August 2008 submission to the Standing Committee on Social Policy, which conducted a statutorily mandated review of PHIPA, again raised the need for a fee regulation. Two months later, in its report to the Speaker of the Assembly, the Standing Committee indicated its agreement with the Commissioner’s recommendation, stating that the determination of what constitutes “reasonable cost recovery” should not be left to the discretion of individual health information custodians and their agents.

“The Minister of Health,” said the Commissioner, “should make the creation of a fee regulation a priority.”

Ontario’s enhanced driver’s licence (EDL) needs a higher level of protection

The Commissioner is calling on the Minister of Transportation to provide better privacy protection for the EDL. “The radio frequency identity (RFID) tag that will be embedded into the card can be read not only by authorized readers, but just as easily by unauthorized readers,” said Commissioner Cavoukian. “Over time, these tags could be used to track or covertly survey one’s activities and movements.”

The electronically opaque protective sleeve that will come with these enhanced licences – which drivers without a passport will need as of June 1 to drive across the U.S. border – “only provides protection when the driver’s licence is actually encased in the sleeve,” said Commissioner Cavoukian. “But individuals who voluntarily sign up for these enhanced driver’s licences will not only be required to produce them at the border, but will still have to do so in other circumstances where a driver’s licence or ID card is presently required, including in many commercial contexts. The reality is that most drivers will abandon the use of the protective sleeve.”

“An on-off device on the RFID tag would provide greatly enhanced protection,” said the Commissioner. “The default position would be off since drivers don’t need the RFID to be ‘on’ when routinely taking their licence in and out of their wallets, unless they are actually crossing the border. I am urging the government to pursue adding a privacy-enhancing on-off device for RFID tags embedded in the EDLs.”
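
As a rough illustration of the proposal (not any actual EDL specification), the tag’s behaviour would amount to something like the following Python sketch, where the default state is off and the tag only answers readers after the holder explicitly switches it on:

# Hypothetical model of an RFID tag with a privacy on-off switch.
class EDLTag:
    def __init__(self, tag_id):
        self.tag_id = tag_id
        self.enabled = False  # default position is off, as the Commissioner urges

    def switch_on(self):
        self.enabled = True   # holder opts in, e.g. when approaching the border

    def switch_off(self):
        self.enabled = False  # silent again once the crossing is done

    def respond_to_reader(self):
        # Authorized and unauthorized readers alike get nothing while the tag
        # is off, which is what defeats covert tracking between crossings.
        return self.tag_id if self.enabled else None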

FOI REQUESTS

The number of freedom of information requests filed across Ontario in 2008 was the second highest ever – 37,933, trailing only the 38,584 filed in 2007. Nearly two-thirds of the 2008 requests were filed under the Municipal Freedom of Information and Protection of Privacy Act (24,482), to such organizations as police service boards, municipalities, school boards and health boards. In fact, there were more requests filed to police service boards (13,598) than there were for all organizations under the provincial Act (13,451).

FOI requests may be filed for either personal information or general records (which encompasses all information held by government organizations except personal information). And, the majority of requests each year have been for general records. In 2008 – for the second year in a row – the average cost of obtaining general records under the provincial Act dropped – this time, to $42.74 from $50.54, continuing a reversal of what had been a lengthy trend. The average cost of general records under the municipal Act was $23.54, up only a nickel from the previous year.

Among other key statistics released by the Commissioner:

· Since the IPC began emphasizing in 1999 the importance of quickly responding to FOI requests, in compliance with the response requirements set out in the Acts, the provincial 30-day compliance rate has more than doubled, climbing to 85 per cent from 42 per cent. After achieving a record 30-day compliance rate in 2007 of 84.4 per cent, provincial ministries, agencies and other provincial institutions promptly broke the record in 2008, producing an overall 30-day compliance rate of 85 per cent.

· The Commissioner also reported that her office received 507 complaints in 2008 under Ontario’s three privacy Acts, and 919 appeals from requesters who were not satisfied with the response they received after filing an FOI request with a provincial or local government organization. Overall, the IPC resolved 966 appeals and 534 complaints in 2008.

The Information and Privacy Commissioner is appointed by and reports to the Ontario Legislative Assembly, and is independent of the government of the day. The Commissioner’s mandate includes overseeing the access and privacy provisions of the Freedom of Information and Protection of Privacy Act and the Municipal Freedom of Information and Protection of Privacy Act, as well as the Personal Health Information Protection Act, which applies to both public and private sector health information custodians, in addition to educating the public about access and privacy issues.

When should businesses use the ® or ™ symbols?

May 14th, 2009. Published under My Recent Reads. No Comments.

pulled from Google Reader (click on title for original post)

You have probably seen the ® or ™ symbol on products or in advertisements. But what do these symbols mean and when is it appropriate to use them?

Generally, the ® or ™ symbols are used in connection with a trade-mark, which is a word, symbol or design used to distinguish the wares or services of one person or organization from those of others. Trade-marks can be valuable intellectual property.

The Trade-marks Act (Canada) (the “TM Act”) does not contain any marking requirements, but trade-mark owners often indicate their rights through certain symbols, namely ® (registered) or ™ (trade-mark). In Canada, both symbols may legally be used whether the trade-mark is registered or not. As a matter of best practice, however, the ® should be used only if the mark is registered with the Canadian Intellectual Property Office: if the ® is used and the mark is not in fact registered, someone could argue its use amounts to false advertising. The ™ suggests the mark is not registered, but can help establish distinctiveness in the mark.

One should be especially careful using the ® outside of Canada. In certain jurisdictions, including the U.S., ® may only be used by the owner of a mark following registration with that jurisdiction’s trade-mark office. For example, if a Canadian company is marketing a product in the U.S. and its mark is not registered with the U.S. Patent and Trademark Office, it would not be able to use the ® in connection with its mark and could only use the ™, even if the company has been using ® in Canada all along.
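
The rules described above reduce to a simple decision, sketched here in Python (purely illustrative, and certainly not legal advice):

def allowed_symbols(jurisdiction, registered_locally):
    # Which marking symbols a trade-mark owner can safely use, per the
    # guidance above. "R" stands in for the registered (®) symbol.
    if jurisdiction == "Canada":
        # TM may always be used; best practice reserves (R) for marks
        # actually registered with the Canadian Intellectual Property Office.
        return {"TM", "R"} if registered_locally else {"TM"}
    if jurisdiction == "U.S.":
        # (R) is only available after registration with the USPTO.
        return {"TM", "R"} if registered_locally else {"TM"}
    return {"TM"}  # safe default elsewhere; check local marking rules

So the Canadian company in the example above, being unregistered in the U.S., gets only {"TM"} there regardless of its Canadian registrations.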

Businesses should consider having their intellectual property “audited” by legal counsel with an expertise in the field and, in doing so, developing an appropriate trade-marks business strategy. When I advise my clients on trade-marks matters I often rely on the expert counsel of my friends and colleagues Jolin Spencer (whom I should thank for this blog post), Robert Watchman and Howard Nerman, all of whom have expertise in trade-marks law.
