By Alex Hern
Facebook has moved more than 1.5 billion users out of reach of European privacy law, despite a promise from Mark Zuckerberg to apply the “spirit” of the legislation globally.
In a tweak to its terms and conditions, Facebook is shifting the responsibility for all users outside the US, Canada and the EU from its international HQ in Ireland to its main offices in California. It means that those users will now be on a site governed by US law rather than Irish law.
The move is due to come into effect shortly before the General Data Protection Regulation (GDPR) comes into force in Europe on 25 May. Facebook is liable under GDPR for fines of up to 4% of its global turnover – around $1.6bn – if it breaks the new data protection rules.
The shift highlights the cautious phrasing Facebook has applied to its promises around GDPR. Earlier this month, when asked whether his company would promise GDPR protections to its users worldwide, Zuckerberg demurred. “We’re still nailing down details on this, but it should directionally be, in spirit, the whole thing,” he said.
A week later, during his hearings in front of the US Congress, Zuckerberg was again asked if he would promise that GDPR’s protections would apply to all Facebook users. His answer was affirmative – but only referred to GDPR “controls”, rather than “protections”. Worldwide, Facebook has rolled out a suite of tools to let users exercise their rights under GDPR, such as downloading and deleting data, and the company’s new consent-gathering controls are similarly universal.
Facebook told Reuters “we apply the same privacy protections everywhere, regardless of whether your agreement is with Facebook Inc or Facebook Ireland”. It said the change was only carried out “because EU law requires specific language” in mandated privacy notices, which US law does not.
In a statement to the Guardian, it added: “We have been clear that we are offering everyone who uses Facebook the same privacy protections, controls and settings, no matter where they live. These updates do not change that.”
Privacy researcher Lukasz Olejnik disagreed, noting that the change carried large ramifications for the affected users. “Moving around one and a half billion users into other jurisdictions is not a simple copy-and-paste exercise,” he said.
“This is a major and unprecedented change in the data privacy landscape. The change will amount to the reduction of privacy guarantees and the rights of users, with a number of ramifications, notably for consent requirements. Users will clearly lose some existing rights, as US standards are lower than those in Europe.
“Data protection authorities from the countries of the affected users, such as New Zealand and Australia, may want to reassess and analyse the situation. Even if their data privacy regulators are less rapid than those in Europe, this event is giving them a chance to act. Although it is unclear how active they will choose to be, the global privacy regulation landscape is changing, with countries around the world refining their approach. Europe is clearly on the forefront of this competition, but we should expect other countries to eventually catch up.”
Facebook also said the change did not carry tax implications. That means users will exist in a state of legal superposition: for tax purposes, Facebook will continue to book their revenue through Facebook’s Irish office, but for privacy protections, they will deal with the company’s headquarters in California.
The company follows other US multinationals in the switch. LinkedIn, for instance, is to move its own non-EU users to its US branch on 8 May. “We’ve simply streamlined the contract location to ensure all members understand the LinkedIn entity responsible for their personal data,” it told Reuters.
Published on The Guardian on April 19, 2018
By Ron Nixon
A new report concludes that a Department of Homeland Security pilot program improperly gathers data on Americans when it requires passengers embarking on foreign flights to undergo facial recognition scans to ensure they haven’t overstayed visas.
The report, released on Thursday by researchers at the Center on Privacy and Technology at Georgetown University’s law school, called the system an invasive surveillance tool that the department has installed at nearly a dozen airports without going through a required federal rule-making process.
The report’s authors examined dozens of Department of Homeland Security documents and raised questions about the accuracy of facial recognition scans. They said the technology had high error rates and was subject to bias, because the scans often failed to properly identify women and African-Americans.
“It’s telling that D.H.S. cannot identify a single benefit actually resulting from airport face scans at the departure gate,” said Harrison Rudolph, an associate at the center and one of the report’s co-authors.
“D.H.S. doesn’t need a face-scanning system to catch travelers without a photo on file. It’s alarming that D.H.S. still hasn’t supplied evidence for the necessity of this $1 billion program,” he added.
A spokesman for Customs and Border Protection, an arm of the Homeland Security Department, had no immediate comment in response.
The report comes as Homeland Security officials begin rolling out, in 2018, a biometric exit system that uses facial recognition scanning at all American airports with international flights.
Customs and Border Protection has been testing a number of biometric programs, partnering with several airlines in Atlanta, Boston, New York and Washington. It will cost up to $1 billion, raised from certain visa fee surcharges over the next decade.
Customs officials say the biometric system has also produced some successes in the pilot testing and has helped catch people who have entered the United States illegally and are traveling on fake documents. They noted that facial scans and fingerprints — unlike travel documents — cannot be forged or altered and therefore give agents an additional tool to ensure border security.
But Senators Edward J. Markey, Democrat of Massachusetts, and Mike Lee, Republican of Utah, expressed concerns about the report’s findings. In a letter to Kirstjen Nielsen, the Homeland Security secretary, the senators urged the department to delay rolling out the facial scans until it addresses the privacy and legal concerns identified in the report.
In 1996, Congress ordered the federal government to develop a tracking system for people who overstayed their entry visas. After the Sept. 11, 2001, attacks, an entry- and exit-tracking system was seen as a vital national security and counterterrorism tool. The 9/11 Commission recommended in 2004 that the newly created Department of Homeland Security complete a system “as soon as possible.” Congress has since passed seven separate laws requiring biometric entry-exit screening.
But for years, officials have struggled to put a biometric exit system in place because the technology to collect the data was slow to take hold. And many American airports, unlike those in Europe and elsewhere, do not have designated international terminals, leaving little space for additional scanning equipment.
The biometric system being tested by the Department of Homeland Security can be used either with a small portable hand-held device or a kiosk equipped with a camera.
The system snaps a picture of a passenger leaving the United States and checks whether that person is supposed to be on the plane. It compares the person’s face with a gallery of photos that the airline has collected of passengers on its travel manifest. It also checks the person’s citizenship or immigration status against various Homeland Security and intelligence databases. For American citizens, the facial scans are checked against photos from State Department databases.
Officials at Customs and Border Protection said that while the system does take facial scans of American citizens, the information is used in a very limited way. The officials said scans of Americans are only used to verify identity — not to collect new information.
The officials acknowledged the privacy concerns, but said the agency is working to answer them before the facial scans are placed in all international airports in the United States.
Laura Moy, who helped write the report, said the Customs and Border Protection assurances are not sufficient.
“They can change their minds on how they use this data at any time, because they haven’t put policies in place that govern how it’s supposed to be used,” said Ms. Moy, the deputy director of the Center on Privacy and Technology at Georgetown Law. “This invasive system needs more transparency, and Homeland Security officials need to address the legal and privacy concerns about this system, before they move forward.”
An executive order signed in January by President Trump would require all travelers to the United States to provide biometric data when entering and exiting the country. Currently, foreign visitors provide biometric data only when they enter the United States. The executive order also calls for Homeland Security officials to speed up the deployment of the biometric system to airports.
The United States continues to trail other nations in adopting the technology to collect biometric information. Nearly three dozen countries, including in Europe, Asia and Africa, collect fingerprints, iris scans, and photographs that can be used for facial recognition of people leaving their countries.
Published on The NY Times on December 21, 2017
The Chinese government is collecting “voice pattern” samples of individuals to establish a national voice biometric database, Human Rights Watch said today.
Authorities are collaborating with iFlytek, a Chinese company that produces 80 percent of all speech recognition technology in the country, to develop a pilot surveillance system that can automatically identify targeted voices in phone conversations. Human Rights Watch wrote to iFlytek on August 2, 2017, asking about its business relationship with the Ministry of Public Security, the description on its website of a mass automated voice recognition and monitoring system it has developed, and whether it has any human rights policies. iFlytek has not responded.
“The Chinese government has been collecting the voice patterns of tens of thousands of people with little transparency about the program or laws regulating who can be targeted or how that information is going to be used,” said Sophie Richardson, China director. “Authorities can easily misuse that data in a country with a long history of unchecked surveillance and retaliation against critics.”
The Chinese government has stepped up the use of biometric technology in recent years – including the construction of large-scale biometric databases – to bolster its existing mass surveillance and social control efforts. Compared with other biometric databases run by the police, the voice pattern database appears to be less established, with fewer samples in it. By 2015, police had collected 70,000 voice patterns in Anhui province, one of the main pilot provinces identified by the ministry for such collection. In comparison, national police databases have more than one billion faces and 40 million people’s DNA samples.
The collection of voice biometrics is part of the Chinese government’s drive to form a “multi-modal” biometric portrait of individuals and to gather ever more data about citizens. This voice biometric data is linked in police databases to the person’s identification number, which in turn can then be linked to a person’s other biometric and personal information on file, including their ethnicity, home address, and even their hotel records.
It is extremely difficult in China for individuals to remove such personal information, challenge its collection, or otherwise obtain redress for government surveillance. Unlike other types of biometric collection, such as fingerprinting or DNA sampling, individuals may not even realize their voice pattern has been collected, or that they are under surveillance.
Official tender documents and police reports suggest that police are collecting voice patterns together with other biometrics – fingerprints, palm prints, and profile photos, as well as urine and DNA samples – when they conduct “standardized” (标准化) and “integrated” (一体化) “information collection” (信息采集).
Police officers can subject anyone suspected of “violating the law or committing crimes” (违法犯罪), including misdemeanors, to this treatment. In one case filed in a county in Anhui, for example, police collected the voice patterns of three women suspected of sex work – including two suspected only of administrative offenses.
No public official policy documents attempt to justify the creation or use of such voice pattern databases, but academic articles by scientists who are leading their development state that their purpose is to help identify speakers in voice materials collected during a crime. An artificial intelligence program, known as an Automatic Speaker Recognition (ASR) system, is used to speed up the matching process.
Government reports in the media claim that Automatic Speaker Recognition forensics have been used to match voice patterns to solve cases involving telecommunications fraud, drug trafficking, kidnapping, and blackmail. According to these same reports, it will also be applied for counterterrorism and “stability maintenance” purposes – terms authorities sometimes use to justify the suppression of peaceful dissent.
As the government weaves a tightened web of surveillance, there are more ways ordinary citizens can get caught for criticizing the government, as well as for mobilizing and organizing for social change. There have been documented cases in which activists and netizens have been sentenced for their peaceful expression on communication tools, including on social media applications like WeChat.
The government has stepped up efforts to enforce “real-name registration” requirements for a range of services, including when purchasing mobile SIM cards, narrowing the space for anonymity and privacy. There are also cases in which activists are being tracked down by police when they travel on trains, as the authorities require “real name registration” for this and other forms of public transportation. Authorities have also installed CCTV cameras in front of the residences of activists, intimidating and monitoring them.
Government collection or use of biometric data is not inherently illegal and has been justified at times as a permissible investigative tactic. But to meet international privacy standards enshrined in the International Covenant on Civil and Political Rights, which China has signed but not ratified, each government instance of collection, retention, and use of biometrics must be comprehensively regulated, narrow in scope, and necessary as well as proportionate to meeting a legitimate security goal.
Given the sensitivity of biometric data, government officials should not collect or use such information unless necessary for the investigation of serious crime, and not for minor offenses or administrative purposes such as tracking migrants. Both collection and use should be limited to people found to be involved in wrongdoing, and not broad populations who have no specific link to crime. Collection, use, and retention should never be based on a person’s sex, sexual orientation, race, ethnicity, or religious, political, or other views. Individuals should have the right to know what biometric data the government holds on them.
Technology companies also have a human rights responsibility to ensure that their products and services do not contribute to human rights abuses, including violations of privacy and fair trial rights.
“Chinese authorities’ arsenal of surveillance tools just keeps getting bigger while privacy rights lag far behind,” Richardson said. “The Chinese authorities should immediately stop gathering highly sensitive biometric data until legal protections are clear – and clearly reliable.”
Voice Pattern Database; Automatic Speaker Recognition
In 2012, the Ministry of Public Security started the construction of a national voice pattern database and designated Anhui as one of the pilot provinces.
In 2014, the Anhui provincial police bureau issued an order to accelerate the database construction. Since then, police bureaus across that province have purchased voice pattern collection systems, based on official tender documents.
Similar purchases for voice pattern collection systems were also made in 2016 by the police bureaus in Xinjiang, a repressive region with 11 million ethnic minority Uyghurs, following the “Notice to Fully Carry Out the Construction of Three-Dimensional Portraits, Voice Pattern, and DNA Fingerprint Biometrics Collection System” (关于全面开展三维人像、声纹、DNA指纹生物信息采集系统建设相关工作的通知). A local police station reported that front-line officers are given monthly quotas for biometric collection.
Police and media reports also indicate that police units have been constructing voice pattern databases in Guangdong province, Anqi county in Fujian province, Wuhan city in Hubei province, and Nanjing city in Jiangsu province.
Human Rights Watch also found that police have collected voice patterns of ordinary citizens. For example:
A February 2017 report by the news website The Paper, since deleted inside China but still available on the overseas website China Digital Times, described how Anhui police were piloting an Automatic Speaker Recognition system to monitor phone conversations in real time, automatically picking out the targeted voice patterns of individuals and alerting the police:
"A woman in Huainan, Anhui, received a scam call … just as the scammer was instructing her, step-by-step, how to transfer her money … the voice pattern recognition system, recognizing the scammers’ voice patterns, alerted the police; the police then directly cut off the phone conversation."
The technology is integrated into a surveillance system put in place by iFlytek and an unnamed local telecommunications company.
iFlytek, based in Anhui province, is a major artificial intelligence company focused on speech and speaker recognition. iFlytek’s website touts the company’s achievement in developing the country’s first “mass automated voice recognition and monitoring system.” Its website states that it has helped the Ministry of Public Security in building a national voice pattern database. It is also the designated supplier of voice pattern collection systems purchased by Xinjiang and Anhui police bureaus. It says it has set up, jointly with the ministry’s forensics center, a key ministry laboratory in intelligent voice technology (智能语音技术公安部重点实验室) that has “helped solve cases” in Anhui, Gansu, Tibet, and Xinjiang. The company states it can develop artificial intelligence systems that can handle minority languages, including Tibetan and Uyghur.
iFlytek’s website also claims it has developed other audio-related applications, including “keyword spotting” for “public security” and “national defense” purposes. The web page gives no further details of what these keywords or the security threats might be. In a patent it filed in August 2013, iFlytek states that it has developed a system to discover “repeated audio files” in the telecommunications system and on the internet that may be useful in “monitoring public opinion”:
"[Such a system] … which can automatically pick up, from a massive amount of audio information, audio clips that appear repeatedly is very important in information security and in monitoring public opinion.… For audio information on the phone [system], the use of the technology can quickly find illegal phone recordings that are being transmitted. For audio and video data on the internet, the technology can quickly and accurately dig out the most popular audio and video clips."
iFlytek has a joint laboratory with the Department of Electronic Engineering at Tsinghua University. The department has a long history of developing speech and speaker recognition for automated telephone surveillance, and is a major player in the Golden Shield Project, the Ministry of Public Security’s ambitious plan to bolster and broaden surveillance using technology.
iFlytek also has a range of commercial text-to-speech and speech recognition applications for mobile phones, including a voice assistance app for Android phones in China. The company states it has 890 million users, which would provide a large speech data set that can be used to train and improve its speech recognition software for a range of purposes, potentially including surveillance.
It is unclear to what extent iFlytek shares the personal information it collects for commercial purposes with the Ministry of Public Security. While iFlytek promises confidentiality in its customer privacy statement, it also says that it may disclose personal information “according to the demands of the relevant government departments.” China’s Cybersecurity Law requires companies to provide undefined “technical support” to security agencies to aid in investigations, and provides no privacy protections against state surveillance. iFlytek is not required to inform users of government information requests, for example.
During the 2014 annual meeting of the National People’s Congress (NPC) – China’s rubber stamp legislature – Liu Qingfeng, chairman of iFlytek and a deputy to the NPC, urged the authorities to “employ big data in countering terrorism as soon as possible, and to speed up the construction of the voice pattern database … to protect national security.”
Other governments have used automated speech recognition programs, including the United States for monitoring prison calls and Australia for verifying callers accessing social services; the Spanish police have more than 3,500 voice samples from people convicted of crimes.
While some governments pursue voice pattern collection for identification or authentication in limited situations, there are significant challenges to applying such technology for crime control and surveillance. The accuracy of an Automatic Speaker Recognition system is affected by the circumstances of speech, including emotions.
According to a speech recognition expert who spoke to Human Rights Watch but did not wish to be named, a system’s ability to conduct real-time surveillance is also limited. With current technology, such a system at most can only “listen” to 50 phone lines at one time to trace one targeted voice. The consequences of false positives, where the system incorrectly matches a voice to a stored voice pattern, could be severe when the technology is used to investigate and prosecute crimes, especially in countries such as China, where the conviction rate is above 99 percent and few effective redress mechanisms exist.
Governments and private sector companies alike face additional challenges in securing large-scale biometric databases. These can become prime targets for cybercriminals, who could attempt to breach them to acquire biometrics to commit identity theft and fraud. Unlike with a national ID number or password, people cannot generally change their voice, face, or other biometrics, and so they may be left with little recourse or protection if such data is breached.
Biometric Collection and Wiretapping in Chinese and International Law
Chinese law appears to limit police collection of biometric samples to people connected to the investigation of a specific criminal case. Article 130 of the Criminal Procedure Law (CPL) states that in the course of criminal investigations, to “ascertain certain features, conditions of injuries, or physical conditions of a victim or a criminal suspect, a physical examination may be conducted, and fingerprints, blood, urine and other biological samples may be collected. If a criminal suspect refuses to be examined, the investigators, when they deem it necessary, may conduct a compulsory examination.”
But there are no legal guidelines or limitations on how long biometric samples can be stored, shared, or used, or how their collection or use can be challenged. While there are Ministry of Public Security internal departmental rules that focus on the administrative and technical aspects of voice pattern collection, most are not publicly available.
The collection of biometrics from migrants may also be taking place outside the law. While there are provincial-level rules authorizing local governments to collect migrants’ “basic data,” they do not explicitly include biometrics as part of the collected data.
Chinese law also does not authorize the police to collect individuals’ biometric data in cases of administrative offenses, though this may be changing. In early 2017, the Chinese government issued new draft amendments to its Public Security Administrative Punishments Law, in which a new provision, article 112, authorizes police to collect biometrics to identify victims and offenders in minor administrative cases.
Article 148 of the Criminal Procedure Law allows criminal investigators to wiretap criminal suspects as well as anyone connected to the crime for serious crimes, including endangering state security, terrorism, organized crime, drug-related crimes, and corruption. Such wiretapping does not require a court warrant – approval from supervisors in the relevant criminal investigation units is adequate under the law.
The National People’s Congress should review and revise legislation relevant to biometric data collection and wiretapping to ensure they are compliant with standards under the International Covenant on Civil and Political Rights. These standards must be part of a legal framework that ensures collection, use, and retention of such data is a) necessary in the sense that less intrusive measures are unavailable; b) appropriately restricted to ensure the action is proportionate to a legitimate purpose such as public safety; and c) does not impair the essence of the right to privacy and other related rights.
To ensure these standards are enforced, any biometric data program should also include independent authorization for collection and use, public notification, and means of independent oversight, as well as avenues for people to challenge abuses and have access to remedies. The authorities should also publish information about the collection and use of voice pattern recognition technology, including disclosure about databases that have been created and specific searches they conduct.
iFlytek should cease technology transfers and support for surveillance systems provided to the Ministry of Public Security and provincial authorities until regulations are in place that ensure privacy and other human rights are protected. Technology companies should refrain from sharing voice pattern or other personal information collected for commercial purposes with security agencies without a specific court warrant targeting an individual under suspicion of a serious crime.
The companies should not use voice patterns that were collected for commercial purposes to train or otherwise develop technology for surveillance purposes, as information collected from individuals for one purpose should not be used for another without their consent. Companies should also submit voice recognition technology developed for surveillance applications to public, independent accuracy competitions and publish performance results, including tests that address accuracy for ethnic minority languages and potential algorithmic bias that would affect minorities.
Published on HRW on October 22, 2017.
By José Miguel Vivanco, Americas Director, Human Rights Watch
Last week, the rights group Derechos Digitales released the text of a worrying draft decree, signed by President Michelle Bachelet in June, that could greatly increase intrusive government access to personal data.
The decree, which still needs to be approved by the Comptroller General’s Office to take force, would run counter to Chileans’ right to privacy and emulates some of the worst such policies around the globe.
The decree would require telecommunication companies to retain, for at least two years, data on electronic and mobile communications of everyone in the country, including phone calls, e-mail, and messaging cellphone applications. It greatly expands the types of data companies must store, while extending the retention period from one year to two. While it does not mandate retaining the content of the communications, it covers information such as location data and the phone numbers called, which can provide a detailed portrait of the user’s intimate life, especially when collected at large scale or combined with other data. If the government knows that someone placed a call to a labor union representative or a suicide hotline, it can potentially draw conclusions about the caller, even without knowing what they said. Cell phone location data can provide authorities a detailed map of a person’s movements for years.
This is a grossly disproportionate intrusion on Chileans’ privacy. It is, of course, reasonable to demand disclosure of specific data to prevent or investigate crimes, subject to safeguards. But data retention under the decree would go much farther, affecting all users, regardless of whether they are suspected of a crime. The European Union’s top court, its Court of Justice, has twice struck down similar blanket data retention laws, noting that they impose an unjustifiably broad infringement of the right to privacy. The UK ignored the ruling and passed an expanded data retention law last year, but overall the rulings have fostered an encouraging trend in the EU. In contrast, such authoritarian countries as China and Russia have recently expanded their data retention laws to increase surveillance over their citizens.
Worse, while Chile’s decree would require a court order to intercept phone and other communications, it does not include such a requirement to access data already retained. Without judicial control, the decree could virtually turn the Chilean government into a “big brother” capable of knowing where everyone is, and whom they are contacting, all the time.
The draft also forbids companies from incorporating technology or equipment that can hinder the interception or recording of communications. If this provision is interpreted broadly to forbid encryption, it would set a troubling precedent. In the digital age, encryption is a cornerstone of security for vulnerable activists and journalists working in repressive regimes around the globe. It also protects millions of ordinary users from cybercriminals and malicious hackers. Even in the U.S., one of the most intrusive countries, lawmakers have not moved forward with proposals to restrict encryption, acknowledging its key role in impeding cybercrime. The Chilean government should set a positive example by promoting encryption as essential to security, rather than following the lead of countries like Russia, Ethiopia, and Turkey that have restricted it.
When it reviews the decree in coming days, the Comptroller General’s Office will decide whether it will protect Chileans’ right to privacy or allow the government to emulate authoritarian countries. There should be no doubt about its decision.
Published on the HuffPost on September 9, 2017.
By Alfred Ng
Just how good are the technology companies you use at keeping your secrets?
That's what the Electronic Frontier Foundation sought to determine through its seventh annual "Who Has Your Back" report, released Monday. The online privacy watchdog evaluated 26 tech companies on how well they protect people from government surveillance.
User privacy has become a major issue for millions of people as technology becomes more advanced and is able to keep track of a person's likes, dislikes, friends, family and location at all times. The apps and devices that you use are a treasure trove of information, one that companies control and law enforcement agencies want to access.
Apple in 2016 got tied up in a legal battle with the FBI over unlocking a terrorist's iPhone, while Amazon handed over voice data from the Echo for a murder trial in Arkansas. The Supreme Court is deciding whether police need warrants to obtain phone location data from phone companies.
We expect a certain level of privacy on our devices, but government surveillance puts tech companies in a tough spot when legal concerns are on the line. Companies can fight back, demanding warrants and refusing to provide data, but not all organizations do.
The EFF ranked the companies on five criteria: whether they followed best practices for privacy, whether they informed users when the government requested data, whether they promised not to sell users' data, whether they stood up to gag orders and whether they supported reforming the National Security Agency's Section 702 surveillance program.
Only nine companies got gold stars for all five criteria: Adobe, Credo, Dropbox, Lyft, Pinterest, Sonic, Uber, Wickr and Wordpress. Google, Microsoft and Facebook missed a mark for abiding by gag orders, while Apple lost one over its stance on the NSA's Section 702.
Amazon, the last of the Big Five tech companies, scored the lowest with only two stars, one for following best practices and another for its stance on Section 702. The retail giant didn't immediately respond to requests for comment.
Every major phone carrier on the list ranked low: T-Mobile, Verizon and AT&T each earned just one star, for following industry standards for privacy. That was the one criterion every company in the report got a star for.
"Both legacy telcos and the giants of Silicon Valley can and must do better. We expect companies to protect, not exploit, the data we have entrusted them with," said Nate Cardozo, EFF's senior staff attorney.
Published on CNet on July 11, 2017.
Canada's top court cleared the way on Friday for a lawsuit against Facebook Inc over privacy rights to be heard in the province of British Columbia instead of California where the social media site is based.
British Columbia resident Deborah Douez brought a notice of claim against Facebook in 2012, saying that her name and image were used without consent for the social media platform's "sponsored stories" product.
Douez said her privacy rights under provincial law were violated, a growing concern among social media users in recent years who fear their personal information or photos will be used without their knowledge.
The ads, which the company has since dropped, used Facebook members' names and pictures to advertise companies and products to other members.
In a 4-3 decision, Canada's Supreme Court ruled that the clause in Facebook's terms of use requiring disputes to be heard in California is unenforceable.
Canadian courts also have a greater interest in deciding cases that impinge on citizens' rights "because these rights play an essential role in a free and democratic society and embody key Canadian values," the court said.
A spokesman for Facebook said the company continues to believe the underlying claims are without merit, and it will defend itself "vigorously".
British Columbia's top court is better placed to rule on the province's laws than a California court would be, the court said. The justices added that their decision was also supported by the fact that the expense and inconvenience of having Canadians travel to California to sue Facebook would be greater than any inconvenience the company would face in making its records available in British Columbia.
A lower court had allowed Douez's case to be considered as a group lawsuit, but the high court did not address the issue of class action status.
Facebook reached a $20 million settlement in the United States in a class action lawsuit over the use of Sponsored Stories. The settlement was given court approval in 2013.
Facebook's shares were up 0.4 percent at $154.09 in mid-day trade.
Published on Reuters on June 23, 2017.
By AMANDA HESS
Last month, the true cost of Unroll.me was revealed: The service is owned by the market-research firm Slice Intelligence, and according to a report in The Times, while Unroll.me is cleaning up users’ inboxes, it’s also rifling through their trash. When Slice found digital ride receipts from Lyft in some users’ accounts, it sold the anonymized data off to Lyft’s ride-hailing rival, Uber.
Suddenly, some of Unroll.me’s trusting users were no longer so happy. One user filed a class-action lawsuit. In a blog post, Unroll.me’s chief executive, Jojo Hedaya, wrote that it was “heartbreaking to see that some of our users were upset to learn about how we monetize our free service.” He stressed “the importance of your privacy” and pledged to “do better.” But one of Unroll.me’s founders, Perri Chase, who is no longer with the company, took a different approach in her own post on the controversy. “Do you really care?” she wrote. “How exactly is this shocking?”
This Silicon Valley “good cop, bad cop” routine is familiar, and we spend our time surfing between these two modes of thought. Chase is right: We’ve come to understand that privacy is the currency of our online lives, paying for petty conveniences with bits of personal information. But we are blissfully ignorant of what that means. We don’t know what data is being bought and sold, because, well, that’s private. The evidence that flashes in front of our own eyes looks harmless enough: We search Google for a new pair of shoes, and for a time, sneakers follow us across the web, tempting us from every sidebar. But our information can also be used for matters of great public significance, in ways we’re barely capable of imagining.
When I signed up for Unroll.me, I couldn’t predict that my emails might be strategic documents for a power-hungry company in its quest for total road domination. Such privacy costs often become clear only after they’ve already been paid. Sometimes a private citizen is caught up in a viral moment and learns that a great deal of information about him or her exists online, just waiting to be splashed across the news — like the guy in the red sweater who, after asking a question in a presidential debate, had his Reddit porn comments revealed.
But our digital dossiers extend well beyond the individual pieces of information we know are online somewhere; they now include stuff about us that can be surmised only through studying our patterns of behavior. The psychologist and data scientist Michal Kosinski has found that seemingly mundane activity — like the brands and celebrities people “like” on Facebook — can be leveraged to reliably predict, among other things, intelligence, personality traits and politics. After our most recent presidential election, the company Cambridge Analytica boasted that its techniques were “instrumental in identifying supporters, persuading undecided voters and driving turnout to the polls” on Donald Trump’s behalf. All these little actions we think of as our “private” business are actually data points that can be aggregated and wielded to manipulate our world.
Years ago, in 2009, the law professor Paul Ohm warned that the growing dominance of Big Data could create a “database of ruin” that would someday connect all people to compromising information about their lives. “In the absence of intervention,” he later wrote, “soon companies will know things about us that we do not even know about ourselves.” Or as the social scientist and Times contributor Zeynep Tufekci said in a recent talk: “People can’t think like this: I didn’t disclose it, but it can be inferred about me.” When a peeping Tom looks between the blinds, it’s clear what has been revealed. But when a data firm cracks open our inboxes, we may never find out what it has learned.
Privacy has not always been seen as an asset. The ancient Greeks, for instance, distinguished between the public realm (“koinon”) and the private realm (“idion”). In contrast to those public citizens engaged in political life, humble private citizens were known as “idiotai,” a word that later evolved into “idiots.” Something similar is true of the English word “privacy.” As Hannah Arendt wrote in “The Human Condition,” privacy was once closely associated with “a state of being deprived of something, and even of the highest and most human of man’s capacities.” In the 17th century, the word “private” arose as a more politically correct replacement for “common,” which had taken on condescending overtones.
And yet somewhere along the way, privacy was recast as a necessity for cultivating the life of the mind. In George Orwell’s “1984,” the proles are spared a life of constant surveillance, while higher-ranking members of society are exposed to Big Brother’s watchful eye. The novel’s protagonist, Winston, begins to suspect that real freedom lies in those unwatched slums: “If there is hope,” he writes in his secret diary, “it lies in the proles.” In the influential 1967 book “Privacy and Freedom,” Alan Westin described privacy as having four functions: personal autonomy, emotional release, self-evaluation and intimate communication. This modern understanding of privacy as an intimate good grew up right alongside the technology that threatened to violate it. At the end of the 18th century, the Fourth Amendment to the United States Constitution protected Americans from physical searches of their bodies and homes.
One hundred years later, technological advancements had legal minds thinking about a kind of mental privacy too: In an 1890 paper called “The Right to Privacy,” Samuel Warren and Louis Brandeis cited “recent inventions and business methods” — including instant photography and tabloid gossip — that they claimed had “invaded the sacred precincts of private and domestic life.” They argued for what they called the right “to be let alone,” but also what they called “the right to one’s personality.”
Now that our privacy is worth something, every side of it is being monetized. We can either trade it for cheap services or shell out cash to protect it. It is increasingly seen not as a right but as a luxury good. When Congress recently voted to allow internet service providers to sell user data without users’ explicit consent, talk emerged of premium products that people could pay for to protect their browsing habits from sale. And if they couldn’t afford it? As one congressman told a concerned constituent, “Nobody’s got to use the internet.” Practically, though, everybody’s got to. Tech companies have laid claim to the public square: All of a sudden, we use Facebook to support candidates, organize protests and pose questions in debates. We’re essentially paying a data tax for participating in democracy.
The smartphone is an intimate device; we stare rapt into its bright light and stroke its smooth glass to coax out information and connect with others. It seems designed to help us achieve Westin’s functions of privacy, to enable emotional release and moments of passive reflection. We cradle it in bed, at dinner, on the toilet. Its pop-up privacy policies are annoying speed bumps in the otherwise instantaneous conjuring of desires. It feels like a private experience, when really it is anything but. How often have you shielded the contents of your screen from a stranger on the subway, or the partner next to you in bed, only to offer up your secrets to the data firm tracking everything you do?
The surveillance economy works on such information asymmetry: Data-mining companies know everything about us, but we know very little about what they know. And just as “privacy” has grown into an anxious buzzword, the powerful have co-opted it in order to maintain control over others and evade accountability. As we bargain away the amount of privacy that an ordinary person expects, we’ve also watched businesses and government figures grow ever more indignant about their own need to be left alone. Companies mandate nondisclosure agreements and demand out-of-court arbitration to better conceal their business practices. In 2013, Facebook revoked users’ ability to remain unsearchable on the site; meanwhile, its chief executive, Mark Zuckerberg, was buying up four houses surrounding his Palo Alto home to preserve his own privacy. Sean Spicer, the White House press secretary, has defended President Trump’s secretive meetings at his personal golf clubs, saying he is “entitled to a bit of privacy,” and the administration has cut off public access to White House visitor logs, citing security risks and “privacy concerns.” When The New York Times reported that the president takes counsel from the Fox News host Sean Hannity, Hannity indignantly tweeted that his conversations were “PRIVATE.”
We’ve arrived at a place where public institutions and figures can be precious about their privacy in ways we’re continually deciding individual people can’t. Stepping into the White House is now considered more private than that weird rash you Googled. It’s a cynical inversion of the old association between private life and the lower class: These days, only the powerful can demand privacy.
Published on the NY Times' website on May 9, 2017.
Republicans just made clear how little they care about protecting the privacy of Americans by letting companies like Verizon and Comcast sell advertisers the internet browsing histories and other personal data of their customers without getting permission. The move could bolster the profits of the telecommunications industry by billions of dollars.
Following a party-line vote in the Senate last week, the House approved a resolution on Tuesday that would overturn a broadband privacy regulation the Federal Communications Commission adopted in October. That rule requires cable and phone companies to obtain consent before using information like which websites people visited to show them customized ads and to build detailed profiles on them. The White House said on Tuesday that President Trump would sign the legislation, which would also prohibit the F.C.C. from adopting a similar policy in the future.
Most Americans spend much of their lives online. They should be able to do so without fear that their internet service providers are logging their activities and selling the data. There is a long tradition of the government protecting such information. For example, the F.C.C. has long restricted what phone companies can do with call records. And in 1988 Congress prohibited video stores from disclosing the movies people rented.
Republican lawmakers, like Senator Jeff Flake of Arizona, and the chairman of the F.C.C., Ajit Pai, say that the commission’s privacy rule is unfair because it applies only to broadband companies and not to internet businesses like Google and Facebook. This is highly disingenuous. Congress has only given the commission authority over telecommunications companies, so the F.C.C. couldn’t have come up with rules that applied to other businesses even if it wanted to.
Telecom companies know a lot about what people do online because they are the gatekeepers through which people connect to the internet. And as people link household devices like thermostats, light bulbs and security cameras to the internet, these companies will have even more intimate knowledge about their customers. By comparison, people can more easily evade tracking by businesses like Google and Facebook by not using those services or by deleting the cookies those websites leave on their computers and phones.
In the absence of strong privacy rules, people will have to rely on encryption to prevent service providers from tracking them. But broadband companies would still know what websites people visit. And the companies would be able to see all of the communications between users and websites that do not use encryption. Sophisticated users might increasingly rely on virtual private networks, which are used by corporations to let their employees log into secure systems remotely, and other tools to mask their activities, but most Americans are unlikely to be conversant with such tricks of the trade.
Mr. Trump promised voters during the campaign that he would protect the working class. But now he and his party are moving quickly to do the bidding of a very different interest group: Big Telecom.
This editorial was published on the NY Times' website on March 29, 2017.
Malaysia: Communications and Multimedia Act must be urgently revised to comply with freedom of expression standards
ARTICLE 19 calls on the Malaysian Government to urgently review the Communications and Multimedia Act 1998 (CMA) and bring it into full compliance with international freedom of expression standards, a call it makes in conjunction with the launch today of its new legal analysis of the Act. The analysis is the first to be published since ARTICLE 19 set up a desk dedicated to freedom of expression issues in Malaysia late last year.
In February, ARTICLE 19 analysed the CMA, which has been invoked more frequently in recent years following the social media boom in Malaysia. The use of the Act by police and the Attorney General of Malaysia to arrest and charge individuals expressing progressive views and dissent sets a worrying precedent for the boundaries of free speech in the country. At the moment, a constitutional challenge against the Act is being mounted before the Federal Court.
Freedom of expression is guaranteed under Article 10(1)(a) of Malaysia's Federal Constitution, but in practice, draconian laws such as the CMA are frequently used to harass individuals and criminalise the right to freedom of speech and expression in the country.
The CMA has an expansive scope, ranging from spectrum allocation and consumer protection to content regulation and investigatory powers. The main subjects of regulation under the Act are applications services and network services. The Act further pertains to content applications services, which appear to include online intermediaries.
ARTICLE 19 is particularly concerned that Section 233 of the CMA, which deals with “improper use of network facilities or network service”, has been used time and again to target human rights defenders in the country. In October 2015, student activist Khalid Ismath was charged with 11 counts under Section 233 of the CMA for posting comments on social media deemed offensive to the Johor royalty, and in June 2016, graphic artist Fahmi Reza was likewise charged with two counts under Section 233 of the CMA for posting images on social media depicting Malaysian Prime Minister Najib Razak as a clown.
In addition to this, ARTICLE 19 notes that the CMA is currently being revised by the Government to “incorporate new elements, including network security matters”, which we are concerned may be used to further target social media users in Malaysia.
From our analysis of the CMA, ARTICLE 19 concludes that the Act creates a number of overly broad content-related offences. In addition, the licensing schemes for network and applications services lack adequate safeguards against censorship and introduce far-reaching investigatory powers which are at odds with the protection of journalistic sources and the right to anonymity.
ARTICLE 19 calls on the Malaysian Government to urgently review the Act, introduce the necessary amendments and ensure it fully complies with international freedom of expression standards.
We recommend that the Malaysian Government:
This article was published on Article 19's website on March 24, 2017.
The Canada Border Services Agency‘s (CBSA) practice of examining and even confiscating travellers’ cellphones and other personal devices has come under the scrutiny of the federal privacy watchdog.
The Office of the Privacy Commissioner of Canada (OPC) has launched an investigation following increasing concerns about travellers’ rights at the border, according to a report by the National Post.
CBSA officials have the right to inspect devices such as cellphones, laptops and tablets, and ask for passwords to allow access. If travellers don’t comply, CBSA officers can even confiscate the device.
Electronic devices are classified as “goods,” according to CBSA policy, and under the Customs Act officers have the authority to examine them as part of a routine examination.
CBSA policy states that personal devices should only be searched when officials have reason to believe a device will contain “evidence of contraventions,” or proof you have violated a law through files or information “known or suspected to exist” on your phone.
“Examinations should only occur where there is a multiplicity of indicators, or further to the discovery of undeclared, prohibited, or falsely reported goods,” said CBSA spokesperson Nicholas Dorion in an email to Global News last month.
The CBSA does not require a warrant, the Office of the Privacy Commissioner of Canada notes on its website, and “Officers may examine devices for photos, files, contacts and other media.”
What they do with those files — and whether the CBSA can make a copy of any or all the information found on your phone — is unclear.
Global News has reached out to the CBSA and the Office of the Privacy Commissioner but did not hear back by time of publication.
Anyone with concerns about their experience during a search at the border can file a complaint with the Office of the Privacy Commissioner.
This article was published on Global News' website on March 16, 2017.
A civil right is an enforceable right or privilege, which if interfered with by another gives rise to an action for injury. Examples of civil rights are freedom of speech, press, and assembly; the right to vote; freedom from involuntary servitude; and the right to equality in public places. Discrimination occurs when the civil rights of an individual are denied or interfered with because of their membership in a particular group or class. Various jurisdictions have enacted statutes to prevent discrimination based on a person's race, sex, religion, age, previous condition of servitude, physical limitation, national origin, and in some instances sexual orientation.
Source: Cornell University Law School