Data-driven business models

Data law landscapes beyond Europe

Published on 14th Jun 2022

European businesses with an appetite for global growth cannot afford to ignore the data law landscape in other markets. Regulation on privacy and artificial intelligence (AI) is developing rapidly in other key markets such as the US, China and India. The latter two in particular require certain types of data to be stored locally, and regulators are cranking up enforcement.

This is chapter 2.15 of Data-driven business models: The role of legal teams in delivering success

Key Takeaways

  • Data laws in the US are fragmented but rapidly expanding, and already more protective of consumers than their reputation suggests
  • India is moving towards a much more structured data regulation framework, modelled after the GDPR
  • China’s data protection and cybersecurity regime has matured into a three-pillar regulatory framework

Expanding and enforcing privacy law

Privacy regulation worldwide is evolving to become more protective of individuals, with more comprehensive enforcement to boot.


The US

The American privacy regulation landscape has historically been fragmented, with individual and highly divergent laws for certain sectors, situations or types of data.

1. Fragmented privacy regime

Depending on business models, companies may need to comply with well-known federal laws like the Children's Online Privacy Protection Act, which protects children's information in online environments, the Health Insurance Portability and Accountability Act for health information, and more obscure legislation such as the Video Privacy Protection Act, which makes it illegal for video rental businesses to disclose their customers' rental history (and has been applied to streaming services as well). 

State laws can add even more complexity. In Illinois, the Personal Information Protection Act imposes breach notification and data-security obligations on any organisation that collects personal data, and the Biometric Information Privacy Act specifically regulates the collection and use of biometric information by private corporations.

2. US states limit data sharing

Recently, several US states have passed more comprehensive consumer privacy legislation. California led the charge with its Consumer Privacy Act and, more recently, the California Privacy Rights Act, followed by Virginia, Colorado and Utah (whose statutes take effect in the course of 2023). Many other states, including New York, Texas, Illinois and Florida, have also introduced draft consumer privacy legislation.

These laws aim to limit the "sale" of consumer information by the companies that gather it – with the definition of "sale" being broad enough to include some situations where no money changes hands. Letting consumers opt out of data sharing can also impact the utility of datasets, which will come with a self-selection bias built in. 

These laws have unmistakable similarities, despite differing nuances. They apply to entities doing business in the respective states, subject to various thresholds, including overall company revenue, the number of concerned data subjects, and the portion of revenue derived from data sharing. 

These thresholds mean that smaller businesses may fall outside the laws' scope. However, unless a business intends to remain small, it would be short-sighted to ignore privacy regulation entirely in business and product decisions that may be hard to change once the business has enough traction to cross the thresholds.

3. Enforcement risk 

While California will have a dedicated privacy regulator, other states rely on lawsuits brought by their Attorney General or by private citizens. Fines and damage claims can add up quickly for structurally non-compliant products or services, as they can be calculated per affected customer.


India

India’s privacy laws are in the process of being overhauled, following a 2017 ruling of the Indian Supreme Court holding that all individuals are entitled to informational privacy. The latest version of the proposed law, the Personal Data Protection Bill, was released in December 2019. After almost two years of discussions, a joint parliamentary committee of the Indian Parliament adopted its report on the 2019 Bill in November 2021, and this report was placed before the Parliament on 16 December 2021.

1. Data subject rights

Closely following the General Data Protection Regulation formulation, data subjects in India will have extensive rights under the new law. These include the "right to be forgotten" and data portability, which were not available under the older data law. These rights will increase the regulatory compliance burden on data processors, particularly since some of them may be exercised through the offices of the Indian government and data protection authority. 

2. Localisation mandates 

The proposed privacy law requires entities to store a copy of "sensitive personal data" (for example, financial data, biometrics and health information) within India whenever cross-border transfers are undertaken. Further "critical personal data" (which remains to be defined) is to be stored only in India. As such, entities hosting sensitive and critical personal data outside of India may be required to formulate procedures to mirror such data within India. 

3. "Significant" data fiduciaries face more regulation 

"Significant" data fiduciaries will be classified on the basis of factors such as the volume of data processed and the nature and sensitivity of the data. These entities will have additional compliance obligations under the new law, including appointing a data protection officer and maintaining records of processing activities.


China

The year 2021 was remarkable for China’s data protection and cybersecurity regime, as it saw the completion of a three-pillar regulatory framework: alongside the Cybersecurity Law (CSL) (effective as of 1 July 2017), the People's Republic of China (PRC) enacted the Data Security Law (DSL) and the Personal Information Protection Law (PIPL). In addition to these cornerstone laws, multiple implementation rules and national standards were also issued, either as binding rules or as drafts for public comment, intended to provide more practical guidance for implementation.

1. The Personal Information Protection Law

The long-awaited PIPL, known as the "Chinese GDPR", took effect on 1 November 2021. Unlike the security-oriented CSL and DSL, this law focuses particularly on the protection of personal information. While there are many similarities between the GDPR and the PIPL, there are also noteworthy differences, or China-specific features (for example, legitimate interest is not a legal ground for processing, and there are specific requirements on cross-border transfers).

2. Localisation and cross-border transfer 

Cross-border data transfer (CBDT) has been a hot topic since the CSL, under which critical information infrastructure operators (CIIOs) are subject to an express data localisation requirement. The PIPL provides further restrictions on CBDT by non-CIIOs, in a sense providing more clarity on the issue. Notably, non-CIIOs may now also be subject to a mandatory security assessment by PRC authorities before transferring data out of China.

The latest development in this regard was the issuance for public comment of draft measures on security assessments for CBDT. The draft CBDT rules set out a broader and more detailed scope of CBDTs by non-CIIOs that are subject to mandatory security assessment, by reference to the number of data subjects whose personal information (or sensitive personal information) is processed and will be transferred out of China. Businesses, especially multinational companies in China, are hoping to see the final form of these CBDT rules soon.

3. Enforcement 

The year 2021 was a busy one for PRC regulators in the enforcement of data protection laws and regulations. While both the DSL and PIPL were relatively new, PRC regulators focused on specific areas of enforcement. For instance, an unprecedented number of websites and apps were identified as non-compliant with data protection requirements and either disabled and removed from app stores or suspended from operating for violations of applicable data protection laws and regulations (for example, excessive data collection, collection and processing without valid consent, or unlawful sharing). This is a very strong indication by the regulators that data protection is no longer an issue a business in China can ignore, and that a high price may be paid for non-compliance.

Regulating artificial intelligence 

Regulation of AI is developing at very different speeds around the world. While this does not appear to be a priority in India, both China and the US are following ambitious, if slightly different, goals. 

The US 

A large number of initiatives are currently under way on a federal and state level to regulate the use of AI, in particular with a view to ensuring ethical decision-making and mitigating actual or perceived risks for consumers. 

By imposing transparency requirements, legislators are trying to tackle the risk that machine learning from real-world data perpetuates existing biases. Most of the proposed federal and state legislation would force companies to self-audit their algorithms and AI applications, proactively counter any algorithmic discrimination on grounds of protected categories such as ethnicity, gender or disability, and provide disclosures explaining each decision so that affected consumers can contest the validity of the data used in making it.

These initiatives would complement existing sectoral laws that already limit or regulate the use of AI for certain situations. In some states, using AI in the recruiting process is subject to information and consent requirements and, in New York City, the technology must be regularly subjected to bias audits. If eligibility decisions, such as for loans, but also housing or employment, are based on AI analysis, this may already trigger certain notification obligations and correction rights for consumers. 

But there are also some encouraging signals: the federal government and many states are actively promoting education and research into AI and improving related policy-making. Alabama has created a special council to advise legislators and the government about AI and, in Mississippi, machine learning and AI are now part of the state's official school curriculum. 


India

India does not have an overarching law governing AI, and it is not likely that such a law will be formulated any time soon. That said, sector-specific laws still have a bearing on how an AI enterprise can function in the Indian context.

The Indian government’s strategy papers over the past decade call for sector-specific "tweaks" for AI, as opposed to a bespoke law. For example, provisions in any data privacy law can be calibrated to deal with AI issues. In fact, in December 2021, in response to a question in Parliament, the Indian government stated that there are no plans to regulate AI and that matters such as facial recognition are and will be covered under other laws.

As in many fields, regulation will likely lag behind innovation. It remains to be seen whether the Indian government’s stance on regulating AI changes, having regard to developments in other jurisdictions as well as emergent public uses. As things stand, AI-based systems will still need to abide by current laws that may define or even limit their development. For instance, Indian law is particularly sensitive around sharing images of children, and any 'machine learning' product may need to account for local privacy and child protection regulations.


China

The latest Chinese law, the Administrative Measures on Algorithm Recommendation of Internet Information Services (effective as of 1 March 2022), regulates the use of algorithms to recommend information to internet users.

These measures require that AI algorithms used in recommendations be moral, accountable and transparent, which is not dissimilar to the principle for automated decision-making under the PIPL.

Like the PIPL, the measures also prohibit algorithms from processing personal information to apply differential pricing between users. Businesses are also required to be transparent about the basic rationale, intent and main mechanism for operating the algorithms. The measures require the algorithms to amount to trustworthy AI and, from a regulatory perspective, set up concrete rules on what, in many other jurisdictions, remains more of a concept.

These measures represent China’s ambitious approach to regulating AI technology and will bring changes to, and have an impact on, a wide range of businesses, especially those where an algorithm is a key element of the pricing strategy and business model.


* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.
