
Data Processing Agreement for Data Sharing by Microsoft


With the GDPR (General Data Protection Regulation) in place since 2018, the European Union has made data processing difficult for researchers and data miners with malicious intent. Other data protection laws have drawn attention as well, such as India's Personal Data Protection Bill and Canada's PIPEDA.

Brazil's General Data Protection Law (LGPD) and the California Consumer Privacy Act (CCPA) are set to take effect in 2020. Ahead of them, Microsoft has taken an initiative of its own: it plans to introduce three data sharing agreements, which for now have been released in draft form.

Azure Data Share:
Microsoft has already debuted Azure Data Share. It enables companies to share massive data sets with one another securely, unlike data sharing over FTP or web APIs. The web research and development industry needs such a service to process data and mine useful patterns, and Azure Data Share speeds up that exchange over a highly secured IT network.

Let's walk through the agreements Microsoft has put forward for review.

Data Sharing Agreements by Microsoft: The tech giant has proposed three agreements. Here is what they cover:
  1. Open Use of Data Agreement: This agreement is essentially a provision for keeping personal data out of shared data sets. Data processing service providers often use open data sets without scrutinizing their contents; this agreement can bar personal data from being shared under it.
  2. Computational Use of Data Agreement: This agreement is proposed for sharing data sets drawn from publicly available sources to train artificial intelligence (AI) models. Data mining service providers must agree with third parties to exclude any personal data and copyrighted material, such as text snippets, and the agreement expressly bans republishing or redistributing the protected data sets.
  3. Data Use Agreement for Open AI Model Development: This agreement applies to privacy-protected data, or data that remains under the control of its owners.
Microsoft aims to flatten the roadblocks that make data sharing across companies inconvenient. The biggest challenges are inconsistencies and the lack of standardized data-sharing terms and licensing agreements; these proposed agreements are intended to close those gaps.
For now, the tech giant has put them on trial to gather community feedback and input. Some of the terms will be available on its GitHub code-sharing site.

In a nutshell, the fourth industrial revolution urgently needs data processing and sharing agreements to keep data breaches at bay.
