


Data analysis has replaced data acquisition as the bottleneck to evidence-based decision making - we are drowning in it. Extracting knowledge from large, heterogeneous, and noisy datasets requires not only powerful computing resources, but the programming abstractions to use them effectively. The abstractions that emerged in the last decade blend ideas from parallel databases, distributed systems, and programming languages to create a new class of scalable data analytics platforms that form the foundation for data science at realistic scales.

In this course, you will learn the landscape of relevant systems, the principles on which they rely, their tradeoffs, and how to evaluate their utility against your requirements. You will learn how practical systems were derived from the frontier of research in computer science and what systems are coming on the horizon. Cloud computing, SQL and NoSQL databases, MapReduce and the ecosystem it spawned, Spark and its contemporaries, and specialized systems for graphs and arrays will be covered. You will also learn the history and context of data science, the skills, challenges, and methodologies the term implies, and how to structure a data science project.

At the end of this course, you will be able to:

1. Describe common patterns, challenges, and approaches associated with data science projects, and what makes them different from projects in related fields.
2. Identify and use the programming models associated with scalable data manipulation, including relational algebra, MapReduce, and other data flow models.
3. Use database technology adapted for large-scale analytics, including the concepts driving parallel databases, parallel query processing, and in-database analytics.
4. Evaluate key-value stores and NoSQL systems, describe their tradeoffs with comparable systems, the details of important examples in the space, and future trends.
5. "Think" in MapReduce to effectively write algorithms for systems including Hadoop and Spark.
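As a small taste of what "thinking in MapReduce" means, here is a minimal single-machine sketch of the word-count pattern in plain Python. The function names (`map_phase`, `shuffle`, `reduce_phase`) are illustrative, not part of any real framework's API; systems like Hadoop and Spark run the same three phases partitioned across many machines.

```python
from collections import defaultdict

# Illustrative sketch of the MapReduce programming model: word count.
# In a real system each phase runs in parallel across a cluster.

def map_phase(doc):
    # Mapper: emit one (word, 1) pair per word in the document.
    for word in doc.split():
        yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle: group all values by key, as the framework does
    # between the map and reduce phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    # Reducer: combine all values for one key into a single result.
    return (key, sum(values))

def word_count(docs):
    pairs = (pair for doc in docs for pair in map_phase(doc))
    return dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())

counts = word_count(["big data big systems", "data systems"])
# counts == {"big": 2, "data": 2, "systems": 2}
```

The key design constraint is that the mapper sees one record at a time and the reducer sees one key at a time, which is what lets the framework distribute both phases freely.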
