Big Data from A to Z


Welcome to another awesome list. This time, Big Data and all the tools Data Scientists and Data Engineers use to build platforms and models.

But first, let’s clear up any confusion on how Machine Learning, Artificial Intelligence and Deep Learning fit together:

Artificial Intelligence – The overarching name for the field concerned with bringing out intelligent behaviour in machines.
Machine Learning – The subfield that aims to achieve Artificial Intelligence by learning from data rather than explicit rules.
Deep Learning – A family of Machine Learning methods based on multi-layered neural networks, typically applied to large datasets.

[Diagram: how Artificial Intelligence, Machine Learning and Deep Learning fit together. Source: Rapidminer.com]


With that out of the way, let’s get into the list:

Athena
Batch Processing
Compute
Docker
Ethical Guidelines
Fuzzy Logic
GPU
Hadoop
Image Recognition
Jupyter Notebook
Kaggle
Linear Regression
MapReduce
Natural Language Processing
Overfitting
Pattern Recognition
Quantitative v Qualitative
Real Time
Spark
Testing
Unstructured Data
Volume and Velocity
Web Scraping
XML
NumPy
ZooKeeper



Athena

AWS Athena is a service used to query files in S3 buckets directly, on a pay-for-what-you-use basis. This makes it easy to start querying data in various formats without having to use an ETL tool to load it into a database first.

The service can be used on its own, integrated with AWS Glue as a Data Catalogue or with AWS Lambda as part of a bigger architecture.
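
A query against data in S3 can be kicked off straight from Python with boto3. This is a minimal sketch; the database, table and results bucket names below are placeholders, not real resources.

```python
# A minimal sketch of running an Athena query from Python with boto3.
# "analytics", "web_logs" and the results bucket are placeholder names.
import boto3

athena = boto3.client("athena", region_name="eu-west-1")

response = athena.start_query_execution(
    QueryString="SELECT page, COUNT(*) AS hits FROM web_logs GROUP BY page LIMIT 10",
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)

# Athena runs asynchronously: poll get_query_execution() with this ID until it finishes.
print(response["QueryExecutionId"])
```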



Batch Data Processing

Big Data projects rely on the Data Scientist being able to process terabytes or even petabytes of data. Tools like Apache Flink can get the job done either as continuous data streams or as batch jobs.
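
Flink aside, the basic idea of batch processing can be sketched in plain Python with pandas: work through a large file in fixed-size chunks rather than loading it all at once or reacting to every record as it arrives. The file and column names here are made up for illustration.

```python
# A minimal batch-processing illustration with pandas (not Flink):
# process a large CSV in fixed-size chunks instead of loading it whole.
# "events.csv" and the "amount" column are placeholder names.
import pandas as pd

total = 0.0
for chunk in pd.read_csv("events.csv", chunksize=100_000):
    total += chunk["amount"].sum()   # each batch is processed independently

print(f"Total across all batches: {total:.2f}")
```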



Compute

To allow Data Scientists to process big data sets, the infrastructure needs to be there to support them. This can be put in place by using autoscaling to make sure there is enough capacity to process the volume of data.

To make this even easier to manage, AWS has introduced Predictive Auto Scaling, which itself uses Machine Learning to forecast demand and scale compute resources up ahead of time.

So meta.



Docker

Sharing the results of Data Science experiments isn’t always easy. Operating systems and R libraries aren’t always compatible, depending on who you are sharing with. Security is also an issue when sharing datasets and final dashboards between users.

That’s where Docker comes in. Data Engineers can provision Docker images that pin the Operating System and library versions, so sandboxes or final products can be shared securely.



Ethical Guidelines

Use of customers’ personal information in analysis needs to be taken seriously, and guidelines need to be in place to keep it secure. This is about more than just complying with legal requirements: models should not have any kind of bias, and participants should always know where their data is being used.



Fuzzy Logic

In the string-matching sense, Fuzzy Logic is used to calculate the similarity (or edit distance) between two strings, so approximate matches can be found. It goes a step beyond wildcards in SQL or Regular Expressions in other languages, which still need an exact pattern to match.

In the Data Science world, we can use the FuzzyWuzzy Python library to do this across big data sets.
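
A quick sketch of what that looks like with FuzzyWuzzy (pip install fuzzywuzzy); the strings are just examples and each score ranges from 0 to 100.

```python
# A small FuzzyWuzzy sketch: score how similar two strings are (0-100).
from fuzzywuzzy import fuzz

print(fuzz.ratio("Apache Hadoop", "apache hadop"))                 # simple similarity
print(fuzz.partial_ratio("New York City", "New York"))             # best partial match
print(fuzz.token_sort_ratio("big data tools", "tools big data"))   # ignores word order
```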



GPU

Graphics Processing Units (GPUs) were designed to render images and are made up of many small cores. Because they can process huge batches of data and perform the same operation over and over in parallel, they are also widely used in Data Science, especially for training Deep Learning models.



Hadoop

The Open-Source Hadoop project is a collection of utilities that decouples Storage and Compute so these can be scaled up and down as needed.

The Hadoop Distributed File System (HDFS) breaks files into logical chunks for storage; Spark, MapReduce or another tool can then take over to do the processing (more on those later in the post).

Fun fact: Hadoop is the name of the creator’s son’s toy elephant.



Image Recognition

TensorFlow is a Machine Learning framework used to train models based on Neural Networks to perform image recognition.

Neural Networks turn inputs into numerical vectors, which they then use to interpret, cluster and classify the data.
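
As a rough sketch of what that looks like in TensorFlow, here is a tiny Keras model for classifying small images. The input shape, class count and the commented-out training call are placeholders; a real model needs labelled images and tuning.

```python
# A minimal TensorFlow/Keras image-classification sketch.
# The input shape and the 10 output classes are illustrative placeholders.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(64, 64, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),   # one output per image class
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# model.fit(train_images, train_labels, epochs=5)   # needs real labelled images
```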



Jupyter Notebook

Jupyter Notebooks run code, perform statistical analysis and present data visualisations all in one place. They support more than 40 programming languages, and the name is a nod both to Julia, Python and R and to Galileo’s notebooks recording the discovery of the moons of Jupiter.



Kaggle

If you are looking to get some practice in or need a dataset for a project Kaggle is the place to start. Once you’ve practised on a few of the test data sets you can then compete in competitions to solve problems. The community and discussions are friendly and you can use your tool of choice.



Linear Regression

Regression is one of the statistical techniques used in Data Science to predict how one variable influences another. Linear regression can be used to analyse the relationship between long supermarket queues and customer satisfaction, or between temperature and ice cream sales.

If you think there is a linear relationship between two variables, regression lets you quantify it and check how strong it is.
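
Here is a minimal scikit-learn sketch of the temperature and ice cream example; the numbers are invented purely for illustration.

```python
# A small linear regression sketch with scikit-learn; the data is made up.
import numpy as np
from sklearn.linear_model import LinearRegression

temperature = np.array([[15], [18], [21], [24], [27], [30]])   # degrees C
sales = np.array([120, 150, 210, 260, 330, 390])               # ice creams sold

model = LinearRegression().fit(temperature, sales)

print(model.coef_[0], model.intercept_)   # slope and intercept of the fitted line
print(model.predict([[25]]))              # predicted sales on a 25 degree day
```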



MapReduce

MapReduce is the compute part of the Hadoop ecosystem. Once we have stored the data using HDFS, we can then use MapReduce to do the processing. MapReduce splits the data into logical chunks, processes them in parallel (the map step), then aggregates the results back together (the reduce step).
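
The same idea can be shown with a toy word count in plain Python; a real MapReduce job distributes the map and reduce steps across a Hadoop cluster rather than running them in a single process.

```python
# A toy word count showing the map and reduce steps in plain Python.
from collections import Counter
from functools import reduce

chunks = ["big data is big", "data is everywhere"]   # stand-ins for HDFS blocks

# Map: each chunk produces its own word counts (run in parallel on a real cluster).
mapped = [Counter(chunk.split()) for chunk in chunks]

# Reduce: merge the partial counts into the final result.
totals = reduce(lambda a, b: a + b, mapped)
print(totals)   # Counter({'big': 2, 'data': 2, 'is': 2, 'everywhere': 1})
```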



Natural Language Processing

Natural Language Processing (NLP) is the arm of Artificial Intelligence that is concerned with how computers can derive meaning from human language. If you’ve ever used Siri, Cortana or Grammarly, you’ve encountered NLP.
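
As a small taste of NLP in Python, here is a sketch using NLTK to split a sentence into tokens and tag parts of speech (the exact resource names to download can vary slightly between NLTK versions).

```python
# A tiny NLP sketch with NLTK: tokenise a sentence and tag parts of speech.
# pip install nltk; the downloads fetch the tokeniser and tagger data.
import nltk

nltk.download("punkt")
nltk.download("averaged_perceptron_tagger")

sentence = "Natural Language Processing helps computers understand human language."
tokens = nltk.word_tokenize(sentence)
print(nltk.pos_tag(tokens))   # e.g. [('Natural', 'JJ'), ('Language', 'NNP'), ...]
```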



Overfitting

Both overfitting and underfitting lead to poor predictions.

Overfitting – happens when a model is too complex and picks up too much noise. The model effectively ‘memorises’ the training data, so it cannot generalise (‘fit’) to a new data set (see the sketch after these definitions).

Underfitting – happens when a model is too simple and doesn’t have enough parameters to capture the underlying trends.
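
A quick NumPy sketch of the idea: fit the same noisy, roughly linear data with a straight line and with a wildly over-complex polynomial, then predict just outside the training range. The data is synthetic.

```python
# Overfitting in miniature: a high-degree polynomial memorises the noise.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 15)
y = 2 * x + rng.normal(0, 2, size=x.size)   # roughly linear data with noise

simple = np.polyfit(x, y, deg=1)      # captures the underlying trend
complex_ = np.polyfit(x, y, deg=12)   # chases every wiggle in the noise

x_new = 10.5   # just beyond the training range
print(np.polyval(simple, x_new))     # a sensible prediction, around 21
print(np.polyval(complex_, x_new))   # typically far off: the model doesn't generalise
```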



Pattern Recognition

Pattern Recognition is used to detect similarities or irregularities in data sets. Practical applications can be seen in fingerprint identification, analysis of seismic activity and speech recognition.



Quantitative v Qualitative

Quantitative data is numerical and can be measured and modelled directly; qualitative data is descriptive (categories, free text, observations) and needs different handling. If you are moving into Data Science from an Engineering background, you may need to brush up on your statistics. Learn more about the skills needed to transition into the role in this fascinating interview with Julia Silge of Stack Overflow.

 



Real Time

Apache Kafka is a pub/sub system that allows streaming of data from logs, web activity and monitoring systems.

Kafka is broadly used for two classes of applications:

Building real-time streaming data pipelines that reliably move data between systems or applications.
Building real-time streaming applications that transform or react to the streams of data as they happen.
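
A minimal producer sketch with the kafka-python library; it assumes a broker running on localhost:9092 and a topic called web-activity, both of which are placeholders.

```python
# A minimal Kafka producer sketch using kafka-python.
# The broker address and topic name are placeholder assumptions.
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("web-activity", b'{"user": "alice", "page": "/home"}')
producer.flush()   # make sure the message is delivered before exiting
```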



Spark

Apache Spark, like MapReduce, is a tool for data processing.

Spark – processes data in-memory, so it is much faster. Useful when data needs to be processed iteratively or in (near) real time (see the sketch below).

MapReduce – must read from and write to disk, but can work with data sets far larger than the memory available. If results aren’t required right away, this may be a good choice.
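
A minimal PySpark sketch of the in-memory approach: read a file into a DataFrame and aggregate it. The file name and its columns are placeholders.

```python
# A minimal PySpark sketch: read a CSV and aggregate it in memory.
# "sales.csv" and its "region"/"amount" columns are placeholder names.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("spark-example").getOrCreate()

df = spark.read.csv("sales.csv", header=True, inferSchema=True)
df.groupBy("region").agg(F.sum("amount").alias("total")).show()

spark.stop()
```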



Testing

Artificial Intelligence (AI) has practical uses in Marketing (real-time product recommendations), Sales (VR systems helping shoppers make decisions) and Customer Support (Natural Language Processing).

An emerging use case is Software Testing. AI can be used to prioritise the order of tests, automate and optimise test cases, and free up QAs from tedious tasks.



Unstructured Data

Structured Data can be stored in a Relational Database in columns, rows and tables.

When it comes to Unstructured Data, which includes images, videos and free text, the storage needs change. Data Lakes can hold both types of data at low cost.

Data stored here is retrieved, read and organised based on need, making it popular with data scientists who would rather keep the quirks and ‘noise’ in than have the data cleaned and aggregated up front.



Volume and Velocity

In 2001 Big Data was defined by the three Vs: Volume (the sheer amount of data), Velocity (the speed at which it arrives) and Variety (the range of formats it takes).

Fast forward to today and industry publications have added further Vs, such as Veracity (how trustworthy the data is) and Value (what it is actually worth to the business).

There is debate over whether these truly describe what Big Data and Data Science are, but if you are researching the industry they will inevitably come up.



Web Scraping

Web Scraping is a common way to gather data for Big Data projects when the information isn’t available through an API or as a download.

To get started with Python, install Scrapy to extract structured data from websites, as in the sketch below.
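
Here is a minimal spider along the lines of the Scrapy tutorial; quotes.toscrape.com is a public practice site and the CSS selectors are illustrative.

```python
# A minimal Scrapy spider that extracts structured data from a practice site.
import scrapy

class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = ["https://quotes.toscrape.com/"]

    def parse(self, response):
        # Each quote block on the page becomes one structured record.
        for quote in response.css("div.quote"):
            yield {
                "text": quote.css("span.text::text").get(),
                "author": quote.css("small.author::text").get(),
            }
```

Save it as quotes_spider.py and run it with scrapy runspider quotes_spider.py -o quotes.json to get the results as JSON.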



XML

XML and JSON formats are common in the Big Data world as ways to store and transport data. To work with them in Python, check out ElementTree for parsing XML and the json module for JSON; both ship with the standard library.
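
A short sketch of both, using only the standard library; the sample XML document is made up.

```python
# Parse a small XML document with ElementTree, then emit it as JSON.
import json
import xml.etree.ElementTree as ET

xml_doc = """
<customers>
  <customer id="1"><name>Alice</name></customer>
  <customer id="2"><name>Bob</name></customer>
</customers>
"""

root = ET.fromstring(xml_doc)
records = [{"id": c.get("id"), "name": c.findtext("name")} for c in root]

print(json.dumps(records, indent=2))
```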



NumPy

I cheated a little bit but in any case …

NumPy is used in Python to perform scientific calculations, manipulate multi-dimensional arrays and integrate with databases and other tools.
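
A few lines showing the basics: build an array and run vectorised calculations over it without writing loops. The readings are invented.

```python
# NumPy basics: vectorised maths over an array of (made-up) sensor readings.
import numpy as np

readings = np.array([12.1, 14.3, 15.0, 13.8, 16.2])

print(readings.mean(), readings.std())   # summary statistics
print(readings * 1.8 + 32)               # element-wise Celsius to Fahrenheit
print(readings[readings > 14])           # boolean-mask filtering
```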



ZooKeeper

Apache ZooKeeper takes care of coordination in distributed systems, keeping clusters running and available. It maintains the ensemble by passing messages back and forth between servers and clients, and guarantees:

Sequential consistency – updates from a client are applied in the order they were sent.
Atomicity – updates either succeed completely or fail, with no partial results.
Single system image – a client sees the same view of the service whichever server it connects to.
Reliability – once an update has been applied, it persists until it is overwritten.
Timeliness – a client’s view of the system is up to date within a certain time bound.
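
From Python, the kazoo client library gives a feel for how ZooKeeper is used for shared configuration; the server address and znode path below are placeholders.

```python
# A small sketch using kazoo, a Python client for ZooKeeper.
# Assumes a ZooKeeper server on localhost:2181; the znode path is a placeholder.
from kazoo.client import KazooClient

zk = KazooClient(hosts="127.0.0.1:2181")
zk.start()

zk.ensure_path("/app/config")
zk.set("/app/config", b"max_workers=8")
value, stat = zk.get("/app/config")
print(value, stat.version)

zk.stop()
```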

 


Photo by Magda Ehlers from Pexels


May 27, 2019
