Big data from A to Z

Welcome to an awesome list of all the tools data scientists and data engineers use to build platforms and models.


Athena
Batch processing
Compute
Docker
Ethical guidelines
Fuzzy logic
GPU
Hadoop
Image recognition
Jupyter notebook
Kaggle
Linear regression
MapReduce
Natural language processing
Overfitting
Pattern recognition
Quantitative v qualitative
Real-time
Spark
Testing
Unstructured data
Volume and velocity
Web scraping
XML
NumPy
ZooKeeper



Athena

AWS Athena is a service for querying files in S3 buckets directly, on a pay-for-what-you-use basis. This makes it easy to start querying data in various formats without having to use an ETL tool to load it into a database first.

The service can be used on its own, integrated with AWS Glue as a data catalogue or with AWS Lambda as part of a bigger architecture.
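
As a quick illustration, here is a hedged sketch of kicking off a query with boto3's Athena client (the database, table and results bucket names are made up for this example):

    import boto3

    athena = boto3.client("athena")
    response = athena.start_query_execution(
        QueryString="SELECT page, COUNT(*) AS hits FROM web_logs GROUP BY page",
        QueryExecutionContext={"Database": "analytics"},
        ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
    )
    # Athena runs queries asynchronously: poll get_query_execution with this ID for status.
    print(response["QueryExecutionId"])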



Batch processing

Data science projects rely on data scientists being able to process terabytes or even petabytes of data. Tools like Apache Flink can get the job done using either data streams or batch processing.
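
A rough sketch with PyFlink's Table API, assuming the apache-flink package is installed (the toy rows and column names are made up):

    from pyflink.table import EnvironmentSettings, TableEnvironment

    # Create a table environment in batch mode rather than streaming mode.
    env = TableEnvironment.create(EnvironmentSettings.in_batch_mode())
    orders = env.from_elements([(1, 12.5), (2, 7.0), (1, 3.25)], ["customer_id", "amount"])
    env.create_temporary_view("orders", orders)
    env.sql_query("SELECT customer_id, SUM(amount) FROM orders GROUP BY customer_id").execute().print()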



Compute

To allow data scientists to process large data sets, the infrastructure needs to be there to support them. This can be put in place by using autoscaling to make sure there is enough capacity to process the volume of data.

To make this even easier to manage, AWS has introduced Predictive Auto Scaling, which uses machine learning to scale up the compute resources that support machine learning.

So meta.



Docker

Sharing the results of data science experiments isn’t always easy. Operating system versions and R libraries won’t necessarily match whoever you are sharing with, and security is a concern when passing datasets and final dashboards between users.

That’s where Docker comes in. Data engineers can provision Docker images that freeze the operating system and libraries, so sandboxes or final products can be shared securely.
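
A minimal Dockerfile sketch for an R-based analysis (the base image tag, packages and script name are illustrative, not from this post):

    # Pin the OS and R version so everyone gets an identical environment.
    FROM rocker/r-ver:4.3.1
    # Freeze the libraries the analysis depends on.
    RUN R -e "install.packages(c('dplyr', 'ggplot2'))"
    COPY analysis.R /app/analysis.R
    WORKDIR /app
    CMD ["Rscript", "analysis.R"]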



Ethical guidelines

The use of customers’ personal information in analysis needs to be taken seriously, and guidelines need to be in place to keep it secure. This is about more than complying with legal requirements: models should not introduce bias, and participants should always know where their data is being used.



Fuzzy logic

Fuzzy logic, in the string-matching sense, is used to calculate the distance between two strings. It solves a similar problem to wildcards in SQL or regular expressions in other languages, but returns a similarity score rather than a strict match.

In the data science world, we can use the FuzzyWuzzy Python library to apply fuzzy matching across large data sets.
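
A minimal sketch, assuming the fuzzywuzzy package is installed (the strings are made up):

    from fuzzywuzzy import fuzz

    # Similarity scores run from 0 to 100 and are based on Levenshtein distance.
    print(fuzz.ratio("Big Data Analytics Ltd", "Big Data Analytics Limited"))
    print(fuzz.partial_ratio("athena", "aws athena service"))  # 100: one string contains the other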



GPU

Graphics processing units (GPUs) were designed to render images, and to do so they are made up of many cores working in parallel. Because they can take huge batches of data and perform the same operation over and over, they are also well suited to data science workloads.
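
A hedged sketch with PyTorch, assuming a CUDA-capable GPU is available (it falls back to the CPU otherwise):

    import torch

    device = "cuda" if torch.cuda.is_available() else "cpu"
    a = torch.rand(1000, 1000, device=device)
    b = torch.rand(1000, 1000, device=device)
    c = a @ b   # the same multiply-and-add runs across thousands of GPU cores at once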



Hadoop

The open-source Hadoop project is a collection of utilities that decouples storage from compute so each can be scaled up and down as needed.

The Hadoop Distributed File System (HDFS) breaks files into logical chunks for storage; Spark, MapReduce or another tool can then take over to do the processing (more on those later in the post).

Fun fact: Hadoop is the name of the creator’s son’s toy elephant.



Image recognition

TensorFlow is a machine learning framework used to train neural network models to perform image recognition.

Neural networks turn their inputs into vectors of numbers, which they then use to interpret, cluster and classify the data.
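
A minimal sketch of an image classifier in TensorFlow's Keras API (the layer sizes are illustrative; MNIST is used because it ships with the library):

    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),    # image pixels -> vector
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),  # one score per digit class
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    (x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
    model.fit(x_train / 255.0, y_train, epochs=1)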



Jupyter notebook

Jupyter Notebooks run code, perform statistical analysis and present data visualisations all in one place. Jupyter supports over 40 languages, and the name is a nod to Galileo’s notebooks recording the discovery of the moons of Jupiter.



Kaggle

If you are looking to get some practice in, or need a dataset for a project, Kaggle is the place to start. Once you’ve practised on a few of the test data sets, you can compete in competitions to solve problems. The community and discussions are friendly, and you can use your tool of choice.



Linear regression

Regression is one of the statistical techniques used in data science to estimate how one variable influences another. Linear regression can be used to analyse the relationship between queue length at the supermarket and customer satisfaction, or between temperature and ice cream sales.

If you think there is a relationship between two things, you can use regression to quantify it and test how strong it is.
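
A minimal sketch with scikit-learn (the temperature and sales numbers are made up for illustration):

    import numpy as np
    from sklearn.linear_model import LinearRegression

    temperature = np.array([[15], [20], [25], [30], [35]])   # degrees C
    ice_cream_sales = np.array([120, 180, 260, 330, 410])    # units sold

    model = LinearRegression().fit(temperature, ice_cream_sales)
    print(model.coef_[0], model.intercept_)   # slope and intercept of the fitted line
    print(model.predict([[28]]))              # predicted sales on a 28-degree day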



MapReduce

MapReduce is the compute part of the Hadoop ecosystem. Once the data is stored in HDFS, MapReduce can do the processing: it splits the data into logical chunks, processes them in parallel (the map step), then aggregates the results back together (the reduce step).
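
A minimal word-count sketch using the mrjob Python library, which is one of several ways to write MapReduce jobs that can run locally or on a Hadoop cluster:

    from mrjob.job import MRJob

    class WordCount(MRJob):
        def mapper(self, _, line):
            # Map step: emit (word, 1) for every word in this chunk of input.
            for word in line.split():
                yield word.lower(), 1

        def reducer(self, word, counts):
            # Reduce step: aggregate the counts emitted for each word.
            yield word, sum(counts)

    if __name__ == "__main__":
        WordCount.run()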



Natural language processing

Natural language processing (NLP) is the arm of artificial intelligence concerned with how computers can derive meaning from human language. If you’ve ever used Siri, Cortana or Grammarly, you’ve encountered NLP.
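
As a quick taste, here is a hedged sketch using NLTK's VADER sentiment analyser (assumes the nltk package is installed; the lexicon is downloaded on first run):

    import nltk
    from nltk.sentiment import SentimentIntensityAnalyzer

    nltk.download("vader_lexicon")
    sia = SentimentIntensityAnalyzer()
    # Scores for negative, neutral and positive sentiment plus a compound summary.
    print(sia.polarity_scores("The checkout queue was painfully slow today."))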



Overfitting

Both overfitting and underfitting lead to poor predictions.

Overfitting – happens when a model is too complex and picks up too much noise. The model ‘memorises’ the training data rather than learning general patterns, so it can’t ‘fit’ new data it hasn’t seen before.

Underfitting – happens when a model is too simple and doesn’t have enough parameters to capture the underlying trends.
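
A minimal sketch with NumPy: fitting polynomials of different degrees to a handful of noisy, roughly linear points (the data is made up) shows both failure modes:

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 10)
    y = 2 * x + rng.normal(scale=0.1, size=x.size)   # roughly linear data plus noise

    underfit = np.polyfit(x, y, 0)   # degree 0: too simple, misses the trend entirely
    good_fit = np.polyfit(x, y, 1)   # degree 1: matches the underlying relationship
    overfit = np.polyfit(x, y, 9)    # degree 9: chases the noise, generalises badly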



Pattern recognition

Pattern recognition is used to detect similarities or irregularities in data sets. Practical applications can be seen in fingerprint identification, analysis of seismic activity and speech recognition.



Quantitative v qualitative

If you are moving into data science from an engineering background, you may need to brush up on your statistics.

Quantitative – data with a mathematical value to indicate a quantity, amount, or measurement.

Qualitative – data grouped into classes – for example, people with different colours of eyes — blue eyes, green eyes, brown eyes.



Real-time

Apache Kafka is a pub/sub system that allows streaming of data from logs, web activity and monitoring systems.

Kafka is used for two classes of applications:

  • Building real-time streaming data pipelines that reliably get data between systems or applications
  • Building real-time streaming applications that transform or react to the streams of data
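
A hedged sketch with the kafka-python client library (the broker address and topic name are assumptions for illustration):

    from kafka import KafkaProducer, KafkaConsumer

    # Publish an event to a topic.
    producer = KafkaProducer(bootstrap_servers="localhost:9092")
    producer.send("web-activity", b'{"page": "/checkout", "user": 42}')
    producer.flush()

    # Subscribe and react to the stream of events as they arrive.
    consumer = KafkaConsumer("web-activity", bootstrap_servers="localhost:9092",
                             auto_offset_reset="earliest")
    for message in consumer:
        print(message.value)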



Spark

Apache Spark, like MapReduce, is a tool for data processing.

Spark – processes data in memory, so it is much faster. Useful when data needs to be processed iteratively or in real time.

MapReduce – must read from and write to disk, but can work with far larger data sets than Spark. If results aren’t required right away, this may be a good choice.
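
A minimal PySpark sketch (the file path and column names are made up for illustration):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("sales-summary").getOrCreate()
    df = spark.read.csv("sales.csv", header=True, inferSchema=True)
    # The aggregation is planned lazily and executed in memory across the cluster.
    df.groupBy("region").sum("amount").show()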



Testing

Artificial intelligence (AI) has practical uses in marketing (real-time product recommendations), sales (VR systems helping shoppers make decisions) and customer support (NLP).

An emerging use case is software testing. AI can be used to prioritise the order of tests, automate and optimise test cases, and free up QAs from tedious tasks.



Unstructured data

Structured data can be stored in a relational database in columns, rows and tables.

When it comes to unstructured data, which includes images, videos and free text, the storage needs change. Data lakes can hold both types of data at low cost.

Data stored here is retrieved, read and organised as required, making it popular with data scientists who would rather keep the quirks and ‘noise’ in than have it cleaned and aggregated.



Volume and velocity

In 2001 Big Data was defined by the three Vs:

  • Volume
  • Velocity
  • Variety

Fast forward to today and there are additional Vs used in industry publications:

  • Value
  • Veracity
  • Variability
  • Visualisation

There is debate over whether these extra Vs are relevant, or truly describe what Big Data and data science are, but if you are researching the industry they will inevitably come up.



Web scraping

Use cases for web scraping in data science projects include:

  • Pulling data from social media sites or forums for sentiment analysis
  • Fetching prices and products for comparison
  • Analysing site content for ranking and comparison

To get started with Python, install Scrapy to extract structured data from websites.
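
A minimal spider sketch (the URL is Scrapy's own tutorial site, and the CSS selectors match its markup):

    import scrapy

    class QuotesSpider(scrapy.Spider):
        name = "quotes"
        start_urls = ["https://quotes.toscrape.com/"]

        def parse(self, response):
            # Yield one item of structured data per quote found on the page.
            for quote in response.css("div.quote"):
                yield {
                    "text": quote.css("span.text::text").get(),
                    "author": quote.css("small.author::text").get(),
                }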



XML

XML and JSON formats are common in the Big Data world as ways to store and transport data. To use these with Python, check out ElementTree for parsing XML and the built-in json module for JSON.
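
A short sketch of both parsers from the standard library (the sample documents are made up):

    import json
    import xml.etree.ElementTree as ET

    root = ET.fromstring("<order><item sku='A1'>Widget</item></order>")
    item = root.find("item")
    print(item.attrib["sku"], item.text)   # A1 Widget

    record = json.loads('{"sku": "A1", "name": "Widget"}')
    print(record["name"])                  # Widget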



NumPy

I cheated a little bit but in any case …

NumPy is used in Python to integrate with databases, perform scientific calculations and manipulate arrays.
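
A minimal sketch of vectorised array maths (the numbers are made up):

    import numpy as np

    prices = np.array([2.5, 3.0, 4.75])
    quantities = np.array([10, 4, 7])
    revenue = prices * quantities            # element-wise multiplication, no Python loop
    print(revenue.sum(), revenue.mean())     # aggregate across the whole array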



ZooKeeper

Apache ZooKeeper takes care of keeping clusters running and available. It maintains the network by passing messages back and forth and guarantees:

Sequential consistency – updates from a client will be applied in the order that they were sent.

Atomicity – updates either succeed or fail. No partial results.

Single system image – a client will see the same view of the service regardless of the server that it connects to.

Reliability – once an update has been applied, it will persist from that time forward until a client overwrites the update.

Timeliness – the clients’ view of the system is guaranteed to be up-to-date within a certain time-bound.
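
A hedged sketch with the kazoo client library (the localhost address and znode path are assumptions for illustration):

    from kazoo.client import KazooClient

    zk = KazooClient(hosts="127.0.0.1:2181")
    zk.start()
    zk.ensure_path("/app/config")
    zk.set("/app/config", b"feature_flag=on")   # updates are atomic and applied in order
    data, stat = zk.get("/app/config")
    print(data, stat.version)
    zk.stop()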

