Welcome to an awesome list of all the tools data scientists and data engineers use to build platforms and models.
AWS Athena is a service used to query files in S3 buckets directly on a pay-for-what-you-use basis. This makes it easy to get going querying data in various formats without having to use an ETL tool to load it into a database.
To allow data scientists to process large data sets, the infrastructure needs to be there to support them. This can be put in place by using autoscaling to make sure there is enough capacity to process the volume of data.
Sharing the results of data science experiments isn’t always easy. Operating systems and R libraries aren’t always compatible depending on who you are sharing with. Security is also an issue when sharing datasets and final dashboards between users.
That’s when Docker comes in. Data engineers can provision Docker images that freeze the operating system and libraries so sandboxes or final products can be shared securely.
Use of customers’ personal information in analysis needs to be taken seriously and guidelines need to be in place to keep it secure. This is more than just complying with legal requirements. Models should not have any kind of bias and participants should always know where their data is being used.
In the data science world, fuzzy string matching – finding strings that nearly, but don’t exactly, match – can be done with the FuzzyWuzzy Python library across large data sets.
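FuzzyWuzzy ships scorers such as `fuzz.ratio` that return a 0–100 similarity score. As a dependency-free sketch of the same idea, Python’s standard `difflib` can be used (the `similarity` helper and the example records below are mine, not part of the library):

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> int:
    """Return a 0-100 similarity score, in the spirit of fuzzy matching."""
    return round(SequenceMatcher(None, a.lower(), b.lower()).ratio() * 100)

# Deduplicating near-identical records in a data set
print(similarity("Apple Inc.", "apple inc"))   # high score: likely the same company
print(similarity("Apple Inc.", "Microsoft"))   # low score: different companies
```

In practice you would run a scorer like this over every candidate pair and merge records above a chosen threshold.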
Graphics processing units (GPUs) were designed to render images and are made up of many cores. Because they can process huge batches of data and perform the same task over and over, they are also used in data science.
Hadoop Distributed File System (HDFS) breaks files into logical chunks for storage; Spark, MapReduce or another tool can then take over to do the processing (more on that later in the post).
Fun fact: Hadoop is the name of the creator’s son’s toy elephant.
TensorFlow is a machine learning framework used to train models, such as neural networks that perform image recognition.
Neural networks break up inputs into vectors which they use to then interpret, cluster and classify.
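As an illustration of that vector idea, a single artificial neuron – the simplest building block of a neural network – weights an input vector, sums it, and applies an activation to produce a classification. The weights, bias and inputs below are invented for illustration:

```python
# A single artificial neuron: inputs arrive as a vector of numbers; the
# neuron weights them, sums them, and applies an activation to classify.

def neuron(inputs, weights, bias):
    # Weighted sum of the input vector
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    # Step activation: fire (1) or don't (0)
    return 1 if total > 0 else 0

# Toy example: classify two 3-feature input vectors with hand-picked weights
print(neuron([0.9, 0.1, 0.4], weights=[1.0, -0.5, 0.2], bias=-0.5))  # 1
print(neuron([0.1, 0.9, 0.1], weights=[1.0, -0.5, 0.2], bias=-0.5))  # 0
```

A real network stacks thousands of these neurons in layers and learns the weights from data rather than hand-picking them.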
Jupyter Notebooks run code, perform statistical analysis and present data visualisations all in one place. Jupyter supports over 40 languages and its name is a nod to Galileo’s notebooks recording the discovery of the moons of Jupiter.
If you are looking to get some practice in or need a dataset for a project Kaggle is the place to start. Once you’ve practised on a few of the test data sets you can then compete in competitions to solve problems. The community and discussions are friendly and you can use your tool of choice.
Regression is one of the statistical techniques used in data science to predict how one variable influences another. Linear regression can be used to analyse the relationship between long queues at the supermarket and customer satisfaction or temperature and ice cream sales.
If you think there is a relationship between two things, you can use regression to test and quantify it.
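A minimal least-squares fit can be written in plain Python (libraries like scikit-learn or statsmodels would normally do this for you; the temperature and sales figures below are invented for illustration):

```python
def linear_fit(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical data: daily temperature (°C) vs ice cream sales
temps = [15, 18, 21, 24, 27]
sales = [30, 38, 44, 52, 58]
slope, intercept = linear_fit(temps, sales)
print(f"each extra degree adds roughly {slope:.2f} sales")
```

The slope tells you how strongly one variable moves with the other – though, as ever, correlation is not causation.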
MapReduce is the compute part of the Hadoop ecosystem. Once we have stored the data using HDFS, we can then use MapReduce to do the processing. MapReduce splits the data into logical chunks, processes them in parallel, and then aggregates the results.
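The classic illustration is a word count. This pure-Python sketch mimics the map, shuffle and reduce phases on a single machine – Hadoop runs the same steps, but distributed across a cluster:

```python
from collections import defaultdict

# Map phase: each chunk of input is turned into (key, value) pairs
def map_phase(chunk):
    return [(word, 1) for word in chunk.split()]

# Shuffle phase: group all values by key
def shuffle(pairs):
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

# Reduce phase: aggregate each key's values
def reduce_phase(grouped):
    return {key: sum(values) for key, values in grouped.items()}

chunks = ["big data big ideas", "big data pipelines"]  # one chunk per node in Hadoop
pairs = [pair for chunk in chunks for pair in map_phase(chunk)]
counts = reduce_phase(shuffle(pairs))
print(counts)  # {'big': 3, 'data': 2, 'ideas': 1, 'pipelines': 1}
```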
Natural language processing (NLP) is the arm of artificial intelligence that is concerned with how computers can derive meaning from human language. If you’ve ever used Siri, Cortana, or Grammarly you’ve encountered NLP.
Both overfitting and underfitting lead to poor predictions.
Overfitting – happens when a model is too complex and has picked up too much noise. The model ‘memorises’ the training data rather than generalising from it, so it can’t ‘fit’ another data set.
Underfitting – happens when a model is too simple and there aren’t enough parameters to capture trends.
Pattern recognition is used to detect similarities or irregularities in data sets. Practical applications can be seen in fingerprint identification, analysis of seismic activity and speech recognition.
If moving into data science from an engineering background you may need to brush up on your statistics.
Quantitative – data with a mathematical value to indicate a quantity, amount, or measurement.
Qualitative – data grouped into classes – for example, people with different colours of eyes — blue eyes, green eyes, brown eyes.
Apache Kafka is a pub/sub system that allows streaming of data from logs, web activity and monitoring systems.
Kafka is used for two classes of applications:
- Building real-time streaming data pipelines that reliably get data between systems or applications
- Building real-time streaming applications that transform or react to the streams of data
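Kafka itself is reached through client libraries such as kafka-python. As a dependency-free sketch of the pub/sub pattern it implements, here is a toy in-memory broker (the `Broker` class is mine, not Kafka’s API – Kafka adds persistence, partitioning and replication on top of this idea):

```python
from collections import defaultdict

class Broker:
    """Toy in-memory pub/sub broker illustrating the Kafka pattern."""
    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> list of callbacks
        self.log = defaultdict(list)          # topic -> append-only message log

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        self.log[topic].append(message)       # Kafka persists messages in a log
        for callback in self.subscribers[topic]:
            callback(message)                 # fan out to every consumer

broker = Broker()
received = []
broker.subscribe("web-activity", received.append)
broker.publish("web-activity", {"user": 42, "action": "click"})
print(received)  # [{'user': 42, 'action': 'click'}]
```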
Apache Spark, like MapReduce, is a tool for data processing.
Spark – can process in-memory so is much faster. Useful if data needs to be processed iteratively or in real time.
MapReduce – must read from and write to a disk but can work with far larger data sets than Spark. If results aren’t required right away this may be a good choice.
Artificial intelligence (AI) has practical uses in marketing with real-time product recommendations, sales with VR systems helping shoppers make decisions and customer support with NLP.
An emerging use case is software testing. AI can be used to prioritise the order of tests, automate and optimise test cases, and free up QAs from tedious tasks.
Structured data can be stored in a relational database in columns, rows and tables.
When it comes to unstructured data which includes images, videos, and text the storage needs change. Data lakes can hold both types of data at low cost.
Data stored here is retrieved and read when required and organised based on need, making it popular with data scientists who would rather keep the quirks and ‘noise’ in, rather than having it cleaned and aggregated.
In 2001 Big Data was defined by the three Vs: volume, velocity and variety.
Fast forward to today and industry publications have added further Vs, such as veracity and value.
There is debate over whether these are relevant, or truly describe what Big Data and data science are, but if you are researching the industry these terms will inevitably come up.
Use cases for web scraping in data science projects include:
- Pulling data from social media sites or forums for sentiment analysis
- Fetching prices and products for comparison
- Analysing site content to rank and compare content
To get started using Python install Scrapy to extract structured data from websites.
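Scrapy is the full-featured route. As a self-contained sketch of the underlying idea, Python’s standard `html.parser` can pull prices out of a page – the `PriceScraper` class and the HTML fragment below are invented for illustration, and a real scraper would fetch the page with `urllib` or `requests` first:

```python
from html.parser import HTMLParser

class PriceScraper(HTMLParser):
    """Collect the text of any element whose class attribute is 'price'."""
    def __init__(self):
        super().__init__()
        self.in_price = False
        self.prices = []

    def handle_starttag(self, tag, attrs):
        if ("class", "price") in attrs:
            self.in_price = True

    def handle_endtag(self, tag):
        self.in_price = False

    def handle_data(self, data):
        if self.in_price:
            self.prices.append(data.strip())

# Invented product-page fragment for illustration
page = ('<ul><li><span class="price">£9.99</span></li>'
        '<li><span class="price">£12.50</span></li></ul>')
scraper = PriceScraper()
scraper.feed(page)
print(scraper.prices)  # ['£9.99', '£12.50']
```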
I cheated a little bit but in any case …
NumPy is used in Python to integrate with databases, perform scientific calculations and manipulate arrays.
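A quick taste of the array side – NumPy arrays support vectorised maths, so there are no explicit loops (the temperature values are invented for illustration):

```python
import numpy as np

# A NumPy array applies maths elementwise across the whole array at once
temperatures_c = np.array([15.0, 18.0, 21.0, 24.0])

temperatures_f = temperatures_c * 9 / 5 + 32   # elementwise conversion
print(temperatures_f)        # 59.0, 64.4, 69.8, 75.2
print(temperatures_c.mean()) # 19.5
```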
Apache ZooKeeper takes care of keeping clusters running and available. It maintains the network by passing messages back and forth and guarantees:
Sequential consistency – updates from a client will be applied in the order that they were sent.
Atomicity – updates either succeed or fail. No partial results.
Single system image – a client will see the same view of the service regardless of the server that it connects to.
Reliability – once an update has been applied, it will persist from that time forward until a client overwrites the update.
Timeliness – the clients’ view of the system is guaranteed to be up-to-date within a certain time-bound.