Welcome to an awesome list of the tools Data Scientists and Data Engineers use to build platforms and models.
- Athena
- Batch Processing
- Compute
- Docker
- Ethical Guidelines
- Fuzzy Logic
- GPU
- Hadoop
- Image Recognition
- Jupyter Notebook
- Kaggle
- Linear Regression
- MapReduce
- Natural Language Processing
- Overfitting
- Pattern Recognition
- Quantitative v Qualitative
- Real Time
- Spark
- Testing
- Unstructured Data
- Volume and Velocity
- Web Scraping
- XML
- NumPy
- ZooKeeper
### Athena

AWS Athena is a service used to query files in S3 buckets directly, on a pay-for-what-you-use basis. This makes it easy to start querying data in various formats without having to use an ETL tool to load it into a database first.

The service can be used on its own, integrated with AWS Glue as a data catalogue, or with AWS Lambda as part of a bigger architecture.
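As a rough sketch of how this looks from Python with boto3 (the database, table and results bucket here are hypothetical):

```python
import boto3

athena = boto3.client("athena", region_name="eu-west-1")

# Athena runs the SQL against files in S3 and writes results back to S3.
response = athena.start_query_execution(
    QueryString="SELECT page, COUNT(*) AS hits FROM web_logs GROUP BY page",
    QueryExecutionContext={"Database": "analytics"},       # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
print(response["QueryExecutionId"])  # poll get_query_execution with this id
```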
### Batch Data Processing

Data Science projects rely on the Data Scientist being able to process terabytes or even petabytes of data. Tools like Apache Flink can get the job done using data streams or batch processing.
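As a minimal sketch, assuming PyFlink is installed, a bounded (batch) word count with Flink's Table API might look like this:

```python
from pyflink.table import EnvironmentSettings, TableEnvironment

# Batch mode processes a bounded data set in one pass;
# swap in in_streaming_mode() for unbounded streams.
env = TableEnvironment.create(EnvironmentSettings.in_batch_mode())

words = env.from_elements([("big",), ("data",), ("big",)], ["word"])
env.create_temporary_view("words", words)

# Count occurrences of each word across the whole bounded input.
env.sql_query(
    "SELECT word, COUNT(*) AS cnt FROM words GROUP BY word"
).execute().print()
```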
### Compute

To allow Data Scientists to process large data sets, the infrastructure needs to be there to support them. This can be put in place by using autoscaling to make sure there is enough capacity to process the volume of data.

To make this even easier to manage, AWS has introduced Predictive Auto Scaling, which uses Machine Learning to forecast demand and scale compute resources ahead of it.
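For illustration, a predictive scaling policy can be attached to an EC2 Auto Scaling group with boto3 along these lines (the group name and target value are hypothetical):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# The policy forecasts load from CloudWatch history and
# scales capacity ahead of the predicted demand.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="data-processing-asg",   # hypothetical group
    PolicyName="predictive-cpu",
    PolicyType="PredictiveScaling",
    PredictiveScalingConfiguration={
        "MetricSpecifications": [
            {
                "TargetValue": 50.0,  # aim for ~50% average CPU
                "PredefinedMetricPairSpecification": {
                    "PredefinedMetricType": "ASGCPUUtilization"
                },
            }
        ],
        "Mode": "ForecastAndScale",
    },
)
```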
### Docker

Sharing the results of Data Science experiments isn't always easy. Operating systems and R libraries aren't always compatible, depending on who you are sharing with. Security is also an issue when sharing datasets and final dashboards between users.

That's where Docker comes in. Data Engineers can provision Docker images that freeze the operating system and libraries, so sandboxes or final products can be shared securely.
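A minimal sketch of such an image (the notebook name and pinned versions are illustrative, not prescriptive):

```dockerfile
# Pin the base OS and Python version so results reproduce anywhere.
FROM python:3.11-slim

# Exact library versions freeze the analysis environment.
RUN pip install --no-cache-dir numpy==1.26.4 pandas==2.2.2 notebook==7.2.2

WORKDIR /work
# Hypothetical notebook shipped alongside the environment.
COPY analysis.ipynb .

EXPOSE 8888
CMD ["jupyter", "notebook", "--ip=0.0.0.0", "--no-browser", "--allow-root"]
```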
### Ethical Guidelines

The use of customers' personal information in analysis needs to be taken seriously, and guidelines need to be in place to keep it secure. This is about more than just complying with legal requirements: models should not have any kind of bias, and participants should always know where their data is being used.
### Fuzzy Logic

Fuzzy matching, often filed under the fuzzy logic umbrella, is used to calculate the distance between two strings: how similar they are, rather than whether they match exactly. It is similar in spirit to using wildcards in SQL and Regular Expressions in many other languages, but it returns a similarity score instead of a strict match.
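A minimal sketch using only the Python standard library:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    # Ratio of matching characters: 1.0 is identical, 0.0 shares nothing.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# A typo still scores highly, where an exact match or LIKE would fail.
print(similarity("Data Science", "Data Sceince"))  # ~0.92
print(similarity("Data Science", "Accounting"))    # much lower
```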
### GPU

Graphics Processing Units (GPUs) are designed to process images, and are made up of many cores. Because they can process huge batches of data, performing the same task over and over in parallel, they are also used in Data Science.
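For illustration, CuPy mirrors the NumPy API so the same operation can be dispatched to GPU cores (this assumes an NVIDIA GPU with CUDA and `cupy` installed):

```python
import numpy as np
import cupy as cp  # requires an NVIDIA GPU and CUDA

a_cpu = np.random.random((4000, 4000))

# The same matrix multiply, spread over thousands of GPU cores in parallel.
a_gpu = cp.asarray(a_cpu)        # copy the array to GPU memory
result_gpu = a_gpu @ a_gpu       # runs on the GPU
result = cp.asnumpy(result_gpu)  # copy the result back to the CPU
```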
### Hadoop

The open-source Hadoop project is a collection of utilities that decouples Storage and Compute so each can be scaled up and down as needed.

> **Fun fact:** Hadoop is named after the creator's son's toy elephant.
### Image Recognition

TensorFlow is a Machine Learning framework used to train models using Neural Networks to perform image recognition.
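A minimal sketch of such a network in TensorFlow's Keras API, sized for 28x28 grayscale images (e.g. the MNIST digits); layer sizes are illustrative:

```python
import tensorflow as tf

# A small convolutional network: convolutions learn visual features,
# the dense layer maps them to 10 output classes.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
model.summary()  # then: model.fit(train_images, train_labels, epochs=5)
```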
### Jupyter Notebook

Jupyter Notebooks run code, perform statistical analysis and present data visualisations all in one place. They support over 40 languages, and the name Jupyter nods both to the core languages Julia, Python and R, and to Galileo's notebooks recording the discovery of the moons of Jupiter.
### Kaggle

If you are looking to get some practice in, or need a dataset for a project, Kaggle is the place to start. Once you've practised on a few of the test data sets you can then compete in competitions to solve problems. The community and discussions are friendly, and you can use your tool of choice.
### Linear Regression

Regression is one of the statistical techniques used in Data Science to predict how one variable influences another. Linear regression can be used to analyse the relationship between long queues at the supermarket and customer satisfaction, or between temperature and ice cream sales.

If you think there is a relationship between two things, you can use regression to test it and measure how strong it is.
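A minimal sketch with scikit-learn, using made-up temperature and ice cream sales figures purely for illustration:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical data: daily temperature (°C) vs ice creams sold.
temperature = np.array([[16], [20], [24], [28], [32]])
sales = np.array([120, 180, 260, 310, 400])

model = LinearRegression().fit(temperature, sales)

print(model.coef_[0], model.intercept_)   # slope and intercept of the fitted line
print(model.predict([[30]]))              # predicted sales on a 30°C day
print(model.score(temperature, sales))    # R²: how strong the relationship is
```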
### MapReduce

MapReduce is the compute part of the Hadoop ecosystem. Once we have stored the data using HDFS, we can use MapReduce to do the processing. MapReduce splits the data into logical chunks, processes them in parallel, then aggregates the results.
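The pattern itself is simple enough to sketch in plain Python. This is a single-machine illustration of the map, shuffle and reduce phases, not Hadoop's actual distributed implementation:

```python
from collections import defaultdict
from itertools import chain

documents = ["big data", "big compute", "data lakes"]

# Map: each chunk independently emits (word, 1) pairs.
mapped = [[(word, 1) for word in doc.split()] for doc in documents]

# Shuffle: group the pairs by key.
grouped = defaultdict(list)
for word, count in chain.from_iterable(mapped):
    grouped[word].append(count)

# Reduce: aggregate each group back together.
totals = {word: sum(counts) for word, counts in grouped.items()}
print(totals)  # {'big': 2, 'data': 2, 'compute': 1, 'lakes': 1}
```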
### Natural Language Processing

Natural Language Processing (NLP) is the arm of Artificial Intelligence concerned with how computers can derive meaning from human language. If you've ever used Siri, Cortana or Grammarly, you've encountered NLP.
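As a small taste of what this involves, NLTK can split text into tokens and label each with its part of speech (model names passed to `nltk.download` may vary slightly between NLTK versions):

```python
import nltk

# One-off downloads of the tokeniser and tagger models.
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

tokens = nltk.word_tokenize("Natural language is messy, but computers can still parse it.")
print(nltk.pos_tag(tokens))  # each word labelled: noun, verb, adjective, ...
```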
### Overfitting

Both overfitting and underfitting lead to poor predictions.

**Overfitting** happens when a model is too complex and has absorbed too much noise. The model 'memorises' the training data, quirks included, and can't generalise to another data set.

**Underfitting** happens when a model is too simple and there aren't enough parameters to capture trends.
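A minimal sketch with NumPy makes the trade-off visible: fit polynomials of increasing degree to noisy data and compare errors on points the model never saw.

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(0, 1, 40)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.size)  # noisy signal

# Hold out every other point as 'new' data the model never saw.
x_train, y_train = x[::2], y[::2]
x_test, y_test = x[1::2], y[1::2]

for degree in (1, 3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    # Degree 1 underfits (both errors high); degree 9 overfits:
    # training error keeps falling while test error climbs.
    print(f"degree {degree}: train {train_err:.3f}, test {test_err:.3f}")
```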
### Pattern Recognition

Pattern Recognition is used to detect similarities or irregularities in data sets. Practical applications can be seen in fingerprint identification, analysis of seismic activity and speech recognition.
### Quantitative v Qualitative

If you are moving into Data Science from an Engineering background, you may need to brush up on your statistics. Learn more about the skills needed to transition into the role in this fascinating interview with Julia Silge of Stack Overflow.
### Real Time

Apache Kafka is a pub/sub system that allows streaming of data from logs, web activity and monitoring systems.

Kafka is used for two classes of applications: building real-time streaming data pipelines that reliably move data between systems, and building real-time streaming applications that transform or react to those streams.
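A minimal producer/consumer sketch with the `kafka-python` client, assuming a broker at `localhost:9092` and a hypothetical `web-activity` topic:

```python
import json
from kafka import KafkaProducer, KafkaConsumer  # pip install kafka-python

# Publish a web-activity event as JSON.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda event: json.dumps(event).encode("utf-8"),
)
producer.send("web-activity", {"user": 42, "page": "/pricing"})
producer.flush()

# Subscribe and react to each event as it arrives.
consumer = KafkaConsumer(
    "web-activity",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)
for message in consumer:
    print(message.value)
```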
### Spark

Apache Spark, like MapReduce, is a tool for distributed data processing. Unlike MapReduce, it keeps intermediate results in memory, which makes it much faster for iterative workloads.
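The classic word count, sketched in PySpark (the input path is hypothetical):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("word-count").getOrCreate()

# Read a text file and count occurrences of each word.
lines = spark.read.text("logs.txt")  # hypothetical input file
words = lines.rdd.flatMap(lambda row: row.value.split())
counts = words.map(lambda w: (w, 1)).reduceByKey(lambda a, b: a + b)

print(counts.take(10))  # first ten (word, count) pairs
spark.stop()
```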
### Testing

Artificial Intelligence (AI) has practical uses in Marketing with real-time product recommendations, in Sales with VR systems helping shoppers make decisions, and in Customer Support with Natural Language Processing.

An emerging use case is Software Testing. AI can be used to prioritise the order of tests, automate and optimise test cases, and free up QAs from tedious tasks.
### Unstructured Data

Structured Data can be stored in a Relational Database in columns, rows and tables.

When it comes to Unstructured Data, which includes images, videos and text, the storage needs change. Data Lakes can hold both types of data at low cost.

Data stored there is retrieved and read when required, and organised based on need, making it popular with data scientists who would rather keep the quirks and 'noise' in than have it cleaned and aggregated.
### Volume and Velocity

In 2001 Big Data was defined by the three Vs:

- Volume
- Velocity
- Variety

Fast forward to today and there are additional Vs used in industry publications:

- Value
- Veracity
- Variability
- Visualisation

There is debate over whether these are relevant, or truly describe what Big Data and Data Science are, but if you are researching the industry they will inevitably come up.
### Web Scraping

Use cases for Web Scraping in Data Science projects include:

- Pulling data from social media sites or forums for sentiment analysis
- Fetching prices and products for comparison
- Analysing site content to rank and compare content

To get started using Python, install Scrapy to extract structured data from websites.
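A minimal spider sketch; the URL and CSS selectors are hypothetical and would need adjusting to the target site's markup:

```python
import scrapy

class PricesSpider(scrapy.Spider):
    name = "prices"
    start_urls = ["https://example.com/products"]  # hypothetical site

    def parse(self, response):
        # Selectors are illustrative; inspect the real page to find yours.
        for product in response.css("div.product"):
            yield {
                "name": product.css("h2::text").get(),
                "price": product.css("span.price::text").get(),
            }
```

Running `scrapy runspider prices_spider.py -o prices.json` would crawl the pages and write the scraped items to a JSON file.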
### XML

XML and JSON formats are common in the Big Data world as ways to store and transport data. To use these with Python, check out ElementTree for parsing XML and the built-in json module for JSON.
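Both live in the standard library, so parsing either format is a few lines (the order document below is made up for illustration):

```python
import xml.etree.ElementTree as ET
import json

# Parse an XML document and walk its elements.
xml_doc = "<order id='1'><item sku='A42' qty='2'/></order>"
root = ET.fromstring(xml_doc)
print(root.tag, root.attrib)              # order {'id': '1'}
for item in root.iter("item"):
    print(item.attrib["sku"], item.attrib["qty"])

# Parse the equivalent JSON into plain dicts and lists.
payload = json.loads('{"order": 1, "items": [{"sku": "A42", "qty": 2}]}')
print(payload["items"][0]["sku"])         # A42
```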
### NumPy

I cheated a little bit, but in any case …

NumPy is used in Python to integrate with databases, perform scientific calculations and manipulate arrays.
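A quick taste of what array manipulation buys you (the sensor readings are invented for the example):

```python
import numpy as np

readings = np.array([20.1, 21.4, 19.8, 22.0, 35.0])  # hypothetical sensor data

print(readings.mean(), readings.std())   # vectorised statistics, no Python loop
print(readings[readings > 30])           # boolean masking to pick out outliers
print((readings * 1.8 + 32).round(1))    # element-wise Celsius to Fahrenheit
```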
### ZooKeeper

Apache ZooKeeper takes care of keeping clusters running and available. It maintains the network by passing messages back and forth, and guarantees:

**Atomicity** – Updates either succeed or fail. No partial results.
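From Python, the kazoo library is a common way to talk to ZooKeeper. A minimal sketch, assuming an ensemble reachable at `127.0.0.1:2181` and hypothetical node paths:

```python
from kazoo.client import KazooClient  # pip install kazoo

zk = KazooClient(hosts="127.0.0.1:2181")
zk.start()

# znodes behave like a small, consistent filesystem for cluster state.
zk.ensure_path("/services/reporting")
# An ephemeral node disappears if this client dies: handy for leader tracking.
zk.create("/services/reporting/leader", b"worker-1", ephemeral=True)

data, stat = zk.get("/services/reporting/leader")
print(data.decode(), stat.version)
zk.stop()
```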
Photo by Magda Ehlers from Pexels.