I first heard this term at the Tableau Conference a few years ago. In one of the conference’s first HR tracks, I attended every presentation even remotely close to the intersection of HR and Tableau. Every presenter opened with a disclaimer that we weren’t looking at real data. I would have cringed had I not heard it. One presenter, though, introduced the term ‘fata’ – fake data. I’ve adopted it ever since.
In the pursuit of sharing ideas in the People Analytics space, one hurdle is the extremely sensitive nature of the actual data we’re working with. Names, emails, gender, age, social security numbers, and much more are often part of an employee data file and useful in analysis. Sharing this information, however, is improper and could cost you your job in people analytics. Especially outside your organization, but even within it, the information must be protected from those who shouldn’t have access.
This tutorial will show a few ways you can create this kind of data – useful for development, sharing internally, and presenting your work to the world, all without losing your job.
Using any of these options, you’ll be able to create a complete data set quickly, without exposing any real user or employee data.
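As a minimal sketch of one option, the snippet below uses the Faker package (a common choice for generating fake personal data). The column names, row count, and value ranges here are arbitrary assumptions for illustration, not a prescribed schema:

```python
# pip install faker pandas
import pandas as pd
from faker import Faker

fake = Faker()
Faker.seed(42)  # seed so the fake data is reproducible

# Build a small fake employee file; columns and ranges are illustrative
records = [
    {
        "name": fake.name(),
        "email": fake.email(),
        "ssn": fake.ssn(),
        "date_of_birth": fake.date_of_birth(minimum_age=21, maximum_age=65),
        "job_title": fake.job(),
        "salary": fake.random_int(min=40_000, max=160_000),
    }
    for _ in range(100)
]

employees = pd.DataFrame(records)
print(employees.head())
```

Every row looks like real employee data but belongs to no one, so the resulting file is safe to demo and share.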
Beginner’s Guide to Using Python with HR Data | Exploration Series
Part Three – Seaborn
In this first tutorial series, I’m exploring the IBM HR Attrition and Performance data set. This is a great data set for demonstrating the possibilities of machine learning and other data science techniques.
Now we’ll move on to using Seaborn for our visualizations. The benefit of Seaborn is that it further abstracts the complex underlying calls needed to visualize your data, letting you focus on your analysis task rather than on how to implement what you want to do. It also provides built-in functionality that would be incredibly complex to implement without it.
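As one hedged example of what that abstraction buys you – assuming you’ve downloaded the IBM attrition CSV (the filename below is an assumption about the Kaggle download) with its Attrition and MonthlyIncome columns – a single Seaborn call produces a grouped comparison:

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Filename assumed; adjust to wherever you saved the Kaggle CSV
df = pd.read_csv("WA_Fn-UseC_-HR-Employee-Attrition.csv")

# One call draws grouped boxes: income distribution by attrition status
sns.boxplot(data=df, x="Attrition", y="MonthlyIncome")
plt.title("Monthly Income by Attrition Status")
plt.show()
```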
Beginner’s Guide to Using Python with HR Data | Exploration Series
Part Two – Pandas
In this first tutorial series, I’m exploring the IBM HR Attrition and Performance data set. This is a great data set for demonstrating the possibilities of machine learning and other data science techniques.
Next, we’ll take a look at the power of Pandas to plot our data. As a budding data [analyst/scientist/enthusiast], I’ve found Pandas to be my most common import and tool. Plotting directly from Pandas objects makes it very easy to stay in the flow of analyzing data. Let’s get going.
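A quick sketch of that flow, assuming the attrition CSV from this series (the filename and column names are assumptions about the downloaded file):

```python
import pandas as pd
import matplotlib.pyplot as plt  # pandas plotting renders through Matplotlib

# Filename assumed; adjust to wherever you saved the Kaggle CSV
df = pd.read_csv("WA_Fn-UseC_-HR-Employee-Attrition.csv")

# Plot straight off a column without leaving the DataFrame
df["Age"].plot(kind="hist", bins=20, title="Age Distribution")
plt.show()

# Aggregations chain naturally into a plot call
df.groupby("Department")["MonthlyIncome"].mean().plot(
    kind="barh", title="Average Monthly Income by Department"
)
plt.show()
```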
Beginner’s Guide to Using Python with HR Data | Exploration Series
Part One – Matplotlib
In this first tutorial series, I’m exploring the IBM HR Attrition and Performance data set. This is a great data set for demonstrating the possibilities of machine learning and other data science techniques.
In this next walkthrough, we’ll begin to ‘see’ our data through the use of visualization packages. In R there are three common plotting tools, and other packages extend them. In Python there is Matplotlib, and most other packages build on this foundation. So the decision of where to start with Python plotting is an easy one – let’s get going.
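To give a feel for that foundation, here is a minimal sketch of an explicit Matplotlib figure (the CSV filename and column names are assumptions about the downloaded data set):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Filename assumed; adjust to wherever you saved the Kaggle CSV
df = pd.read_csv("WA_Fn-UseC_-HR-Employee-Attrition.csv")

# With Matplotlib you build the figure explicitly, piece by piece
fig, ax = plt.subplots()
ax.scatter(df["Age"], df["MonthlyIncome"], alpha=0.3)
ax.set_xlabel("Age")
ax.set_ylabel("Monthly Income")
ax.set_title("Age vs. Monthly Income")
plt.show()
```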
Beginner’s Guide to Using Python with HR Data | Exploration Series
Part Zero – The Basics
In this first tutorial series, I’m exploring the IBM HR Attrition and Performance data set. This is a great data set for demonstrating the possibilities of machine learning and other data science techniques.
I’ll be back with tutorial posts that walk through how to apply more advanced techniques to generate predictive and prescriptive insights from the data. But that’d be jumping ahead. First, the basics. Exploratory Data Analysis, or EDA.
It’s often tempting to jump right in and try to find the most advanced insight possible. When I’m learning something new, my first instinct is to apply it straight away, skipping the basics. Eventually I stumble, and it’s always over something I could have avoided by simply spending a little time really understanding the data I have.
To properly analyze data, you must understand it. Is it complete (are there missing values)? Are there errors (values outside normal bounds)? And generally, what information is contained within the data? Depending on where a request comes from in a work context, you may not control the data, so what you get is what you have; it’s often much easier when you’ve pulled your own data – it’s just not always possible, or even smart, to do so.
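A few Pandas calls cover most of those first checks. As a sketch (the filename is an assumption about where you saved the data set):

```python
import pandas as pd

# Filename assumed; adjust to wherever you saved the Kaggle CSV
df = pd.read_csv("WA_Fn-UseC_-HR-Employee-Attrition.csv")

df.info()                 # column types and non-null counts: is it complete?
print(df.isnull().sum())  # missing values per column
print(df.describe())      # ranges and summary stats: anything out of bounds?
```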
Always begin with an exploration of your data. In this tutorial, I’m digging out my current favorite tool – Python. If you’ve never programmed, if Excel still frightens you a bit, or if you’re firmly in the R camp – read on; this series will show what’s possible while exploring five different packages to interpret and understand the data.