How to become a good data mining programmer?

In this post, I will discuss what it takes to be a good data mining programmer and how to become one.


Data mining is a broad field that can be approached from several angles. Some people with a mathematical background will take a statistical approach and use statistical tools to study data. Others will use ready-made commercial or open-source data mining software to analyze their data. In this post, we will discuss the computer science view of data mining. It is aimed at programmers who would like to become good at designing and implementing data mining algorithms.

There are some great benefits to being not just a user of data mining tools, but a data mining programmer. First, you can implement algorithms that are not offered in existing data mining tools. This matters because many data mining tools are restricted to a small set of algorithms. For example, for a task such as clustering, hundreds of algorithms have been proposed to handle different scenarios, yet general-purpose data mining tools often offer only a few of them. Second, you can download open-source algorithms and adapt them to your needs. Third, you could eventually design your own data mining algorithms and implement them efficiently.

So now that we have talked about the advantages, let’s talk about how to become a good data mining programmer. We can break this down into two aspects: being good at programming and knowledgeable in computer science in general, and being good at programming data mining algorithms specifically.

To be good at programming, you should have a solid knowledge of at least one programming language. Choosing a programming language matters because performance is generally important in data mining. So you may go for a language like C++ that compiles to machine code, or a language like Java or C# that is reasonably fast and can be more convenient to use. You should avoid web languages such as PHP and JavaScript, which are less efficient, unless you have good reasons to use them.

After that, you should try to get a good knowledge of the data structures offered by your programming language. A good programmer should know when to use each data structure. This is important because you will eventually optimize your algorithms. In data mining, optimizations can make the difference between an algorithm that runs for hours and one that runs in a few minutes, or between using gigabytes and megabytes of memory! So you should get to know the main data structures that are offered, such as array lists, linked lists, binary trees, hash tables, hash sets, bitsets, and priority queues (heaps). But more importantly, you should know that there are many data structures that are not shipped with your programming language, and how to look them up in books or on websites.
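To make the difference concrete, here is a small sketch of my own (in Python, purely for illustration; the same reasoning applies in Java or C++) that contrasts membership tests in a plain list, which scans element by element, with a hash set, which answers in constant time on average:

```python
import time

n = 100_000
items = list(range(n))       # a plain list: membership test scans element by element
item_set = set(items)        # a hash set: membership test is O(1) on average

lookups = 1_000
target = n - 1               # worst case for the list: the last element

# Time repeated lookups in the list (O(n) each).
start = time.perf_counter()
for _ in range(lookups):
    found = target in items
list_time = time.perf_counter() - start

# Time the same lookups in the hash set (O(1) each, on average).
start = time.perf_counter()
for _ in range(lookups):
    found = target in item_set
set_time = time.perf_counter() - start

print(f"list: {list_time:.4f}s, set: {set_time:.4f}s")
```

On a typical machine, the set is orders of magnitude faster here. In a data mining algorithm that performs millions of such lookups, this single choice of data structure can dominate the running time.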

Besides, you should try to get better at algorithmics (designing efficient algorithms) and computer science in general. There are many ways to do that, such as taking courses on the topic or reading books. But most importantly, you need to put the theory into practice and do some programming, which leads me to the key part of this post.

To become good at programming data mining algorithms, you need to write data mining algorithms. To get started, you should read some data mining books, such as the book by Tan, Steinbach & Kumar, or the book by Han & Kamber. I recommend starting by implementing some simple algorithms without optimizations. For example, K-means or Apriori are relatively easy to implement. After you have debugged your implementation and checked that it generates correct results, you should spend time thinking about how to optimize it. First, think about optimizations by yourself. Then look at how other people did it by reading websites and articles, or by studying other people's code. Most likely, many optimizations have already been proposed. After that, you can implement the optimizations, and then move on to more complex algorithms. Finally, remember that Rome was not built in a day. Give yourself some time to learn!
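To illustrate the kind of first implementation I have in mind, here is a deliberately simple, unoptimized K-means sketch (a toy example of mine, in Python for brevity; the initialization from the first k points is a simplification, since real implementations usually pick random seeds or use k-means++):

```python
def kmeans(points, k, iterations=10):
    """Plain, unoptimized k-means on 2D points given as [x, y] lists."""
    # Naive initialization: take the first k points as centroids.
    centroids = [list(p) for p in points[:k]]
    clusters = [[] for _ in range(k)]
    for _ in range(iterations):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            dists = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centroids]
            clusters[dists.index(min(dists))].append(p)
        # Update step: move each centroid to the mean of its cluster.
        for i, cluster in enumerate(clusters):
            if cluster:
                centroids[i] = [sum(p[0] for p in cluster) / len(cluster),
                                sum(p[1] for p in cluster) / len(cluster)]
    return centroids, clusters

# Two obvious groups: one near (0, 0) and one near (10, 10).
data = [[0, 0], [10, 10], [0, 1], [1, 0], [10, 11], [11, 10]]
centroids, clusters = kmeans(data, k=2)
print(centroids)
```

Once such a version produces correct clusters on small hand-made data, you can start optimizing, for example by caching distances or stopping early when the assignments no longer change.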

I have obviously not mentioned everything. In particular, being good at mathematics is also important. If you have some additional thoughts, you can share them in the comment section. By the way, if you like this blog, you can subscribe to the RSS feed or follow my Twitter account to get notified about the next blog posts.

Why is it important for researchers to publish source code and datasets?

Today, I will discuss why it is important that researchers share their source code and data.


As some of you know, I’m working on the design of data mining algorithms. More specifically, I’m working on algorithms for discovering patterns in databases, a problem that dates back to the 1990s. Hundreds of papers have been published on this topic. However, when searching the Web, I found that very little source code, or even binaries, are available. On some specialized topics like uncertain itemset mining, for example, about 20 algorithms have been published, but only about two papers provide the source code and datasets.

This is a serious problem for research.

First, some of these algorithms are hard to implement. For people who are not familiar with the subject, or who are average programmers, it is a huge waste of time to implement the algorithms again, and this could deter them from using the algorithms at all. As some people say: why reinvent the wheel?

Second, algorithm descriptions in research papers are often incomplete due to lack of space. Some researchers will omit optimization details for this reason. Others will intentionally not provide enough details in their paper so that other people cannot implement the algorithm properly and beat its performance.

Third, let’s say that someone develops a new algorithm and wants to compare its performance with an already published algorithm. If this person cannot find the source code or binaries of the published algorithm, they have to implement it themselves. However, this version will differ from the original, and depending on how it is implemented, the comparison could be unfair.

Now, let’s talk about the advantages of sharing your source code and data.

First, as a researcher, if you publish your source code, it is much more likely that someone will use your algorithm or application. And whoever uses your algorithm or application will cite you, which benefits you.

Second, other researchers can save time if they don’t have to reimplement the same algorithms. They can use this time to do more research, which benefits the whole research community.

Third, if you are the author of an algorithm, other people can compare against your own implementation. By sharing your source code, you therefore make sure that the comparison will be fair.

Fourth, other people are more likely to integrate your algorithm or software into other software, or to modify it to develop new algorithms and software. Again, this will benefit you because these people will cite you, and the more people cite you, the more people will read your papers.

(Update in 2018) To conclude, I will talk about the benefits that I have received from sharing my work as open-source software over the last few years. I’m the author of the SPMF data mining software. This software offers more than 100 algorithms, most of them implemented by me, including a dozen that are my own algorithms. In about eight years, the website has received more than 500,000 visitors, and the software has been cited in more than 500 research papers and journal articles. Some people have applied the algorithms in biology, website clickstream analysis, and even chemistry. This has also greatly contributed to increasing the citations of my research papers.

I hope that this blog post has convinced you that it is important to share the source code and data of your work with other researchers.

By the way, if you like this blog, you can subscribe to the RSS feed or follow my Twitter account to get notified about the next blog posts. Also, please leave a comment below if you have additional thoughts or a story about this.

Philippe Fournier-Viger is a professor of Computer Science and also the founder of the open-source data mining software SPMF, offering more than 145 data mining algorithms.




This is the first post. This blog will be updated weekly (or more often, if I have time). It will cover data mining news and other topics related to data mining, or just research and algorithms in general. I will write text, discuss code, and share some thoughts. I hope you will enjoy it!

By the way, I’m a computer science professor at a university in Canada. I have done research on various topics, including data mining, intelligent tutoring systems, and cognitive modeling. I’m the author of an open-source data mining software that you can download here: http://www.philippe-fournier-viger.com/spmf/

Also, you are welcome to join my network on LinkedIn, and you can also follow me on Twitter.