Plagiarism by Bhawna Mallick and Kriti Raj at Galgotias College of Engineering & Technology

Today, I found that a paper written by Bhawna Mallick, Kriti Raj and Himani from India plagiarizes one of my papers. These persons are affiliated with the Galgotias College of Engg & Tech., India, a college affiliated to Uttar Pradesh Technical University.

The paper is called “Weather prediction using CPT+ algorithm” and was published in the International Journal Of Engineering And Computer Science ISSN: 2319-7242 Volume 5 Issue 5 May 2016, Page No. 16467-16470. (https://www.ijecs.in/issue/v5-i5/23%20ijecs.pdf ).


The paper is an obvious case of plagiarism as it copies several pages of my PAKDD 2015 paper about the CPT+ algorithm, published a year before ( http://www.philippe-fournier-viger.com/spmf/PAKDD2015_Compact_Prediction_tree+.pdf ).

The authors of the plagiarized paper, Bhawna Mallick et al., claim to propose the CPT+ algorithm in their paper, and infringe our copyright by copying several pages of our paper, including figures and text, which is unacceptable.

Who are Bhawna Mallick, Kriti Raj, et al.?

I have done a little search to find out who these persons are. According to the paper, their e-mail addresses are:

  • Bhawna Mallick. Galgotias College of Engg & Technology, Greater Noida, UP, India bhawna.mallick@galgotiacollege.edu
  • Kriti Raj. Galgotias College of Engg & Technology, Greater Noida, UP, India kritiraj31may@gmail.com
  • Himani.  Galgotias College of Engg & Technology, Greater Noida, UP, India himanichaprana@gmail.com

The website of the Galgotias College of Engineering & Technology can be found here: http://www.galgotiacollege.edu/gcet.asp


On that website, it can be found that Bhawna Mallick et al. are affiliated with the Department of Computer Science and Engineering. In particular, it is possible to find the following information about Bhawna Mallick, the first author of the plagiarized paper:

Prof. (Dr.) Bhawna Mallick, Professor & HOD, Data Mining and Fuzzy Logic

Apparently, Bhawna Mallick is a professor and the head of the department. Here is a screenshot of her webpage:

http://www.galgotiacollege.edu/gei-courses/docse-faculty-bhawna-mallick.asp


Bhawna Mallick (Galgotias College of Engg & Technology) homepage

The fact that this person is the head of that department and put her name as first author on a plagiarized paper raises serious questions about the quality of that college.

It also raises questions about the quality of the journal that published the paper.

And who is Kriti Raj?

According to this LinkedIn page, he is a student at the Galgotias College:
https://www.linkedin.com/in/kriti-raj-06ab2574/

Kriti Raj Galgotia College Plagiarism


What will happen?

Well, this is not the first time that someone has plagiarized my papers. I stopped counting a while ago; I think it has happened more than 10 times in the last 7 years. So what will I do? I will send an e-mail to the Galgotias College of Engg & Technology to let them know about the situation, and I will ask them to take appropriate action. Moreover, I will ask that the journal paper be retracted by contacting the journal editor.

Hopefully, the Galgotias College of Engg & Technology will take this problem of plagiarism seriously and take appropriate action against those responsible. In India, this is not always the case. In another case of plagiarism that I reported just a few months ago at the Ilahia College of Engineering and Technology, the college decided to simply ignore the e-mails and take no action, which is very bad from an academic perspective.


How to publish in top conferences/journals? The Blue Ocean Strategy

A question that many young researchers ask is how to get papers published in top conferences and journals. There are many answers to this question. In this blog post, I will discuss a strategy for carrying out research called the “Blue Ocean Strategy”. This strategy was initially proposed in the field of marketing, but I will explain how it is also highly relevant in academia.

The Blue Ocean Strategy was proposed in a 2005 book by Kim, C. W. and Mauborgne, R. The idea is quite simple. Let’s say that you want to start a new business and get rich. To start your business, you need to choose a market where your business will operate. Let’s say that you decide to start selling pens. However, there are already a lot of well-established pen manufacturers, so this market is extremely competitive and profit margins are very low. It might thus be very difficult to become successful in this market if you just try to produce pens like every other manufacturer. It is like jumping into a shark tank!

The Blue Ocean Strategy suggests that rather than fighting for an existing market, it is better to create a new market (what is called a “blue ocean”). By creating a new market, the competition becomes irrelevant and you may easily gain many new customers rather than fighting for a small part of an existing market. Thus, instead of trying to compete with well-established manufacturers in a very competitive market (a “red ocean”), it is more worthwhile to start a new market (a “blue ocean”). This could be, for example, a new type of pen with some additional features.

Now let me explain how this strategy is relevant for academia.

In general, there are two main types of research projects:

  • a researcher tries to provide a solution to an existing research problem, or
  • a researcher works on a new research problem.

The first case can be seen as a red ocean, since many researchers may already be working on that existing problem and it may be hard to publish something better. The second case is a blue ocean, since the researcher is the first one to work on a new topic. In that case, it can be easier to publish something, since you do not need to do better than other people: you are the first one on that topic.

For example, I work in the field of data mining. In this field, many researchers work on publishing faster or more memory-efficient algorithms for existing data mining problems. Although this research is needed, it can be viewed in some cases as lacking originality, and it can be very competitive to publish a faster algorithm. On the other hand, if researchers instead work on proposing new problems, then the research appears more original, and it becomes much easier to publish an algorithm, as it does not need to be more efficient than previous algorithms. Besides, from my observation, top conferences/journals often prefer papers on new problems to incremental work on existing problems.

Thus, not only is it easier to provide a solution to a new research problem, but top conferences, in some fields at least, put a lot of value on papers that address new research problems. So why fight to be the best on an existing research problem?

Of course, there are some exceptions to this idea. If a researcher succeeds in publishing an exceptional paper in a red ocean (on an existing research problem), the impact may actually be greater, especially if the research problem is very popular. But the point is that publishing in a red ocean may be harder than in a blue ocean. And of course, not all blue oceans are equal. It is thus also important to find good ideas for new research topics (good blue oceans).

Personally, for these reasons, I generally try to work on “blue ocean” research projects.

Conclusion

In this blog post, I have discussed the “Blue Ocean Strategy” and how it can be applied in academia to help publish in top conferences/journals. Of course, there are also a lot of other things to consider to write a good paper. But this will be for another blog post. 😉

If you like this blog and want to support it, please share it on social networks (Twitter, LinkedIn, etc.), write some comments, and continue reading other articles on this blog. 🙂


This is why you should visualize your data!

In the data science and data mining communities, several practitioners apply various algorithms to data without attempting to visualize the data. This is a big mistake because visualizing the data sometimes greatly helps to understand it. Some phenomena only become obvious when the data is visualized. In this blog post, I will give a few examples to convince you that visualization can greatly help to understand data.

An example of why using statistical measures may not be enough

The first example that I will give is Anscombe's quartet, proposed by the statistician Francis Anscombe. It is a set of four datasets consisting of (X, Y) points. These four datasets are defined as follows:

Dataset I          Dataset II         Dataset III        Dataset IV
x      y           x      y           x      y           x      y
10.0   8.04        10.0   9.14        10.0   7.46        8.0    6.58
8.0    6.95        8.0    8.14        8.0    6.77        8.0    5.76
13.0   7.58        13.0   8.74        13.0   12.74       8.0    7.71
9.0    8.81        9.0    8.77        9.0    7.11        8.0    8.84
11.0   8.33        11.0   9.26        11.0   7.81        8.0    8.47
14.0   9.96        14.0   8.10        14.0   8.84        8.0    7.04
6.0    7.24        6.0    6.13        6.0    6.08        8.0    5.25
4.0    4.26        4.0    3.10        4.0    5.39        19.0   12.50
12.0   10.84       12.0   9.13        12.0   8.15        8.0    5.56
7.0    4.82        7.0    7.26        7.0    6.42        8.0    7.91
5.0    5.68        5.0    4.74        5.0    5.73        8.0    6.89

To get a feel for the data, the first thing that many people would do is to calculate some statistical measures such as the mean, variance, and standard deviation. These measures describe the central tendency of the data and its dispersion. If we do this for the four datasets above, we obtain:

Dataset 1:   mean of X = 9, variance of X= 11, mean of Y = 7.5, variance of Y = 4.125
Dataset 2:   mean of X = 9, variance of X= 11, mean of Y = 7.5, variance of Y = 4.125
Dataset 3:   mean of X = 9, variance of X= 11, mean of Y = 7.5, variance of Y = 4.125
Dataset 4:   mean of X = 9, variance of X= 11, mean of Y = 7.5, variance of Y = 4.125

So these datasets appear quite similar. They have exactly the same values for all the above statistical measures. How about calculating the correlation between X and Y for each dataset to see how the points are correlated?

Dataset 1:   correlation 0.816
Dataset 2:  correlation 0.816
Dataset 3:  correlation 0.816
Dataset 4:  correlation 0.816

Ok, so these datasets are very similar, aren’t they? Let’s try something else. Let’s calculate the regression line of each dataset (that is, the linear equation that best fits the data points).

Dataset 1:  y = 3.00 + 0.500x
Dataset 2:  y = 3.00 + 0.500x
Dataset 3:  y = 3.00 + 0.500x
Dataset 4:  y = 3.00 + 0.500x
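As a side note, if you want to verify these numbers yourself, here is a minimal Python sketch using numpy. It uses the values from the table above, with the sample variance (ddof=1), which gives the variance of 11 reported above; the printed values may show a few more decimals than the rounded ones quoted in this post.

```python
# Recompute the statistics of Anscombe's quartet from the table above
import numpy as np

x123 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]   # x values of datasets I to III
x4 = [8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8]          # x values of dataset IV
ys = {
    'I':   [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68],
    'II':  [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74],
    'III': [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73],
    'IV':  [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89],
}

for name, y in ys.items():
    x = np.array(x4 if name == 'IV' else x123, dtype=float)
    y = np.array(y)
    slope, intercept = np.polyfit(x, y, 1)       # regression line y = intercept + slope * x
    print('Dataset %s: mean x = %.1f, var x = %.1f, mean y = %.2f, var y = %.3f, '
          'correlation = %.3f, regression: y = %.2f + %.3fx'
          % (name, x.mean(), x.var(ddof=1), y.mean(), y.var(ddof=1),
             np.corrcoef(x, y)[0, 1], intercept, slope))
```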

Again the same!  Should we stop here and conclude that these datasets are the same?

This would be a big mistake because actually, these four datasets are quite different! If we visualize these four datasets with a scatter plot, we obtain the following:

Visualization of the four datasets of Anscombe's quartet (credit: Wikipedia, CC BY-SA 3.0)

This shows that these datasets are actually quite different. The lesson from this example is that by visualizing the data, differences sometimes become quite obvious.

Visualizing the relationship between two attributes

Simple visualization techniques like scatter plots are also very useful for quickly analyzing the relationship between pairs of attributes in a dataset. For example, by looking at the two following scatter plots, we can quickly see that the first one shows a positive correlation between the X and Y axes (when values on the X axis are greater, values on the Y axis are generally also greater), while the second one shows a negative correlation (when values on the X axis are greater, values on the Y axis are generally smaller).

(a) positive correlation  (b) negative correlation (Credit: Data Mining Concepts and Techniques, Han & Kamber)

If we plot two attributes on the X and Y axes of a scatter plot and there is no correlation between the attributes, it may result in something similar to the following figures:

No correlation between the X and Y axis (Credit: Data Mining Concepts and Techniques, Han & Kamber)

These examples again show that visualizing data can help to quickly understand the data.
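If you want to generate and plot such examples yourself, here is a minimal matplotlib sketch that creates synthetic data showing a positive correlation, a negative correlation, and no correlation (the random data is made up purely for illustration):

```python
# Scatter plots of synthetic data with positive, negative, and no correlation
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
datasets = {
    'positive correlation': 0.8 * x + rng.normal(0, 1, 200),
    'negative correlation': -0.8 * x + rng.normal(0, 1, 200),
    'no correlation': rng.normal(5, 2, 200),
}

fig, axes = plt.subplots(1, 3, figsize=(12, 3))
for ax, (title, y) in zip(axes, datasets.items()):
    ax.scatter(x, y, s=10)
    ax.set_title(title)
    ax.set_xlabel('X')
    ax.set_ylabel('Y')
plt.tight_layout()
plt.show()
```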

Visualizing outliers 

Visualization techniques can also be used to quickly identify outliers in the data. For example in the following chart, the data point on top can be quickly identified as an outlier (an abnormal value).

Identifying outliers using a scatter plot
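The visual inspection can also be complemented by a simple numerical check. Here is a small sketch that flags outliers on a single attribute using z-scores; the values and the threshold of 2.5 are illustrative choices, not a universal rule.

```python
# Flag values that are far from the mean (in standard deviations) as outliers
import numpy as np

values = np.array([5.1, 4.9, 5.3, 5.0, 4.8, 5.2, 5.1, 12.7, 5.0, 4.9])
z = (values - values.mean()) / values.std()
outliers = values[np.abs(z) > 2.5]   # a common heuristic threshold (2.5 or 3)
print(outliers)                      # only the value 12.7 is flagged here
```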

Visualizing clusters

In data mining, several clustering algorithms have been proposed to identify clusters of similar values in the data. These clusters can also often be discovered visually for low-dimensional data. For example, in the following data, it is quite apparent that there are two main clusters (groups of similar values), without applying any algorithms.

Data containing two obvious clusters
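To find such clusters automatically, a clustering algorithm such as k-means can be applied. Here is a minimal sketch using scikit-learn on synthetic two-dimensional points (the data is generated purely for illustration):

```python
# Cluster synthetic 2-D points into two groups with k-means
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
cluster1 = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(50, 2))
cluster2 = rng.normal(loc=[5.0, 5.0], scale=0.5, size=(50, 2))
points = np.vstack([cluster1, cluster2])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(points)
print(labels)   # each point is assigned to cluster 0 or 1
```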

Conclusion

In this blog post, I have shown a few simple examples of how visualization can help to quickly see patterns in the data without applying any fancy models or performing calculations. I have also shown that statistical measures can be quite misleading if no visualization is done, with the classic example of Anscombe's quartet.

In this blog post, the examples mostly use scatter plots with two attributes at a time, to keep things simple. But there exist many other types of visualizations.


Philippe Fournier-Viger is a professor of Computer Science and also the founder of the open-source data mining software SPMF, offering more than 120 data mining algorithms.


An Introduction to Sequential Pattern Mining

In this blog post, I will give an introduction to sequential pattern mining, an important data mining task with a wide range of applications from text analysis to market basket analysis. This blog post is intended to be a short introduction. If you want to read a more detailed introduction to sequential pattern mining, you can read a survey paper that I recently wrote on this topic.

What is sequential pattern mining?

Data mining consists of extracting information from data stored in databases to understand the data and/or take decisions. Some of the most fundamental data mining tasks are clustering, classification, outlier analysis, and pattern mining. Pattern mining consists of discovering interesting, useful, and unexpected patterns in databases. Various types of patterns can be discovered in databases, such as frequent itemsets, associations, subgraphs, sequential rules, and periodic patterns.

Sequential pattern mining is a data mining task specialized for analyzing sequential data to discover sequential patterns. More precisely, it consists of discovering interesting subsequences in a set of sequences, where the interestingness of a subsequence can be measured in terms of various criteria such as its occurrence frequency, length, and profit. Sequential pattern mining has numerous real-life applications because data is naturally encoded as sequences of symbols in many fields such as bioinformatics, e-learning, market basket analysis, text analysis, and webpage click-stream analysis.

I will now explain the task of sequential pattern mining with an example. Consider the following sequence database, representing the purchases made by customers in a retail store.

sequence database

This database contains four sequences. Each sequence represents the items purchased by a customer at different times. A sequence is an ordered list of itemsets (sets of items bought together). For example, in this database, the first sequence (SID 1) indicates that a customer bought items a and b together, then purchased item c, then purchased items f and g together, then purchased item g, and finally purchased item e.

Traditionally, sequential pattern mining is used to find subsequences that appear often in a sequence database, i.e. that are common to several sequences. Those subsequences are called the frequent sequential patterns. For example, in the context of our example, sequential pattern mining can be used to find the sequences of items frequently bought by customers. This can be useful to understand the behavior of customers and to take marketing decisions.

To do sequential pattern mining, a user must provide a sequence database and specify a parameter called the minimum support threshold. This parameter indicates the minimum number of sequences in which a pattern must appear to be considered frequent and shown to the user. For example, if a user sets the minimum support threshold to 2 sequences, the task of sequential pattern mining consists of finding all subsequences appearing in at least 2 sequences of the input database. In the example database, 29 subsequences meet this requirement. These sequential patterns are shown in the table below, where the number of sequences containing each pattern (called the support) is indicated in the right column of the table.

sequential patterns

For example, the patterns <{a}> and <{a}, {g}> are frequent and have a support of 3 and 2 sequences, respectively. In other words, these patterns appear in 3 and 2 sequences of the input database, respectively. The pattern <{a}> appears in sequences 1, 2 and 3, while the pattern <{a}, {g}> appears in sequences 1 and 3. These patterns are interesting as they represent behavior common to several customers. Of course, this is a toy example: sequential pattern mining can actually be applied to databases containing hundreds of thousands of sequences.
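To make the notion of support more concrete, here is a small Python sketch that checks whether a sequential pattern is contained in a sequence and counts its support by brute force; it is of course not an efficient mining algorithm such as PrefixSpan. The toy database below is reconstructed from the description in the text, since the figure itself is not reproduced here.

```python
# Brute-force support counting for sequential patterns
def contains(sequence, pattern):
    """Check if pattern (a list of itemsets) occurs as a subsequence of sequence."""
    pos = 0
    for itemset in pattern:
        # advance until an itemset of the sequence contains the current itemset of the pattern
        while pos < len(sequence) and not itemset <= sequence[pos]:
            pos += 1
        if pos == len(sequence):
            return False
        pos += 1
    return True

def support(database, pattern):
    """Number of sequences of the database that contain the pattern."""
    return sum(1 for seq in database if contains(seq, pattern))

# toy sequence database: each sequence is a list of itemsets (sets of items)
database = [
    [{'a', 'b'}, {'c'}, {'f', 'g'}, {'g'}, {'e'}],       # SID 1
    [{'a', 'd'}, {'c'}, {'b'}, {'a', 'b', 'e', 'f'}],     # SID 2
    [{'a'}, {'b'}, {'f', 'g'}, {'e'}],                    # SID 3
    [{'b'}, {'f', 'g'}],                                  # SID 4
]

print(support(database, [{'a'}]))         # 3
print(support(database, [{'a'}, {'g'}]))  # 2
```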

Another example of application of sequential pattern mining is text analysis. In this context, a set of sentences from a text can be viewed as a sequence database, and the goal of sequential pattern mining is then to find subsequences of words frequently used in the text. If such sequences are contiguous, they are called “n-grams” in this context. If you want to know more about this application, you can read this blog post, where sequential patterns are discovered in a Sherlock Holmes novel.

Can sequential pattern mining be applied to time series?

Besides sequences, sequential pattern mining can also be applied to time series (e.g. stock data), when discretization is performed as a pre-processing step. For example, the figure below shows a time series (an ordered list of numbers) on the left. On the right, a sequence (an ordered list of symbols) representing the same data is shown, obtained after applying a transformation. Various transformations can be used to transform a time series into a sequence, such as the popular SAX transformation. After performing the transformation, any sequential pattern mining algorithm can be applied.

sequences and time series
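To illustrate the general idea of such a transformation, here is a simplified sketch that discretizes a time series into a sequence of symbols using equal-width bins. This is not the actual SAX transformation (which also averages the series over segments and uses breakpoints derived from the normal distribution), and the numbers are made up for illustration.

```python
# Discretize a time series into a sequence of symbols using equal-width bins
import numpy as np

series = [2.1, 2.4, 3.8, 5.6, 5.9, 4.2, 2.0, 1.5, 3.1, 5.7]
symbols = 'abcd'                                             # output alphabet
bins = np.linspace(min(series), max(series), len(symbols) + 1)[1:-1]
sequence = [symbols[np.digitize(v, bins)] for v in series]
print(sequence)   # ['a', 'a', 'c', 'd', 'd', 'c', 'a', 'a', 'b', 'd']
```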

Where can I get Sequential pattern mining implementations?

To try sequential pattern mining with your datasets, you may try the open-source SPMF data mining software, which provides implementations of numerous sequential pattern mining algorithms: http://www.philippe-fournier-viger.com/spmf/

It provides implementations of several algorithms for sequential pattern mining, as well as several variations of the problem such as discovering maximal sequential patterns, closed sequential patterns and sequential rules. Sequential rules are especially useful for the purpose of performing predictions, as they also include the concept of confidence.
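SPMF reads sequence databases from plain text files. As far as I recall from its documentation, items are encoded as positive integers, each itemset ends with -1 and each sequence ends with -2; this should be double-checked against the documentation and the example input files shipped with the tool. Under that assumption, here is a small sketch that converts a sequence database into such a format:

```python
# Convert a sequence database (lists of itemsets) to SPMF's integer-based text format
def to_spmf(database):
    items = sorted({i for seq in database for itemset in seq for i in itemset})
    ids = {item: k + 1 for k, item in enumerate(items)}   # map each item to an integer code
    lines = []
    for seq in database:
        tokens = []
        for itemset in seq:
            tokens += [str(ids[i]) for i in sorted(itemset)]
            tokens.append('-1')                           # end of itemset
        tokens.append('-2')                               # end of sequence
        lines.append(' '.join(tokens))
    return '\n'.join(lines), ids

toy_database = [
    [{'a', 'b'}, {'c'}, {'f', 'g'}],
    [{'a'}, {'b'}, {'e'}],
]
text, mapping = to_spmf(toy_database)
print(text)
print(mapping)
```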

What are the current best algorithms for sequential pattern mining?

There exist several sequential pattern mining algorithms. Some of the classic algorithms for this problem are PrefixSpan, Spade, SPAM, and GSP. However, in the last decade, several novel and more efficient algorithms have been proposed, such as CM-SPADE and CM-SPAM (2014) or FCloSM and FGenSM (2017), to name a few. Besides, numerous algorithms have been proposed for extensions of the problem of sequential pattern mining, such as finding the sequential patterns that generate the most profit (high utility sequential pattern mining).

Conclusion

In this blog post, I have given a brief overview of sequential pattern mining, a very useful set of techniques for analyzing sequential data. If you want to know more about this topic, you may read the following recent survey paper that I wrote, which gives an easy-to-read overview of this topic, including the algorithms for sequential pattern mining, extensions, research challenges and opportunities.

Fournier-Viger, P., Lin, J. C.-W., Kiran, R. U., Koh, Y. S., Thomas, R. (2017). A Survey of Sequential Pattern Mining. Data Science and Pattern Recognition, vol. 1(1), pp. 54-77.


Philippe Fournier-Viger is a professor of Computer Science and also the founder of the open-source data mining software SPMF, offering more than 120 data mining algorithms.


An Introduction to Data Mining

In this blog post, I will introduce the topic of data mining. The goal is to give a general overview of what data mining is.


What is data mining?

Data mining is a field of research that emerged in the 1990s and is very popular today, sometimes under different names such as “big data” and “data science”, which have a similar meaning. To give a short definition, data mining can be defined as a set of techniques for automatically analyzing data to discover interesting knowledge or patterns in the data.

The reason why data mining has become popular is that storing data electronically has become very cheap and that transferring data can now be done very quickly thanks to the fast computer networks that we have today. Thus, many organizations now have huge amounts of data stored in databases that need to be analyzed.

Having a lot of data in databases is great. However, to really benefit from this data, it is necessary to analyze it to understand it. Having data that we cannot understand or draw meaningful conclusions from is useless. So how can the data stored in large databases be analyzed? Traditionally, data has been analyzed by hand to discover interesting knowledge. However, this is time-consuming and prone to error, important information may be missed, and it is just not realistic on large databases. To address this problem, automatic techniques have been designed to analyze data and extract interesting patterns, trends or other useful information. This is the purpose of data mining.

In general, data mining techniques are designed either to explain or understand the past (e.g. why a plane has crashed) or predict the future (e.g. predict if there will be an earthquake tomorrow at a given location).

Data mining techniques are used to take decisions based on facts rather than intuition.

What is the process for analyzing data?

To perform data mining, a process consisting of seven steps is usually followed. This process is often called the “Knowledge Discovery in Databases” (KDD) process.

  1. Data cleaning: This step consists of cleaning the data by removing noise or other inconsistencies that could be a problem for analyzing the data.
  2. Data integration: This step consists of integrating data from various sources to prepare the data that needs to be analyzed. For example, if the data is stored in multiple databases or files, it may be necessary to integrate it into a single file or database to analyze it.
  3. Data selection: This step consists of selecting the relevant data for the analysis to be performed.
  4. Data transformation: This step consists of transforming the data to a proper format that can be analyzed using data mining techniques. For example, some data mining techniques require that all numerical values are normalized.
  5. Data mining:  This step consists of applying some data mining techniques (algorithms) to analyze the data and discover interesting patterns or extract interesting knowledge from this data.
  6. Evaluating the knowledge that has been discovered: This step consists of evaluating the knowledge that has been extracted from the data. This can be done in terms of objective and/or subjective measures.
  7. Visualization:  Finally, the last step is to visualize the knowledge that has been extracted from the data.

Of course, there can be variations of the above process. For example, some data mining software is interactive, and some of these steps may be performed several times or concurrently.
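As a tiny illustration of the data transformation step (step 4), here is a sketch of min-max normalization, which rescales a numerical attribute to the [0, 1] range; the values are made up for illustration.

```python
# Min-max normalization of a numerical attribute
def min_max_normalize(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

ages = [23, 35, 47, 51, 62]
print(min_max_normalize(ages))   # [0.0, 0.31, 0.62, 0.72, 1.0] (approximately)
```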

What are the applications of data mining?

There is a wide range of data mining techniques (algorithms), which can be applied in all kinds of domains where data has to be analyzed. Some examples of data mining applications are:

  • fraud detection,
  • stock market price prediction,
  • analyzing the behavior of customers in terms of what they buy

In general, data mining techniques are chosen based on:

  • the type of data to be analyzed,
  • the type of knowledge or patterns to be extracted from the data,
  • how the knowledge will be used.

What are the relationships between data mining and other research fields?

Actually, data mining is an interdisciplinary field of research that partly overlaps with several other fields such as database systems, algorithms, computer science, machine learning, data visualization, image and signal processing, and statistics.

There are some differences between data mining and statistics, although both are related and share many concepts. Traditionally, descriptive statistics has been more focused on describing the data using measures, while inferential statistics has put more emphasis on hypothesis testing to draw significant conclusions from the data or create models. On the other hand, data mining is often more focused on the end result rather than on statistical significance. Several data mining techniques do not really care about statistical tests or significance, as long as some measures such as profitability or accuracy have good values. Another difference is that data mining is mostly interested in the automatic analysis of data, and often in technologies that can scale to large amounts of data. Data mining techniques are sometimes called “statistical learning” by statisticians. Thus, these topics are quite close.

What are the main data mining software?

To perform data mining, there are many software programs available. Some of them are general-purpose tools offering many algorithms of different kinds, while others are more specialized. Also, some software programs are commercial, while others are open-source.

I am personally the founder of the SPMF data mining library, which is free and open-source, and specialized in discovering patterns in data. But there are many other popular software packages such as Weka, Knime, RapidMiner, and the R language, to name a few.

Data mining techniques can be applied to various types of data

Data mining software are typically designed to be applied on various types of data. Below, I give a brief overview of various types of data typically encountered, and which can be analyzed using data mining techniques.

  • Relational databases: This is the typical type of database found in organizations and companies. The data is organized in tables. While traditional languages for querying databases such as SQL allow quickly finding information in databases, data mining allows finding more complex patterns in data such as trends, anomalies, and associations between values.
  • Customer transaction databases: This is another very common type of data, found in retail stores. It consists of transactions made by customers. For example, a transaction could be that a customer has bought bread and milk with some oranges on a given day. Analyzing this data is very useful to understand customer behavior and adapt marketing or sale strategies.
  • Temporal data: Another popular type of data is temporal data, that is data where the time dimension is considered. A sequence is an ordered list of symbols. Sequences are found in many domains, e.g. a sequence of webpages visited by some person, a sequence of proteins in bioinformatics or sequences of products bought by customers.  Another popular type of temporal data is time series. A time series is an ordered list of numerical values such as stock-market prices.
  • Spatial data: Spatial data can also be analyzed. This includes, for example, forestry data, ecological data, and data about infrastructures such as roads and the water distribution system.
  • Spatio-temporal data: This is data that has both a spatial and a temporal dimension. For example, this can be meteorological data, data about crowd movements or the migration of birds.
  • Text data: Text data is widely studied in the field of data mining. One of the main challenges is that text data is generally unstructured: text documents often do not have a clear structure or are not organized in a predefined manner. Some examples of applications to text data are (1) sentiment analysis, and (2) authorship attribution (guessing who is the author of an anonymous text).
  • Web data: This is data from websites. It is basically a set of documents (webpages) with links, thus forming a graph. Some examples of data mining tasks on web data are: (1) predicting the next webpage that someone will visit, (2) automatically grouping webpages by topics into categories, and (3) analyzing the time spent on webpages.
  • Graph data: Another common type of data is graphs. It is found for example in social networks (e.g. graph of friends) and chemistry (e.g. chemical molecules).
  • Heterogeneous data: This is data that combines several types of data, which may be stored in different formats.
  • Data streams: A data stream is a high-speed and non-stop stream of data that is potentially infinite (e.g. satellite data, video cameras, environmental data). The main challenge with data streams is that the data cannot all be stored on a computer and must thus be analyzed in real-time using appropriate techniques. Some typical data mining tasks on streams are to detect changes and trends.

What types of patterns can be found in data?

As previously discussed, the goal of data mining is to extract interesting patterns from data. The main types of patterns that can be extracted from data are the following (of course, this is not an exhaustive list):

  • Clusters: Clustering algorithms are often applied to automatically group similar instances or objects into clusters (groups). The goal is to summarize the data to better understand it or take decisions. For example, clustering techniques can be used to automatically group customers having a similar behavior.
  • Classification models: Classification algorithms aim at extracting models that can be used to classify new instances or objects into several categories (classes). For example, classification algorithms such as Naive Bayes, neural networks and decision trees can be used to build models that can predict whether a customer will pay back his debt, or whether a student will pass or fail a course.
  • Patterns and associations: Several techniques have been developed to extract frequent patterns or associations between values in databases. For example, frequent itemset mining algorithms can be applied to discover which products are frequently purchased together by customers of a retail store.
  •  Anomalies/outliers: The goal is to detect things that are abnormal in data (outliers or anomalies). Some applications are for example: (1) detecting hackers attacking a computer system, (2) identifying potential terrorists based on suspicious behavior, and (3) detecting fraud on the stock market.
  • Trends, regularities: Techniques can also be applied to find trends and regularities in data. Some applications are, for example, (1) studying patterns in the stock market to predict stock prices and take investment decisions, (2) discovering regularities to predict earthquake aftershocks, (3) finding cycles in the behavior of a system, and (4) discovering the sequence of events that leads to a system failure.

In general, the goal of data mining is to find interesting patterns. As previously mentioned, what is interesting can be measured either in terms of objective or subjective measures. An objective measure is, for example, the occurrence frequency of a pattern (whether it appears often or not), while a subjective measure is whether a given pattern is interesting for a specific person. In general, a pattern can be said to be interesting if: (1) it is easy to understand, (2) it is valid for new data (not just for previous data), (3) it is useful, and (4) it is novel or unexpected (it is not something that we already know).

Conclusion

In this blog post, I have given a broad overview of what is data mining. This blog post was quite general. I have actually written it because I am teaching a course on data mining and this will be some of the content of the first lecture. If you have enjoyed reading, you may subscribe to my Twitter feed (@philfv) to get notified about future blog posts.


Philippe Fournier-Viger is a professor of Computer Science and also the founder of the open-source data mining software SPMF, offering more than 120 data mining algorithms.


Write more papers or write better papers? (quantity vs quality)

In this blog post, I will discuss an important question for young researchers, which is: Is it better to try to write more papers  or to try to write fewer but better papers?  In other words, what is more important: quantity or quality in research?

To answer this question, I will first explain why quantity and quality are important, and then I will argue that a good trade-off needs to be found.

Quantity

There are several reasons why quantity is important:

  • Quantity shows that someone is productive and can maintain a consistent research output. For example, if someone has published 4 papers each year during the last four years, it shows approximately what can be expected from that researcher in terms of productivity each year. However, if a researcher has an irregular research output, such as zero papers during a few years, it may raise questions about the reasons why that researcher did not write papers. Thus, writing more shows other people that you are active.
  • Quantity is correlated with research impact. Even though writing more papers does not mean that the papers are better, some studies have shown a strong correlation between the number of papers and the influence of researchers in their field. Some of the reasons may be that (1) writing more papers improves your visibility in your field and your chances of being cited, (2) if you are more successful, you may obtain more resources such as grants and funding, which help you to write more papers, and (3) writing more may improve your writing skills and help you to write more and better papers.
  • Quantity is used to calculate various metrics to evaluate the performance of researchers. In various countries and institutions, metrics are used to evaluate the research performance of researchers. These metrics include, for example, the number of papers and the number of citations. Although metrics are imperfect, they are often used for evaluating researchers because they allow quickly evaluating a researcher without reading each of their publications. Metrics such as the number of citations are also used on some websites such as Google Scholar to rank articles.

Quality

The quality of papers is important for several reasons:

  • Quality shows that you can do excellent research. It is often very hard to publish in top-level journals or conferences. For example, some conferences have an acceptance rate of 5% or even less, which means that out of 1000 submitted papers, only 50 are accepted. If you can get some papers into top journals and conferences, it shows that you are among the best researchers in your field. On the contrary, if someone only publishes papers in weak and unknown journals and conferences, it will raise doubts about the quality of the research and about their ability to do research. Publishing in unknown conferences/journals can be seen as something negative that may even decrease the value of a CV.
  • Quality is also correlated with research impact. A paper that is published in a top conference or journal has more visibility and therefore a greater chance of being cited by other researchers. On the contrary, papers published in small or unknown conferences are less likely to be cited by other researchers.

A trade-off

So what is the best approach? In my opinion, both quantity and quality are important. For young researchers, it is especially important to write several papers to kickstart their career and fill their CV in order to apply for grants and obtain their degrees. But having some quality papers is also necessary. Having a few good papers in top journals and conferences can be worth much more than having many papers in weak conferences. For example, in my field, having a paper in a conference like KDD or ICDM can be worth more than 5 or 10 papers in smaller conferences. But the danger of putting too much emphasis on quality is that the research output may become very low if the papers are not accepted. Thus, I believe that the best approach is a trade-off: (1) once in a while, write some very high quality papers and try to get them published in top journals and conferences, and (2) sometimes write papers for easier journals and conferences to increase the overall productivity and get some papers published.

Actually, a researcher should be able to evaluate whether a given research project is suitable for a high-level conference/journal based on the merit of the research, and whether the research needs to be published quickly (for very competitive topics). Thus, a researcher should decide for each paper whether it should be submitted to a high-level conference/journal or to something easier.

But there should always be a minimum quality requirement for papers. Publishing bad or very weak papers can have a negative influence on your CV and even look bad. Thus, even when aiming for quantity, one should ensure that a minimum quality requirement is met. For example, since my early days as a researcher, I have set a minimum quality requirement that all my papers be published by a well-known publisher among ACM, IEEE, Springer, and Elsevier, and be indexed in DBLP (an index for computer science). For me, this is the minimum quality requirement, but I will often aim at a good or excellent conference/journal depending on the project.

Hope that you have enjoyed this post. If you like it, you can continue reading this blog, and subscribe to my Twitter ( @philfv ).


Philippe Fournier-Viger is a professor of Computer Science and also the founder of the open-source data mining software SPMF, offering more than 120 data mining algorithms.


Using LaTeX for writing research papers

Many researchers are using Microsoft Word  for writing research papers. However, Microsoft Word has several problems or limitations.  In this blog post, I will discuss the use of LaTeX as an alternative to Microsoft Word for writing research papers.

What is LaTeX?

LaTeX is a document preparation system, proposed in the 1980s. It is used to create documents such as research papers, books, or even slides for presentations.

The key difference between LaTeX and software like Microsoft Word is that Microsoft Word lets you directly edit your document and immediately see the result, while using LaTeX is a bit like programming. To write a research paper using LaTeX, you have to write a text file with the .tex extension, using a formatting language to roughly indicate how your paper should look. Then, you run the LaTeX engine to generate a PDF file of your research paper. The following figure illustrates this process:

Latex to PDF conversion

In the above example, I have created a very simple LaTeX document (Example.tex) and then I have generated the corresponding PDF for visualization (Example.pdf).
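To give an idea of what such a .tex file contains, here is a minimal sketch (an illustration only, not the exact Example.tex shown in the figure). Compiling it, for example with the command pdflatex Example.tex, produces the corresponding PDF.

```latex
% Example.tex -- a minimal LaTeX document (illustrative sketch only)
\documentclass{article}
\begin{document}

\section{Introduction}
This is a simple LaTeX document. Here is an equation: $E = mc^2$.

\end{document}
```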

Why use LaTeX?

There are several reasons why many researchers prefer LaTeX to Microsoft Word for writing research papers. I will explain some of them, and then I will also discuss some problems with using LaTeX.

Reason 1: LaTeX papers generally look better

LaTeX papers often look better than papers written using Microsoft Word. This is especially true for fields like computer science, mathematics and engineering where mathematical equations are used. To illustrate this point, I will show you some screenshots of a paper that I wrote for the ADMA 2012 conference a few years ago. For this paper, I made two versions: one using the Springer LNCS LaTeX template and the other using the Springer LNCS Microsoft Word template.

This is the first page of the paper.

Word vs Latex 1

The first page is quite similar. The main difference is the font being used, which is different in LaTeX. Personally, I prefer the default LaTeX font. Now let’s compare how the mathematical equations appear in LaTeX and Word.

Latex vs Word

Here, we can see that mathematical symbols are more beautiful using LaTeX. For example, the set union and the subset inclusion operators are, in my opinion, quite ugly in Microsoft Word. The set union operator of Word looks too much like the letter “U”. In this example, the mathematical equations are quite simple, but LaTeX really shines when displaying more complex mathematical equations, for example those using matrices.

Now let’s look at another paragraph of text from the paper to further compare the appearance of Word and LaTeX papers:

Word vs Latex 3

In the above picture, it can be argued that the LaTeX and Word papers look quite similar. For me, the big difference is again in the font being used: the Springer Word template uses the Times New Roman font, while LaTeX has its own default font. I prefer the LaTeX font. Also, I think that the URLs look better in LaTeX using the url package.

Reason 2: LaTeX is available for all platforms

The LaTeX system is free and available for most operating systems, and documents will look the same on all operating systems.

To install LaTeX on your computer, you need to install a LaTeX distribution such as MiKTeX ( https://miktex.org/ ). After installing LaTeX, you can start working on LaTeX documents using a text editor such as Notepad. However, it is more convenient to also install an editor such as TeXworks or WinShell. Personally, I use TeXworks. This is a screenshot of my working environment using TeXworks:

texworks

I will open my LaTeX document on the left window. Then, the right window will display the PDF generated by LaTeX. Thus, I can work on the LaTeX code of my documents on the left and see the result on the right.

If you want to try LaTeX without installing it on your computer, you can use an online LaTeX editor such as ShareLaTeX (http://www.sharelatex.org ) or Overleaf. Using these editors, it is not necessary to install LaTeX on your computer. I personally sometimes use ShareLaTeX, as it also has some functions for collaboration (history, chat, etc.), which is very useful when working on a research paper with other people.

Reason 3: LaTeX offers many packages

Besides the basic functionalities of LaTeX, you can install hundreds of packages to add more features. If you use MiKTeX, for example, there is a tool called the MiKTeX package manager that lets you choose and install packages. There are packages for about everything, from displaying algorithms to displaying chessboards. For example, here is some algorithm pseudocode that I wrote in one of my recent papers using a LaTeX package called algorithm2e:

algorithm EFIM

As you can see, the presentation of the algorithm looks very good. Doing the same using Word would be very difficult. For example, it would be quite difficult to add a vertical line for the “for” loop using Microsoft Word.
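To give an idea of what writing such pseudocode looks like, here is a small sketch using the algorithm2e package. The algorithm and variable names below are invented for illustration; this is not the EFIM pseudocode shown in the figure.

```latex
\documentclass{article}
\usepackage[ruled,vlined]{algorithm2e}
\begin{document}

\begin{algorithm}
\KwIn{a database $D$, a threshold $minsup$}
\KwOut{a set of patterns $P$}
$P \leftarrow \emptyset$\;
\ForEach{candidate pattern $c$ in $D$}{
  \If{$support(c) \geq minsup$}{
    add $c$ to $P$\;
  }
}
\KwRet{$P$}
\caption{A toy example written with algorithm2e}
\end{algorithm}

\end{document}
```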

Reason 4: You don’t need to worry about how your document will look like

When writing a LaTeX document, you don’t need to worry about how your final document will look visually. For example, you don’t need to worry about where the figures and tables will appear in your document or where the page breaks will be. All of this is handled by the LaTeX engine during the compilation of your document. When writing the document, you only need to use some basic formatting instructions, such as indicating when a new section starts. This lets you focus on writing.

Reason 5: LaTeX can generate and update your bibliography automatically

Another reason for using LaTeX is that it can generate the bibliography of a document automatically. There are different ways of writing a bibliography using LaTeX. One of the most common ways is to use a .bib file. A .bib file provides a list of references that can be used in your document. You can then cite these references in your .tex document using the \cite{} command, and the bibliography will be automatically generated.

I will illustrate this with an example:

bibtek

A) I have created a LaTeX document (a .tex file) where I cite a paper called “efim” using the LaTeX command \cite{efim}.

B) I have created a corresponding LaTeX bib file that provides bibliographical information about the “efim” paper.

C) I have generated  the PDF file using the  .tex file and the .bib file.  As you can see, the \cite{} command has been replaced by 25, and the corresponding entry 25 has been automatically generated in the correct format for this paper and added to the bibliography.
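To make this concrete, here is a small sketch of the two files involved; the bibliographic details in the .bib entry are placeholders invented for illustration, not the real reference data of the “efim” paper. Compiling with pdflatex, then bibtex, then pdflatex twice more resolves the citation and generates the bibliography.

```latex
% --- main.tex ---
\documentclass{article}
\begin{document}
The EFIM algorithm~\cite{efim} is used in our experiments.
\bibliographystyle{plain}
\bibliography{references}   % reads references.bib
\end{document}

% --- references.bib ---
@inproceedings{efim,
  author    = {A. Author and B. Author},
  title     = {A Placeholder Title for the EFIM Paper},
  booktitle = {Proceedings of Some Conference},
  year      = {2015}
}
```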

The function for generating a bibliography using LaTeX can save researchers a lot of time, especially for documents containing many references such as theses, books, and journal papers.

Moreover, once you have created a .bib file, you can reuse it in many different papers. It is also very easy to change the style of your bibliography. For example, if you want to change from the APA style to the IEEE style, it can be done almost automatically, which saves a lot of time.

In Microsoft Word, there is a basic tool for generating a bibliography, but it provides far fewer features than LaTeX.

Reason 6: LaTeX works very well for large documents

LaTeX also provides many features that are useful for large documents such as Ph.D. theses and books. These features include generating tables of contents and tables of figures, and dividing a document into several files. Some of these features are also provided in Microsoft Word but are not as flexible as in LaTeX. I have personally written both my M.Sc. and Ph.D. theses using LaTeX and saved a lot of time by doing so. I simply downloaded the LaTeX style file from my university and used it in my LaTeX document, and after that my whole thesis was properly formatted according to the university style, without too much effort.

Problems of LaTeX

Now, let’s talk about the disadvantages or problems faced when using LaTeX. The first problem is that there is a somewhat steep learning curve. LaTeX is actually not so difficult to learn, but it is more difficult than using Word. It is necessary to learn various commands for preparing LaTeX documents. Moreover, some errors are not so easy to debug. However, the good news is that there exist some good places to ask questions and obtain answers when encountering problems with LaTeX, such as TeX.StackExchange ( http://tex.stackexchange.com/ ). There also exist some free books, such as The Not So Short Introduction to LaTeX, that are quite good for learning LaTeX and that I use as a reference. Actually, although there is a steep learning curve, I think that learning to use LaTeX is a good investment for researchers. Moreover, some journals in academia only accept LaTeX papers (e.g. JMLR).

The second problem with LaTeX is that it is actually not necessary for writing simple documents. LaTeX is very useful for large documents or documents with complex layouts, or for special needs such as displaying mathematical equations and algorithms. I personally use LaTeX only for writing research papers. For other things, I use Microsoft Word. Some people also use LaTeX for preparing slides using packages such as beamer. This can be useful for preparing a presentation with a lot of mathematical equations.

Conclusion

In this blog post, I have discussed the use of LaTeX for writing research papers. I hope that you have enjoyed this blog post.


Philippe Fournier-Viger is a professor of Computer Science and also the founder of the open-source data mining software SPMF, offering more than 120 data mining algorithms.


An introduction to frequent subgraph mining

In this blog post, I will give an introduction to an interesting data mining task called frequent subgraph mining, which consists of discovering interesting patterns in graphs. This task is important since data is naturally represented as graphs in many domains (e.g. social networks, chemical molecules, maps of roads in a country). It is thus desirable to analyze graph data to discover interesting, unexpected, and useful patterns that can be used to understand the data or take decisions.

What is a graph? A bit of theory…

But before discussing the analysis of graphs, I will introduce a few definitions. A graph is a set of vertices and edges, having some labels. Let me illustrate this idea with an example. Consider the following graph:

This graph contains four vertices (depicted as yellow circles). These vertices have labels such as “10” and “11”. These labels provide information about the vertices. For example, imagine that this graph is a chemical molecule. The labels 10 and 11 could represent the chemical elements Hydrogen and Oxygen, respectively. Labels do not need to be unique. In other words, the same labels may be used to describe several vertices in the same graph. For example, if the above graph represents a chemical molecule, the labels “10” and “11” could be used for all vertices representing Hydrogen and Oxygen, respectively.

Now, besides vertices, a graph also contains edges. The edges are the lines between the vertices, here represented by thick black lines. Edges also have labels. In this example, four labels are used, which are 20, 21, 22 and 23. These labels represent different types of relationships between vertices. Edge labels do not need to be unique.
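As a side note, here is a minimal Python sketch showing how such a labeled graph could be represented with the networkx library. The vertex names A to D and the way the labels are assigned below are assumptions made for illustration, since the figure is not reproduced here.

```python
# Represent a small labeled graph with networkx
import networkx as nx

G = nx.Graph()
# vertices with labels (e.g., 10 could stand for Hydrogen and 11 for Oxygen)
G.add_node('A', label=10)
G.add_node('B', label=11)
G.add_node('C', label=10)
G.add_node('D', label=11)
# edges with labels representing types of relationships
G.add_edge('A', 'B', label=20)
G.add_edge('B', 'C', label=21)
G.add_edge('C', 'D', label=22)
G.add_edge('B', 'D', label=23)

print(G.nodes(data=True))
print(G.edges(data=True))
```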

Types of graphs: connected and disconnected 

Many different types of graphs can be found in real-life. Graphs are either connected or disconnected.  Let me explain this with an example. Consider the two following graphs:

Connected and disconnected graphs

The graph on the left is said to be a connected graph because, by following the edges, it is possible to go from any vertex to any other vertex. For example, imagine that vertices represent cities and that the edges are roads between cities. A connected graph in this context is a graph where it is possible to go from any city to any other city by following the roads. If a graph is not connected, it is said to be a disconnected graph. For example, the graph on the right is disconnected since vertex A cannot be reached from the other vertices by following the edges. In the following, we will use the term “graph” to refer to connected graphs. Thus, all the graphs that we will discuss in the following paragraphs will be connected graphs.
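Checking connectivity programmatically is also easy, for instance with networkx (the toy edges below are again made up for illustration):

```python
# Test whether undirected graphs are connected
import networkx as nx

connected = nx.Graph([('A', 'B'), ('B', 'C'), ('C', 'A')])
disconnected = nx.Graph([('B', 'C'), ('C', 'D')])
disconnected.add_node('A')   # an isolated vertex that cannot be reached from the others

print(nx.is_connected(connected))      # True
print(nx.is_connected(disconnected))   # False
```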

Types of graphs: directed and undirected

It is also useful to distinguish between directed and undirected graphs. In an undirected graph, edges are bidirectional, while in a directed graph, the edges can be unidirectional or bidirectional. Let’s illustrate this idea with an example.

directed and undirected graphs

The graph on the left is undirected, while the graph on the right is directed. What are some real-life examples of a directed graph?  For example, consider a graph where vertices are locations and edges are roads. Some roads can be travelled in both directions while some roads may be travelled in only a single direction  (“one-way” roads in a city).

Some data mining algorithms are designed to work only with undirected graphs or only with directed graphs, while others support both.

Analyzing graphs

Now that we have introduced a bit of theory about graphs, what kind of data mining tasks can we perform to analyze graphs? There are many answers to this question. The answer depends on our goal, but also on the type of graph that we are analyzing (directed/undirected, connected/disconnected, a single graph or many graphs).

In this blog post, I will explain a popular task called frequent subgraph mining. The goal of subgraph mining is to discover interesting subgraph(s) appearing in a set of graphs (a graph database). But how can we judge whether a subgraph is interesting? This depends on the application, and the interestingness can be defined in various ways. Traditionally, a subgraph has been considered interesting if it appears multiple times in a set of graphs. In other words, we want to discover subgraphs that are common to multiple graphs. This can be useful, for example, to find associations between chemical elements common to several chemical molecules.

The task of finding frequent subgraphs in a set of graphs is called frequent subgraph mining. As input, the user must provide:

  • a graph database (a set of graphs)
  • a parameter called the minimum support threshold (minsup).

Then, a frequent subgraph mining algorithm will enumerate as output all frequent subgraphs. A frequent subgraph is a subgraph that appears in at least minsup graphs from a graph database.  For example, let’s consider the following graph database containing three graphs:

A graph database

Now, let’s say that we want to discover all subgraphs that appear in at least three graphs. Thus, we will set the minsup parameter to 3. By applying a frequent subgraph mining algorithm, we will obtain the set of all subgraphs appearing in at least three graphs:

frequent subgraphs

Consider the third subgraph (“Frequent subgraph 3”).  This subgraph is frequent and is said to have a support (a frequency) of 3 since it appears in three of the input graphs. These occurrences are highlighted in red, below:

Frequent subgraph 3

Now, a good question is: how should the minsup parameter be set? In practice, the minsup parameter is generally set by trial and error. If this parameter is set too high, few subgraphs will be found, while if it is set too low, hundreds or millions of subgraphs may be found, depending on the input database.

Now, in practice, which tools or algorithms can be used to find frequent subgraphs? There exist various frequent subgraph mining algorithms. Some of the most famous are GASTON, FSG, and gSpan.
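As an illustration of the basic building block behind such algorithms, here is a Python sketch that counts the support of a candidate subgraph in a small graph database, using networkx's label-aware subgraph isomorphism test. The toy graphs and labels below are assumptions made for illustration, and a real algorithm like gSpan also enumerates candidate subgraphs efficiently instead of testing them one by one. Note that networkx tests node-induced subgraph isomorphism, which is sufficient for this small example.

```python
# Count in how many graphs of a database a candidate subgraph appears
import networkx as nx
from networkx.algorithms import isomorphism

def make_graph(nodes, edges):
    """nodes: dict vertex -> label; edges: list of (u, v, label) tuples."""
    g = nx.Graph()
    for v, lab in nodes.items():
        g.add_node(v, label=lab)
    for u, v, lab in edges:
        g.add_edge(u, v, label=lab)
    return g

def support(database, candidate):
    """Number of graphs of the database containing a subgraph matching the candidate."""
    node_match = isomorphism.categorical_node_match('label', None)
    edge_match = isomorphism.categorical_edge_match('label', None)
    count = 0
    for g in database:
        matcher = isomorphism.GraphMatcher(g, candidate,
                                           node_match=node_match, edge_match=edge_match)
        if matcher.subgraph_is_isomorphic():
            count += 1
    return count

# a tiny graph database with three graphs
database = [
    make_graph({1: 10, 2: 11, 3: 10}, [(1, 2, 20), (2, 3, 20)]),
    make_graph({1: 10, 2: 11},        [(1, 2, 20)]),
    make_graph({1: 11, 2: 11, 3: 10}, [(1, 2, 21), (2, 3, 20)]),
]
candidate = make_graph({'x': 10, 'y': 11}, [('x', 'y', 20)])
print(support(database, candidate))   # 3 with this toy data
```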

Mining frequent subgraphs in a single graph

Besides discovering subgraphs common to several graphs, there is also a variation of the problem of frequent subgraph mining that consists of finding all frequent subgraphs in a single graph rather than in a graph database. The idea is almost the same: the goal is also to discover subgraphs that appear frequently or that are interesting. The only difference is how the support (frequency) is calculated. For this variation, the support of a subgraph is the number of times that it appears in the single input graph. For example, consider the following input graph:

A single graph

This graph contains seven vertices and six edges. If we perform frequent subgraph mining on this single graph by setting the minsup parameter to 2, we can discover the five following frequent subgraphs:

frequent subgraphs in a graph

These subgraphs are said to be frequent because they appear at least twice in the input graph. For example, consider “Frequent subgraph 5”. This subgraph has a support of 2 because it has two occurrences in the input graph. Those two occurrences are highlighted below in red and blue, respectively.

the frequent subgraph 5

Algorithms to discover patterns in a graph database can often be adapted to discover patterns in a single graph.

Conclusion

In this blog post, I have introduced the problem of frequent subgraph mining, which consists of discovering subgraphs appearing frequently in a set of graphs. This data mining problem has been studied for more than 15 years, and many algorithms have been proposed. Some algorithms are exact (they will find the correct answer), while others are approximate (they do not guarantee finding the correct answer but may be faster).

Some algorithms are designed to handle directed or undirected graphs, to mine subgraphs in a single graph or in a graph database, or can do both. Besides, there exist several other variations of the subgraph mining problem, such as discovering frequent paths or frequent trees in a graph.

More generally, many other graph-related problems are studied in data mining, such as optimization problems, detecting communities in social networks, relational classification, and so on.

In general, problems related to graphs are quite complex compared to some other types of data. One of the reasons why subgraph mining is difficult is that algorithms typically need to check for “subgraph isomorphisms”, that is, to compare subgraphs to determine whether they are equivalent. Nonetheless, I think that these problems are quite interesting, as they raise several research challenges.

I hope that you have enjoyed this blog post. If there is interest in this topic, I may write another blog post on graph mining in the future.


Philippe Fournier-Viger is a professor of Computer Science and also the founder of the open-source data mining software SPMF, offering more than 120 data mining algorithms.


We are launching a new data mining journal

In this blog post, I will discuss one of my current projects. I have recently been working with my colleague Chun-Wei Lin on launching a new journal, titled “Data Science and Pattern Recognition“.

Data Science and Pattern Recognition

This is a new open-access journal with a focus on data science, big data, and pattern recognition. The journal will be published by the Taiwanese publisher Ubiquitous International, which already publishes several SCI journals. Over the next year, we will be pushing hard to make this a high-level journal. For this purpose, we have worked hard on creating an outstanding editorial board with world-class researchers in the fields of data science and pattern recognition.

It is our goal to have the journal EI- and SCI-indexed in the coming years. Currently, there is no publication fee, to help the journal get established. However, there will likely be a fee in the future, since it is an open-access journal. Thus, now is a good time to submit your papers!

There will be 4 issues per year, for a total of approximately 24 papers per year. The first issue has just been published. It contains five papers, including a paper that I wrote.

The next issue is planned for June, so if you want to submit a paper, please send it before June. After that, the following issues should appear in September and December. In any case, if you have any questions about the journal, you can have a look at the website, let me know directly, or use the contact e-mail on the journal website to reach us.


Philippe Fournier-Viger is a professor of Computer Science and also the founder of the open-source data mining software SPMF, offering more than 120 data mining algorithms.


What is the job of a university professor?

In this blog post, I will discuss the job of a university professor and why I chose to become one. This post is especially aimed at students who are considering working in academia after their Ph.D.

A professor

What is the job of a university professor?

When I was an undergraduate student, I did not know much about what the professors at my university were really doing besides teaching. In general, there are three main tasks that a professor must do:

  • Teaching. This consists of teaching courses to students. It is a very important task, since it is the reason for having professors in universities. The number of courses that a professor teaches every year can vary greatly depending on the university and the rank of the professor. Some universities are known to put more emphasis on research, while others put more emphasis on teaching. When I was working as a professor in Canada, I first started with about 220 hours (5 courses) per year, which did not give me much free time for doing research. Then, after receiving some funding and having several graduate students in my team, my teaching load was reduced to about 3 courses per year to let me do more research. Now in China, in 2016, I have taught about 100 hours.
  • Research. The second task that a professor must do is to carry out innovative research that leads to the publication of research papers. Why does a professor do research? Some key reasons are to attract top talent to the university and to ensure that professors stay active and keep their knowledge up to date. Now, what is the difference between the research that a professor does and the research that a graduate student does? The main difference is that a professor is expected to carry out a long-term research program and to have several students doing research in his team. It is a bit like going from being an employee to being a team leader or business manager, in the sense that the professor has to manage not only his own research projects but generally also the research of a team, and must have a clear plan for the years to come. Many of the research-related tasks that a professor does are about managing his team. For example, a professor typically has to apply for funding from the government or find projects with companies to supply funds to his research team. This may involve writing numerous grant applications and attending many meetings.
  • Service to the community. The third task that a professor must do is to provide service to the community. This can take the form of various activities at the university, national, or international level. For example, when I was a professor in Canada, I was involved in a programming competition for high schools and in recruiting students for our university. I also helped organize a LAN party for students every year, as well as other activities with students. Another task was evaluating the applications of master’s degree students applying to our program, which consumed many hours of my time every week. At the national and international level, my service to the community has included tasks such as reviewing articles for conferences and journals, serving as the editor of a journal, and founding an open-source data mining software library.

Thus, to be a professor, one should enjoy doing these three tasks. If one only enjoys teaching but does not enjoy research, then it is perhaps best to become a lecturer (in the North-American system of education, a professor who does not carry out research). If one only enjoys research but does not enjoy teaching, then it is probably best to become a researcher rather than a professor, and to work in industry. There are some exceptions, but a professor should ideally enjoy doing all of these tasks, and should do them well.

What are the advantages/disadvantages of being a professor?

As with any other job, there are advantages and disadvantages to choosing this one. Let’s analyze them:

  • Salary: Depending on the country and university, the salary of a university professor can be quite high. However, it is generally lower than the salary of a researcher in industry. Thus, the motivation should not only be about money.
  • Schedule: One of the greatest advantages, and also disadvantages, of being a professor is the work schedule. A professor may be extremely busy. He may have to work during weekends and evenings, often more than 10 hours a day, to keep up with research, teaching, and other tasks. The first few years of being a professor can be really hard in terms of schedule, because a new professor typically has to teach, prepare many new courses, apply for funding, and set up his research team. This is quite different from the life of a Ph.D. student, who can often concentrate on research only. After a few years, the job of a professor becomes easier. However, although the schedule of a professor is very busy, what I like about it is the freedom in how to organize my time. A professor may typically decide to work at home when he is not teaching, and may decide to wake up late and finish working late at night. This is different from working in a company.
  • Traveling: Another thing that I like about being a professor is the opportunity for traveling. A professor typically has to attend international conferences to present his research and meet other researchers, and may also visit other universities for collaborations. This is interesting from a research perspective, but also from a personal perspective. Of course, it depends on funding: not every professor has funding for attending international conferences and travelling.
  • Being your own boss: Another advantage of being a professor is that a professor generally has a lot of freedom in choosing his research topics. A professor may decide to work on topics that he likes rather than on other people’s projects (as one would typically do when working in a company). I often think of being a professor as running a business: the professor must decide on the research directions and manage his team. I enjoy this freedom of being able to work on the topics that I like, and also to publish the results of my research freely as open-source programs that can benefit anyone.

What is required to become a university professor?

In terms of academic qualifications, good universities require a Ph.D. degree. But having a Ph.D. is often not enough: it is generally also required to have several publications in good conferences and journals. Now, if we analyze this in more detail, a professor should have the following skills:

  • Teaching: A professor must be able to explain concepts simply, clearly, and concisely so that students can learn efficiently. A professor must also be able to make classes enjoyable rather than boring, and prepare courses, assignments, and exams well.
  • Carrying out research: A professor must be good at doing research. This includes being able to find interesting research questions and innovative solutions to these problems.
  • Writing: A professor must also be good at writing. This is important, as a professor must write journal papers, conference papers, grant applications, and many other documents. Being good at writing is related to being good at teaching, since writing often requires explaining concepts in simple ways so that the reader can understand (just like teaching).
  • Managing a team / sociability: A professor should be able to manage a team and also, ideally, establish collaborations with other researchers. Thus, some management and social skills are required.
  • Being good in his field: A professor must also be good in his field of study. For example, in computer science, a professor should ideally also be a good programmer. This is different from being a good researcher: a good programmer is not necessarily a good researcher, and a good researcher is not necessarily a good programmer.
  • Being a workaholic 😉 : Not everybody can work, or likes to work, 10 hours or more every day. Not every professor works very hard, but becoming a professor still generally requires a huge amount of work, and the first years on the job can be quite hard. When I was an undergraduate student, I saw it as a challenge to see how far I could go in academia, and since I like working, the amount of work did not put me off. I have worked very hard over the last 10 years. For example, during my studies, I typically took only a few days off per year and worked every day from morning until evening. For me, it has been worth it.

It might seem like a long list of skills. But actually, all of these skills can be developed over time. When I was an undergraduate student, for example, I did not know how to write well. I remember that the first paper I attempted to write by myself during my master’s degree was quite terrible. Over the years, during my master’s degree and Ph.D., I greatly improved my writing skills by writing many papers. I also greatly improved my research and teaching skills. Teaching, in particular, requires practice and dedication to do well.

Conclusion

So why did I choose to become a professor? The short answer is that I really like research and the freedom of doing my own research. I also like being able to manage my own schedule, the opportunities for travelling, and teaching. Would I move to industry someday? No. Even if I could probably earn more in industry, I am happy doing what I am doing in academia.

If you have enjoyed reading this blog post, you can continue reading this blog, which has many other posts related to academia and data mining.

Philippe Fournier-Viger is a professor of Computer Science and also the founder of the open-source data mining software SPMF, offering more than 120 data mining algorithms.
