Report about the 13th ADMA conference (ADMA 2018)

I recently attended the 13th International Conference on Advanced Data Mining and Applications (ADMA 2018) in Nanjing, China, from the 16th to the 18th of October 2018. In this blog post, I will give a brief report about this conference.


What is the ADMA conference?

ADMA is a conference on data mining, which is generally held in China and sometimes in other parts of Asia. It is overall a decent conference. In particular, the proceedings are published by Springer in its Lecture Notes in Artificial Intelligence series, which ensures good visibility of the accepted papers, and all papers are indexed in EI and DBLP. One of the particularities of this conference is its focus on applications of data mining.

The ADMA conference started in 2005 and was held every year until 2014. I attended ADMA 2011, ADMA 2013 and ADMA 2014, and also had a paper at ADMA 2012. I had also submitted a paper to ADMA 2015, which was cancelled. Since ADMA 2016, the conference has been held every year with quality papers, and this year I am glad to be back attending ADMA.

Location

The conference was held at the Marriott hotel in Nanjing. Nanjing is the capital of the Jiangsu province in China. Nanjing has a long history and has been the capital of several Chinese dynasties. There are many things to see, and it is close to other popular cities like Suzhou.


Schedule

The main conference was held over two days, while a third day was used for doctoral student forums. For the main conference, there were two keynote speakers in the morning of each day. Then, in the afternoon, there were paper presentations. Due to this tight schedule, each paper was presented in either 10 or 15 minutes (including questions). In the evening, there was a reception and a banquet on the first and second day, respectively.

Acceptance rate

It was announced that 104 research papers were submitted this year, from 20 countries and 5 continents. A total of 46 papers were accepted. Of these, 24 were selected for long presentations and 22 for short presentations. Both types of papers had the same number of pages in the proceedings. Thus, the overall acceptance rate is 44.2%. Here is a slide about the review process, from the opening ceremony:

adma review process

Registration

On the first day, there was the conference registration. We received the conference program, a badge, a pen, a notebook and a laser pointer as gifts. The conference proceedings were on a USB drive integrated in the laser pointer.

Welcome speech

There was a brief introduction by a high-ranking representative (dean?) of the Nanjing University of Aeronautics and Astronautics, which organized the conference. Then, the local organizers gave some information about the conference.


And we took a group picture.


Day 1 – Keynote by Xuemin Lin on Graph Data Mining

The first keynote was given by Xuemin Lin, the editor-in-chief of TKDE, one of the top data mining journals. The talk was about graph analysis. In the first part of the talk, some applications of graph analysis were introduced, such as:

  • detection of fraud in a social graph (people collaborating to commit fraud), which can be found, for example, by mining rings or bi-cohesive subgraphs (see the sketch after this list),
  • product recommendation, where customers, preferences, purchased products and locations are put into a graph model (a multidimensional graph),
  • planning the delivery of food and products to homes in an efficient way.
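
As an aside, here is a minimal sketch (my own illustration, not material from the keynote) of how a densely connected group of accounts could be spotted with NetworkX, using the k-core as a simple stand-in for the cohesive subgraphs mentioned above:

import networkx as nx

# Hypothetical interaction graph: an edge means that two accounts interacted.
G = nx.Graph()
G.add_edges_from([
    ("a", "b"), ("b", "c"), ("c", "a"),   # a small cohesive ring
    ("a", "d"), ("b", "d"), ("c", "d"),   # "d" interacts with all of them
    ("e", "f"), ("f", "g"),               # a loose chain (not cohesive)
])

# Every node of the 3-core has at least 3 neighbours inside the group,
# which is one simple notion of a dense (cohesive) subgraph.
core = nx.k_core(G, k=3)
print(sorted(core.nodes()))  # ['a', 'b', 'c', 'd']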

Then, some key challenges for graph analysis were presented: defining new computing platforms, analytic models, processing algorithms, indexing techniques, and processing systems (primitive operators, query languages, distributed techniques, storage, etc.) for graphs. In other words, we need to define new models and software specialized for analyzing graphs.

graph analysis challenges

Finally, several problems related to graph analysis were briefly discussed.

subgraph analysis problems

Overall, this was a good keynote talk as it gave a good and up-to-date overview of several graph analysis problems.

Day 1 – Keynote by Ekram Hossain on Deep Learning for Resource Allocation in Wireless Networks

The talk was about using stacked auto-encoders for resource allocation in wireless networks. The main resources are channels, the transmission power of a radio station (power allocation) and antennas (shared by many users, which raises the question of how to allocate them among those users). The speaker is a specialist from the field of communications.
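
I did not note the details of the speaker's model, but to give an idea of the basic building block, here is a minimal sketch of a stacked auto-encoder in PyTorch (my own illustration with made-up dimensions, not the model from the talk): two encoder layers compress an input vector, such as channel-state measurements, and two decoder layers reconstruct it.

import torch
import torch.nn as nn

class StackedAutoEncoder(nn.Module):
    def __init__(self, n_inputs=64):
        super().__init__()
        # Two stacked encoding layers compress the input to a small code.
        self.encoder = nn.Sequential(
            nn.Linear(n_inputs, 32), nn.ReLU(),
            nn.Linear(32, 16), nn.ReLU(),
        )
        # Two decoding layers reconstruct the original input from the code.
        self.decoder = nn.Sequential(
            nn.Linear(16, 32), nn.ReLU(),
            nn.Linear(32, n_inputs),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = StackedAutoEncoder()
x = torch.randn(8, 64)            # a batch of hypothetical measurements
loss = nn.MSELoss()(model(x), x)  # reconstruction error used for training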

In theory, this should have been a very interesting talk. But a problem with this talk was that the speaker spent most of the time explaining basic concepts of machine learning, and ran out of time before talking about how he was actually using deep learning for resource allocation (which was supposed to be the key part of the talk).

Day 1 – paper presentations

There were several paper presentations about various topics related to data mining, such as clustering, outlier detection and pattern mining. I also presented a paper about a project of my student, which is to discover change points of high utility patterns in a temporal database of customer transactions:

Fournier-Viger, P., Zhang, Y., Lin, J. C.-W., Koh, Y.-S. (2018). Discovering High Utility Change Points in Transactional Data. Proc. 13th International Conference on Advanced Data Mining and Applications (ADMA 2018), Springer LNAI, 10 pages.

Day 1 – Reception

Then, in the evening, a buffet dinner was offered at the Marriott Hotel, which was a good opportunity for discussing with other researchers.

Day 2

On the second day, there were more keynotes and paper presentations.

Day 2 – banquet

Then, there was a banquet at the hotel of the conference.

Next year: the ADMA 2019 conference

It was announced that the ADMA 2019 conference will be held in Dalian, China, from the 21st to the 23rd of November 2019. The planned deadlines for ADMA 2019 are as follows:

  • Paper submission: 10th May 2019
  • Demo: 10th June 2019
  • Tutorial: 1st August 2019
  • Competition: 15th August 2019
  • Research student forum: 10th September 2019

Conclusion

It was an interesting conference. Although it is not a very big conference, there were some good keynote speakers, and I had some very good discussions with other researchers. I am looking forward to the 14th ADMA conference (ADMA 2019) next year.

==
Philippe Fournier-Viger is a professor and also the founder of the SPMF open-source data mining software, which offers more than 130 data mining algorithms.

How to calculate Erdős numbers? (co-authorship relationships in academia)

There exist many ways of analyzing the relationships between co-authors in academia. In this blog post, I will talk about a fun measure called the “Erdős number“, which originated in the field of mathematics. The Erdős number of a person is the distance to Paul Erdős when considering co-authorship links on academic publications. For example, if you have written a paper with Paul Erdős, you have an Erdős number of 1. If you have written a paper with a co-author of Erdős, then your Erdős number is 2. And so on.

The concept of the Erdős number is based on the concept of “degrees of separation” between people in a social network: the idea that no one is ever very far from any other person through their social links. Maybe you wonder why Erdős is used as the reference for this measure. The reason is that Paul Erdős was one of the most prolific authors in mathematics, with more than 1,000 papers. Thus, Paul Erdős is widely connected to other researchers. But other people can also be used to compute such numbers.
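
To make the idea concrete, here is a minimal sketch of how such a number can be computed with a breadth-first search over a co-authorship graph (the graph below is a made-up toy example, not real publication data):

from collections import deque

# Each person maps to the list of his or her co-authors.
coauthors = {
    "Erdos": ["A", "B"],
    "A": ["Erdos", "C"],
    "B": ["Erdos"],
    "C": ["A", "D"],
    "D": ["C"],
}

def collaboration_distance(graph, source, target):
    """Return the length of the shortest co-authorship path, or None."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        person = queue.popleft()
        if person == target:
            return dist[person]
        for coauthor in graph.get(person, []):
            if coauthor not in dist:
                dist[coauthor] = dist[person] + 1
                queue.append(coauthor)
    return None

print(collaboration_distance(coauthors, "D", "Erdos"))  # 3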

What is your Erdős number?

If you want to compute your distance to Erdős or any other researcher in fields related to mathematics or computer science, a good way is to use the MathSciNet website. It lets you compute your collaboration distance to other people. It may not consider all publications, but it should give quite accurate results. For example, I have used it to compute my distance to Paul Erdős, Albert Einstein and Alan Turing. The results are below:

erdos number calculation

Thus, according to this tool, my Erdős, Einstein and Turing numbers are 4, 6, and 7, respectively. If you have collaborated with me, an upper bound on your corresponding numbers is thus 5, 7, and 8. All of this does not mean much, as our contributions to science are much smaller than those of these geniuses. But it shows that researchers are often not far apart.

Conclusion

This was just a short blog post to show you this interesting tool for calculating the distance between researchers in academia. It is not a new concept, but I think it is interesting. It shows that, indeed, people are never very far apart in academia. What is your Erdős number? You can share it in the comment section below!

==
Philippe Fournier-Viger is a professor, data mining researcher and the founder of the SPMF data mining software, which includes more than 100 algorithms for pattern mining.

(video) Periodic Frequent Itemset Mining with PFPM

This is a video presentation of the paper “PFPM: Discovering Periodic Frequent Patterns with Novel Periodicity Measures” about periodic pattern mining using PFPM. It is part of my new series of videos about data mining algorithms.

(link to download the video if the player does not work)

More information about the PFPM algorithm is provided in this research paper:

Fournier-Viger, P., Lin, C.-W., Duong, Q.-H., Dam, T.-L., Sevcic, L., Uhrin, D., Voznak, M. (2016). PFPM: Discovering Periodic Frequent Patterns with Novel Periodicity Measures. Proc. 2nd Czech-China Scientific Conference 2016, Elsevier, 10 pages.

The source code and datasets of the PFPM algorithm can be downloaded here:

The source code of PFPM and datasets are available in the SPMF software.

I will post more videos like this in the near future. If you have any comments, please post them in the comment section!

==
Philippe Fournier-Viger is a professor, data mining researcher and the founder of the SPMF data mining software, which includes more than 100 algorithms for pattern mining.

How does journal paper similarity checking work? (CrossCheck)

In this blog post, I will talk about the recent trend of journal editors rejecting papers because of their similarity to other papers, as detected by the CrossCheck system. I will explain how this system works and discuss its impact, benefits and drawbacks.


What is similarity checking?

Nowadays, when an author submits a paper to a well-known academic journal (from publishers such as Springer, Elsevier and IEEE), the editor will first submit the paper to an online system to check if it contains plagiarism. That system compares the paper with papers from a database created by various publishers and websites to check if it is similar to existing documents. Then, a report is provided to the journal editor indicating whether there is some similarity with existing documents. In the case where the similarity is high or some key parts have clearly been plagiarized from other authors, the editor will typically reject the paper. Otherwise, the editor will send the paper to reviewers and start the normal review process.

Why check the similarity with other papers?

There are two reasons why editors perform this similarity check:

  • to quickly detect plagiarized papers that should clearly not be published.
  • to check if a paper from an author is original (i.e. if it is not too similar to previous papers from the same author).

In the second case, some journal editors will say, for example, that the “similarity score” should be below 20% or 40%, depending on the journal. Thus, under this model, an author is allowed to reuse just a little bit of text from his own papers.

How does it work?

Now you perhaps wonder how that similarity score is calculated. Having access to a similarity report generated by the CrossCheck system, I will describe what these reports look like and then explain some key aspects of this system.

After the editor submits a paper to CrossCheck, he receives a report. This report contains a summary page that looks like this:

Part of a CrossCheck similarity report

This report gives an overall similarity score of 32%. It can be interpreted as meaning that, overall, 32% of the content of the text matches existing documents. It is furthermore said that 4% is a match with internet sources, 31% with other publications and 2% with student papers. As can be observed, 31% + 2% + 4% does not add up to 32%. Why? Actually, the calculation of the similarity score is misleading. Although I do not have access to the formula or the source code of the system, I found some explanation online: the similarity score is computed by matching each part of the text with at most one document. In other words, if some paragraph of a submitted paper matches two existing documents, this paragraph will be counted only once in the overall score of 32%.
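
To illustrate how such a non-additive score can arise, here is a rough sketch of this behaviour in Python (my own guess at the logic, not CrossCheck's actual formula): each word position is counted at most once in the overall score, even if it matches several sources, while the per-source percentages are computed independently.

def overall_similarity(doc_length, matches):
    """matches: list of (start, end, source) word ranges flagged as similar."""
    covered = set()
    for start, end, _source in matches:
        covered.update(range(start, end))  # overlapping matches merge here
    return 100.0 * len(covered) / doc_length

# One paragraph (words 0-99) matches two different sources, and another
# passage (words 200-259) matches a third source.
matches = [(0, 100, "paper A"), (0, 100, "paper B"), (200, 260, "website C")]
print(overall_similarity(500, matches))  # 32.0, although 20% + 20% + 12% = 52%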

An annotated PDF is also provided to the editor, highlighting the parts that match existing documents. For example, I show a page of such a report below, where I have blurred the text for anonymization:

Detailed similarity comparison (blurred)


In such a report, matching parts are highlighted in different colors, and numbers indicate which documents have been matched to which parts of the text.

Limitations of this similarity checking system

I will now describe some problems that I have observed in the report made by this similarity checking software:

  • In the above report, the countries and affiliations of the authors are considered as matching their previous documents, which increases the similarity score. But obviously, this should not be taken into account. Can we blame an author for using the same affiliation in two papers?
  • Keywords are also considered as matching previous documents. But I don’t think that using some of the same keywords as another paper should be an issue.
  • Some of the matches are very generic expressions or sentences used in many papers, such as “explained in the next section” or “this paper is organized as follows”.
  • Another limitation is that this similarity check completely ignores all figures and illustrations. Thus, if an author extends a conference paper into a journal paper and adds many figures for experiments to further differentiate the two papers, these figures will be completely ignored when calculating the similarity score.
  • Actually, the similarity checking system is limited to the text content of the paper. It can check the main text and the text in tables, algorithms, math formulas, biographies and affiliations. But it cannot check the text in figures that are included as bitmaps (pictures) in a paper. For example, if one includes an algorithm in a paper as a bitmap instead of as text, the system will ignore that content. The system will only be able to compare the labels of the figures, not their content. Thus, an author with malicious intent could easily hide content from the matching system by transforming some content of an article into a bitmap.
  • In the report that I analyzed, I found that the bibliography is also considered when computing the similarity score. Obviously, this seems quite unfair. Citing the same references as some other papers (especially when they are from the same author) is not plagiarism. In the case of the report that I read, about 90% of the references were considered as matching those of several other documents, which probably increased the similarity score by at least 10%. But I have noticed that the editor can deactivate this function.
  • I have also observed that the system can match the biography of the authors at the end of the paper and the acknowledgements with those of their previous papers. This is also a problem: it is clearly not plagiarism to reuse the same biography or acknowledgements in two papers, but in this system, it increases the similarity score.

Thus, my opinion is that this system is quite imperfect. And in fact, it is not claimed that it is a perfect system.

What is the impact of this system?

The major impact is that many plagiarized papers can be detected early, which is a good thing, as detecting these papers can save a lot of time for editors and reviewers.

However, a drawback of this system is that these metrics are clearly imperfect, and there is a real danger that some editors just check the similarity score to make a decision on a paper and do not read the report carefully. For example, I have heard that some journals simply apply an arbitrary threshold, such as rejecting all papers with a score >= 30%. This is, in my opinion, a problem if that threshold is too low, because in some cases it is justified for an author to reuse text from his own previous papers. For example, an author may want to reuse some basic problem definitions from his own paper in a second paper with different contributions. Or an author may want to extend a conference paper into a journal paper with new contributions. In such cases, I think that accepting some overlap between papers is reasonable.

A few years ago, when such systems were not in use, it was quite common for authors to extend a conference paper into a journal paper by adding 50% more content. Today, with this system, this may not be allowed anymore, which may force authors to avoid publishing early results in conference papers (or to spend extra time rewriting their paper in a different way to extend it into a journal paper).

Another aspect is that such a system needs to create a database of all papers. But should the authors have to agree before their papers are put in this database? Probably not, because when a paper is published, the authors typically have to transfer the copyright to the publisher. Thus, I guess that the publisher is free to share the paper with such a similarity checking service. But still, it raises some questions. If we make a comparison, there exists a homework plagiarism checking system called TurnItIn. This system has actually been legally challenged in the US and Canada, where some students have won court battles so that their homework is not submitted to or included in the system. Although it is a slightly different situation, we could imagine that some people may also want to challenge journal similarity checking systems.

How to get a similarity checking report for your paper?

Checking the similarity of a paper is not free. However, editors or associate editors of journals have a subscription to use the similarity checking service. Thus, if you know an editor or associate editor that has a subscription, he may perhaps be able to generate a report for your paper for free. Otherwise, one could pay to obtain the service.

Conclusion

In this blog post, I provided an overview of the similarity checking system called CrossCheck used by several publishers and journals. I also talked about how scores appear to be computed and some limitations of this system, and its impact in the academic world.  Hope this has been interesting. Please share your comments in the comment section below.

==
Philippe Fournier-Viger is a professor, data mining researcher and the founder of the SPMF data mining software, which includes more than 100 algorithms for pattern mining.

Skills needed for a data scientist? (comments on the HBR article)

Recently, I read an article on the Harvard Business Review (HBR) website about data science skills for businesses. This article proposes to categorize data-related skills on a 2×2 matrix where skills are labelled as useful vs. not useful, and time-consuming vs. not time-consuming. The author of that article drew such a 2×2 matrix illustrating the needs of his team (see below).

Obtained from Harvard Business Review

This matrix has received many negative comments online in the last few days. These comments have mainly highlighted two problems:

  • Why are mathematics and statistics viewed as useless?
  • Data science is viewed as useful but mathematics and statistics are viewed as useless, which is strange since math and stats are part of data science.

Having said that, I also don’t like this chart. And many people have asked why it was published in the Harvard Business Review (a good magazine). But we should keep in mind that this chart illustrates the needs of one company. Thus, it does not claim that mathematics and statistics are useless for everyone. It is quite possible that this company does not see any benefit in taking mathematics and statistics courses or training. Following the negative comments, the author and the HBR editor have reworded some parts of the article to try to make it clearer that it should be interpreted as a case study.

Part of the problem with this chart and article is that the term “data science” has always been very ambiguous. Some people with very different backgrounds and doing very different things call themselves data scientists. This is a reason why I usually don’t use this term. And it could be part of the reason why this chart shows a distinction between data science, math and stats, which I would describe as overlapping.

From a more abstract perspective, this article highlights that some companies are not interested in investing in skills that take too much time to acquire (and have no short-term benefits). For example, I know that some companies prefer to use code from open-source projects or ready-made tools to analyze data rather than spending time developing customized tools to solve problems. This is understandable, as the goal of companies is to earn money and there are many tools available for data analysis. However, one should not forget that using these tools often requires an appropriate background in mathematics, statistics or computer science to choose an appropriate model given its assumptions and to correctly interpret the results. Thus, having those skills that take more time to acquire is also important.

What is your opinion about this chart and the most important skills for a data scientist? Please share your opinion in the comment section below.

==
Philippe Fournier-Viger is a professor, data mining researcher and the founder of the SPMF data mining software, which includes more than 100 algorithms for pattern mining.

(video) Minimal High Utility Itemset Mining with MinFHM

This is a video presentation of the paper “Mining Minimal High Utility Itemsets” about high utility itemset mining using MinFHM. It is the first video of a series of videos that will explain various data mining algorithms.

(link to download the video if the player does not work)

More information about the MinFHM algorithm is provided in this research paper:

Fournier-Viger, P., Lin, C.W., Wu, C.-W., Tseng, V. S., Faghihi, U. (2016). Mining Minimal High-Utility Itemsets. Proc. 27th International Conference on Database and Expert Systems Applications (DEXA 2016). Springer, LNCS, 13 pages, to appear

The source code and datasets of the MinFHM algorithm can be downloaded here:

The source code of MinFHM and datasets are available in the SPMF software.

I will post videos like this perhaps once a week or every few weeks. I actually have a lot of PPTs on my computer that explain various algorithms, but I just need to find time to record the videos. In a future blog post, I will also explain which software and equipment can be used to record such videos. This is the first video, so obviously it is not perfect. I will make some improvements in the following videos. If you have any comments, please post them in the comment section!

==
Philippe Fournier-Viger is a professor, data mining researcher and the founder of the SPMF data mining software, which includes more than 100 algorithms for pattern mining.

Expensive Academic Conferences – the case of ICDM

I was recently thinking of attending IEEE ICDM 2018 (the IEEE International Conference on Data Mining) in Singapore next month. It is a top 5 data mining conference. According to my schedule, I could attend it for 2 days, and since Singapore is close to China, it is convenient to go there. However, I was quite surprised by how expensive the registration fee of this conference has become. As of today, the “standard registration fee (by 28 October)” is roughly 1360 USD or 9300 CNY.


Registration fees from the ICDM 2018 website

This is actually the most expensive conference that I have ever considered attending. Most conferences that I have attended have been in the 300-700 USD range, less than half the price of ICDM. But is this an outlier? To see more clearly, I decided to compare the standard registration fee of ICDM 2018 with those of previous editions of ICDM:

  • ICDM 2018: 1360 USD (11% increase from 2017)
  • ICDM 2017: 1220 USD (12% increase from 2015)
  • ICDM 2015: 1080 USD (28% increase from 2013)
  • ICDM 2013: 844 USD (68% increase from 2011)
  • ICDM 2011: about 500 USD

This is quite interesting. It shows a steady increase in the registration price of the ICDM conference over the years. The registration fee has increased so much that the price is now about 2.7 times higher than it was seven years ago!
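
As a quick sanity check, the year-to-year increases and the overall ratio can be recomputed from the fees listed above (small differences from the percentages quoted above are due to rounding):

# Standard registration fees of ICDM, in USD, as listed above.
fees = {2011: 500, 2013: 844, 2015: 1080, 2017: 1220, 2018: 1360}

years = sorted(fees)
for prev, cur in zip(years, years[1:]):
    increase = 100 * (fees[cur] - fees[prev]) / fees[prev]
    print(f"{prev} -> {cur}: +{increase:.0f}%")

print(f"2018 vs 2011: {fees[2018] / fees[2011]:.1f}x")  # about 2.7x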

Why is it so expensive?

One could argue that the reason is the location of the conference. But the increase has been steady over the years, no matter where the conference was organized. Moreover, such big conferences often have thousands of attendees and usually many sponsors. I recently attended the KDD 2018 conference, which was also expensive, but less so than ICDM. There were more than 3,000 attendees, and if I remember correctly, it received more than one million dollars in sponsorship.

Thus, where does all this money go? A good part goes to renting a convention center, publishing the proceedings and other aspects such as providing scholarships to students. But many conferences also make a considerable profit. Some conferences are not for profit, while other conferences will pay the local organizers or the association organizing the conference. I am not sure how the money is used in the case of ICDM or IEEE and what they will do with the profits, as I could not find the information. But I believe that such big conferences can generate a huge amount of money. From discussing with organizers of smaller conferences (200 attendees) that have much lower registration fees and less sponsorship, I know that some conferences can still make a 20,000 USD profit.

As for IEEE, this is not their only conference in the 1000 USD range. Some other flagship conferences, like IEEE ICC (about communications), also have fees greater than 1000 USD. In the field of data mining, the KDD conference is also quite expensive, although still less than ICDM. In some ways, many people want to attend these conferences, so they are willing to pay these high fees.

Consequences of high registration fees

The consequence of such high registration fees is that some people may not have enough money to attend, and that a lot of money is spent by researchers. And in many cases, that money comes from research projects funded by the government. Thus, one could argue that this money could be used in better ways.

Personally, I was thinking of attending ICDM, but when I saw that I would have to pay almost 1400 USD for two days, I decided that it is not reasonable to spend that much money. I have enough research funding, but I still do not want to waste the money provided by the government to support research. Thus, this year, I will use the money for other things rather than going to ICDM.

==
Philippe Fournier-Viger is a professor, data mining researcher and the founder of the SPMF data mining software, which includes more than 100 algorithms for pattern mining.

Periodic patterns in Web log time series

Recently, I analysed trends about visitors on this blog and made two observations. First, there are about 500 to 1000 visitors per day. For this, I want to thank you all for reading and commenting on the blog. Second, if we look carefully at the number of visitors per day as a time series, we can clearly see a pattern that repeats itself every week. Below is a picture of this time series for January 2018.


periodic visitor accesses

As you can see, there is a clear pattern every week. Toward the beginning of the week, on Monday and Tuesday, the number of visitors increases, while around Friday it starts to decrease. On Saturday and Sunday, there is a considerable decrease, and then the number increases again on Monday. This pattern repeats itself every week. We can see it visually, but such patterns could also be detected using time series analysis techniques such as an autocorrelation plot, as shown below. Besides, it would be easy to predict this time series using time series forecasting models.
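
To illustrate, here is a minimal sketch (using synthetic data, not my actual visitor counts) showing how such a weekly cycle appears as a strong positive autocorrelation at lags that are multiples of 7 days:

import numpy as np

# Synthetic daily visitor counts with a weekly (period-7) cycle plus noise.
rng = np.random.default_rng(0)
days = np.arange(90)
visitors = 750 + 200 * np.sin(2 * np.pi * days / 7) + rng.normal(0, 30, len(days))

def autocorrelation(series, lag):
    x = series - series.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

for lag in (1, 3, 7, 14):
    print(f"lag {lag:2d}: {autocorrelation(visitors, lag):+.2f}")
# The autocorrelation peaks at lags 7 and 14, revealing the weekly pattern.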

We can also see a relationship with the concept of periodic patterns that I have previously discussed on this blog. A periodic pattern is a pattern that keeps repeating itself over time. That is all for today. I just wanted to share this interesting finding.

==
Philippe Fournier-Viger is a professor, data mining researcher and the founder of the SPMF data mining software, which includes more than 100 algorithms for pattern mining.

Upcoming book: High Utility Itemset Mining: Theory, Algorithms and Applications

I am happy to announce that the draft of the book about high utility pattern mining has been finalized and submitted to the publisher (Springer). It should thus be published in the very near future.


The book contains 12 chapters written by several top researchers from the field of pattern mining, for a total of 350 pages. The title is “High Utility Itemset Mining: Theory, Algorithms and Applications”. It discusses high utility itemset mining and other related topics. Here is the table of contents:

Editors: Philippe Fournier-Viger, Jerry Chun-Wei Lin, Bay Vo, Roger Nkambou, Vincent S. Tseng.

  • Chapter 1: A Survey of High Utility Itemset Mining
    Philippe Fournier-Viger, Jerry Chun-Wei Lin, Tin Truong Chi, Roger Nkambou
    This chapter gives an introduction of more than 39 pages to high utility pattern mining, designed to give a quick overview of the field and its main results.
  • Chapter 2: A Comparative Study of Top-K High Utility Itemset Mining Methods
    Srikumar Krishnamoorthy
    This chapter gives an in-depth discussion of top-k high utility itemset mining, including a very detailed comparison of the state-of-the-art algorithms.
  • Chapter 3: A Survey of High Utility Pattern Mining Algorithms for Big Data
    Morteza Zihayat, Methdi Kargar, Jaroslaw Szlichta
    This chapter reviews algorithms for mining high utility patterns in big data.
  • Chapter 4: A survey of High Utility Sequential Pattern Mining
    Tin Truong Chi, Philippe Fournier-Viger
    This chapter provides a survey of  high utility sequential pattern mining. It contains several new theoretical results and a very detailed comparison of upper-bounds and algorithms.
  • Chapter 5: Efficient Algorithms for High Utility Itemset Mining without Candidate Generation
    Jun-Feng Qu, Mengchi Liu, Philippe Fournier-Viger
    This chapter presents the HUI-Miner algorithm and a novel extension called HUI-Miner*, which improves its performance in many situations.
  • Chapter 6: High Utility Association Rule Mining
    Loan T.T. Nguyen, Thang Mai, Bay Vo
    This chapter discusses another important topic: discovering high utility association rules.
  • Chapter 7: Mining High-utility Irregular Itemsets
    Supachai Laoviboon, Komate Amphawan
    This chapter considers the time dimension in high utility itemset mining to find regular patterns.
  • Chapter 8: A survey of Privacy Preserving Utility Mining
    Duy-Tai Dinh, Van-Nam Huynh, Bac Le, Philippe Fournier-Viger, Ut Huynh, Quang-Minh Nguyen
    This chapter provides an overview of techniques for hiding high utility patterns for privacy purposes.
  • Chapter 9: Extracting Potentially High Profit Product Feature Groups by Using High Utility Pattern Mining and Aspect based Sentiment Analysis
    Seyfullah Demir, Oznur Alkan, Firat Cekinel, Pinar Karagoz
    This chapter presents an interesting application of high utility pattern mining related to sentiment analysis.
  • Chapter 10: Metaheuristics for Frequent and High-Utility Itemset Mining
    Youcef Djenouri, Philippe Fournier-Viger, Asma Belhadi, Jerry Chun-Wei Lin
    This chapter provides a survey of evolutionary and swarm intelligence algorithms for high utility itemset mining.
  • Chapter 11: Mining Compact High Utility Itemsets without Candidate Generation
    Cheng-Wei Wu, Philippe Fournier-Viger, Jia-Yuan Gu, Vincent S. Tseng
    This chapter presents algorithms for mining closed and maximal high utility itemsets. It includes a novel strategy for identifying maximal patterns when using a depth-first search.
  • Chapter 12: Visualization and Visual Analytic Techniques for Patterns
    Wolfgang Jentner and Daniel A. Keim.
    This chapter discusses the problem of visualizing the patterns that are found.

This will be a very good book with many great contributions, and I am excited that it will be published soon. I will keep you updated on this blog as we get closer to the release.

==
Philippe Fournier-Viger is a professor, data mining researcher and the founder of the SPMF data mining software, which includes more than 100 algorithms for pattern mining.

What I don’t like about academia

In this blog post, I will talk about academia. There are numerous things that I like about academia, and I really enjoy working in academia. But for this blog post, I will try to talk  about what I don’t like in academia to give a different perspective.


Even when we like something very much, there are always some things that we don’t like. So, here we go. Here is a list of some things that I more or less dislike in academia:

  • A sometimes excessive pressure to publish: There is sometimes great pressure on researchers to produce many publications in a given time frame, which may come from various sources. It is in part necessary, as it increases productivity and ensures that researchers do not become lazy. But a drawback is that some researchers may be less willing to take risks or may focus on short-term projects rather than on more difficult but more rewarding projects.
  • Conflicts of interest at various levels. A researcher should avoid conflicts of interest. However, not everyone does, and this is a problem. A few years ago, for example, I was a program committee member of a conference and discovered that a reviewer had reviewed his own paper. I reported this issue to the conference organizers and that person was kicked out of the program committee. Another example is journal reviewers who always ask in their reviews that we cite their papers, even if they are not relevant to our paper, just to increase their citation count. In my field, there is one reviewer that is especially known for doing this, as several researchers have talked to me about him. This is not good behavior, and I usually report it to the journal editor, but since reviewers work for free, there is typically no consequence for such people. A third example is that some researchers will give preferential treatment to their friends. For example, I once attended a conference where three of the awards were handed to collaborators of the conference organizer. Although these papers may be good, it remains suspicious. Another example is from when I was applying for jobs in Canada, several years ago. At that time, I was one of the two remaining candidates for a professor position, but the other, much less experienced, researcher was finally chosen, due to a likely conflict of interest.
  • Predatory journals and conferences. There are many journals of very low quality that only publish to earn money. These journals usually have a very broad scope, are published by unknown publishers and sometimes appear not to review papers. They also often send spam to promote their journals. This is a problem, and I obviously dislike such journals.
  • Unethical publications by some researchers. I have discovered and reported several journal papers that contained plagiarism. These papers have generally been retracted, as they should be. But in some cases, unethical behavior is not so easy to detect. For example, I have read some papers where I thought that the results were fake, but there was not enough evidence to prove it. It certainly happens that some researchers publish fake results, which is bad for academia.
  • Publishers that are sometimes too greedy. It is well known that some publishers charge very high fees to universities and individuals to publish and/or access research publications. This is somewhat unfortunate because research is often funded by a government, done by researchers and reviewed for free by reviewers, while the publishers are the ones earning money. It would be difficult to change this, as popular publishers are well established and there is pressure to keep this system. On the other hand, this publication system is not that bad. Actually, the good publishers will filter out many bad papers and ensure a minimum quality level for papers, which is important.
  • Insufficient funding for research in some countries. Currently, I have a lot of funding, so I cannot complain about insufficient funding. But in some other countries, funding is quite rare and often insufficient for researchers in academia. This was the case when I was working in Canada. To apply for national funding from NSERC, we had to write a budget requesting large amounts of money, but one was considered lucky to even get a fraction of it. Thus, not much money was available for students, for attending conferences, for publications, and for buying equipment. Besides, there are not enough professors at several universities in countries like Canada.
  • Reviewers that do not do their job well. As researchers, our work is evaluated by other researchers to determine if it should be published in a given conference proceedings or journal. Generally, reviewers do a good job and do it for free, which is very much appreciated. However, in some cases, reviewers don’t do their job correctly. For example, it has happened to me that a reviewer rejected my paper because he thought the problem could be solved in a simpler way, but the solution proposed by the reviewer in his review was wrong. Having said that, a reviewer often misunderstands a paper because it is not well written. Thus, such situations are often to be blamed on the authors rather than the reviewers. And often, when a paper is rejected, there are multiple problems in the paper.
  • Unprofessional behavior. In some cases, researchers display highly unprofessional behavior. This was for example the case for the ADMA 2015 conference, which was canceled without notifying authors, after papers had been submitted. The website just went offline and the organizers ignored emails.
  • Bad paper presentations. I have attended many international conferences. Sometimes paper presentations are good, but sometimes they are not. There are several easily avoidable mistakes that a presenter should not make, such as turning his back to the audience, exceeding the time limit, and not being prepared.

This is all for today! I just wanted to share some things that I don’t like about academia. But actually, I really like academia. You can share your own perspective on academia in the comments below, or perhaps you may want to share solutions on how to improve academia. 😉

==
Philippe Fournier-Viger is a professor of Computer Science and also the founder of the open-source data mining software SPMF, offering more than 145 data mining algorithms.