25 years of pattern mining

It is 2019, and already 25 years since Agrawal wrote his seminal papers on frequent itemset mining and association rule mining in 1994. Since then, thousands of papers have been published on this topic, some about algorithm design and new pattern mining problems, and others about applications in a multitude of fields. And there are still many research issues to work on!
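For readers less familiar with the topic: frequent itemset mining finds all sets of items that appear together in at least minsup transactions of a database. Here is a minimal brute-force sketch in Python (toy invented data; real algorithms such as Apriori or FP-Growth use much cleverer pruning and data structures to scale):

```python
from itertools import combinations

# Toy transaction database: each transaction is a set of purchased items
transactions = [
    {"bread", "milk"},
    {"bread", "diapers", "beer", "eggs"},
    {"milk", "diapers", "beer", "cola"},
    {"bread", "milk", "diapers", "beer"},
    {"bread", "milk", "diapers", "cola"},
]

def frequent_itemsets(db, minsup):
    """Return every itemset appearing in at least `minsup` transactions."""
    items = sorted(set().union(*db))
    result = {}
    for size in range(1, len(items) + 1):
        found = False
        for candidate in combinations(items, size):
            support = sum(1 for t in db if set(candidate) <= t)
            if support >= minsup:
                result[candidate] = support
                found = True
        # downward-closure (Apriori) property: if no itemset of this size
        # is frequent, no larger itemset can be frequent either
        if not found:
            break
    return result

freq = frequent_itemsets(transactions, minsup=3)
print(freq[("beer", "diapers")])  # {beer, diapers} appears in 3 transactions
```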

After all these years, it is a good time to look back at what has been achieved to gain a new perspective. This is what I did recently with colleagues in a survey paper called “Frequent Itemset Mining: a 25 Years Review“. If you are interested in frequent pattern mining, I encourage you to read the paper, as it makes some interesting observations. For example, we found that some ideas used in recent algorithms for mining patterns in big data can be traced back to some of the early algorithms. Here is a picture from the paper showing a timeline of key algorithms and events in frequent pattern mining:

That is all I wanted to write for today!

Philippe Fournier-Viger is a professor of Computer Science and also the founder of the open-source data mining software SPMF, offering more than 150 data mining algorithms.


Brief report about DAWAK 2019 / DEXA 2019

This week, I am attending the DAWAK 2019 and DEXA 2019 conferences in Linz, Austria, from August 26th to 29th, 2019. In this blog post, I will provide a report about these conferences.

About the DAWAK and DEXA conferences

DAWAK (International Conference on Data Warehousing and Knowledge Discovery) and DEXA (International Conference on Database and Expert Systems Applications) are well-established conferences related to data mining and database systems. This year is the 30th edition of DEXA and the 21st edition of DAWAK. These conferences are co-located and held in Europe.

This is not the first time that I have attended these conferences. You can also see my reports about DAWAK 2018 and DEXA 2018, and about DAWAK 2016 and DEXA 2016.


The proceedings of DEXA and DAWAK are published by Springer in the LNCS (Lecture Notes in Computer Science) series, which ensures that the papers are indexed in all major databases (e.g., EI).

DEXA 2019 received 157 submissions, of which 32 were accepted as full papers (an acceptance rate of 20%) and 34 as short research papers.

DAWAK 2019 received 61 submissions, of which 22 were accepted as full papers (an acceptance rate of 36%).


The conferences were held at the Johannes Kepler University of Linz, Austria. The city of Linz has some old buildings and streets, some hills, and the Danube river passes through it. Holding the conferences at a university is fine. However, the drawback is that the campus of the university is located about 5 km from the city center.


On the first day, I registered for the conference, and everything went smoothly. Registration started at noon, which gave plenty of time to arrive at the conference. Some drinks were served, but there was no lunch. The conference bag contained the program, the proceedings on a USB drive, as well as a few papers and tickets for lunch and other activities.

Keynote by Vladimir Marik “AI in manufacturing”

The first keynote was by Prof. V. Marik from the Czech Technical University. He talked about how AI can be used in manufacturing. He mentioned that there have been high expectations about AI in recent years, and that AI has the potential to improve production efficiency and enable new business models. He talked about Industry 4.0 and concepts such as augmented reality, the internet of things and services, multi-agent systems, and using robots in production facilities.

Welcome reception

On the first day, there was a welcome reception at the university where the conference was held.

Keynote by Axel Polleres about the semantic web and linked data

There was a keynote by A. Polleres about the Semantic Web. He first talked about how the concept of the Semantic Web has evolved from the idea of Tim Berners-Lee in the early 2000s. Initially, the main idea was to use description logics to annotate Web content with ontologies in order to reason about that content. One of the key results from 2000-2009 was that researchers determined which logics are decidable and scalable. Other questions were: how much reasoning do we really need for the Web? And how can one publish knowledge on the Web? To publish data on the Web, it was proposed to use technologies such as URIs and RDF to create what is called (open) linked data.

The speaker also mentioned that one lesson learned is that the OWL standard is perhaps too complicated for users (which I agree with), and that RDFS is among the most used standards. Also, in practice, ontologies may contain inconsistencies. The speaker then talked about a prototype semantic web search engine that was created, how more and more open data is being published by organizations such as governments, and how there are now open data portals to find open data.

The speaker also talked about Google’s Knowledge Graph: we don’t know exactly how it works, but it may be related to work on the Semantic Web and linked data, and it is used for question answering and for showing data related to queries. Then there was more discussion, but I will not report everything about the talk.

Keynote talk by Dirk Draheim “Future Perspectives of Association Rule Mining Based on Partial Conditionalization”

There was a keynote talk about association rule mining by Prof. Dirk Draheim from Estonia. He first pointed out that data can often be misleading, and that we may draw wrong conclusions if we don’t have enough data or don’t look at all the data. He mentioned Simpson’s paradox, and that with more data or more information about the context, we can better understand the data. For example, although the average salary in Seattle may be higher than the average salary in Boston, it does not mean that people in Seattle really earn more than those in Boston: more people in Seattle may be working in the IT industry with high salaries, which raises the average, while at the same time people in other industries in Seattle may be earning less than in Boston.
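The Seattle/Boston salary example can be made concrete with a small sketch (all numbers are invented for illustration):

```python
# Simpson's paradox with made-up salary data: Boston pays more *within
# each industry*, yet Seattle's overall average is higher because a
# larger share of its workers are in the high-paying IT industry.

# (industry, headcount, average salary) -- invented numbers
seattle = [("IT", 800, 120_000), ("other", 200, 50_000)]
boston = [("IT", 200, 130_000), ("other", 800, 60_000)]

def overall_average(groups):
    """Headcount-weighted average salary across all industries."""
    total_pay = sum(n * avg for _, n, avg in groups)
    total_people = sum(n for _, n, _ in groups)
    return total_pay / total_people

print(overall_average(seattle))  # 106000.0
print(overall_average(boston))   # 74000.0
# Seattle's overall average is higher, even though Boston pays more
# in IT (130k vs 120k) and in other industries (60k vs 50k).
```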

Prof. Draheim then suggested that we need to use other interestingness measures and also consider probability theory. We can reformulate the problem of association rule mining using that theory and see a transaction database as a probability space. He then explained his idea, the details of which I will not report here. I think that using more statistics in pattern mining is an interesting idea, and this is not the first work going in that direction (e.g., the work on self-sufficient itemsets by Webb et al. uses statistical testing in pattern mining).
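Without reproducing the keynote’s exact formulation, the basic probabilistic reading is standard: if each transaction is treated as an equally likely outcome, the support of an itemset X is the probability P(X), and the confidence of a rule X → Y is the conditional probability P(Y|X). A toy sketch:

```python
# Toy transaction database viewed as a probability space:
# each transaction is one equally likely outcome
transactions = [
    {"bread", "milk"},
    {"bread", "diapers", "beer"},
    {"milk", "diapers", "beer"},
    {"bread", "milk", "diapers", "beer"},
    {"bread", "milk", "cola"},
]

def prob(itemset, db):
    """P(itemset): fraction of transactions containing the itemset (= support)."""
    return sum(1 for t in db if itemset <= t) / len(db)

def confidence(antecedent, consequent, db):
    """Confidence of antecedent -> consequent, i.e. P(consequent | antecedent)."""
    return prob(antecedent | consequent, db) / prob(antecedent, db)

print(prob({"beer"}, transactions))                     # 0.6
print(confidence({"diapers"}, {"beer"}, transactions))  # 1.0
```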


On the evening of the third day, the conference banquet was held on a boat on the Danube River.

This year, several papers about pattern mining

I was pleased to see that there were many papers on pattern mining (e.g., itemsets, sequential patterns, association rules) this year, such as:

  1. Philippe Fournier-Viger, Jiaxuan Li, Jerry Chun-Wei Lin, Tin Truong-Chi: Discovering and Visualizing Efficient Patterns in Cost/Utility Sequences. 73-88
  2. Hoang-Son Pham, Gwendal Virlet, Dominique Lavenier, Alexandre Termier: Statistically Significant Discriminative Patterns Searching. 105-115
  3. Philippe Fournier-Viger, Chao Cheng, Zhi Cheng, Jerry Chun-Wei Lin, Nazha Selmaoui-Folcher: Finding Strongly Correlated Trends in Dynamic Attributed Graphs. 250-265
  4. T. Yashwanth Reddy, R. Uday Kiran, Masashi Toyoda, P. Krishna Reddy, Masaru Kitsuregawa: Discovering Partial Periodic High Utility Itemsets in Temporal Databases. 351-361
  5. Hieu Hanh Le, Tatsuhiro Yamada, Yuichi Honda, Masaaki Kayahara, Muneo Kushima, Kenji Araki, Haruo Yokota: Analyzing Sequence Pattern Variants in Sequential Pattern Mining and Its Application to Electronic Medical Record Systems. 393-408
  6. Joe Wing-Ho Lin, Raymond Chi-Wing Wong: Frequent Item Mining When Obtaining Support Is Costly. 37-56
  7. Parul Chaudhary, Anirban Mondal, Polepalli Krishna Reddy: An Efficient Premiumness and Utility-Based Itemset Placement Scheme for Retail Stores. DEXA (1) 2019: 287-303
  8. P. Revanth Rathan, P. Krishna Reddy, Anirban Mondal: Discovering Diverse Popular Paths Using Transactional Modeling and Pattern Mining. DEXA (1) 2019: 327-337
  9. Raj Bhatta, Christie Ezeife, Mahreen Nasir Butt: Mining Sequential Patterns of Historical Purchases for E-Commerce Recommendation

Next year

The DAWAK 2020 and DEXA 2020 conferences will be held in Bratislava, Slovakia, from September 14th to 17th, 2020.


That is all for this blog post! Overall, it was an interesting conference. It is neither too big nor too small, it is well established, and some excellent researchers attend it. The quality of the papers was good. I have attended DEXA and DAWAK a few times, and I look forward to the next one.

Philippe Fournier-Viger is a professor of Computer Science and also the founder of the open-source data mining software SPMF, offering more than 150 data mining algorithms.


Brief report about the HPCC 2019 conference

In this blog post, I will write a short report about the HPCC 2019 conference (21st IEEE International Conference on High Performance Computing and Communications). The HPCC 2019 conference was held in Zhangjiajie, China, from the 10th to the 12th of August. It was co-located with DSS 2019 and SmartCity 2019, and organized by Hunan University.


I did the on-site registration and received the conference bag, which contained the conference program, a notebook, a pen, and other information. However, I found that the conference bag did not contain the conference proceedings (neither printed nor on a USB drive). So I checked the website of HPCC, which clearly says that:
each registrant will receive a copy of the conference proceedings.

Then, I asked the registration desk why I did not receive a copy of the proceedings, since it is written on the website. But they did not want to give me one. I am not sure what the reason was; they did not explain, but just said that there were no proceedings. My guess is that it is because I paid the regular registration fee (about 550$) rather than the author registration fee. But still, the website said that ALL registrants would receive the proceedings. After talking with the registration desk, they only offered to copy it to my computer from their USB drive… which is not convenient, and it should not be that way. It should be provided in the bag or, in the worst case, it should be downloadable from the website.

One hour later, after talking with other participants, I found that some of them had received the proceedings on a USB drive… Thus, while attending the keynotes, I sent an e-mail to the organizers to ask why I did not receive the proceedings. After about one hour, they apologized and asked me to go back to the registration desk (for the third time) to receive the proceedings on a USB drive. They did not give me a clear explanation, but from listening to them talking in Chinese, it seems that they did not have enough copies of the proceedings, so some people did not receive one. But there might also have been some misunderstanding.

Keynote by Bart Selman on the future of AI

The speaker said that he is excited about recent developments in AI research and its increasing applications in the real world. He mentioned that machines are finally starting to “hear” and “see” after 50+ years of research on AI. Some recent changes are that big sets of labelled data are now used to make AI understand our conceptualization of the world, and that there is strong commercial interest in AI. The speaker said that by 2030, a 1000$ computer will be as powerful as the human brain in terms of computing power and storage (see picture below). I think that this is a bold claim, given that the brain has a very different architecture from a computer. I would be curious about how they came up with the numbers that the brain has billions of megabytes of capacity and billions of MIPS.

About the future of AI, he mentioned that the next phase is further integration of perception, planning, inference, and learning. Moreover, we also need deeper semantics of natural language, such as common sense knowledge and reasoning. Common sense is also needed to handle extreme or unforeseen cases (for example, to ensure the safety of self-driving cars). Moreover, the speaker mentioned that non-human intelligence may be developed. Overall, the talk was interesting.

Other keynotes

There were also several other keynotes by some good speakers, including Prof. Witold Pedrycz, editor of Information Sciences and other journals. There was also a keynote by Yunhao Liu about the internet of things, and a talk by Xindong Wu, among others. I will not describe all of the keynotes, since some of them are not closely related to my research (e.g., the keynote on sensor networks).

One keynote speaker had several videos but could not play them due to some technical problem. The talk was still very interesting, but it is a reminder that one should always do a test on the equipment before giving a talk especially when using videos.

Paper presentation

I came to the conference because I am co-author of the following paper (which was presented by the first author):

Win, K. N., Chen, J., Xiao, G., Chen, Y., Fournier-Viger, P. (2019) A Parallel Crime Activity Clustering Algorithm based on Apache Spark Cloud Computing Platform. Proc. of 21st IEEE International Conference on High Performance Computing and Communications (HPCC 2019), to appear.

This paper is about analyzing criminal activity data to discover interesting patterns (fuzzy clusters). The proposed algorithm is implemented on Apache Spark.
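The paper’s parallel Spark algorithm is not reproduced here, but the idea of a fuzzy cluster (each point has a degree of membership in every cluster, rather than a hard assignment) can be illustrated with a generic fuzzy c-means sketch on toy one-dimensional data:

```python
import random

# Generic fuzzy c-means sketch (NOT the paper's algorithm): each point
# gets a membership degree in each cluster, and the degrees sum to 1.
def fuzzy_c_means(points, c=2, m=2.0, iterations=50):
    random.seed(0)
    # initialize random memberships, one row per point, normalized to 1
    u = []
    for _ in points:
        row = [random.random() for _ in range(c)]
        s = sum(row)
        u.append([v / s for v in row])
    for _ in range(iterations):
        # update cluster centers as membership-weighted means
        centers = []
        for j in range(c):
            num = sum((u[i][j] ** m) * points[i] for i in range(len(points)))
            den = sum(u[i][j] ** m for i in range(len(points)))
            centers.append(num / den)
        # update memberships from the distances to the centers
        for i, x in enumerate(points):
            d = [abs(x - cj) or 1e-9 for cj in centers]
            for j in range(c):
                u[i][j] = 1.0 / sum((d[j] / d[k]) ** (2 / (m - 1)) for k in range(c))
    return centers, u

points = [1.0, 1.2, 0.8, 9.0, 9.3, 8.7]  # two obvious groups
centers, memberships = fuzzy_c_means(points)
print(sorted(round(cj, 1) for cj in centers))  # roughly [1.0, 9.0]
```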


This was a brief report about the HPCC 2019 conference. It is a medium-sized conference (I would guess about 400 persons, including the two co-located conferences), with many parallel sessions. The highlight of the conference for me was the keynotes, which were given by some good researchers. The conference proceedings are published by IEEE and included in the EI index, which is interesting. The location of the conference in Zhangjiajie, China was also great. There is a nice national park nearby.

Philippe Fournier-Viger is a full professor working in China and founder of the SPMF open source data mining software.


What are the milestones in the career of an academic researcher?

Today, I will talk about the different milestones that a researcher may reach during his career. I will start from the first stage, which is graduate studies, and go until the stage of being a permanent researcher working at a research institution, or a well-known researcher. I will give some advice about what is important at each stage of the career of a researcher.

Stage 1: Graduate student

The first stage is graduate studies. The goal of a master’s degree is to learn how to do research by joining a research team. At that stage, one should learn how to read research papers about state-of-the-art research, develop ideas to solve research problems, develop a solution, carry out experiments, and write papers.

During the master’s degree, the supervisor usually guides the student and helps him with some of the tasks (e.g., writing a paper). This is different from doing a PhD, where the student should do more tasks by himself. After completing a PhD, one should be an autonomous researcher. It means that someone who has completed a PhD should be able to find interesting research problems by himself (without help from others) and to perform all the other steps of a research project on his own.

Normally, a graduate student will initially need a lot of help to do research. But after completing a few projects and writing papers, one will become more and more efficient and autonomous. It is important to have that as a goal.

What should one focus on during graduate school?

  • learn to write research papers well (writing is a key skill for a researcher),
  • publish several papers, and at least some in good conferences and journals (to convince other people of your research ability and then land a researcher job),
  • learn to find research problems and develop original research solutions,
  • improve your presentation skills (not only to present papers at conferences but because researchers who will work as lecturers or professors will be expected to teach well),
  • try to obtain grants and prizes during studies,
  • try to build a network of contacts in academia and have collaborations with other students or researchers,
  • try to publish some papers that may obtain citations (because citation count is sometimes considered as a performance indicator),
  • try to have some teaching experience such as teaching an undergraduate course, or being a teaching assistant,
  • try to have good grades (although this is less important than having good research output),
  • learn other useful research related skills such as finding papers online, using LaTeX for writing papers (especially for science papers), managing time well,
  • learn to identify limitations and weaknesses in the research of others when reading a paper or attending a presentation,
  • try to always ask at least one question when attending a presentation,
  • try to be involved in reviewing papers and other important academic activities.

Stage 2: Postdoctoral researcher

Many people become a postdoctoral researcher after finishing the PhD. Such a position may last one or two years, sometimes more, usually with the goal of then obtaining a position as a professor or lecturer, or working in industry.

Why do a postdoc? It gives the opportunity to explore new research topics, often different from those of the PhD, to write more papers, and to further improve one’s research skills, and it gives some extra time to find a job. A postdoc is also generally done with a research team that is not the same as that of the PhD, sometimes even in another country. This allows one to learn other ways of doing research and to build contacts with other researchers.

What should one focus on during a postdoc?

  • Find a good team,
  • Write quality papers,
  • Be almost autonomous in finding research problems and doing research,
  • Try to participate in the research of other team members or researchers and perhaps even unofficially cosupervise students,
  • Try participating in funding applications,
  • Work on projects that will lead to papers in a relatively short time and have a relatively low chance of failure, as a postdoc is often short and one may need to show results to then apply for other jobs,
  • Don’t be a postdoc for too many years (ideally no more than two years) as more than that may be considered negative in some fields.

Stage 3: Faculty member / researcher

The next stage for an academic researcher is usually to become a faculty member or professional researcher, that is to work for a university or research center and perform research and perhaps also teach.

There are different ranks for faculty members in universities, which depend on the country. In North America and China, some typical ranks in a university are lecturer, assistant professor, associate professor, and professor (also called full professor). Sometimes there are also honorific ranks such as distinguished professor. Typically, the rank of lecturer involves only teaching (no research), while the lowest rank that involves doing both research and teaching is assistant professor.

The goal of a new faculty member should be to climb ranks by:

  • Creating a research program that spans several years with a long-term vision (unlike a graduate student, who typically does not think further ahead than one paper at a time),
  • Writing research proposals that obtain significant research funding,
  • Writing high quality papers that have a significant impact,
  • Being an excellent teacher,
  • Obtaining awards, getting involved in international committees,
  • Supervising graduate students successfully, and learning to manage a team,
  • Having international collaborations and industry collaborations,
  • Being involved in university affairs,
  • Having other activities such as publishing books, organizing workshops, conferences, and being a journal editor.

Several young faculty members have problems developing a long-term research plan, and/or still have difficulty finding good research problems. This leads to an inability to obtain research funding and publish good papers, and it is often caused by not learning to become autonomous during the PhD. It is thus important to develop these skills as early as possible in one’s career. If one is unable to make a research plan or obtain funding, he may not be promoted and may not even have his contract renewed. I have seen this several times.

Besides climbing the ranks, one may aim at becoming influential and well-known in his field. This requires pursuing the same goals, but putting in extra effort and working strategically.

For young faculty members, the most critical period is the first three to five years, when one needs to prove himself to become permanent or be promoted. This requires a huge amount of work, because a new faculty member not only needs to prepare new courses, but also to teach and do well in terms of research.


This post has given an overview of the main steps in the career of an academic researcher. Hope it was interesting. If you have comments and think that I have missed something important, please post a comment below. I will be happy to read it.

Philippe Fournier-Viger is a full professor working in China and founder of the SPMF open source data mining software.


Funny pictures about data mining / machine learning

Today, I will share a few funny pictures related to data mining and machine learning that I have found online. These pictures come from various sources (I don’t remember who created them). I may also add more to this page later on.

Associations between customer purchases?

Everybody is doing AI

Toy datasets vs real-life


Training a model

Overfitting (1)



Overfitting (2)


What people think I do?


If you also have some interesting pictures, you may share them in the comments below and I may add them to this page.


What happens after the PhD?

People work for several years to obtain a PhD, sometimes with the goal of becoming a researcher in academia or industry, or a lecturer. Some think that getting a PhD is enough to become a successful researcher. But obtaining a PhD is not enough to ensure that.

For example, when I was doing my PhD in Canada, I noticed that there was a huge difference between the best and the worst students who completed their PhD studies. Some students would finish a PhD without publishing a paper (only a thesis), while others had scholarships, dozens of papers and awards, and multiple collaborations with international researchers. All these students received the same Ph.D. diploma. But their CVs were not equal, and it made a big difference when it was time to apply for a job, and in how successfully they would establish a research career.

I also noticed that some students would finish their PhD in the minimum amount of time, while in one case a student finished in ten years due to a lack of motivation, a part-time job, not producing meaningful research, and perhaps a lack of support from his advisor. This latter student was then unable to pursue a research career despite having finally obtained his PhD. In fact, he should perhaps have chosen another career path earlier.

Another problem that some PhD students face is that they wait until perhaps just a month before graduating to look for a job. But finding a good research position after the PhD is not always easy and requires preparation.

So how to ensure a successful career after the PhD?

I will give some advice:

  • Try to find a mentor who has research experience to give you advice about how to succeed in your field and how to overcome the challenges that you face in establishing your career. This can greatly help, as you will avoid making some of the errors that other people have made.
  • Set a clear goal for your career as early as possible, then think about the milestones or subgoals that you need to attain to succeed.
  • Make a realistic plan of how to attain your goals as early as possible.
  • Build a network of contacts and collaborators in your field. This can help you to find opportunities and bring other benefits. Attend conferences, talk with other researchers online, in your university, etc.
  • Create a website, an online profile on research-oriented social networks like ResearchGate, and a LinkedIn profile. This can help to promote your research and keep in contact with other researchers. Share your papers online.
  • Publish your data, or software programs that you developed as open source. People who will use them will cite you.
  • Find an important research problem to work on and develop something innovative. Choose a project that is realistic (will not likely lead to failure), not too long (will not likely delay your PhD), and can lead to good publications.
  • Improve your writing skills. This is a key aspect for researchers in academia as writing papers and grant proposals is something researchers always do. A well-written paper or grant proposal that is convincing has always more chance to be accepted/funded.
  • Aim at publishing in good journals and conferences. Getting your papers accepted there will show that your research is recognized by your peers. Publishing in unknown conferences and journals, or not having publications, will not convince anyone of your research abilities when it is time to look for a job or apply for funding. Often, publishing good papers is more important than publishing many papers.
  • Improve your presentation skills. As a researcher, you will often need to present your research and deliver talks. A good presentation can make an enormous difference. Besides, when it is time to apply for a job in academia, the hiring committee will likely ask you to present your work and give a teaching demo to evaluate your teaching skills. A poor presenter may not be hired even if he is a good researcher. And an average researcher with poor presentation skills will likely not be hired. Here are some tips for improving your presentation skills.
  • Choose a good PhD supervisor, with a strong team. A good team will give you a good environment for your research and bring opportunities. Working with a famous researcher in your field may bring various benefits, including learning from successful researchers.
  • Don’t be afraid to go abroad or to other cities to find better opportunities. For researchers, having experience abroad generally looks good on a CV, and is even a hiring criterion in some universities. If no suitable jobs are available in your country, looking abroad may help you find one. For example, I did my PhD in Canada, did my postdoc in Taiwan, moved to another province in Canada, and then moved to China. This strategy of going abroad has paid off well, as it opened new opportunities that I would not have had if I had always stayed in the same city.


That is all for today, as I am writing this on the airplane and it will soon land. If you have comments, please share them in the comment section below. I will be happy to read them.

Philippe Fournier-Viger is a full professor working in China and founder of the SPMF open source data mining software.


A Tribute to Hypercard

In this blog post, I will talk about the first programming language that I learned, which is HyperTalk. Younger readers may never have heard about it, as it was mostly popular in the 1980s and 1990s. Though it is not a complex language, it was ahead of its time in many ways, and it has influenced many other important technologies, such as the Web that we use today. I will briefly introduce the main ideas behind HyperTalk and its authoring system, named HyperCard, and also talk a bit about my experience with it.

Hypercard software

HyperCard is a visual authoring tool for writing software that was developed for Apple computers. It was designed to be usable by novices, as the user interface of a program could be built visually by drawing, and by dragging and dropping buttons and fields. But one could also use the HyperTalk programming language to add more complex functions to the software. It became popular at the end of the 1980s, mainly due to its ease of use compared to other programming languages, and because at some point it was distributed for free with all Apple computers. The last release was in 1998.

A program written using HyperCard was called a stack, and contained several cards. You can think of a card as a page, where you could draw using painting tools and add elements such as buttons and text fields for entering data and interacting with the software. It was then possible to program buttons to perform actions such as going to another card, displaying messages, processing the data that the user entered in the text fields, and playing sounds.

The concept of a stack of cards with links between them was very innovative and can be seen as a precursor of the World Wide Web. Indeed, the authors of the Mosaic Web browser in the 1990s indicated that HyperCard inspired them. But the difference with the Web is that Web pages are on different computers, rather than inside a single program. HyperCard can also be seen as somewhat similar to PowerPoint, as cards can be viewed as slides, but HyperCard allowed more complex programming and was not designed for presentations.

Another innovative aspect of HyperCard was its programming language, which was designed to be close to the English language to make it very easy to read and learn. For example, the code in a button could look like this:

on mouseUp
  ask "What is your name?"
  -- the ask command places the user's answer in the variable "it"
  put it into field "output"
  go to next card
end mouseUp

This code is very simple and easy to understand, even by someone who has not learned the HyperTalk language. When the user clicks the button, it displays a dialog asking the user to enter a name, and the name is then put in a text field called “output”. Then the next card is displayed. Clicking on a button could also create new cards. For example, one could write a program to manage contacts, where each card stores the information of one contact.

An address book Hypercard stack
A battleship game stack

But one of the best things about HyperCard is that it promoted open-source software. In fact, HyperTalk is an interpreted programming language, and the HyperCard software initially acted as both an authoring tool for developing software and a player for running it. As a user, this concept was extremely interesting, as one could obtain a stack (a program) made by someone else, run the stack, and at any time look at the code inside the buttons, fields, and other objects to learn how it worked and modify it. There were of course some ways to hide the code, such as calling binary code external to the stack (e.g., XCMDs) or setting up a password, but by default, the code of a stack was open to anyone.

Because HyperTalk was an interpreted language, it was not designed to run very fast, but it made it easy to build software with a graphical user interface, and that was in the 1980s. Building a user interface with other programming languages was far from easy for novices. When I was 12 years old, I learned programming using HyperCard on a black-and-white Mac computer with an 80 MB hard drive and 2 MB of RAM. That year, I was in high school and took a week-long summer camp at a college to learn HyperTalk, and then bought a book to learn more. I then programmed a few interesting programs:

  • House of horror 1 and 2. This was a video game where you had to enter a haunted house and click on the right doors to find the exit. Choosing the wrong door would show a monster and the player would lose. Creating this type of visual game with HyperCard was not that hard, as one could draw on the cards. In the second version of the game, I made it more complicated by implementing a life bar, such that one would not die right away after an attack by a monster. That software was then installed on some computers in a local school for kids to play.
  • A fighting game. I also programmed a simple fighting game using the keyboard. There were a few keys to punch, kick or block, with a life bar for the player and the opponent, which was controlled by the computer. The opponents could not move forward or backward, but just kick, punch and block. There were three fighters, and it was inspired by the Street Fighter II game, popular in 1992.
  • Encryption software. I also developed a simple software for encrypting/decrypting messages using a password.
  • A software program for playing mazes. The software allowed the user to load or save a maze, which was then drawn on the screen. The user had to drag the mouse inside the maze to reach the exit, while avoiding touching the walls.

Unfortunately, I don’t have a copy of these software programs anymore. They were on 3.5 inch floppy disks, and such disks were not reliable. But anyway, it was just a fun experience and it does not really matter.

For those who want to play with HyperCard, it is still possible to use it inside an emulator of a Macintosh computer running the System 7 operating system: https://archive.org/details/AppleMacintoshSystem753

Another interesting thing about HyperCard is that it was basically designed by a single man: Bill Atkinson. This man is a legendary software developer. On early Macintosh computers, I would open the MacPaint drawing software and see his name as the lead developer, and then open HyperCard and also see his name as the lead developer. He wrote a large part of these software programs by himself. Moreover, he designed core parts of the operating system of Apple computers, such as QuickDraw for drawing graphics on screen, the event manager and the menu system. Bill Atkinson was a very smart man. He had actually almost completed a PhD in neuroscience before being called by Steve Jobs to join Apple and write these software programs. For those interested, there are some videos of interviews with him available online.


After learning HyperTalk, I learned many other programming languages, including Cobol, C, C++, Java, assembly language for SPARC processors, and Lisp, among others.

That is all for today. I wanted to share something a bit different on this blog this time. What was your first programming language? Have you used HyperCard? If you want to share your experience, please post in the comment section below!

Philippe Fournier-Viger is a full professor working in China and founder of the SPMF open source data mining software.

Posted in General, Programming | Tagged , , | Leave a comment

A brief report about the IEA AIE 2019 conference

I have just arrived in Austria to attend the IEA AIE 2019 conference (32nd Intern. Conf. on Industrial, Engineering and Other Applications of Applied Intelligent Systems), which is held in Graz from the 9th to 11th July. In this blog post, I will give a report about the conference.

About the IEA AIE conference

It is a conference on artificial intelligence and its applications that has been held for more than 30 years. The proceedings of IEA AIE 2019 are published by Springer in the Lecture Notes in Artificial Intelligence series, which ensures good visibility for the papers.

I have attended this conference several times. You can read my reports about IEA AIE 2018 (Canada) and IEA AIE 2016 (Japan). And I also had papers at IEA AIE 2009, IEA AIE 2010, IEA AIE 2011 and IEA AIE 2014.

This year, 151 papers were submitted. Of these, 41 were selected as full papers and 32 as short papers.


The IEA AIE 2019 conference was held in the city of Graz in Austria, and more precisely at the Graz University of Technology.


The Graz University of Technology:

Opening ceremony

The organizers first introduced the program of this year’s conference. Below is a picture of the general chair, Prof. M. Ali, giving a few words, followed by a slide with some statistics.

Keynote by Reiner John, titled “The 2nd wave of AI – Thesis for success of AI in trustworthy, safety critical mobility systems”

The talk was about highly automated driving (HAD). It covered the challenges of HAD, an architecture for HAD, the opportunities for AI in components and subsystems, and how AI can participate in the system at the application level.

One of the challenges is how to drive in extreme weather conditions. Humans often rely on experience, precaution, adaptation, training and foreseen scenarios to handle difficult situations.

A car is a very complex system, and AI can be used to control that complexity. Safety is also very important, as well as predictive maintenance. AI can be used to enhance safety, efficiency and functionality. Here is a picture of some requirements for automated cars:

Another important aspect is connectivity between cars to collaboratively manage traffic. There were many more details, but here I just report some of the main ideas.

Welcome reception

On the first evening, there was a nice welcome reception on the top of a building that belongs to the university. A dinner was served. Here are a few pictures:

Paper presentation

I am also excited to present a paper at this conference proposing a new model to discover stable periodic patterns in a sequence of transactions (a transaction database). This paper, written with my student, received a best paper award. The solution, based on a cumulative sum, is quite innovative and could be extended to other pattern mining problems. I will also release the source code soon in my SPMF software. You can read the paper here:

Fournier-Viger, P., Yang, P., Lin, J. C.-W., Kiran, U. (2019). Discovering Stable Periodic-Frequent Patterns in Transactional Data. Proc. 32nd Intern. Conf. on Industrial, Engineering and Other Applications of Applied Intelligent Systems (IEA AIE 2019), Springer LNAI, 14 pages (to appear)


On the evening of the second day, there was a banquet on the top of a hill with a good view of the city. The awards were announced.

Keynote by Dietmar Jannach

Prof. Jannach gave a talk about recommender systems. Recommender systems have numerous applications in our daily lives. They help to filter information and find relevant information. Research in that field started as far back as the 1970s with “Selective Dissemination of Information”, followed by “collaborative filtering” and “content-based” approaches in the 1990s.

A common abstraction of the recommendation problem is to see it as a matrix completion task, where the goal is to learn a function that fills in the missing entries of a user-item rating matrix, and which can be assessed using measures such as accuracy.
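To make the matrix completion view concrete, here is a minimal sketch of one classic approach, matrix factorization trained by stochastic gradient descent. The rating matrix and all parameters are hypothetical toy values, not from the talk:

```python
import numpy as np

# Toy user-item rating matrix; 0 marks a missing (unobserved) rating.
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)

n_users, n_items = R.shape
k = 2  # number of latent factors
rng = np.random.default_rng(0)
P = rng.normal(scale=0.1, size=(n_users, k))  # user factor matrix
Q = rng.normal(scale=0.1, size=(n_items, k))  # item factor matrix

lr, reg = 0.01, 0.02  # learning rate and regularization strength
for _ in range(5000):
    for u, i in zip(*R.nonzero()):  # iterate over observed ratings only
        err = R[u, i] - P[u] @ Q[i]
        P[u] += lr * (err * Q[i] - reg * P[u])
        Q[i] += lr * (err * P[u] - reg * Q[i])

# Predict a missing entry, e.g. the rating of user 0 for item 2:
print(round(float(P[0] @ Q[2]), 2))
```

After training, the product of the user and item factors approximates the observed ratings, and the same product provides predictions for the missing cells.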

The above problem has been well studied. The topic of this talk is session-based recommendation, where instead of a rating matrix, we have a sequentially ordered log of user interactions (item views, purchases, etc.). In many cases, we don’t have a user id or long-term preference information. We also don’t know the user intent, but want to predict the next user action(s) given the last actions (in the current session) and other types of information (community behavior, etc.).

How can these problems be solved? Some methods are to use association rules, Markov chains, sequential rules, sequential patterns, neural networks, session-based nearest neighbors, etc.
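As an illustration of one of the simplest baselines mentioned above, here is a sketch of a first-order Markov chain that predicts the next item from the last item of a session. The session log is hypothetical toy data:

```python
from collections import Counter, defaultdict

# Toy log of sessions, each a sequence of item identifiers.
sessions = [["a", "b", "c"],
            ["a", "b", "d"],
            ["b", "c", "d"],
            ["a", "b", "c"]]

# Count item-to-item transitions observed in the sessions.
transitions = defaultdict(Counter)
for session in sessions:
    for prev, nxt in zip(session, session[1:]):
        transitions[prev][nxt] += 1

def predict_next(item):
    """Return the most frequent successor of `item`, or None if unseen."""
    if item not in transitions:
        return None
    return transitions[item].most_common(1)[0][0]

print(predict_next("b"))  # "c", since "c" follows "b" 3 times vs "d" once
```

More elaborate methods such as sequential patterns or neural networks condition on more than the single last item, but this captures the basic idea of next-item prediction from a session log.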

A problem in evaluating session-based recommender systems is that there are no standard benchmark protocols and datasets.

The speaker also mentioned that neural networks often do not perform much better than simple approaches.

There was then more details, but I will not report all in this blog post.

Next year: IEA AIE 2020

It was announced that IEA AIE 2020 will be held in Kitakyushu, Japan, from the 21st to the 24th of July. The website of IEA AIE 2020 is already online. I am one of the program chairs of IEA AIE 2020, and I am looking forward to it.


The conference was good overall. The organization was well done, and the location was interesting. I had a chance to meet several researchers that I knew beforehand, as well as some new and interesting ones. Looking forward to next year!

Posted in artificial intelligence, Conference | Tagged , , , , | Leave a comment

Correlation does not imply causation

There is a well-known principle in statistics that correlation does not imply causation. It means that even if we observe that two variables behave in the same way, we should not conclude that the behavior of one of those variables is the cause of (or is related to) the other.

In statistics and data mining, we can calculate the correlation between two variables or time series to see if they are correlated. The range of values for the correlation is usually [-1,1], where -1 indicates a negative correlation (two variables that behave in opposite ways), 0 indicates no correlation, and 1 indicates a positive correlation. Two variables that have a high correlation may be related. But if two variables have a high correlation and are not related, this is called a spurious correlation.
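As a small illustration of how easily a high correlation can arise between unrelated series, here is a sketch that computes the Pearson correlation of two short yearly series. The numbers are hypothetical toy values; they merely share an upward trend:

```python
import numpy as np

# Two unrelated yearly series (toy numbers) that both happen to trend upward.
x = np.array([10.0, 12.0, 13.5, 15.0, 17.0])   # e.g. some variable A per year
y = np.array([200.0, 230.0, 255.0, 290.0, 320.0])  # e.g. an unrelated variable B

# Pearson correlation coefficient, in [-1, 1].
r = np.corrcoef(x, y)[0, 1]
print(round(r, 3))  # close to 1, despite the series being unrelated
```

Any two series that merely trend in the same direction over a short window will show a correlation near 1, which is exactly how the spurious correlations below arise.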

To be convinced of the principle that correlation does not imply causation, I will share a few examples from a very good website on this topic ( http://tylervigen.com/ ), that lists thousands of spurious correlations.

Correlation of 0.78
Correlation of 0.66
Correlation of 0.99

Obviously, these correlations are totally spurious, although the variables show very similar behavior. This shows the need to always look further than just using a correlation measure.

Those are just a few examples of spurious correlations. If you visit the website, you can also browse various variables to find other spurious correlations.


In this short blog post, I have shown a few examples of spurious correlations, which I think are quite interesting. If you have comments, please share them in the comments section below.

Philippe Fournier-Viger is a full professor working in China and founder of the SPMF open source data mining software.

Posted in Big data, Data Mining, Data science | Tagged , , , , , | Leave a comment

Too many machine learning papers?

A few days ago, I read a post on LinkedIn showing that the number of Machine Learning (ML) papers has been increasing very quickly over the last few years, reaching about 100 ML papers per day (on arXiv, a popular public repository of research papers).

Chart obtained from LinkedIn, which appears accurate (if someone knows the original source, I will update this post to cite it)

That is about 33,000 papers per year. This shows the excitement about the new advances in that field, in particular with respect to deep learning, which has led to good results for various applications. Some people on LinkedIn wondered if there are too many ML papers and how they could keep up with advances in that field.

I will make a few comments about this.

  • First, in general in computer science, the papers that present a major innovation or breakthrough are few. There are always many papers that make incremental advances by simply reusing ideas with some small modifications, or that just focus on applications rather than on fundamental problems. In fact, generally, few papers are highly cited while many papers receive few citations. Thus, although there may be a great increase in ML papers, one can ignore a huge number of low-quality papers. It is thus important to learn some strategies to detect low-quality papers, such as looking at the reputation of the conferences and journals where papers are published, and other criteria such as paper citation count.
  • Second, the large increase of ML papers results in a huge demand for reviewing ML papers, but there are perhaps not enough experts to review them all. I can share a story related to that. Recently, I was invited to join the program committee of a good neural network conference. Honestly, I was surprised, because I have never published there and I have never made any significant contributions in that field. I used neural networks as a tool with other techniques in an applied paper about 4 years ago, but that is all, and it should not count. Thus, I tend to think that there are not enough expert reviewers and that they invited many people like me because I work on data mining, which is related. I also noticed an increase in the number of invitations to review ML papers for journals in my mailbox. But honestly, I rarely accept these invitations, because they are not much related to my research. If there are not enough reviewers, though, this may just be a temporary problem.
  • Third, due to the increasing number of papers, some conferences on related or overlapping topics, such as databases or data mining, have started to receive many ML papers. There is generally no problem with that. But in some cases, these papers are inadequate for the topic of the conference. For example, this year, a database conference that I will not name clearly told reviewers that if a paper is on ML and they do not understand the content, or it does not seem interesting to the target audience, they should not recommend it for acceptance. As always, it is important to choose a relevant conference when submitting a conference paper (for papers on any topic).
  • Fourth, ML currently has a lot of hype because of some excellent results obtained for applications such as computer vision and translation. Should there be so many researchers working in that area? I do not have the answer, but it is a question that is worth asking. For example, I know that at some universities, more than 50% of graduate students are now working on deep learning. But it remains that deep learning cannot solve all the problems of computer science, and many other research areas still have complex challenges to address. Also, there are always trends in research that come and go every few years. For example, a technique like SVM was quite popular 10 years ago but is now less popular than deep learning. Neural networks have also had cycles of popularity over the last forty years. As an individual, it can be good to somewhat follow the trends to take advantage of opportunities, or at least be aware of them.


In this short blog post, I have just shared a few comments and observations related to the ML trend. If you have other comments, please share them in the comments section below. I will be happy to read them.

Philippe Fournier-Viger is a full professor working in China and founder of the SPMF open source data mining software.

Posted in Academia, artificial intelligence | Tagged , , | Leave a comment