The importance of explainable data science and machine learning models

In this blog post, I will talk about an important concept that is often overlooked in data mining and machine learning: explainability.

To discuss this topic, it is necessary to first remember what the goals of data mining and machine learning are. The goal of data mining is to extract models, knowledge, or patterns from data that can help to understand the data and make predictions. There are various types of data mining techniques, such as clustering, pattern mining, classification, and outlier detection. The goal of machine learning is more general: it is to build software that can automatically learn to perform some tasks. For example, a program can be trained to recognize handwritten characters, play chess, or explore a virtual world. Generally, data mining can be viewed as a field of research that overlaps with machine learning and statistics.

Machine learning and data mining techniques can be unsupervised (they do not require labelled data to learn models or extract patterns from data) or supervised (labelled data is needed).

In general, the outcome of data mining or machine learning can be evaluated to determine whether something useful is obtained by applying these techniques. For example, a handwritten character recognition model may be evaluated in terms of its accuracy (the number of characters correctly identified divided by the total number of characters to be recognized) or using other measures. By using evaluation measures, a model can be fine-tuned, or several models can be compared to choose the best one.
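To make this concrete, here is a minimal sketch of how accuracy can be computed (the labels below are made up for illustration; they are not from any real experiment):

# Minimal sketch: computing the accuracy of a character recognition model.
# The true and predicted labels below are toy values for illustration.
true_labels = ["a", "b", "c", "a", "d", "b"]
predicted_labels = ["a", "b", "c", "c", "d", "d"]

correct = sum(t == p for t, p in zip(true_labels, predicted_labels))
accuracy = correct / len(true_labels)
print(f"Accuracy: {accuracy:.2f}")  # 4 correct predictions out of 6 -> 0.67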

In data mining and machine learning, several techniques work as black boxes. A black-box model is a software module that takes an input and produces an output but does not let the user understand the process that was applied to obtain that output.

Neural networks are an example of black-box models. Several neural networks provide very high accuracy for tasks such as face recognition but do not let the user easily understand how the model makes its predictions. This is not true of all models, but as neural networks become more complex, it becomes more and more difficult to understand them. The opposite is glass-box models, which let the user understand the process used to generate an output. Decision trees are an example of glass-box models: if a decision tree is not too big, it can be easy to understand how it makes its predictions. Although such models may yield lower accuracy than some black-box models, glass-box models are easily understood by humans. In data mining, another example of explainable models is the patterns extracted by pattern mining algorithms.
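To illustrate why decision trees are considered glass-box models, here is a small sketch using the scikit-learn library (my choice for illustration; the discussion above is not tied to any particular library). A trained tree can be printed as human-readable rules:

# Sketch: training a small decision tree and printing it as readable rules.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(data.data, data.target)

# Unlike a typical neural network, the learned model can be inspected directly:
print(export_text(tree, feature_names=list(data.feature_names)))

The printed rules (thresholds on the input features) let a human follow exactly how each prediction is made.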

A glass-box model is thus said to be explainable. Explainability means that a model or knowledge extracted by data mining or machine learning can be understood by humans. In many real-world applications, explainability is important. For example, a marketing expert may want to apply data mining techniques to customer data to understand the behavior of customers, and then use the learned knowledge to make marketing decisions or design a new product. Another example is when data mining techniques are used in a criminal case. If a model predicts that someone is the author of an anonymous text containing threats, then it may be required to explain how this prediction was made before the model can be used as evidence in court.

On the other hand, there are also several applications where explainability is not important. For example, a software program that does face recognition can be very useful even though how it works may not be easily understandable.

Nowadays, many data mining and machine learning models are not explainable. There is thus an important research opportunity to build explainable models. If we build explainable models, a user can participate in the decision process of machines and learn from the obtained models. On the other hand, if a model is not explainable, the user may be left out of the decision process. This raises the question of whether machines should be trusted to make decisions without human intervention.

Conclusion

In this blog post, I have described the concept of explainability. What is your opinion about it? You can share it in the comment section below.

Philippe Fournier-Viger is a full professor working in China and founder of the SPMF open source data mining software.

Five recent books on pattern mining

In this blog post, I will list a few interesting and recent books on the topic of pattern mining (discovering interesting patterns in data). The list mainly covers books from the last five years.

High Utility Pattern Mining: Theory, Applications and Algorithms (2019). This is the most recent book, edited by me. It is about what is probably the hottest topic in pattern mining right now, which is high utility pattern mining. The book contains 12 chapters written by experts from this field about discovering different kinds of high utility patterns in data. It gives a good introduction to the field, as it contains five survey papers, and also describes some of the latest research. Link: https://link.springer.com/book/10.1007/978-3-030-04921-8

Supervised Descriptive Pattern Mining (2018). A book that focuses on techniques for mining descriptive patterns such as emerging patterns, contrast patterns, class association rules, and subgroup discovery, which are other important techniques in pattern mining. https://link.springer.com/book/10.1007/978-3-319-98140-6

Pattern Mining with Evolutionary Algorithms (2016). A book that focuses on the use of evolutionary algorithms to discover interesting patterns in data. This is another emerging topic in the field of pattern mining. https://link.springer.com/book/10.1007/978-3-319-33858-3

Frequent pattern mining (2014). This book does not cover the latest research as it is already almost five years old. But it gives an interesting overview of some popular techniques in frequent pattern mining. http://link.springer.com/book/10.1007%2F978-3-319-07821-2

Spatiotemporal Frequent Pattern Mining from Evolving Region Trajectories (2018). This is a recent book, which focuses on spatio-temporal pattern mining. Adding the time and spatial dimensions to pattern mining is another interesting research issue. https://link.springer.com/book/10.1007/978-3-319-99873-2#about

That is all I wanted to write for today. If you know about some other good books related to pattern mining that have been published in recent years, please let me know and I will add them to this list. Also, I am looking forward to editing another book related to pattern mining soon… What would be a good topic? If you have some suggestions, please let me know in the comment section below!


Philippe Fournier-Viger is a professor of Computer Science and also the founder of the open-source data mining software SPMF, offering more than 150 data mining algorithms.

The High Utility Pattern Mining book is out!

This is to let you know that the new book on high utility pattern mining is out. It is a 337-page book containing 12 chapters about various topics related to discovering patterns of high utility in databases. It contains several surveys, which are good for those new to the field, and also some chapters on more advanced topics. It is a good introduction and reference book!
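For readers who are new to the topic, here is a minimal sketch of the core idea, with toy numbers made up for illustration: the utility of an itemset is the total profit it generates in the transactions where all its items appear together, and high utility pattern mining searches for itemsets whose utility is at least a minimum threshold.

# Toy illustration of the "utility" of an itemset (all numbers are made up).
# Each transaction maps an item to the purchased quantity;
# unit_profit gives the profit per unit of each item.
transactions = [
    {"apple": 2, "bread": 1},
    {"apple": 1, "bread": 3, "milk": 2},
    {"bread": 2, "milk": 1},
]
unit_profit = {"apple": 5, "bread": 1, "milk": 3}

def utility(itemset, database):
    # Sum the profit of the itemset over all transactions containing it.
    total = 0
    for t in database:
        if all(item in t for item in itemset):
            total += sum(unit_profit[i] * t[i] for i in itemset)
    return total

# {apple, bread} appears in the first two transactions:
# (2*5 + 1*1) + (1*5 + 3*1) = 11 + 8 = 19
print(utility({"apple", "bread"}, transactions))  # 19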

This is the link for the book on the Springer website:
https://link.springer.com/book/10.1007/978-3-030-04921-8#toc

The book is available as a PDF and also as a hard copy. I received the hard copy yesterday.


Philippe Fournier-Viger is a professor of Computer Science and also the founder of the open-source data mining software SPMF, offering more than 150 data mining algorithms.

Plagiarism by Bhaskar Biswas, Shashank Sheshar Singh, K. Singh, et al. in IEEE Transactions on Knowledge and Data Engineering (TKDE)

Recently, I found that Kuldeep Singh, Shashank Sheshar Singh, Ajay Kumar, Harish Kumar Shakya, and Bhaskar Biswas from the Indian Institute of Technology (BHU) (India) have plagiarized my papers in a paper published in the IEEE TKDE (Transactions on Knowledge and Data Engineering) journal. I will explain this case of plagiarism below.

*** Important notice: Note that “Kuldeep Singh” is a very common name. This article refers to K. Singh working at BHU University in Varanasi, India. This is not about other people with the same name working in Europe or other locations ***

But first, let me explain what plagiarism is. There are two types. First, some people copy text word for word from another paper without using quotation marks and a citation. Journal editors can easily detect this using automatic tools like CrossCheck. Second, some people are more careful: they copy the ideas of another paper without citations and rewrite the text to avoid being detected. They then take the credit for the ideas developed by another researcher. Most of the time, reviewers of top journals will detect this, but sometimes it goes undetected. This is what happened in the TKDE paper that I will talk about today. That paper is:

Kuldeep Singh, Shashank Sheshar Singh, Ajay Kumar, Harish Kumar Shakya and Bhaskar Biswas (2018). CHN: an efficient algorithm for mining closed high utility itemsets with negative utility. IEEE Transactions on Knowledge and Data Engineering. http://doi.ieeecomputersociety.org/10.1109/TKDE.2018.2882421

https://www.computer.org/csdl/trans/tk/preprint/08540872-abs.html (downloaded from the IEEE website)

What is wrong with that paper?

That paper proposes a new algorithm called CHN for discovering closed high utility itemsets with negative utility values. In that paper, the authors extended the EFIM-Closed algorithm that I had proposed at the MLDM 2016 conference, but they did not mention it in the paper. Basically, they copied several techniques from my EFIM and EFIM-Closed papers without mentioning that they were reusing these ideas. They even renamed some of these techniques (e.g., the “utility-bin”) with a different name (e.g., “utility array”) and rewrote the text. Thus, it appears as if Kuldeep Singh et al. proposed several of the techniques of EFIM-Closed, which is unacceptable. Some of the techniques were adapted in the paper to the different problem, and there is a citation for some upper bounds, but some techniques are exactly the same and not cited.

What has been plagiarized?

I will list the content that has been plagiarized in the paper and provide screenshots of a side-by-side comparison of the papers.

1) On page 4 of the paper of Kuldeep Singh et al., they copy several definitions, such as Property 3.1 and Property 3.2, from our FHN paper in KBS 2016.

2) In Section 4.1, they present two techniques: (1) transaction merging and (2) database projection. But those are the same as in the EFIM-Closed paper; the authors merely rewrote the text. They mentioned that they could reuse a sorting technique from EFIM-Closed but failed to explain that basically all the ideas in this section are copied unchanged from our paper!

3) In Section 4.2, they claim to use a new technique called the “utility-array” to calculate the utility, support, and upper bounds of patterns. But basically, they just renamed the “utility-bin” technique of EFIM-Closed to “utility array” and rewrote the text. They copied the idea without citation and then used it to calculate utility and support in the same way, as well as some other upper bounds.

4) In Section 4.4, the techniques for finding closed patterns are all copied from the EFIM-Closed papers without modification. EFIM-Closed proposed to use backward/forward extension checking in utility mining, drawing inspiration from sequential pattern mining. Kuldeep Singh et al. rewrote the text, claimed that they were the first to do this, and simply cited the sequential pattern mining paper that we cited in our paper.

5) In Section 4.5, they present their CHN algorithm, which incorporates the copied techniques along with some other modifications. The pseudocode is very similar to EFIM-Closed, since they extend that algorithm, but they never explain that EFIM-Closed is the basis of their algorithm.

6) Doesn't the following figure look quite familiar?

7) Besides, it is interesting that in Section 4.2 the authors claim to have proposed a new RTWU upper bound, while in Section 3 they had already acknowledged that it came from another paper! It is actually from our FHN paper.


So is there any new contribution in that TKDE paper?

To answer that question, I decided to search a little more, and I found that the authors had proposed an algorithm for high utility mining with negative utility, called EHIN, in the Expert Systems journal, also in 2018:

Singh, K., Shakya, H. K., Singh, A., & Biswas, B. (2018). Mining of high-utility itemsets with negative utility. Expert Systems, e12296. doi:10.1111/exsy.12296

So what is the difference between the two papers of Kuldeep Singh, Bhaskar Biswas et al.? The only difference is the technique for checking that an itemset is closed using forward and backward extensions. But as I have shown above (point 4), this technique is copied from our EFIM-Closed paper without citation. Thus, there is basically nothing new in the TKDE paper.

Now, another question is whether Kuldeep Singh, Bhaskar Biswas et al. cite their Expert Systems paper correctly. They put a citation (see below), but they do not explain that the TKDE paper is almost the same as their Expert Systems paper.


Who are the authors?

Kuldeep Singh, Shashank Sheshar Singh, Ajay Kumar, Harish Kumar Shakya, and Bhaskar Biswas work in the Department of Computer Science and Engineering of the Indian Institute of Technology (BHU), Varanasi, India.


Another paper by Bhaskar Biswas retracted for plagiarism

A reader of this blog pointed out that another paper by Bhaskar Biswas was retracted (in 2011) while he was also affiliated with the Indian Institute of Technology (BHU):
https://link.springer.com/chapter/10.1007/978-3-642-22543-7_30

Here, Bhaskar Biswas is the first author, while in the TKDE paper he seems to be the supervisor of some PhD students.

What will happen next?

As usual, when I find a case of plagiarism, I report it to the journal. I have thus sent an e-mail to the editor of TKDE to report this case of plagiarism, and filed a formal complaint with IEEE to ask that they retract the paper as soon as possible.

I also sent an e-mail to the dean of the Indian Institute of Technology (BHU) and the dean of the school of computer science and engineering to let them know about what happened.

Update 2019-01-20

The dean of the computer science and engineering school of IIT (BHU) has confirmed receiving my complaint and told me that they will investigate. I am waiting for them to tell me which actions they will take.
The editor-in-chief of TKDE has also informed me that action will be taken quickly. Thus, I expect that the paper will be retracted soon.

Update 2019-01-23

The first author has contacted me to say that the version on the TKDE website is not the final version. But normally, it is the accepted version of the paper, which the reviewers have read…

But anyway, all I want is for the problem to be fixed in a satisfactory way, as I have already spent a lot of time dealing with this. If the paper is retracted or fixed on the TKDE website to cite us properly and give credit where credit is due, I will be happy and will also delete this page from the blog. I hope that this issue can be fixed quickly.

Conclusion

What is the lesson to be learned? In general, there is no problem with a researcher extending the algorithm of another researcher. This is what Kuldeep Singh, Bhaskar Biswas et al. did in that TKDE paper: they extended EFIM-Closed with a few ideas to support negative utility values. That would have been fine if it had been explained. However, the authors instead chose to copy several techniques without citing them and without mentioning that EFIM-Closed was extended.

Hope you have learned something from this blog post. That is all for today.

==
Philippe Fournier-Viger is a professor, data mining researcher and the founder of the SPMF data mining software, which includes more than 100 algorithms for pattern mining.

Analyzing the source code of SPMF (5 years later)

Five years ago, I analyzed the source code of the SPMF data mining software using an open-source tool called CodeAnalyzer (http://sourceforge.net/projects/codeanalyze-gpl/). This provided some interesting insights about the structure of the project, especially in terms of lines of code and the code-to-comment ratio. In 2013, for SPMF 0.93, the results were as follows:

Metric                              Value
-------------------------------    ------
Total Files                           280
Total Lines                         53165
Avg Line Length                        32
Code Lines                          25455
Comment Lines                       23208
Whitespace Lines                     5803
Code/(Comment+Whitespace) Ratio      0.88
Code/Comment Ratio                   1.10
Code/Whitespace Ratio                4.39
Code/Total Lines Ratio               0.48
Code Lines Per File                    90
Comment Lines Per File                 82
Whitespace Lines Per File              20

Today, in 2018, I decided to analyze the code of SPMF again to get an overview of how it has evolved over the last few years. Here are the results for the current version of SPMF (2.35):

Metric                              Value
-------------------------------    ------
Total Files                          1385
Total Lines                        238938
Avg Line Length                        32
Code Lines                         118117
Comment Lines                       91241
Whitespace Lines                    32797
Code/(Comment+Whitespace) Ratio      0.95
Code/Comment Ratio                   1.29
Code/Whitespace Ratio                3.60
Code/Total Lines Ratio               0.49
Code Lines Per File                    85
Comment Lines Per File                 65
Whitespace Lines Per File              23

Many numbers remain more or less the same. But it is quite amazing to see that the number of lines of code has increased from 25,455 to 118,117. The project is thus about four times larger now. This is in part due to contributions from many people in recent years, while at the beginning the software was mainly developed by me. The total number of lines may still not seem very big for a software project. However, most of the code is quite optimized and implements complex algorithms. Thus, many of these lines of code took quite a lot of time to write.

The number of comment lines has also increased, from 23,208 to 91,241. But the ratio of code to comment lines has slightly increased, so perhaps some more comments should be added.
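For the curious, here is a rough sketch of how such line counts can be obtained for Java sources. This is a simplified approximation that I wrote for illustration, not the actual logic of CodeAnalyzer, and the “src” path is an assumption:

# Rough sketch: counting code, comment, and whitespace lines in Java files.
# Simplified approximation (not CodeAnalyzer's logic): a line inside a
# /* ... */ block or starting with // is counted as a comment line.
import os

def count_lines(root):
    code = comment = blank = 0
    in_block = False
    for dirpath, _, files in os.walk(root):
        for name in files:
            if not name.endswith(".java"):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="ignore") as f:
                for line in f:
                    stripped = line.strip()
                    if in_block:
                        comment += 1
                        if "*/" in stripped:
                            in_block = False
                    elif not stripped:
                        blank += 1
                    elif stripped.startswith("//"):
                        comment += 1
                    elif stripped.startswith("/*"):
                        comment += 1
                        in_block = "*/" not in stripped
                    else:
                        code += 1
    return code, comment, blank

code, comment, blank = count_lines("src")  # assumed path to the SPMF sources
print(f"code={code} comment={comment} code/comment={code / max(comment, 1):.2f}")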

What is next for SPMF? Currently, I am preparing to release a new version of SPMF, which will include about 10 new algorithms. It should be released in one or two weeks, as I need to finish other things first.

That is all for today! If you have comments or questions, please post them in the comment section below.


Philippe Fournier-Viger is a full professor working in China and founder of the SPMF open source data mining software.

How to improve the quality of your research papers?

In this blog post, I talk about how to improve the quality of your research papers. This is an important topic, as most researchers aim to publish papers in top-level conferences and journals for various reasons, such as graduating, obtaining a promotion, or securing funding.

  1. Write fewer papers. Focus on quality instead of quantity. Take more time for all steps of the research process: collecting data, developing a solution, doing experiments, and writing the paper.
  2. Work on a hot topic or a new research problem that can have an impact. To publish in top conferences and journals, it helps to work on a popular or recent research problem. Your literature review should be up to date with recent and relevant references. If all your references are more than five years old, the reviewers may think that the problem is old and unimportant. Choosing a good research topic also means working on something that is useful and can have an impact. Thus, take the time to choose a good research problem before starting your work.
  3. Improve your writing skills. For top conferences and journals, papers must be well written. Often, this can make the difference between a paper being accepted and rejected. Hence, spend more time polishing your paper. Read it several times to make sure that there are no obvious errors. You may also ask someone else to proofread it, and you may want to spend more time reading and practicing your English.
  4. Apply your research to real data or collaborate with industry. In some fields like computer science, it is possible to publish a paper that is not applied to real applications. But if you put extra effort into showing a real application and obtain data from industry, it may make your paper more convincing.
  5. Collaborate with excellent researchers. Try to work with researchers who frequently publish in top conferences and journals. They will often find flaws in your project and paper that could be avoided, and give you feedback to improve your research. Moreover, they may help improve your writing style. Thus, choose a good research team, establish relationships with good researchers, and invite them to collaborate.
  6. Submit to the top conferences and journals. Many people do not submit to top conferences and journals because they are afraid that their papers will be rejected. However, even if a paper is rejected, you will usually still get valuable feedback from experts that can help to improve your research, and if you are lucky, your paper may be accepted. A good strategy is to first submit to top journals and conferences and then, if that does not work, to submit to lower-level venues.
  7. Read and analyze the structure of top papers in your field. Try to find some well-written papers in your field and replicate their structure (how the content is organized) in your paper. This will help improve the structure of your paper, which is very important: a paper should be organized in a logical way.
  8. Make sure your research problem is challenging and the solution is well justified. As I said, it is important to choose a good research problem. But it is also important to provide an innovative solution to the problem that is not trivial. In other words, you must solve an important and difficult problem where the solution is not obvious, and you must write the paper well to explain this to the reader. If the reviewers think that the solution is obvious or not well justified, the paper may be rejected.
  9. Write with a target conference or journal in mind. It is generally better to know where you will submit the paper before you write it. Then, you can better tailor the paper to your audience. You should also select a conference or journal that is appropriate for your research topic.
  10. Don’t procrastinate. For conference papers, write your paper well in advance so that you have enough time to write a good paper.

That is my advice. If you have other tips or comments, please share them in the comment section below. I will be happy to read them.


Philippe Fournier-Viger is a full professor working in China and founder of the SPMF open source data mining software.

(video) Minimal Sequential Rules with RuleGrowth

This is a video presentation of the paper “Mining Partially-Ordered Sequential Rules Common to Multiple Sequences” about discovering sequential rules in sequences using the RuleGrowth algorithm.

More information about the RuleGrowth algorithm is provided in this research paper:

Fournier-Viger, P., Wu, C.-W., Tseng, V.S., Cao, L., Nkambou, R. (2015). Mining Partially-Ordered Sequential Rules Common to Multiple Sequences. IEEE Transactions on Knowledge and Data Engineering (TKDE), 27(8): 2203-2216. 

The source code of RuleGrowth and datasets are available in the SPMF software.
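If you would like to try RuleGrowth, here is a minimal sketch of how SPMF can be invoked from Python. It assumes the general command-line syntax “java -jar spmf.jar run <algorithm> <input> <output> <parameters>” from the SPMF documentation; the file names and the exact parameter format for RuleGrowth (minimum support and minimum confidence) are assumptions that should be checked in the documentation:

# Minimal sketch: running the RuleGrowth algorithm through the SPMF jar.
# File names and parameter values below are assumptions for illustration.
import subprocess

subprocess.run([
    "java", "-jar", "spmf.jar", "run", "RuleGrowth",
    "contextPrefixSpan.txt",  # an input sequence database (assumed name)
    "rules.txt",              # output file for the discovered rules
    "0.7",                    # minimum support (assumed format)
    "0.8",                    # minimum confidence (assumed format)
], check=True)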

I will post videos about other algorithms in the near future, so stay tuned!

==
Philippe Fournier-Viger is a professor, data mining researcher and the founder of the SPMF data mining software, which includes more than 150 algorithms for pattern mining.

(video) Minimal Correlated High Utility Itemsets with FCHM

This is a video presentation of the paper “Mining Correlated High-Utility Itemsets Using the bond Measure”, about correlated high utility pattern mining using the FCHM algorithm.

More information about the FCHM algorithm is provided in this research paper:


Fournier-Viger, P., Lin, C. W., Dinh, T., Le, H. B. (2016). Mining Correlated High-Utility Itemsets Using the bond Measure. Proc. 11th International Conference on Hybrid Artificial Intelligence Systems (HAIS 2016), Springer LNAI, 14 pages, to appear.

The source code of FCHM and datasets are available in the SPMF software.

I will post videos about other high utility itemset mining algorithms in the near future, so stay tuned!

==
Philippe Fournier-Viger is a professor, data mining researcher and the founder of the SPMF data mining software, which includes more than 150 algorithms for pattern mining.

Report about the 2018 International Workshop on Mining of Massive Data and IoT

This week, I attended the 2018 International Workshop on Mining of Massive Data and IoT (2018 年大数据与物联网挖掘国际研讨会), organized by Fujian Normal University in the city of Fuzhou, China, from the 18th to the 20th of December 2018.


I attended the workshop to give a talk, to meet other researchers, and to listen to their talks. There were several invited experts from Canada, as well as from China. Below, I provide a brief report about the workshop, which was held at the Ramada Hotel in Fuzhou.

Talks

There were 11 long talks, given by the invited experts. The opening ceremony was chaired by Prof. Shengrui Wang and featured the dean, Prof. Gongde Guo.

Prof. Jian-Yun Nie from the University of Montreal (Canada) talked about information retrieval from big data. Information retrieval is about how to search for documents using queries (e.g., when we use a search engine). In traditional information retrieval, documents and queries are represented as vectors, and the relevance of documents is estimated by a similarity function. Prof. Nie talked about using deep learning to learn representations of content and matching for information retrieval.

Prof. Sylvain Giroux from the University of Sherbrooke (Canada) gave a talk about transforming homes into smart homes that provide cognitive assistance to cognitively impaired people. He presented several projects, including a system called COOK that is designed to help people cook using a modified oven equipped with sensors and communication abilities. He also showed another project using the HoloLens to build a 3D mapping of all objects in a house and tag them with semantics (an ontology).

Prof. Guangxia Xu from Chongqing University of Posts and Telecommunications gave a talk about data security and privacy in intelligent environments.

Prof. Philippe Fournier-Viger (me) then gave a talk about high-utility pattern mining. It consists of discovering important patterns in symbolic data (for example, identifying the sets of items purchased by customers that yield a lot of money). I also presented the SPMF software that I founded, which offers more than 150 data mining algorithms.

Then, there was a talk by Dr. Shu Wu about using deep learning in context-aware recommender systems. That talk was followed by a very interesting talk by Prof. Djemel Ziou of the University of Sherbrooke (Canada) about his various projects related to image processing, object recognition, and virtual reality. In particular, Prof. Ziou talked about a project to evaluate the color of light pollution from pictures.

Then, another interesting talk was given by Dr. Yue Jiang from Fujian Normal University. She presented two measures, called K2 and K2*, for calculating sequence similarity in the context of bioinformatics. The designed approach is alignment-free and can be computed very efficiently.

On the second day, there were more talks. A talk by Prof. Hui Wang from Fujian Normal University was about detecting fraud in the food industry. This is a complex topic, which requires advanced techniques such as mass spectrometry. It was explained that some products such as olive oil are often not authentic, with up to 20% of olive oil looking suspicious. Traditionally, food tests were performed in a lab, but nowadays handheld devices using infrared light have been developed to quickly perform food tests anywhere.

Then, there was a talk by Prof. Hui-Huang Hsu about elderly home care and sensor data analytics. He highlighted privacy issues related to the use of sensors in smart homes.

There was a talk by Prof. Wing W.Y. Ng about image retrieval and a talk by Prof. Shengrui Wang about regime switch analysis in time series.

Conclusion

This was an interesting event. I had the opportunity to talk with several other researchers with common interests. The event was well-organized.


Philippe Fournier-Viger is a full professor working in China and founder of the SPMF open source data mining software.

Report about the ICGEC 2018 conference

I recently attended the ICGEC 2018 conference (12th International Conference on Genetic and Evolutionary Computing), held from December 14 to 17, 2018 in Changzhou, China. In this blog post, I will describe the activities that I attended at the conference.

About the ICGEC conference

ICGEC is a good conference on the topics of genetic and evolutionary computing. This was the 12th edition of the conference. It is generally held in Asia, and there are some quality papers. The proceedings are published by Springer and indexed in EI, which ensures good visibility. Besides, the best papers are invited to various special issues of journals such as JIHMSP and DSPR. Also, there were six invited keynote speakers, which is more than what I usually see at international conferences. I attended this conference to give one of the keynote talks, on the topic of high utility pattern mining.

The conference was held partly at the Wanda Realm Hotel and the Changzhou College of Information Technology (CCIT).


Changzhou is a middle-sized city not very far by train from Shanghai, Wuxi, and Nanjing. In terms of tourism, Changzhou is especially famous for some theme parks, and it also has some museums and temples. The city has several universities and colleges.

View of Changzhou from my hotel window

Here is a picture of the conference materials (book, bag, gifts, etc.).


Opening ceremony

The opening ceremony was led by Dr. Yong Zhou and Prof. Jeng-Shyang Pan, honorary chairs of the conference. Also, Prof. Chun-Wei Lin, the general chair, briefly talked about the program. This year, about 200 submissions were received and around 36 papers were accepted.

Keynote talks

The first keynote was by Prof. Jhing-Fa Wang about orange technology and robots. The concept of orange technology is interesting: it refers to technologies that are designed to enhance people's lives in terms of (1) help, (2) happiness, and (3) care. Just as the concept of “green technology” refers to environment-friendly technology, “orange technology” was proposed so that we can focus on people. An example of orange technology is robots that can assist senior people.

The second talk was by Prof. Zhigeng Pan about virtual reality. Prof. Pan presented several applications of virtual reality and augmented reality.

The third talk was by Prof. Xiudeng Peng about industrial applications of artificial intelligence, such as automatic inspection systems, fuzzy control systems, and defect marking. Prof. Peng reminded us that if we are interested in finding potential applications of AI, there are a lot of opportunities in industry. He also stressed the importance of developing machine learning models that can be updated in real time based on feedback and that have online learning capabilities.

The fourth keynote talk was by Prof. Jiuyong Li about causal discovery and its applications. The topic of causal discovery is very interesting, as it aims to find causal relationships in data rather than mere associations. Several models have been proposed in this field, for example to find causal rules and causal decision trees. Several software packages by Prof. Li are open-source, and he has recently published a book on this topic.

The fifth keynote was by myself, Philippe Fournier-Viger. I presented an overview of our recent work on pattern mining, in particular itemset mining, high utility pattern mining, periodic pattern mining, significant pattern mining, and local pattern mining. I also presented my open-source data mining software, SPMF. Finally, I discussed what I see as current research opportunities in the field of pattern mining, and how evolutionary and genetic algorithms can be used in this field (since that is the main topic of the conference).

Then, there was a last keynote talk by Dr. Peter Peng about genetic algorithms, clustering and industry applications.

Regular talks

On the second day, there were several regular paper presentations grouped by topic, including machine learning, evolutionary computing, image and video processing, information hiding, smart living, classification and clustering, applications of genetic algorithms, smart Internet of Things, and artificial intelligence.

Social activities

On the first day, a special reception was held at the hotel for invited guests and committee members. A buffet was held at the hotel on the evening of the second day, and a banquet on the evening of the last day of the conference. Overall, there were many opportunities to talk with other researchers, and people were very friendly.

Next year: ICGEC 2019

Next year, ICGEC 2019 will be held in Qingdao, China, which is a nice city close to the sea. It will be organized by professors from the Shandong University of Science and Technology.

Conclusion

The ICGEC 2018 conference was well-organized, and it has been a pleasure to attend it.