How to Analyze the Complexity of Pattern Mining Algorithms?

Today, I will explain how to analyze the complexity of pattern mining algorithms for topics such as itemset mining, sequential pattern mining, subgraph mining, sequential rule mining and periodic pattern mining. This is something that reviewers often ask for, especially when a paper about a new pattern mining algorithm is submitted to a journal. I explain the main idea of how to analyze complexity below, by breaking it down into several questions.

What to analyze?

Generally, we should analyze the complexity of a pattern mining algorithm in terms of time and also in terms of space.

What does the complexity depend on?

Generally, the complexity depends on the number of items, which I will call n, and the number of records (e.g. transactions), which I will call m. Sometimes, we will also consider other aspects such as the average record length, which I will call z.
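
To make these symbols concrete, here is a minimal sketch in Python (using a made-up toy database) showing what m, n and z correspond to for a transaction database:

    # A toy transaction database: each transaction is a set of items.
    database = [
        {"a", "b", "c"},
        {"a", "c"},
        {"b", "c", "d", "e"},
    ]

    m = len(database)                        # number of records (transactions): 3
    n = len(set().union(*database))          # number of distinct items: 5
    z = sum(len(t) for t in database) / m    # average record length: 3.0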

How to analyze the overall complexity?

To analyze the complexity, we need to think about every operation that an algorithm performs and how it is affected by factors such as m and n. It may seem difficult to talk about the complexity of pattern mining algorithms because an algorithm might process thousands of patterns and records, and also perform some complex operations such as building tree structures, reading the database, etc. But in the end, we can break this down into several steps, analyze each step one by one, and then combine what we found to get the overall complexity, and it is not so difficult 😉

Generally, the time complexity of an algorithm can be broken down into two main parts like this:

Time_complexity_of_the_algorithm = Time_Complexity_for_initial_operations + Time_Complexity_search_for_patterns

The Time_Complexity_for_initial_operations refers to some basic operations that the algorithm must do before starting to search for patterns. The Time_Complexity_search_for_patterns refers to operations that are done during the search for patterns after performing the initial operations.

For the space complexity, we can also break it down into two parts:

Space_complexity_of_the_algorithm = Space_Complexity_for_initial_operations + Space_Complexity_search_for_patterns

Now let's look at this in more detail. Below, we will discuss each part separately.

How to analyze the complexity of initial operations?

First, we will talk about the complexity of initial operations, that is, Time_Complexity_for_initial_operations and Space_Complexity_for_initial_operations.

Initially a pattern mining algorithm will typically read the database one or more times and also build some initial data structures before it starts searching for patterns.

As an example, let's talk about the Eclat algorithm for itemset mining. Eclat initially reads the database once to build a vertical data structure for each item. Reading the database once requires O(m * z) time, since the algorithm scans m transactions that have an average length of z.

If the algorithm also builds some data structure, we need to take this into account in the complexity as well.

For example, an algorithm like Eclat builds a data structure for each item during the initial database scan. Building this data structure for an item requires one operation for each transaction that contains the item. If we assume the worst case, where every item appears in every transaction, then building the data structures for all items is O(m * n): for each of the n items, we must add up to m entries to its data structure.
So globally, for Eclat, we can say that the time complexity for initial operations (Time_Complexity_for_initial_operations) is O(m * z) + O(m * n) in the worst case. It is important to say “in the worst case” because this is what we analyzed.
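
To illustrate, here is a small sketch of this initial database scan (hypothetical code, not the actual Eclat implementation) that builds a vertical structure, a list of transaction ids (tidset), for each item. The scan touches each of the m transactions and each of their (on average) z items, hence O(m * z) time, and in the worst case each of the n items gets a tidset with up to m entries, hence O(m * n) space:

    from collections import defaultdict

    def build_tidsets(database):
        """Scan the database once and build, for each item, the sorted list of
        ids of the transactions that contain it (a 'tidset')."""
        tidsets = defaultdict(list)
        for tid, transaction in enumerate(database):   # m transactions
            for item in transaction:                   # about z items per transaction
                tidsets[item].append(tid)              # at most m entries per item
        return tidsets                                 # O(m * z) time, O(m * n) space in the worst case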

We should also not forget to analyze the space complexity. It is done in a similar way.

If we take Eclat as an example, the only space required by the initial operations is for the data structures built for the items, so:
Space_Complexity_for_initial_operations = O(m * n) in the worst case, based on the previous discussion.

We could also take other algorithms as examples, like FP-Growth. An algorithm like FP-Growth is more complex: it builds a tree structure as its initial operation. In that case, we need to talk about the operations for building the tree and also about the space for storing that tree. I will not go into details, but the size of the tree is, in the worst case, equal to the database size (if transactions do not overlap), so it would be something like O(m * z). This is a rough analysis and we could analyze this in more detail, as I did not talk about the pointers in that tree and the header list. But that is the rough idea.
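
To see where this O(m * z) bound comes from, here is a very simplified sketch (hypothetical code, not the actual FP-Growth implementation; the header table and node links mentioned above are omitted). A new tree node is created only when a transaction's prefix is not already in the tree, so the tree contains at most one node per item occurrence, which is about m * z nodes:

    class FPNode:
        """A simplified FP-tree node (header table and node links are omitted)."""
        def __init__(self, item):
            self.item = item
            self.count = 0
            self.children = {}               # item -> FPNode

    def insert_transaction(root, transaction):
        """Insert one transaction (items assumed already sorted and filtered) into the tree.
        A new node is created only when a prefix is not shared with a previous transaction,
        so the tree contains at most one node per item occurrence, i.e. about m * z nodes."""
        node = root
        for item in transaction:
            if item not in node.children:
                node.children[item] = FPNode(item)
            node = node.children[item]
            node.count += 1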

How to analyze the complexity of the search for patterns?

The second part that we need to analyze to get the overall complexity of a pattern mining algorithm is the complexity of the search for patterns. This is what I called Time_Complexity_search_for_patterns and Space_Complexity_search_for_patterns.

So let's start. Analyzing the complexity of the search for patterns may not seem easy, because there is usually some recursive function that is called to search for patterns, or even some kind of iterative process (e.g. in the Apriori algorithm). But we can still analyze this. So how do we do it? The easiest way is to think about how many patterns there are in the search space, and then what the complexity of processing each pattern is. In other words:

Time_Complexity_search_for_patterns = How_many_patterns * Time_Complexity_to_process_each_pattern

and

Space_Complexity_search_for_patterns = How_many_patterns * Space_Complexity_to_process_each_pattern

So first, we must think about how many patterns there are in the search space. Usually, we don't really know, because it depends on the input data. So we can only talk about the worst case. The number of patterns is usually as follows:

  • For itemset mining problems, if there are n items, there are 2^n -1 itemsets in the search space, in the worst case. Why? This is simply the number of possible combinations that we can make from n items. I will not do the proof.
  • For association rule mining or sequential rule mining problems, there are up to 3^n – 2^(n+1) + 1 rules in the search space, in the worst case. Again, I will not do the proof. See this blog post for more details about why. (Both counts can also be checked by brute force for small values of n, as in the small sketch after this list.)
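
Here is a small brute-force check of these two formulas (just a verification script, not part of any mining algorithm): it enumerates all non-empty itemsets of n items, and all rules X -> Y with X and Y non-empty and disjoint, and compares the counts to the formulas:

    from itertools import combinations

    def count_itemsets(n):
        """Count all non-empty itemsets that can be formed from n items."""
        return sum(1 for k in range(1, n + 1)
                     for _ in combinations(range(n), k))

    def count_rules(n):
        """Count all rules X -> Y where X and Y are non-empty, disjoint itemsets."""
        total = 0
        for k in range(2, n + 1):                      # size of X union Y
            for _ in combinations(range(n), k):
                total += 2 ** k - 2                    # ways to split that itemset into X and Y
        return total

    print(count_itemsets(4), 2 ** 4 - 1)               # both are 15
    print(count_rules(4), 3 ** 4 - 2 ** 5 + 1)         # both are 50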

Now, after we know how many patterns there are in the search space, we need to think about the complexity for processing each pattern.

As an example, let's consider the Eclat algorithm. For Eclat, for each pattern having more than one item, we need to build a data structure. This data structure contains up to m entries in the worst case (one entry for each transaction). To build it, we must join the data structures of two itemsets. This operation requires scanning each of the two data structures once to compare them and to build the new data structure, which also contains up to m entries. All of this is done in O(m) time, because we scan each structure once and each has up to m entries. The space complexity of the data structure that we build for a pattern is O(m), since it has at most one entry per transaction in the worst case.
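
Here is a sketch of that join operation (hypothetical code, assuming tidsets are stored as sorted lists of transaction ids, as in the earlier build_tidsets sketch). Each input has at most m entries and each is scanned once, so the join takes O(m) time and produces a structure of O(m) size:

    def join(tids_x, tids_y):
        """Intersect two sorted tidsets by scanning each of them once.
        Both inputs and the output have at most m entries, so this takes
        O(m) time and the result uses O(m) space."""
        result, i, j = [], 0, 0
        while i < len(tids_x) and j < len(tids_y):
            if tids_x[i] == tids_y[j]:
                result.append(tids_x[i])
                i += 1
                j += 1
            elif tids_x[i] < tids_y[j]:
                i += 1
            else:
                j += 1
        return result

    # For example, the tidset of {a, b} is obtained by joining the tidsets of {a} and {b}:
    # tidsets = build_tidsets(database); tids_ab = join(tidsets["a"], tidsets["b"])
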
So now, we can combine this information to get the complexity of the search for patterns of Eclat. Since there are 2^n - 1 itemsets in the search space (in the worst case) and constructing the structure of each itemset takes O(m) time, then Time_Complexity_search_for_patterns = O(2^n - 1) * O(m) in the worst case, which we simplify to O(2^n * m).
For the space complexity of Eclat, it is the same:
Space_Complexity_search_for_patterns = O(2^n - 1) * O(m) in the worst case, which we simplify to O(2^n * m).
Again, it is important to say that this is the worst case.

We could analyze other algorithms in the same way. For example, for an algorithm like FP-Growth, we need to build an FP-tree for each frequent itemset in the search space. Since this is also an itemset mining problem, there are 2^n - 1 itemsets to process in the worst case (where all itemsets are frequent). Then, we would have to analyze the operations that are performed for each itemset (building an FP-tree) and the space required (for storing the FP-tree). I will not go into the details here, but I give links to some papers as examples at the end of this blog post.

Let me also briefly mention how to analyze Apriori-based algorithms that perform a level-wise search. Algorithms like Apriori proceed by iterations, and the number of iterations is, in the worst case, equal to the number of items, that is, n. So, we could also analyze the complexity in terms of these iterations. For example, if Apriori does a database scan for each iteration, then the time complexity of those database scans is O(m * n * z), since there are n iterations, and for each iteration we scan m transactions that have on average z items. But to analyze the time complexity of Apriori, we also need to consider that during each iteration patterns must be combined with each other to generate new patterns. This is a bit more complicated and depends on how the algorithm implements those combinations, so I will not go into details. But I just want to mention that iterative algorithms like Apriori should be analyzed by also taking the number of iterations into account.
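
To illustrate the role of the iterations, here is a simplified skeleton of a level-wise search (a sketch in the spirit of Apriori, not a complete or efficient implementation; the candidate generation shown is just one naive way to do it). The skeleton shows that there are at most n iterations and that each iteration scans the m transactions once, which is where the O(m * n * z) bound for the database scans comes from; the cost of counting supports and generating candidates depends on the implementation, as mentioned above:

    def generate_candidates(frequent_k):
        """One naive way to generate candidates of size k+1: combine pairs of
        frequent itemsets of size k whose union has exactly k+1 items.
        (Real Apriori implementations do this more carefully and also prune.)"""
        candidates = set()
        for x in frequent_k:
            for y in frequent_k:
                union = x | y
                if len(union) == len(x) + 1:
                    candidates.add(union)
        return list(candidates)

    def level_wise_search(database, all_items, minsup):
        """Skeleton of a level-wise (Apriori-style) search over a list of
        transactions (each transaction is a set of items)."""
        candidates = [frozenset([item]) for item in all_items]    # level 1: single items
        frequent = []
        while candidates:                                         # at most n iterations
            support = {c: 0 for c in candidates}
            for transaction in database:                          # one database scan per iteration
                for c in candidates:
                    if c <= transaction:                          # subset test
                        support[c] += 1
            frequent_k = [c for c in candidates if support[c] >= minsup]
            frequent.extend(frequent_k)
            candidates = generate_candidates(frequent_k)          # cost depends on the implementation
        return frequent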

Lastly, related to the complexity of the search for patterns, I would like to mention something else that is important to say in a complexity analysis. Above, I have discussed the worst-case complexity of the search for patterns. However, for many algorithms, the search becomes easier as larger patterns are explored. For example, for an algorithm like FP-Growth, we build an FP-tree for each frequent itemset, and in the worst case that tree can be as large as the original database. This may seem terrible, but in practice, when FP-Growth searches for larger itemsets, the FP-trees become smaller and smaller, and the branches of the tree become shorter and thus have more chances to overlap, reducing the tree size. Thus, we can say that in practice, the size of these trees becomes smaller. Something similar can be said for algorithms such as Eclat. For Eclat, as itemsets become larger, the data structures that are stored for itemsets (the transaction id lists) tend to become smaller. Thus, less memory is consumed and joining the data structures becomes less costly. This does not change the complexity, but it helps the reader to understand that, after all, the algorithm can behave well.

Putting this all together to get the overall complexity

After we have analyzed the complexity for initial operations and the complexity of the search for patterns, we can put this all together to get the overall complexity of an algorithm.

For example, for Eclat, the time complexity in the worst case is
Time_complexity_of_the_algorithm = Time_Complexity_for_initial_operations + Time_Complexity_search_for_patterns = O(m * z) + O(m * n) + O(2^n * m)
and the space complexity is:
Space_complexity_of_the_algorithm = Space_Complexity_for_initial_operations + Space_Complexity_search_for_patterns = O(m * n) + O(2^n * m)
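
As a quick sanity check of which term dominates, we can plug in some made-up numbers, say n = 20 items, m = 1000 transactions and z = 10 items per transaction on average:

    n, m, z = 20, 1000, 10                  # made-up values, just for illustration
    initial_cost = m * z + m * n            # 10,000 + 20,000 = 30,000 operations
    search_cost = (2 ** n) * m              # 1,048,576 * 1,000, about 10^9 operations
    print(initial_cost, search_cost)        # the exponential search term clearly dominates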

That is the main idea. We could go into more detail, but this gives a good overview of the complexity of this algorithm.

Some examples of complexity analysis

Above, I have given a rough idea of how to analyze the complexity of pattern mining algorithms. If you want some examples, you may check the papers below, where you can find analyses of different types of algorithms:

  • TSPIN: I analyze the complexity of an itemset mining algorithm based on FP-Growth for mining periodic patterns. This also shows how to analyze the complexity of a top-k pattern mining algorithm.
  • EFIM: I analyze the complexity of an algorithm for high utility itemset mining that does a depth-first search and uses a pattern-growth approach.
  • FHUQI-Miner: I analyze the complexity of a quantitative high utility itemset mining algorithm, which does a depth-first search and is based on Eclat / HUI-Miner.
  • CEPB: In this paper, I analyze the complexity of a kind of sequential pattern mining algorithm that uses a pattern-growth approach and performs a depth-first search.
  • LHUI-Miner: I analyze the complexity of another algorithm based on HUI-Miner for high utility itemset mining.

Conclusion

That was a blog post to give a rough idea about how to analyze the complexity of pattern mining algorithms. Hope that this blog post is helpful! If you have any comments, you may leave them in the comment section below. Thanks for reading.


Philippe Fournier-Viger is a distinguished professor working in China and founder of the SPMF open source data mining software.
