Research Interests and Themes

 

The main topic of my research is speeding up algorithms, especially discrete algorithms. I want to clarify which techniques are effective for reducing the time complexity of algorithms. In addition, I want to develop general techniques and to clarify the structures of fast discrete algorithms. I have examined several kinds of problems: enumeration, tree location, and meta-heuristics. I have investigated which techniques solve these problems efficiently, and have obtained several results. Building on these results, I will study problems and algorithms used in other scientific areas and in real-world applications, especially focusing on optimization algorithms for problems arising in companies.

Recently, there have been a number of studies on optimization algorithms. Many problems have been solved by efficient algorithms, so the question "Which techniques are efficient for each problem?" has been answered in many fields. However, little is known about the complementary question "Which algorithms can be sped up efficiently, and by which techniques?", which concerns the algorithmic side. I plan to study optimization and algorithms from this algorithmic point of view, focusing on enumeration, tree location, and meta-heuristics.

In the following, for each topic, I explain my position, my goal, and the current status of my research.

 

Enumeration Algorithms

Enumeration algorithms solve enumeration problems: problems of outputting (or finding) all elements of a given set, for example, the spanning trees of a graph, the vertices of a polytope given by linear inequalities, or the subsets of a set of given numbers whose sums do not exceed a given constant.

Enumeration is a fundamental problem. It arises for numerous objects in combinatorics and graph theory, such as the feasible solutions of combinatorial optimization problems, the members of set families, and the facets of combinatorial polytopes. When investigating the properties of set families, combinatorial polytopes, and optimization problems, enumeration is a basic and powerful tool. Several algorithmic frameworks, including branch and bound, are based on enumeration. When we study such optimization algorithms, a rich body of results on enumeration lets us simply refer to the literature. Hence studies on enumeration support the study of combinatorial algorithms in general.

To date, a number of enumeration algorithms have been proposed for various enumeration problems, along with several general construction schemes. Almost all enumeration algorithms are based on these schemes, and with them the frameworks of enumeration algorithms can be explained quite simply. On the other hand, some studies solve complicated (or minor) problems without using such schemes. If enumeration is to become a good tool in other scientific areas, enumeration algorithms should be written in a simple form, and thus should be based on such construction schemes. Their algorithmic properties should also be classified from an algorithmic point of view.

The computation time of enumeration algorithms varies greatly. Fast algorithms take little time per output (polynomial in the input size), while slow algorithms take a long time per output (exponential in the input size). The former are called output-linear time or polynomial-delay algorithms. Constructing an algorithm of the latter kind for a combinatorial enumeration problem is easy: generate all combinations and check the feasibility of each one. Hence, finding an output-linear algorithm for an enumeration problem is a good and interesting topic in the study of enumeration. In fact, research in this area has been going on for the past 40 years, and almost all interesting (or important) problems have been investigated extensively; that is, output-linear algorithms have been proposed for almost all such problems, except for several difficult ones.
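
For illustration, the following is a minimal sketch of this contrast on the bounded-sum subset problem mentioned above, assuming the given numbers are nonnegative. It is a textbook-style example, not an algorithm from my papers: the naive method generates all combinations and filters them, while a simple backtracking procedure descends only into feasible branches, so every recursive call leads to an output and the delay between outputs is polynomial.

    # Illustrative sketch (not from my papers): enumerating all subsets of
    # nonnegative numbers whose sum does not exceed a bound C.
    from itertools import combinations

    def naive_enumerate(nums, C):
        """Brute force: generate all combinations, then check feasibility.
        Always takes time exponential in len(nums)."""
        n = len(nums)
        for r in range(n + 1):
            for idx in combinations(range(n), r):
                subset = [nums[i] for i in idx]
                if sum(subset) <= C:
                    yield subset

    def backtracking_enumerate(nums, C, chosen=None, start=0, total=0):
        """Backtracking: descend only into feasible branches.  Excluding an
        element never violates the bound (nums are nonnegative), so every
        recursion node leads to at least one output and the delay between
        consecutive outputs is O(n)."""
        if chosen is None:
            chosen = []
        yield list(chosen)                   # every partial solution is feasible
        for i in range(start, len(nums)):
            if total + nums[i] <= C:         # prune infeasible extensions
                chosen.append(nums[i])
                yield from backtracking_enumerate(nums, C, chosen, i + 1, total + nums[i])
                chosen.pop()

    # Both yield the same family of subsets, e.g. for nums = [1, 2, 3], C = 3:
    # [], [1], [1, 2], [2], [3]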

From the viewpoint of speeding up algorithms, which is my research topic, the study of enumeration algorithms offers two interesting problems. One is finding practically fast algorithms for difficult enumeration problems. For several enumeration problems, simple algorithms take quite a long time, while more sophisticated algorithms, whose time complexity is not small, run quite quickly in practice. I studied the enumeration of minimal set covers (hypergraph dualization) and implemented a practically fast algorithm that is much faster than naive ones. Through this work, I intend to investigate which kinds of problems can be solved faster, and by which techniques.
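
As a point of reference for what "naive" means here, the following is an illustrative baseline sketch for minimal set cover enumeration; it checks every subfamily for coverage and minimality, so it is hopeless on all but tiny instances. The practically fast algorithm mentioned above is not shown.

    # Illustrative naive baseline for enumerating minimal set covers; this is
    # not the practically fast algorithm referred to in the text.
    from itertools import combinations

    def minimal_set_covers(universe, family):
        """Enumerate all minimal subfamilies of `family` whose union is `universe`.
        Every subfamily is examined, so the time is exponential in len(family)."""
        universe = set(universe)
        family = [set(s) for s in family]
        n = len(family)
        for r in range(n + 1):
            for idx in combinations(range(n), r):
                union = set().union(*(family[i] for i in idx))
                if union != universe:
                    continue                 # not a cover
                # minimality: removing any single set must break coverage
                if all(set().union(*(family[j] for j in idx if j != i)) != universe
                       for i in idx):
                    yield [family[i] for i in idx]

    # Example: the minimal covers of {1, 2, 3, 4} by {1, 2}, {2, 3}, {3, 4}, {1, 4}
    # are [{1, 2}, {3, 4}] and [{2, 3}, {1, 4}].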

The second problem is how much we can speed up output-linear enumeration algorithms, i.e., how much we can decrease the time complexity per output. In the study of text algorithms and graph algorithms, the problem of reducing time complexity has been investigated extensively. Although the techniques developed there can also be applied to enumeration algorithms, their performance is not very good. While graph algorithms are constructed to find one solution (or a small number of solutions), enumeration algorithms are constructed to find a great many. Techniques and analyses for graph algorithms are specific to the former, so enumeration algorithms require their own techniques and analyses. Interestingly, there are cases in which techniques that do not work well for graph algorithms work well for enumeration algorithms.

Algorithms with small time complexities tend to be complicated and slow in practice. However, by investigating the structure of these algorithms, we can see that if certain conditions hold, the computation time decreases. From this we can expect that algorithms satisfying most of these conditions are probably fast. By constructing simple algorithms that satisfy as many of these conditions as possible, we can obtain algorithms that are both simple and practically fast.

I will study this problem to obtain general knowledge about theoretical and practical techniques for speeding up enumeration algorithms.

My research has sped up approximately ten interesting (or important) subgraph enumeration algorithms (for example, for spanning trees and matchings) by reducing their time complexities. I also proposed a general scheme called "Trimming and Balancing" for speeding up enumeration algorithms; using it, several algorithms that could not be sped up with existing techniques were improved. I will study which kinds of enumeration algorithms Trimming and Balancing applies to, and investigate the common properties of the improved algorithms.

 

 

Tree Algorithms

Tree algorithms are algorithms for solving problems, especially optimization problems, on tree-shaped networks. Many problems that are NP-hard on general networks are polynomially solvable on tree networks, since trees are quite simple and many schemes, including divide-and-conquer, dynamic programming, and bottom-up and top-down approaches, work efficiently on them.
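
As a small illustration of why tree structure helps (a textbook example with my own illustrative naming, not one of my published results): maximum-weight independent set is NP-hard on general graphs, but on a tree it is solved in linear time by a bottom-up dynamic program.

    # Textbook illustration: maximum-weight independent set, NP-hard on general
    # graphs, solved on a tree by bottom-up dynamic programming in linear time.
    # This is a standard sketch, not one of my published tree algorithms.

    def max_weight_independent_set(tree, weight, root):
        """tree: adjacency lists of an undirected tree; weight: node -> weight.
        Returns the maximum total weight of a set of pairwise non-adjacent nodes."""
        incl, excl = {}, {}               # best value with / without the node taken
        order, parent = [], {root: None}
        stack = [root]
        while stack:                      # iterative DFS to get a top-down order
            v = stack.pop()
            order.append(v)
            for u in tree[v]:
                if u != parent[v]:
                    parent[u] = v
                    stack.append(u)
        for v in reversed(order):         # children are processed before parents
            incl[v] = weight[v]
            excl[v] = 0
            for u in tree[v]:
                if u != parent[v]:
                    incl[v] += excl[u]                  # v taken: children excluded
                    excl[v] += max(incl[u], excl[u])    # v not taken: children free
        return max(incl[root], excl[root])

    # Example: path 1-2-3 with weights 2, 3, 2 -> best set {1, 3} with weight 4.
    # tree = {1: [2], 2: [1, 3], 3: [2]}
    # print(max_weight_independent_set(tree, {1: 2, 2: 3, 3: 2}, root=1))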

Studies on tree algorithms have a long history. Many problems have been considered and many polynomial-time algorithms proposed, so almost all interesting (or important) problems that appear in practice or as subproblems of other combinatorial optimization problems have been thoroughly studied. However, there have been few studies on the algorithmic properties of tree algorithms. That is, many studies propose tree algorithms to solve their particular problems, yet few examine the properties of tree algorithms themselves. Because studies of the former kind are so numerous, algorithms with the same structure and similar properties appear frequently. Such algorithms should be regarded as minor (or modified) versions of basic algorithms, and they should be classified by their algorithmic properties.

If the techniques and knowledge about tree algorithms, such as basic construction schemes, methods of complexity analysis, and properties of problems, constraints, and objective functions, were sorted and classified, then when we encounter a new tree location problem we could easily design an algorithm for it by checking a few properties of the problem and referring to that body of knowledge. Such a classification of basic algorithms would also help in writing programs.

In my own work, I proposed a linear-time, i.e., optimal, algorithm for finding a k-tree core of a tree network, and a paper on this research has been published. Using techniques obtained in that research, I found that several kinds of (minor) tree network optimization problems can be solved faster than by existing methods, but since such minor studies seemed ad hoc, I did not submit these results to journals. Recently, I have come to hold the opinion described above, and I am now investigating general basic algorithms and simple ways to apply them to those minor problems.

 

Approximation Algorithms / Meta-heuristics

 

Recently, approximation algorithms and meta-heuristics have been intensively studied as ways to solve large-scale combinatorial optimization problems. The efficiency of each method (local search, tabu search, genetic algorithms, etc.) on each problem has also been studied. In those studies, the performance of each method within a limited time is tested, but the implementations are often naive. If sophisticated speed-up techniques are used, the relative performance changes. The study of algorithms can thus increase the performance of approximation methods and heuristics. I think each method should be tested with a high-quality implementation using algorithmic techniques; otherwise we cannot fairly compare the efficiency of different methods.

Meta-heuristics can solve large-scale problems (to be exact, they find "not bad" solutions). However, they are not designed to finish within a few minutes, or to handle extremely large problems. Because of recent progress in information technology, we often need to solve problems in a matter of minutes. Such settings require new techniques, for example to reduce the number of unnecessary operations without a large loss of solution quality, and to search only areas where good solutions are likely to exist. In practice, a not-bad solution obtained in a short time is preferable to a good or optimal solution that takes too long. I believe research in this area extends the study of optimization, operations research, and algorithms toward practical applications.

Among studies on meta-heuristics, few have addressed speeding up the iterations. In simulated annealing, an iteration takes constant time, so this kind of study makes little sense there. On the other hand, in local search and tabu search an iteration can take a long time, especially when we deal with large-scale neighborhoods. The operations of consecutive iterations are quite similar, so this is a good application for dynamic data structures. I have already studied local search for the traveling salesman problem, bin packing, and scheduling problems, and found that their iterations can be sped up by several techniques based on dynamic data structures. I will study this problem in detail and report my results.
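
As a simple illustration of the principle, here is a generic sketch (not my actual implementation) of 2-opt local search for the symmetric traveling salesman problem, in which each candidate move is evaluated incrementally from four edge lengths instead of recomputing the whole tour; dynamic data structures push this idea further by also maintaining neighborhood information between iterations.

    # Generic sketch of incremental (delta) evaluation in 2-opt local search for
    # the symmetric traveling salesman problem; each iteration reuses work
    # instead of re-evaluating the tour from scratch.

    def tour_length(tour, dist):
        return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

    def two_opt(tour, dist):
        """Repeatedly apply improving 2-opt moves until a local optimum is reached.
        The gain of reversing tour[i+1..j] is computed in O(1) from four edges."""
        n = len(tour)
        improved = True
        while improved:
            improved = False
            for i in range(n - 1):
                for j in range(i + 2, n if i > 0 else n - 1):
                    a, b = tour[i], tour[i + 1]
                    c, d = tour[j], tour[(j + 1) % n]
                    # delta = new edges - old edges; no full re-evaluation needed
                    delta = (dist[a][c] + dist[b][d]) - (dist[a][b] + dist[c][d])
                    if delta < 0:
                        tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                        improved = True
        return tour

    # Example with a symmetric distance matrix:
    # dist = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 3], [10, 4, 3, 0]]
    # best = two_opt([0, 1, 2, 3], dist)       # -> [0, 1, 3, 2]
    # print(tour_length(best, dist))           # -> 18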

 

Communication with Researchers in Other Scientific Areas and Companies

Recent progress in combinatorial optimization research has enabled us to solve large-scale, complicated problems. Progress in operations research has enabled us to deal with problems that we could not treat mathematically ten years ago. Thus, problems arising in other scientific areas, including physics, chemistry, bioinformatics, and the social sciences, can now be formulated as mathematical problems and solved efficiently. Examples include optimizing materials in architectural design, measuring the similarity of two DNA sequences, defining the values and powers of groups, individuals, and organizations in the social sciences, and finding typical properties, shapes, and groups in large-scale networks.

At present, I think optimization algorithms in the above areas have not been intensively studied, so it is natural to expect great potential in these problems if recent techniques from operations research and algorithms are introduced. Usually, such problems cannot be solved with only the basic techniques described in textbooks; something extra is required. To find that simple "something extra," the skills and experience of researchers in operations research and algorithms are of great value. I want to communicate with researchers in other scientific areas and continue my study of applications of algorithms.

 

 
