- Tutorial 1: Evolutionary Computation: A Unified Approach by Prof. Kenneth A. De Jong
- Tutorial 2: Evolutionary Large-Scale Global Optimization: An Introduction by Prof. Xiaodong Li
- Tutorial 3: Genetic Programming: Recent Developments and Applications by Prof. Mengjie Zhang
- Tutorial 4: Evolutionary Computation and Complex Networks by Prof. Jing Liu
- Tutorial 5: Algorithm Selection – Online + Offline Techniques by Prof. Mustafa MISIR
Tutorial 1
Title: Evolutionary Computation: A Unified Approach
Presenter: Prof. Kenneth De Jong
Contact Information: Krasnow Institute, George Mason University, Fairfax, Virginia, 22030, USA, email@example.com
The field of Evolutionary Computation has experienced tremendous growth over the past 20 years, resulting in a wide variety of evolutionary algorithms and applications. This variety poses an interesting dilemma for many practitioners: with so many algorithms and approaches available, it is often hard to see the relationships between them, assess their strengths and weaknesses, and make good choices for new application areas.
This tutorial gives an overview of a general EC framework that helps compare and contrast approaches, encourages crossbreeding, and facilitates intelligent design choices. The use of this framework is then illustrated by showing how traditional EAs can be compared and contrasted with it, and how new EAs can be effectively designed using it.
Finally, the framework is used to identify some important open issues that need further research.
Kenneth A. De Jong received his Ph.D. in computer science from the University of Michigan in 1975. He joined George Mason University in 1984 and is currently a Professor Emeritus of Computer Science, head of the Evolutionary Computation Laboratory, and Associate Director of the Krasnow Institute. His research interests include genetic algorithms, evolutionary computation, machine learning, and complex adaptive systems. His current research projects involve the development of new evolutionary algorithm (EA) theory, the use of EAs as high-performance optimization techniques, and the application of EAs to the problem of learning task programs in domains such as robot navigation and game playing. He is an active member of the Evolutionary Computation research community and has been involved in organizing many of the workshops and conferences in this area. He is the founding editor-in-chief of the journal Evolutionary Computation (MIT Press), and a member of the board of ACM SIGEVO. He is the recipient of an IEEE Pioneer Award in the field of Evolutionary Computation and a lifetime achievement award from the Evolutionary Programming Society.
Tutorial 2
Title: Evolutionary Large-Scale Global Optimization: An Introduction
Presenter: Prof. Xiaodong Li
Contact Information: School of Science (Computer Science and Software Engineering), RMIT University, Melbourne, VIC 3001, Australia, firstname.lastname@example.org
Many real-world optimization problems involve a large number of decision variables. The trend in engineering optimization shows that the number of decision variables in a typical optimization problem has grown exponentially over the last 50 years, and this trend continues at an ever-increasing rate. The proliferation of big-data analytics applications has also led to the emergence of large-scale optimization problems at the heart of many machine learning tasks. Recent advances in machine learning have likewise produced very large-scale optimization problems in the training of deep neural network architectures (so-called deep learning), some of which have over a billion decision variables. It is this “curse of dimensionality” that makes large-scale optimization an exceedingly difficult task, and current optimization methods are often ill-equipped to deal with such problems. This gap in both theory and practice has attracted much research interest, making large-scale optimization an active field in recent years. A wide range of mathematical and metaheuristic optimization algorithms is currently being developed to overcome this scalability issue. Among these, metaheuristics have gained popularity due to their ability to deal with black-box optimization problems.
In this tutorial, we provide an overview of recent advances in the field of evolutionary large-scale global optimization, with an emphasis on divide-and-conquer approaches (a.k.a. decomposition methods). We first survey the non-decomposition-based approaches, such as memetic algorithms and sampling methods, for dealing with large-scale problems. This is followed by a more detailed treatment of implicit and explicit decomposition algorithms for large-scale optimization. Considering the popularity of decomposition methods in recent years, we provide a detailed technical explanation of state-of-the-art decomposition algorithms, including the differential grouping algorithm and its latest improved derivatives (such as the global DG and DG2 algorithms), which outperform other decomposition algorithms on the latest large-scale global optimization benchmarks. We also address the issue of resource allocation in cooperative co-evolution and explain some recent algorithms such as the contribution-based cooperative co-evolution family of algorithms. Overall, the tutorial takes the form of a critical survey of existing methods, with an emphasis on articulating the challenges of large-scale global optimization so as to stimulate further research interest in this area.
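As a rough illustration of the divide-and-conquer idea described above, the sketch below runs a toy cooperative co-evolution loop in Python: the variables are split into fixed-size groups and each subcomponent is improved in round-robin fashion while the rest of the context vector is held fixed. The static grouping, the (1+1)-style per-group search, and all parameter values are hypothetical simplifications, standing in for what algorithms such as differential grouping learn automatically; this is not an implementation of any algorithm named in the abstract.

```python
import random

def sphere(x):
    # Fully separable benchmark: f(x) = sum of x_i^2, minimum 0 at the origin.
    return sum(v * v for v in x)

def cooperative_coevolution(f, dim=20, group_size=5, cycles=30, seed=0):
    """Round-robin cooperative co-evolution with a trivial (1+1) search
    per subcomponent; grouping here is static for illustration only."""
    rng = random.Random(seed)
    context = [rng.uniform(-5, 5) for _ in range(dim)]
    groups = [list(range(i, min(i + group_size, dim)))
              for i in range(0, dim, group_size)]
    best = f(context)
    for _ in range(cycles):
        for group in groups:
            # Mutate only this group's variables; keep the rest fixed.
            trial = context[:]
            for i in group:
                trial[i] += rng.gauss(0, 0.5)
            if (score := f(trial)) < best:
                best, context = score, trial
    return best

print(cooperative_coevolution(sphere))
```

On a separable function like the sphere each subproblem can be solved almost independently; the hard part in practice, and a core topic of the tutorial, is discovering variable interactions so that non-separable problems can be grouped sensibly.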
Xiaodong Li received his B.Sc. degree from Xidian University, Xi'an, China, and his Ph.D. degree in information science from the University of Otago, Dunedin, New Zealand. Currently, he is a full professor at the School of Science (Computer Science and Software Engineering), RMIT University, Melbourne, Australia. His research interests include evolutionary computation, neural networks, machine learning, complex systems, multiobjective optimization, multimodal optimization, and swarm intelligence. He serves as an Associate Editor of the IEEE Transactions on Evolutionary Computation, Swarm Intelligence (Springer), and the International Journal of Swarm Intelligence Research. He is a founding member of the IEEE CIS Task Force on Swarm Intelligence, a Vice-Chair of the IEEE CIS Task Force on Multi-Modal Optimization, and a former Chair of the IEEE CIS Task Force on Large Scale Global Optimization. He was the General Chair of SEAL'08, a Program Co-Chair of AI'09, a Program Co-Chair of IEEE CEC’2012, and a General Chair of ACALCI’2017 and AI’17. He is the recipient of the 2013 ACM SIGEVO Impact Award and the 2017 IEEE CIS “IEEE Transactions on Evolutionary Computation Outstanding Paper Award”.
Tutorial 3
Title: Genetic Programming: Recent Developments and Applications
Presenter: Prof. Mengjie Zhang
Contact Information: School of Computer Science, Victoria University of Wellington, New Zealand, email@example.com
One of the central challenges of computer science is to get a computer to do what needs to be done without telling it how to do it. Genetic programming (GP) addresses this challenge by providing a method for automatically creating a working computer program from a high-level statement of a specific task. GP achieves this goal by genetically breeding a population of computer programs using the principles of Darwinian natural selection and biologically inspired operations. This tutorial will start with an overview of GP principles, including representation, operators, search mechanisms, and the evolutionary process. It will then discuss the most popular applications of GP, with a focus on the evolved "models" and their "generalisation", in symbolic regression and mathematical modelling, classification and clustering, and feature selection and construction. The tutorial will also present some interesting demonstrations and "deep [learning] program structures" in image recognition. If time allows, GP as a hyper-heuristic technique for dynamic job shop scheduling will also be discussed.
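The GP ingredients listed above (representation, operators, search mechanism, evolutionary loop) can be sketched in a few lines of Python for a tiny symbolic-regression task. Everything here is a deliberately minimal, hypothetical setup: trees as nested tuples, subtree mutation only (no crossover), truncation selection, and a toy target of x² + x; real GP systems are considerably richer.

```python
import random, operator

# Function and terminal sets for a tiny symbolic-regression GP.
OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}
TERMINALS = ['x', 1.0]

def random_tree(rng, depth=3):
    if depth == 0 or rng.random() < 0.3:
        return rng.choice(TERMINALS)
    op = rng.choice(list(OPS))
    return (op, random_tree(rng, depth - 1), random_tree(rng, depth - 1))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree, cases):
    # Sum of absolute errors over the training cases (lower is better).
    return sum(abs(evaluate(tree, x) - y) for x, y in cases)

def mutate(tree, rng):
    # Subtree mutation: replace a random node with a fresh random subtree.
    if isinstance(tree, tuple) and rng.random() < 0.7:
        op, left, right = tree
        if rng.random() < 0.5:
            return (op, mutate(left, rng), right)
        return (op, left, mutate(right, rng))
    return random_tree(rng, depth=2)

def gp(cases, pop_size=60, generations=40, seed=1):
    rng = random.Random(seed)
    pop = [random_tree(rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: fitness(t, cases))
        # Truncation selection: keep the best half, refill with mutants.
        survivors = pop[:pop_size // 2]
        pop = survivors + [mutate(rng.choice(survivors), rng) for _ in survivors]
    return min(pop, key=lambda t: fitness(t, cases))

cases = [(x, x * x + x) for x in range(-5, 6)]  # target: x^2 + x
best = gp(cases)
print(best, fitness(best, cases))
```

Because the best individual is always kept, the best-of-run error never increases from generation to generation; the evolved tree is the kind of "model" whose readability and generalisation the tutorial examines.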
Mengjie Zhang is currently Professor of Computer Science at Victoria University of Wellington, where he heads the interdisciplinary Evolutionary Computation Research Group with 10 staff members and over 20 PhD students. He is a member of the University Academic Board, a member of the University Postgraduate Scholarships Committee, a member of the Faculty of Graduate Research Board at the University, Associate Dean (Research and Innovation) for the Faculty of Engineering, and Chair of the Research Committee for the School of Engineering and Computer Science. His research focuses mainly on evolutionary computation, particularly genetic programming, particle swarm optimisation, and learning classifier systems, with application areas including computer vision and image processing, multi-objective optimisation, feature selection and dimension reduction for high-dimensional classification, transfer learning, classification with missing data, and scheduling and combinatorial optimisation. Prof Zhang has published over 400 research papers in fully refereed international journals and conferences in these areas. He has supervised over 100 thesis and project students, including over 30 PhD students.
He has been serving as an associate editor or editorial board member for ten international journals, including IEEE Transactions on Emerging Topics in Computational Intelligence, Genetic Programming and Evolvable Machines (Springer), Applied Soft Computing, and Engineering Applications of Artificial Intelligence, and as a reviewer for over 30 international journals. He has been a major chair for over ten international conferences, including IEEE CEC, GECCO, EvoStar, and SEAL. He has also served as a steering committee member and a program committee member for over 80 international conferences, including all major conferences in evolutionary computation. Since 2007, he has been listed as one of the top ten genetic programming researchers in the world by the GP bibliography.
Prof Zhang is a senior member of IEEE and a member of ACM. He is currently chairing the IEEE CIS Intelligent Systems and Applications Technical Committee. He is the immediate Past Chair for the Emergent Technologies Technical Committee and the IEEE CIS Evolutionary Computation Technical Committee, and a member of the IEEE CIS Award Committee. He is also a vice-chair of the IEEE CIS Task Force on Evolutionary Feature Selection and Construction, a vice-chair of the Task Force on Evolutionary Computer Vision and Image Processing, and the founding chair of the IEEE Computational Intelligence Chapter in New Zealand.
Tutorial 4
Title: Evolutionary Computation and Complex Networks
Presenter: Prof. Jing Liu
Contact Information: Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education of China, Xidian University, China, firstname.lastname@example.org
Both evolutionary computation and complex networks have received considerable attention in recent years, and research combining the two fields has attracted growing interest. The majority of studies attempt to use complex networks, such as small-world and scale-free networks, as the underlying population structure in evolutionary algorithms. Complex networks have also been used to analyze fitness landscapes and to design predictive problem-difficulty measures. Conversely, EAs have been used to solve optimization problems arising in the field of complex networks, such as community detection, network robustness optimization, and network reconstruction. Since both evolutionary computation and complex networks cover a wide range of research fields, instead of treating each field in isolation, this tutorial focuses on the research that combines them; that is, it introduces the interplay between evolutionary computation and complex networks, and the advantages of combining the two fields.
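To make the community-detection direction concrete, here is a toy (1+1) evolutionary algorithm in Python that assigns each node a community label and hill-climbs on network modularity Q. The six-node graph (two triangles joined by one bridge edge), the mutation scheme, and all parameters are hypothetical illustrations, not taken from any published method.

```python
import random

# Two triangles {0,1,2} and {3,4,5} connected by the bridge edge (2, 3).
EDGES = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]

def modularity(labels, edges):
    """Q = sum over communities of (within-edge fraction - expected fraction)."""
    m = len(edges)
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    q = 0.0
    for u, v in edges:  # fraction of edges inside the same community
        if labels[u] == labels[v]:
            q += 1.0 / m
    for c in set(labels):  # subtract the expected within-community fraction
        d = sum(deg for node, deg in degree.items() if labels[node] == c)
        q -= (d / (2.0 * m)) ** 2
    return q

def evolve(n_nodes, edges, steps=500, seed=3):
    rng = random.Random(seed)
    labels = [rng.randrange(n_nodes) for _ in range(n_nodes)]
    best = modularity(labels, edges)
    for _ in range(steps):
        child = labels[:]
        child[rng.randrange(n_nodes)] = rng.randrange(n_nodes)  # mutate one label
        if (q := modularity(child, edges)) >= best:
            labels, best = child, q
    return labels, best

labels, q = evolve(6, EDGES)
print(labels, q)
```

The acceptance rule keeps the modularity of the retained individual from ever decreasing; on this graph the ideal partition puts each triangle in its own community. Real EA-based detectors use populations, crossover on partitions, and smarter representations, which the tutorial surveys.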
Prof. Jing Liu received the B.S. degree in computer science and technology from Xidian University, Xi’an, China, in 2000, and the Ph.D. degree in circuits and systems from the Institute of Intelligent Information Processing of Xidian University in 2004. In 2005, she joined Xidian University as a lecturer, and was promoted to full professor in 2009. From Apr. 2007 to Apr. 2008, she worked at The University of Queensland, Australia, as a postdoctoral research fellow, and from Jul. 2009 to Jul. 2011, she worked at the University of New South Wales at the Australian Defence Force Academy as a research associate. Her research interests include evolutionary computation, complex networks, multiagent systems, and data mining. She has co-authored more than 100 research papers published in international journals and conferences. She is currently an associate editor of the IEEE Transactions on Evolutionary Computation, and the Chair of the Emerging Technologies Technical Committee of the IEEE CIS.
Tutorial 5
Title: Algorithm Selection – Online + Offline Techniques
Presenter: Prof. Mustafa MISIR
Contact Information: Nanjing University of Aeronautics and Astronautics, College of Computer Science and Technology, 29 Jiangjun Avenue, 211106 Nanjing, Jiangsu, China, email@example.com
Experimental studies show that there is no single algorithm that performs best on all possible benchmarks, as also revealed by the No Free Lunch theorem. While an algorithm works well on one group of problem instances, it performs poorly on others. One way to overcome this issue is to combine the strengths of multiple algorithms, using a system that can pick, hopefully, the best algorithm(s) for each target problem instance. Algorithm (portfolio) selection (AS) is the field that aims to perform this task automatically. AS has usually been applied through performance prediction models that estimate the performance of an existing algorithm on an unseen problem instance. In this traditional form, selection is performed in an Offline manner: the algorithms to be applied are chosen before they are used. In addition to traditional AS, Online AS has also been studied, in which selection takes place while a problem instance is being solved. Online AS has been referred to under different names, mainly Selection Hyper-heuristics (SHHs) and Adaptive Operator Selection (AOS). SHHs have been studied to deliver problem-independent solvers applicable to any combinatorial search problem. AOS mainly refers to choosing operators within Evolutionary Algorithms, although it has additionally been used within SHHs. This tutorial will cover 1) Offline AS; 2) Online AS; and 3) Online + Offline AS. Each form of AS will be discussed with a formal description, an overview of the existing approaches, and a few case studies.
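The offline flavour described above can be illustrated with a minimal Python sketch: a 1-nearest-neighbour performance model that, given training instances with known per-algorithm runtimes, picks for a new instance the algorithm that was best on the most similar seen instance. The two-solver portfolio, the (size, density) instance features, and all runtime numbers are invented for illustration only.

```python
# Offline algorithm selection via a 1-nearest-neighbour performance model.

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def select_algorithm(features, training_data):
    """training_data: list of (feature_vector, {algorithm: runtime}) pairs.
    Returns the algorithm with the lowest runtime on the nearest instance."""
    _, runtimes = min(training_data, key=lambda row: euclidean(features, row[0]))
    return min(runtimes, key=runtimes.get)

# Hypothetical portfolio: two solvers, instances described by (size, density).
training_data = [
    ((10, 0.1), {'solver_a': 1.2, 'solver_b': 9.0}),  # sparse: solver_a wins
    ((10, 0.9), {'solver_a': 8.5, 'solver_b': 0.9}),  # dense: solver_b wins
    ((50, 0.2), {'solver_a': 3.1, 'solver_b': 7.7}),
]

print(select_algorithm((12, 0.15), training_data))  # nearest neighbour is sparse
```

Practical offline AS systems replace the nearest-neighbour lookup with regression or ranking models over rich instance features; Online AS would instead update such choices during the search, e.g. via operator credit assignment, as covered in the tutorial.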
Mustafa MISIR received his BSc and MSc degrees in Computer Engineering from Yeditepe University, Turkey, in 2007 and 2008, respectively. He earned his PhD degree in Computer Science from KU Leuven, Belgium, in 2012, while working in the Combinatorial Optimisation and Decision Support (CODeS) Research Group. Right after graduation, he joined the Machine Learning and Intelligent Optimisation (TAO) Team at INRIA Saclay - Universite Paris Sud XI, France, as a postdoctoral researcher (ERCIM Marie Curie fellow). Afterwards, he took another postdoctoral position in the Living Analytics Research Centre (LARC), Singapore, a joint initiative between Singapore Management University and Carnegie Mellon University. Next, he worked as a postdoctoral researcher in the Machine Learning for Automated Algorithm Design (ML4AAD) Research Group, Department of Computer Science, at the University of Freiburg, Germany. Since April 2016, he has been working as an Associate Professor in the College of Computer Science and Technology at the Nanjing University of Aeronautics and Astronautics. His main research interests include Automated Algorithm Design (Machine Learning + Algorithm Design), Data Science/Analytics, and Operations Research. He is the recipient of several prestigious academic awards, including winning the 1st Cross-domain Heuristic Search Challenge (CHeSC), and has published over 30 papers in international conferences and journals.
Call For Tutorials
The 11th International Conference on Simulated Evolution and Learning (SEAL 2017) http://www.seal2017.com/ takes place November 10-13, 2017 in Shenzhen, China.
Tutorials at SEAL 2017 will be presented by domain experts to cover current topics relevant to evolutionary computation and learning. The tutorials should provide clear and focused content covering new and emerging topics within this scope.
We encourage the inclusion of interactive activities and demos.
Tutorials will be free to all SEAL 2017 attendees.
Each tutorial proposal should include:
- A half-page extended abstract (in plain text) that includes: the title of the tutorial, the name and affiliation of the instructor(s), and a description of the tutorial scope and content.
- Short bio of the instructor(s) (about half page in plain text).
- Highly encouraged: A description of any interactive activity or demo planned within the tutorial presentation.
- Previous tutorial experience of the speaker(s), if any.
Tutorial proposals will be reviewed by the SEAL 2017 tutorial chairs, based on the SEAL attendees' likely interest in them, the breadth and depth of the topic(s), and the expertise and credentials of the instructor(s).
Information and Submissions
Tutorial proposals should be submitted in PDF format by email to the tutorial co-chairs listed below. The deadline for submissions is April 30, 2017.
If you have any questions, please contact the tutorial co-chairs of SEAL 2017:
Frank Neumann, Australia, firstname.lastname@example.org
Han Huang, China, email@example.com