ESEC/FSE 2011 will feature Technical Briefings, an all-day venue for communicating the state of the art in topics related to software engineering, thus providing an exchange of ideas as well as an introduction to the main conference itself.
The Technical Briefings track features the following internationally renowned presenters:
Andrian Marcus, Wayne State University, USA
Management of Unstructured Information during Software Evolution: Applications of Text Retrieval
Lin Tan, University of Waterloo, Canada
Tao Xie, North Carolina State University, USA
Text Analytics for Software Engineering: Applications of Natural Language Processing
Victor Pankratius, Karlsruhe Institute of Technology, Germany
Multicore Software Engineering
Valérie Issarny, INRIA, France
Model-based Emergent Middleware to Meet the Challenges of Interoperability in Pervasive Networks
Daniel German, University of Victoria, Canada
Massimiliano Di Penta, University of Sannio, Italy
Source code licensing as an essential aspect of modern software development
Sarunas Marciuska, Free University of Bozen-Bolzano, Italy
Salvatore Alessandro Sarcia, University of Rome, Italy
Alberto Sillitti, Free University of Bozen-Bolzano, Italy
Giancarlo Succi, Free University of Bozen-Bolzano, Italy
Applying Domain Analysis Methods in Agile Development
Mauro Pezze, University of Lugano, Switzerland
Self-healing software systems
Mark Harman, UCL, UK
Search Based Software Engineering: Automating Software Engineering
(This talk is free for all ESEC/FSE and SSBSE participants. Supported by SSBSE.) Download pdf (37MB)
Technical Briefings Track Chairs
- Benoit Baudry, IRISA (France)
- Jane Cleland-Huang, DePaul University (USA)
- Robert DeLine, Microsoft Research (USA)
- Ahmed Hassan, Queen’s University (Canada)
- Michele Lanza, University of Lugano (Switzerland)
- Bashar Nuseibeh, Lero (Ireland) & Open University (UK)
- Corina Pasareanu, CMU/NASA Ames (USA)
- Alexander Pretschner, Karlsruhe Institute of Technology (Germany)
During software evolution many related artifacts are created or modified. Some of these are composed of structured data (e.g., analysis data), some contain semi-structured information (e.g., source code), and many include unstructured information (e.g., natural language text). Software artifacts written in natural language (e.g., requirements, design documents, user manuals, scenarios, bug reports, developers’ messages, etc.), together with the comments and identifiers in the source code, encode to a large degree the domain of the software and the developers’ knowledge about the system, and capture design decisions, developer information, etc. In many software projects the amount of unstructured information exceeds the size of the source code by an order of magnitude. Retrieving and analyzing the textual information present in software is extremely important in supporting program comprehension and a variety of software evolution tasks, including: refactoring, feature location in software, traceability link recovery between software artifacts, change impact analysis, cohesion and coupling measurement, defect prediction, bug triage, bug assignment, software search and reuse, etc. The technical briefing will introduce the main techniques used in the retrieval and analysis of unstructured information from software (i.e., techniques based on text retrieval) as well as their usage to support the above-mentioned software engineering tasks.
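As a concrete illustration of the text-retrieval model underlying these techniques, the sketch below ranks code units by TF-IDF cosine similarity to a query, the basic mechanism behind feature location. The corpus, query, and function names are invented for illustration, not taken from the briefing.

```python
import math
from collections import Counter

def build_vectors(docs):
    """TF-IDF vectors for a corpus given as lists of tokens (identifiers, comment words)."""
    n = len(docs)
    df = Counter()                                  # document frequency of each term
    for doc in docs:
        df.update(set(doc))
    idf = {t: math.log(n / df[t]) for t in df}
    vectors = [{t: tf * idf[t] for t, tf in Counter(doc).items()} for doc in docs]
    return vectors, idf

def cosine(a, b):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def feature_location(query_tokens, docs):
    """Rank code units by textual similarity to the query, most relevant first."""
    vectors, idf = build_vectors(docs)
    query = {t: idf[t] for t in query_tokens if t in idf}
    return sorted(range(len(docs)), key=lambda i: cosine(query, vectors[i]), reverse=True)

# Hypothetical "code units" represented by their identifier and comment tokens.
docs = [
    ["parse", "xml", "config", "file"],
    ["check", "user", "password", "login", "authenticate"],
    ["render", "user", "interface", "button"],
]
ranking = feature_location(["login", "password"], docs)   # unit 1 should rank first
```

Real tools use richer models (e.g., latent semantic indexing), but the ranking idea is the same.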
The ESEC/FSE program includes a complementary technical briefing on “Text Analytics for Software Engineering: Applications of Natural Language Processing”, by Lin Tan and Tao Xie. We recommend attending both of them.
Dr. Andrian Marcus is Associate Professor in the Computer Science Department at Wayne State University, Detroit, USA. His current research interests are focused on software evolution and program comprehension. He is best known for his decade-long work on using information retrieval and text mining techniques for software analysis to support comprehension tasks during software evolution, such as concept location, impact analysis, error prediction, traceability link recovery, etc. Marcus received several Best Paper Awards and his research is funded by NSF, NIH, IBM, etc. He served as Program Co-Chair for the 26th IEEE International Conference on Software Maintenance (ICSM 2010) and for the 17th IEEE International Conference on Program Comprehension (ICPC 2009). He is the recipient of the NSF CAREER award in 2009 and a Fulbright Fellowship in 1997. More information on Andrian Marcus is available at http://www.cs.wayne.edu/~amarcus/.
Software engineering data contains a rich amount of natural language text: requirements documents, code comments, identifier names, commit logs, release notes, mailing list discussions, etc. This natural language text is essential in the software engineering process, helping software engineers and researchers better understand and maintain software. Given the overwhelming amount of available natural language text, there is high demand for text analytics, including natural language processing (NLP) and text mining techniques, to automatically analyze the text in order to improve software quality and productivity. The history of applying NLP and text mining techniques to software engineering data dates back about a decade. In the past five years, text analytics for software engineering has become an emerging topic in the software engineering area. Various recent studies have shown that automated analysis of natural language text can improve software reliability, programming productivity, software maintenance, and software quality in general.
This technical briefing (1) provides a quick overview of major text mining techniques as well as NLP techniques (e.g., Part-Of-Speech tagging, chunking, semantic labeling, semantic pattern matching, and negative-expression identification), machine learning techniques (e.g., clustering and decision-tree-based classification), and data mining techniques (e.g., frequent itemset mining); (2) introduces popular text analysis tools (e.g., WordNet and Weka); (3) summarizes major research work done in the area of text analytics for software engineering; and (4) outlines future research directions and highlights research challenges. More information on the technical briefing can be found at https://sites.google.com/site/text4se/.
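One of the data mining techniques named above, frequent itemset mining, can be sketched in a few lines. The naive enumeration below finds every itemset that appears in at least `min_support` transactions; the transactions, taken here to be sets of API calls observed in client code, and the function name are illustrative assumptions, not material from the briefing.

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support, max_size=3):
    """Enumerate itemsets contained in at least `min_support` transactions.

    A naive Apriori-style search: try every candidate combination of items
    (up to max_size) and count how many transactions contain it.
    """
    items = sorted({i for t in transactions for i in t})
    frequent = {}
    for k in range(1, max_size + 1):
        for candidate in combinations(items, k):
            support = sum(1 for t in transactions if set(candidate) <= set(t))
            if support >= min_support:
                frequent[candidate] = support
    return frequent

# Hypothetical traces of API calls observed in client code.
traces = [{"open", "read", "close"}, {"open", "write", "close"},
          {"open", "close"}, {"lock", "unlock"}]
patterns = frequent_itemsets(traces, min_support=3)   # {("close",): 3, ("open",): 3, ("close", "open"): 3}
```

Mined patterns like "open is followed by close" are what this line of research uses to infer implicit programming rules and flag violations as likely bugs. Production miners prune candidates rather than enumerate them all.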
The ESEC/FSE program includes a complementary technical briefing on “Management of Unstructured Information during Software Evolution: Applications of Text Retrieval”, by Andrian Marcus. We recommend attending both of them.
Lin Tan has been an Assistant Professor in the Department of Electrical and Computer Engineering at the University of Waterloo, Canada since 2009, after receiving her Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign. Her research interests include software reliability and security, with a focus on applying natural language processing and machine learning techniques to improve software system reliability. Her recent work (ICSE’11, MSR’11, ICSE’09, SOSP’07) has been on analyzing natural language text, such as code comments and commit logs, to improve software reliability and quality. URL: https://ece.uwaterloo.ca/~lintan/
Tao Xie is an Associate Professor in the Department of Computer Science at North Carolina State University, USA, where he has been since 2005, after receiving his Ph.D. in Computer Science from the University of Washington at Seattle. His research interests are in automated software testing and mining software engineering data, including recent work (ASE’10, MSR’10, ASE’09, ICSE’08) on applying NLP and text mining to software engineering data. He has co-presented a number of tutorials on mining software engineering data and software testing at past ICSE conferences. URL: http://www.csc.ncsu.edu/faculty/xie/
Due to stagnating clock rates, future increases in processor performance will have to come from parallelism. Inexpensive multicore processors with several cores on a chip are standard in PCs, laptops, servers, and embedded devices. Software engineers are now asked to write parallel applications of all sorts and need to quickly grasp the relevant aspects of general-purpose parallel programming. This technical briefing outlines state-of-the-art concepts and techniques in multicore software engineering, such as the basics of parallel programming, programming languages, and testing and debugging techniques for multicore software. In addition, it discusses experience reports on the parallelization of real-world applications.
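As a small taste of the testing-and-debugging topic, the sketch below reproduces the classic lost-update race on a shared counter and shows how a lock removes it. The example and its names are illustrative, not drawn from the briefing.

```python
import threading

def parallel_count(n_threads, increments, use_lock=True):
    """Increment a shared counter from several threads.

    Without the lock, `counter["value"] += 1` is a read-modify-write
    sequence, so concurrent threads may lose updates (a data race).
    With the lock, the final count is always n_threads * increments.
    """
    counter = {"value": 0}
    lock = threading.Lock()

    def worker():
        for _ in range(increments):
            if use_lock:
                with lock:
                    counter["value"] += 1
            else:
                counter["value"] += 1     # unsynchronized: updates may be lost

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter["value"]

total = parallel_count(4, 10_000)         # 40000 with the lock held
```

With `use_lock=False` the result can fall short of 40000 depending on thread scheduling, which is exactly why races are hard to reproduce and why multicore-specific testing techniques matter.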
Dr. Pankratius heads the Multicore Software Engineering investigator group at the Karlsruhe Institute of Technology, Germany. He also serves as the elected chairman of the Software Engineering for Parallel Systems (SEPARS) international working group. Dr. Pankratius' current research concentrates on how to make parallel programming easier and covers a range of research topics including auto-tuning, language design, debugging, and empirical studies. Contact him at http://www.victorpankratius.com
Distributed systems are becoming increasingly complex. We are moving from a world where we provide domain-specific middleware platforms (e.g., for Enterprise systems, Grid, MANET, ubiquitous environments) to one where these technology-dependent islands are themselves dynamically composed and connected together to create richer, dynamically deployed systems. Existing middleware approaches and paradigms are simply unable to cope with the demands of such heterogeneous and dynamic environments. Indeed, as we move towards a world of systems of systems, we can say that middleware is in crisis, unable to deliver on its most central promise: interoperability, i.e., the ability of one or more systems to connect, understand, and exchange data with one another. This requires a fundamental re-think of the architectural principles and techniques underpinning middleware platforms. We need to turn from relatively static solutions based on promoting a particular interoperability solution or bridging strategy to much more dynamic solutions where we generate the appropriate machinery for interoperability on the fly. This promotes an approach that may be termed emergent middleware, designed to solve interoperability at runtime according to what is discovered and needed in a given context.
This briefing surveys the state of the art in interoperability, and in particular the extensive work on protocol mediation and middleware interoperability. It will then concentrate on the key role of Models@runtime in meeting the challenges of interoperability in the ever-changing pervasive networking environment, reporting on the results of the CONNECT project, a collaborative initiative bringing together experts in middleware and software engineering, semantic modeling of services, and formal foundations of distributed systems, which together provide the key building blocks for enabling emergent middleware.
Dr. Valérie Issarny is "Directrice de recherche" at INRIA. Since 2002, she has headed the ARLES INRIA research project-team at INRIA-Rocquencourt. Her research interests relate to distributed systems, software engineering, pervasive computing/ambient intelligence systems, and middleware. She has (co)authored over 100 technical papers in the area of distributed systems and software engineering, and has been involved in a number of European and industrial projects. She is associate editor of ACM CSUR and of the Journal of Internet Services and Applications, and is a member of the Steering Committees of the ESEC/FSE and Middleware conferences. She has served and continues to serve as a PC member, including as PC chair, of leading international events in the areas of distributed systems, middleware, software engineering, and trust management. She is currently the coordinator of the FP7 FET CONNECT project, which revisits the middleware paradigm to sustain interoperability in the ever-changing pervasive networking environment. To know more about Valerie's research, please visit https://www-roc.inria.fr/arles/members/issarny.html
Legislation is constantly affecting the way in which software developers can create software systems and deliver them to their users. This raises the need for methods and tools that support developers in creating and redistributing software systems while properly coping with legal constraints.
We conjecture that legal constraints are another dimension that software analysts, architects, and developers have to consider, making them an important area of future research in software engineering.
This technical briefing illustrates the importance of licensing analysis in software analysis, presenting existing techniques to check for licensing inconsistencies in software systems and to recommend to developers appropriate architectural connectors that comply with licensing constraints. The briefing also outlines relevant open research challenges in this area.
Daniel M. German is associate professor of computer science at the University of Victoria, Canada. He is a member of the Legal Network of the Free Software Foundation Europe, and recently received a Canadian NSERC DAS Award for his work on licensing. In 2010 he received the University of Victoria Faculty of Engineering Teaching Award. He has authored over 80 journal, conference and workshop papers. Further info at turingmachine.org
Massimiliano Di Penta is assistant professor at the University of Sannio, Italy. His research interests include software maintenance and evolution, reverse engineering, empirical software engineering, search-based software engineering, and service-centric software engineering. He is the author of over 140 papers published in journals, conferences, and workshops. He has served or serves on the organizing, steering, and program committees of several software engineering conferences. Further info at http://www.rcost.unisannio.it/mdipenta
We designed and developed an application for the Italian Army to manage, define, monitor, and execute tasks and their respective budgets. The key challenge was to integrate this application with their existing IT infrastructure. Additionally, the system had to be extensible and maintainable to allow for feature modification in response to continually evolving requirements. We performed domain analysis to achieve this objective. We gathered the input information for the analysis from domain experts and from a high-level abstract requirements document. Since we used an agile development process, we frequently changed the architecture of the system according to our evolving requirements. This required us to conduct the domain analysis iteratively and to constantly refine and improve it.
Domain analysis was carried out using traditional domain analysis methods such as Sherlock, FODA, and DARE. However, existing domain analysis methods were designed to support traditional software development processes in which requirements are assumed to be present and complete at the beginning, meaning the analysis is performed and completed before implementation starts. Domain analysis methods have been used to identify reusable parts of a system, including requirements, architecture, and test plans. The identification of reusable parts is considered to benefit the overall quality of the system; reusability also has a positive effect on cost.
In this session, we present the problems we faced and the difficulties that might arise when applying existing domain analysis practices in an agile software development environment. The complexity of iterative domain analysis lies in the discovery and integration of new requirements. For example, a new requirement might be outside the domain scope of the system or might require modifying the architecture of the system. We present techniques that we developed and applied to address these issues.
Sarunas Marciuska is a full-time Ph.D. student at the Center for Applied Software Engineering in the Free University of Bozen-Bolzano, Italy. He received his Master of Computer Science at the Free University of Bozen-Bolzano in 2010.
Salvatore Alessandro Sarcia holds a Ph.D. in Informatics and Automation Engineering from the University of Rome “Tor Vergata” (Italy). From 2006 to 2008 he was a visiting researcher in the Department of Computer Science of the University of Maryland (USA). He is a researcher in the Italian Army General Staff in Rome (Italy).
Alberto Sillitti, Ph.D., is an Associate Professor at the Faculty of Computer Science of the Free University of Bozen-Bolzano, Italy. He holds a Ph.D. in Electrical and Computer Engineering received from the University of Genoa (Italy) in 2005. He is the author of more than 80 papers published in international conferences and journals.
Giancarlo Succi, Ph.D. is a Professor and Dean of the Faculty of Computer Science at the Free University of Bozen-Bolzano, Italy, where he directs the Center for Applied Software Engineering. He has been Professor with Tenure at the University of Alberta, Edmonton, Alberta, Associate Professor at the University of Calgary, Alberta, and Assistant Professor at the University of Trento, Italy. Giancarlo Succi is a Fulbright Scholar.
The complexity and the dynamic evolution of software systems make classic testing and analysis approaches extremely difficult, rarely cost effective, and sometimes inadequate to guarantee the required quality. The impact of software systems on the everyday life of individuals and companies makes classic stop-test-redeploy maintenance cycles inefficient and often inadequate. Self-healing software systems address these problems by automatically detecting failures and diagnosing and fixing faults at runtime, thus complementing classic deploy-time testing activities and reducing the need for expensive stop-test-redeploy maintenance activities. Self-healing software systems exploit and augment approaches for automatically detecting failures, dynamically analyzing the behavior of software systems, automatically locating faults, and generating possible patches that eliminate or, more likely, alleviate the effects of faults.
This briefing introduces the problems and the self-healing approaches, positions self-healing within the larger area of self-adaptive, self-managed, and autonomic software systems, and illustrates the state of the art in the field, indicating the results achieved so far, the techniques exploited in the different approaches, the many open problems, and the techniques that may become key solutions in the near future.
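The detect-heal-retry cycle at the heart of self-healing can be sketched in a minimal form. The names, the simulated failure, and the healing action below are illustrative assumptions, not a specific approach from the briefing.

```python
def run_with_healing(operation, heal, max_attempts=3):
    """Minimal self-healing loop: detect a failure, apply a healing action, retry."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except Exception as failure:
            if attempt == max_attempts:
                raise                     # healing exhausted: surface the failure
            heal(failure)                 # e.g., reset a connection or roll back state

# Hypothetical flaky component whose internal state the healer can repair.
state = {"broken": True, "heals": 0}

def fetch():
    if state["broken"]:
        raise RuntimeError("connection lost")
    return "data"

def reset(_failure):
    state["heals"] += 1
    state["broken"] = False               # the healing action repairs the state

result = run_with_healing(fetch, reset)   # fails once, heals, then succeeds
```

Real self-healing systems replace the hand-written `heal` step with automated fault localization and patch generation; the loop structure, however, is the same.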
Dr. Mauro Pezze is a Professor of Computer Science at the University of Milan – Bicocca and at the University of Lugano. Professor Pezze’s general research interests are in the areas of software testing and analysis, autonomic computing, self-healing software systems, service-based applications, and service level agreement protection. He is interested in developing new techniques and tools to analyze complex software systems. His current focus is on applying these techniques and tools to improve the quality of software systems. Prior to joining the University of Milan – Bicocca and the University of Lugano as full professor, Mauro Pezze was a teaching assistant and associate professor at Politecnico di Milano, and a visiting researcher at the University of Edinburgh and the University of California, Irvine. Dr. Pezze is associate editor of ACM Transactions on Software Engineering and Methodology and a member of the Steering Committees of the International Conference on Software Engineering and the ACM International Symposium on Software Testing and Analysis. He has been the executive chair of the IEEE Technical Committee on Complex Computer Systems and a member of the Steering Committee of the IEEE International Conference on Complexity in Computing. He is the program co-chair of the International Conference on Software Engineering in 2012, and was Program Chair of the ACM International Symposium on Software Testing and Analysis in 2006. Professor Pezze is co-author of the book Software Testing and Analysis: Process, Principles and Techniques, published by John Wiley in 2008, and he is the author or co-author of over 80 refereed journal and conference papers.
Search Based Software Engineering (SBSE) is an approach to software engineering in which software engineering problems are reformulated as search problems so that search based optimization algorithms can be used to automatically identify solutions. Search based optimization and software engineering are a very natural fit: search based optimization techniques can cater for multiple, possibly competing objectives and/or constraints, and for applications where the potential solution space is large and complex. This makes search based optimization well-suited to software engineering problems, while the virtual nature of software makes it an ideal engineering material for search based optimization. This talk will provide an overview of SBSE, presenting results from applications across the spectrum of software engineering activities, challenges, and problems. The talk will also show how the search process can yield insight, providing decision support to software engineers.
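As an illustrative sketch of reformulating a software engineering problem as a search problem, the code below uses hill climbing with random restarts to find a test input that covers the branch `if x == 4273`, with branch distance as the fitness function; the target value and names are invented for the example, not taken from the talk.

```python
import random

def branch_distance(x, target=4273):
    """Fitness for covering the branch `if x == target`: 0 means the branch is taken."""
    return abs(x - target)

def hill_climb(fitness, lo=0, hi=10_000, seed=1):
    """Hill climbing with random restarts over integer test inputs.

    Repeatedly move to the better neighbour; when stuck on a local
    optimum, restart from a random point. Stops when fitness reaches 0.
    """
    rng = random.Random(seed)
    x = rng.randint(lo, hi)
    while fitness(x) > 0:
        best = min((x - 1, x + 1), key=fitness)
        if fitness(best) < fitness(x):
            x = best                      # move downhill toward the branch
        else:
            x = rng.randint(lo, hi)       # local optimum: random restart
    return x

covering_input = hill_climb(branch_distance)   # an input that takes the branch
```

Branch distance is smooth for this predicate, so the climber walks straight to the solution; real SBSE test generators combine such distances with approach level and use genetic algorithms for harder landscapes.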
Mark Harman is professor of Software Engineering in the Department of Computer Science at University College London, where he is the head of Software Systems Engineering and director of the CREST centre. He is widely known for work on source code analysis and testing, and he was instrumental in founding the field of Search Based Software Engineering (SBSE). He has given 18 invited keynote talks on SBSE, source code analysis, and testing, and is the author of over 170 refereed publications on these topics. He serves on the editorial boards of 7 international journals and has served or will serve on the programme committees of 110 conferences (including ISSTA, ICST, ICSE and FSE).