ICSE 2025 Report

International Conference on Software Engineering
April 27 - May 2, 2025, Ottawa
Dennis Mancl, dmancl@acm.org

ICSE is the biggest and best international conference for software engineering. ICSE 2025 was in Ottawa, Canada, and it was an excellent show as usual. There were many interesting talks and workshops, and there was a lot to learn from academics and practitioners from around the world.

It was an easy drive to Ottawa (about a 7-hour drive from home) to attend a week of talks at ICSE 2025.

This was my fifth visit to the ICSE conference (2016, 2017, 2021 [virtual], and 2022). As in most of the years I attended, I worked with Steve Fraser to put together a short workshop paper. This year, I presented our paper at the Designing Workshop, a workshop focused on exploring the activities of software design and the challenges of teaching about design.

Also, Steve Fraser and I organized two panel sessions in the main conference. The Thursday panel was “looking forward” to the evolution of the software engineering field in the AI era and beyond. The Friday panel explored how ICSE research may have impact in industry and society as a whole.

I have always been interested in watching the directions of software engineering research and practice, so it was really useful to sit in on a wide range of technical paper presentations and workshop sessions. ICSE 2025 also had some inspiring keynote talks.

Conference logistics

Overall attendance was very strong at ICSE this year. Between the main conference, the co-located events, and the workshops and tutorials, over 1900 people registered and attended sessions. There were over 1400 presentations at ICSE, but about 200 presentations were “virtual,” because some speakers had problems obtaining visas to enter Canada or arranging transportation to the conference. Recent political events in the United States have increased travel risk for international scholars. Foreign students based in the US with a student visa needed to think twice before traveling to Canada because there was always a small chance they might be denied reentry into the US.

The conference organizers decided in early April to set up some hybrid meeting technology. Six of the larger meeting rooms in the Ottawa Rogers Centre were equipped to support Zoom collaboration, and registered attendees were given the Zoom meeting codes to allow them to watch keynote talks and a selection of conference paper presentations.

Almost all of the paper sessions had six presentations in a 90-minute session. This worked smoothly most of the time, but a few speakers struggled to explain a complicated research project in 15 minutes.

Keynote topics

As expected, the biggest research topic in the conference was “AI for Software Engineering.” Many presenters were optimistic that AI tools based on Large Language Models (LLMs) could be useful in supplementing human effort on software architecture, design, coding, testing, and documentation. But many are skeptical of trusting a tool-generated design or code base.

David Parnas, the keynote speaker on Wednesday, elaborated on this theme with his talk “Regulation of AI and Other Untrustworthy Software.” Parnas criticized the current LLM-based systems as Imitation Intelligence. He lamented that AI systems do not actually include any logical reasoning: the main qualification for a system to be called AI today is that it use a Neural Network.

Parnas gave his general assessment of today’s AI applications. He claims that they tend to use poor algorithms and they are relatively inefficient. People don’t know when the systems are applicable to their problem, they can be untrustworthy, and they have surprising behavior. (A good software system should never be surprising.) Today’s AI and LLM systems are dangerous because of their statistical approach. Developers can avoid thinking about the real underlying problem.

Parnas had one main message: We should regulate all “critical software,” whether or not it is called AI. He believes that we must create regulations that ensure that the developers who create critical software have adequate knowledge and skills.

Parnas quipped: “If I wanted to be a hacker, I would use AI. It’s a never-ending grant. I can show progress and get more money.” And of course, even the people warning us about the evils of AI may be looking for grants as well, to protect us from AI.

The Thursday keynote speaker was Neha Rungta from Amazon Web Services and her talk was “Engineering correctness for a domain.” Her talk explored the options for expanding the use of Automated Reasoning in large software systems. Her approach is to develop a set of core tools that can be used to extract formal specifications of modules and interfaces from the code, rather than requiring application developers to write formal logical expressions for their own code. Neha believes that Automated Reasoning is the best way to make cloud-based systems safe and secure, and she says that formal reasoning will only “scale” when domain experts are not required to be formal languages experts.
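To make the keynote’s idea concrete, here is a toy sketch of the automated-reasoning mindset: instead of spot-checking an access-control policy with a few hand-picked tests, check a desired property over the policy’s entire (small) input space. This is purely my own illustration, not an AWS tool; production systems use SMT solvers over formally extracted specifications rather than brute-force enumeration.

```python
from itertools import product

def allow(principal, resource, visibility):
    """A hypothetical access-control policy under verification."""
    if visibility == "public":
        return True
    return principal == "owner"

PRINCIPALS = ["owner", "teammate", "anonymous"]
RESOURCES = ["bucket-a", "bucket-b"]
VISIBILITY = ["public", "private"]

# Property to verify: no one but the owner can read a private resource.
# Enumerate every combination and collect any counterexamples.
violations = [
    (p, r) for p, r, v in product(PRINCIPALS, RESOURCES, VISIBILITY)
    if v == "private" and p != "owner" and allow(p, r, v)
]
print(violations)  # [] means the property holds for every input
```

The point of the sketch is the shift in perspective: the developer writes ordinary code (`allow`), and the reasoning layer checks a property against all inputs, so the application developer never has to write formal logic by hand.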

Friday's keynote was an overview of the field of Software Sustainability, with Patricia Lago of Vrije Universiteit Amsterdam as our guide. The most important crisis today in technology is the growth of energy demand for computing. Many people feel powerless to do anything to improve the situation, so they just deny that we can reduce the energy cost of computing.

Patricia explained that if we are going to save our planet, we need to learn three things. First, we must be aware of the “dimensions” of sustainability: People, Planet, and Profit. Second, we have to analyze the “order of impact” of sustainability design decisions: both the direct impact (improvements based solely on the technology choice) and the systemic impact (the behavioral changes that are supported by technology change). Third, we have to analyze every system and every technological choice in the context of the standard Sustainable Development Goals (SDGs) set by the United Nations.

“Digital sufficiency” will become more important. Instead of over-engineering every system, we will see the development of sustainable systems that are “sufficient” instead of wasteful. Patricia is optimistic that we will do better. When our companies begin to value sustainability, that will help them focus on the standard techniques and models for reaching sustainability goals in the near term.

Panel sessions

The major highlights of the main ICSE 2025 conference program were two all-star panel sessions that discussed the future of ICSE research and the impact of ICSE research. Steve Fraser and I organized both of these sessions, and we carefully chose panelists with a lot of useful ideas on these topics.

Panel 1 was “The Future of Software Engineering Beyond the Hype of AI.” It is a difficult issue to discuss: all of the “hype” about AI is drowning out a lot of useful non-AI software engineering research. But AI has some potential for improving communication and teamwork in software development - if we can invent some new processes that mitigate some of the AI risks.

Panel 2 was “Escaped from the Lab! Does ICSE Research Make a Difference?” The panelists delved into the challenges of ICSE’s relevance in the wider world: how ICSE might attract more industry participation, whether it can foster better communication between researchers and industry experts, and whether it could drive more research work on industry-relevant problems. One important idea: Our 21st-century world faces a number of challenges, and both academics and industrial experts will need to reach out to serve humanity.

Steve and I are currently in the process of creating “panel reports” for these two panels. My preliminary notes are near the end of this document. I will add more news to this report soon.

Designing workshop

I attended the 2nd International Workshop on Designing Software, which was a two-day event in 2025. I presented a short paper on the obstacles to doing more low-level design planning and documentation today. The workshop session included two keynote talks, two 90-minute paper sessions, three activities related to design planning and teaching design, and a panel session at the end.

The Designing Workshop covers a lot of territory - how we think about the design process, the challenges of teaching design, and possible research topics related to design methods and tools. See the original call for participation for more details: https://designing2025.github.io/.

Designing workshop Sunday keynote

The Sunday keynote presenter was Rob van Ommering - “When the Design is Right.” Rob explained his point of view about architecture and design activities by telling the stories of two successful projects. Rob worked on the Koala architecture description language as an architect at Philips Research Lab back in 2000. (See https://ieeexplore.ieee.org/document/825699 for more information about Koala.) Many years later, Rob was involved in the architecture of a “clinical trial matching system” developed at MD Anderson Cancer Center in Houston and the Dana-Farber Cancer Institute in Boston.

There were many fascinating details of the design process in Rob’s account, and a few were particularly useful. Rob also shared several useful design insights - many of them came from the 2016 book “Software Design Decoded” by Marian Petre and Andre van der Hoek.

Designing workshop Monday keynote

Monday’s keynote talk by Rick Kazman (University of Hawaii at Manoa) discussed the idea of “Design as a Creative Act,” and he tried to explain how AI might be useful as an idea generator.

Rick’s notion of how to be creative: it is “Divergent Thinking” followed by “Convergent Thinking.” In Divergent Thinking, a person or a team generates multiple, novel, varied ideas in a brainstorming process. The more ideas generated the better. Of course, ideas that are merely novel are not necessarily practical. It requires Convergent Thinking for the new ideas to make sense - to find connections between ideas.

Rick explained that there is a history of “dreaming” as a source of inspiration (and it is a divergent process). There was one study of using “open monitoring meditation” (letting ideas run through your brain) to get divergent ideas to come out. On the other hand, there is a different meditation process that might be needed later - “focused meditation” is a practice that can improve your ability to do convergent thinking.

Rick pointed out that “mindfulness” enhances creativity. It can reduce distractions and improve emotional regulation. Even 5 minutes of daily mindfulness practice can help.

Rick explained his specific design method (called Attribute Driven Design):

  1. Ask what you know about the requirements
  2. Set a goal for the iteration process
  3. Choose an element to refine
  4. Choose a design concept that satisfies the selected driver
  5. Instantiate architectural elements, allocate responsibilities, define interfaces
  6. Sketch views, record design decisions
  7. Perform analysis of the current design

Step 4 involves divergent thinking; step 5 requires convergent thinking.
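The seven steps above can be sketched as a loop over a design record. This is only my own illustrative stand-in for the ADD iteration, with a hypothetical two-entry “concept catalog”; real ADD works with architectural views and quality-attribute scenarios, not Python dictionaries.

```python
# Hypothetical "concept catalog" mapping a driver to candidate design
# concepts (the divergent step generates these candidates).
CONCEPT_CATALOG = {
    "low latency": ["caching", "load balancing"],
    "modifiability": ["layering", "plugin architecture"],
}

def add_iteration(design, drivers):
    """Run one iteration of the (sketched) ADD loop and return the design."""
    goal = drivers.pop(0)                        # steps 1-2: review drivers, set a goal
    element = design["elements"][-1]             # step 3: choose an element to refine
    candidates = CONCEPT_CATALOG.get(goal, [])   # step 4 (divergent): candidate concepts
    concept = candidates[0] if candidates else "ad hoc"   # step 5 (convergent): commit
    design["elements"].append(f"{element}/{concept}")     # step 5: instantiate the element
    design["decisions"].append((goal, concept))           # step 6: record the decision
    design["open_drivers"] = list(drivers)                # step 7: note what remains
    return design

design = {"elements": ["system"], "decisions": [], "open_drivers": []}
design = add_iteration(design, ["low latency", "modifiability"])
print(design["decisions"])  # [('low latency', 'caching')]
```

The divergent/convergent split shows up directly in the code: step 4 produces a list of candidates, and step 5 collapses that list to one committed choice.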

Rick also explained that the engineering process is usually “routine design” that reuses well-tried solutions. But innovative designs are going to use materials in different ways... and they are rarer than routine designs.

Rick speculated about the potential use of AI to help the design process. Maybe it is useful in the generation of design ideas, overcoming writer’s block, and expanding access and democratizing creativity. An AI tool might be able to explore vast data sets for innovative connections. AI might also be a tool for integration.

On the other hand, there is a dark side of AI. Homogenization - reproducing existing patterns - is a common characteristic of AI solutions. Also, designers risk becoming lazy and overdependent on AI tools. Students might start with something generated by ChatGPT and *never* go further.

AI tools might be like “having a creative team by your side,” but it is crucial to use them mindfully.

One last thought: Bad designs sometimes arise because the designers don’t appreciate the edge conditions of the problem - they might not have appropriate knowledge to build a good design. This is a topic that is beyond Rick’s divergence <-> convergence dimensions.

Rick is exploring the idea of asking LLMs to follow the steps of his ADD method to develop a design. When he tried this, the tool did a much better job than an LLM that was asked to create a design without any process guidance.

Other Designing workshop papers and activities

The Sunday morning session had two useful paper presentations:

The Monday morning session also had two useful paper presentations:

Two of the workshop activities were very interesting. Mary Shaw and Marian Petre (from Open University) organized a brainstorming session about Design Examples for training and coaching. Marian is collecting many examples on a website: https://modelproblems.org/. During the activity, we discussed the various dimensions for good model problems.

Yuanfang Cai (Drexel University) led a discussion of the appropriateness of using AI to make architectural decisions in software development. She created an online quiz for all of the workshop participants - we were given several hypothetical system descriptions and we were asked to choose from a list of potential overall architecture styles for each system. After all the responses were accumulated, we were given the responses of several different LLMs to the same questions. The overall assessment: AI does not make good architecture choices today. In the future, who knows?

Yuanfang explained her preference: AI systems could be used to create implementations from multiple software designs (with the designs created by human designers). This could help junior designers do a better job of evaluating design choices.

Other useful ICSE sessions

I attended several of the SEIP (Software Engineering in Practice) and SEIS (Software Engineering in Society) sessions. The conference grouped a number of great papers into sessions labelled “Human and Social” - and the atmosphere at these sessions was very creative. Two papers stood out.

The first was “Identifying Factors Contributing to ‘Bad Days’ for Software Developers,” presented by Jenna Butler from Microsoft. (Here is a preprint link) We all have bad days, and developers are no exception. The authors surveyed many development teams: they found that the average developer has three to five bad days per month. These bad days are terrible for productivity and team morale. Developers feel more stress every day. They decide to work more overtime, but it might not help. Some developers even start to surf job listings to change jobs within their company or to change employers.

The authors wanted to learn more, so they looked at interviews, surveys, diary studies, and analysis of developer telemetry data to uncover and triangulate common factors that cause “bad days” for developers. Some of it is just bad luck and nagging issues, such as slow build servers, writer’s block, being blocked waiting on the work of another team member, or a confluence of bug reports falling on the same person. Some of it is the impact of personal crises. These bad days cause people to feel that “their agency is being taken away.”

There are many things we can learn from more research in this area. If we understand these issues better, maybe we can build predictive models to help people who are struggling.

A second great paper was “Not real or too soft? On the challenges of publishing interdisciplinary software engineering research,” presented by Sonja Hyrynsalmi from LUT University in Finland. (Here is a preprint link.) We are all worried that good research papers related to “soft issues of software engineering” might not be valued by some members of the review committee. The paper is useful because it collects 13 years of data about ICSE reviews. The biggest challenges for soft issues:

Panel session summaries

ICSE 2025 had two interesting panels. Here are links to the official reports from those panels:

Future software panel

I was the moderator for both main conference panels. The first panel was “forward looking” with the title - The Future of Software Engineering. We wanted to talk about AI, and we wanted to talk about things other than AI.

Mary Shaw (Carnegie Mellon) was already on record as saying that AI won’t take all of our jobs away. Tom DeMarco (Atlantic Systems Guild) was less sure, because in previous eras when automation threatened jobs, there was enough investment going on to create new technology-based jobs. Landon Noll (independent) advocated for high standards for software developer education and professionalism. Laurie Williams (NC State) said we need AI code inspection tools to improve security, but that if we rely too much on automatic generation of code by AI, we will still have a lot of insecure code.

Laurie and Landon had a good exchange about this. Laurie proposed an experiment: “Ask your chatbot ‘Write me a program to do X.’ Then when it is finished, ask it ‘Write me a secure program to do X.’ It will be different.” Landon went one step further: “A sad thing is that you can take a program and ask ‘make me a secure program,’ and then say ‘identify ways to exploit and compromise the integrity of the code’ - and you’ll get more information.”

It will be a challenge to generate secure code with today’s tools.
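As a concrete illustration of Laurie’s point (my own toy example, not one discussed on the panel), here is what the “plain” and “secure” versions of the same database lookup might look like. The insecure version builds the query by string formatting, which an injection payload can exploit; the secure version uses a parameterized query.

```python
import sqlite3

# In-memory database with one row, just for demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_insecure(name):
    # What a quick "write me a program" answer often looks like:
    # string formatting splices user input directly into SQL.
    return conn.execute(
        f"SELECT role FROM users WHERE name = '{name}'").fetchall()

def find_user_secure(name):
    # The "write me a *secure* program" version: a parameterized query
    # treats the input as a literal value, never as SQL.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)).fetchall()

# A classic injection payload leaks every row through the insecure path.
payload = "' OR '1'='1"
print(find_user_insecure(payload))  # [('admin',)] - injection succeeded
print(find_user_secure(payload))    # [] - treated as a literal name, no match
```

The two functions differ by only a few characters, which is exactly the problem: nothing forces a code generator (or a hurried developer) to reach for the secure form unless it is asked for explicitly.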

Two panelists from IBM were working on programmer assistant tools. Adriana Meza Soria and Irene Manotas discussed the issues of increasing automation of developer tasks with AI-generated code, but they also offered some insight into the development of new software systems that will include one or more Large Language Models as part of the internal implementation. We face many challenges in building good user interfaces and monitoring the safety of AI-based computation.

The last panelist, Bertrand Meyer (Eiffel Software), raised two important points. First, just reducing the coding effort isn’t enough. A lot of software development is requirements, analysis, testing, and deployment, and generating code faster might not improve the speed of the release cycle. Second, Bertrand cited an ICSE 2024 paper that analyzed the effectiveness of “code completion” tools to help developers - and the assessment was not very good. When an IDE includes an AI tool for suggesting code completions, only a small fraction of the completions are accepted by the developer, and even then, the generated code may need to be edited. The level of hype is still high.

Our three main conclusions from the panel discussion:

Escaped from the lab panel

I was the moderator for the big Friday panel - Escaped from the Lab! Does ICSE Research Make a Difference?

The main challenges: practitioners don’t usually attend ICSE, they don’t read research papers, and research ideas are not always described in a form that is useful for practitioners to apply immediately.

The conclusions of the panel session were:

Some good quotes from ICSE attendees

I spoke with a new friend from a university in Asia - he told me “AI is not afraid of going to jail.” We should be wary of trusting the outputs of AI systems because they aren’t accountable.

In the Designing workshop panel, Joanne Atlee explained the difference between students delivering code for a university course and industry developers delivering a product: “For a student, the day you deliver your software is the day that the software is *dead*. For industry, delivery day is just the start!”

Information on ICSE 2026 conference

ICSE 2026 will be held the week of April 12-18, 2026 in Rio de Janeiro. There will be more information on the ICSE 2026 website.


Last modified: May 27, 2025