SC24: Technical Program Leaders Discuss Their Role and Scientific Vision

This article was prepared by Kevin Jackson for the annual Supercomputing Conference; SC24 will be held in Atlanta from Sunday, Nov. 17, to Friday, Nov. 22.

Science lies at the heart of the annual Supercomputing Conference, and the Technical Program is one of its most important and challenging components. To learn more about what the program does, and about the scientific vision that drives every decision within it, here is an interview with SC24 Technical Program Chair Guillaume Pallez (Inria) and Vice Chair Judith Hill (LLNL).

Q: Can you tell us about your role as SC24 Technical Program Chair?

TP: Everyone is probably aware of the breadth of the SC Technical Program.

The part most visible to attendees includes the Technical Papers, Workshops, Tutorials, Panels, Posters, and Birds of a Feather sessions. This year we introduced a new track, the Art of HPC. The Technical Program also includes SC-specific awards, such as the Test of Time Award, and we coordinate with the computing societies (ACM, IEEE, SIGHPC, TCHPC) on their specific awards and the award ceremony.

Then there are the less visible parts, which are just as important. These include the reproducibility evaluation for the Technical Papers, as well as coordination with the SC Student Cluster Competition to select the paper from last year that the SCC teams will reproduce. We also manage the proceedings for papers and workshops in cooperation with IEEE.

Our first role is ensuring that the whole process around and between these elements is well coordinated. This means not only providing attendees with an excellent scientific event, but also ensuring that during SC people can navigate from one element to another in a stress-free environment.

Our second role is to provide a global scientific vision for, and oversight of, the SC Technical Program.

For all of these elements, we are supported by the respective element chairs and vice-chairs. Without their hard work and dedication to SC, it would not be possible to host an event of the size and scope of the SC Technical Program.

Q: What do you mean by providing a “scientific vision?”

Guillaume Pallez, SC24 Technical Program Chair

TP: Leading the Technical Program of SC is an opportunity to have an impact on our field, and on the science that we want to see. We believe that in this role we have a responsibility to the community to improve our scientific process.

For SC24, there are three elements that we want to highlight:

  • Providing a key venue where attendees have access to the full diversity of the science done around HPC. This includes topics where HPC is a secondary scientific element or an enabling technology, and which historically have not been submitted to SC.
  • Ensuring that, in doing so, we do not degrade the quality of the Technical Program (i.e., we want the top papers from these topics).
  • Finally, establishing the best practices that our community should subscribe to, with SC playing a leadership role in defining them.

Q: Can you give us examples of what you mean?

TP: To further diversify the SC Technical Program, we augmented two elements to attract submissions that might not otherwise have felt welcome at SC:

  1. Workshops provide a unique opportunity to address specific scientific topics with early ideas that may not yet be mature, but that can be brainstormed with a group of interested colleagues. They are also an opportunity to discuss topics of interest to the SC community whose main results tend not to be submitted to SC (i.e., results that would be published in high-ranking conferences from other scientific domains). For topics where SC may not be the primary venue for publication, we have tried to encourage proceedings-free symposiums: specialized events where the submissions are the latest key results of a field, even if they have already been published in other venues.
  2. Posters are another element we have changed. Posters usually give a visual representation of a new result or a work in progress. We now allow a small number of “Project Posters,” which offer presenters the opportunity to share a work, or a series of works, that has already been published (outside the SC conferences, for example) but that they believe would interest the SC community.

Judith Hill, SC24 Technical Program Vice Chair

In both cases, the idea is to remove the “publication” element so that we can focus on the scientific contribution and foster discussion within the SC community.

With respect to best practices and improving the quality of the scientific contributions, the changes that we have made include:

  1. Reproducibility reports. The reproducibility of HPC results is a complicated problem. For a few years now, we have been awarding “reproducibility badges” to papers, based on what a dedicated committee was able to do. But those badges are essentially binary. For instance, what happens if you cannot run at the full scale needed to reproduce a result, but were able to reproduce it at the scale you have access to? Should that paper get a “results replicated” badge? To improve this process, we have introduced a “reproducibility report,” which describes accurately which result was reproduced and how. It also provides recognition to committee members. On a side note, we are extremely grateful to Sascha Hunold (Reproducibility Chair), who has done an incredible job implementing this and who probably wasn’t expecting such a huge workload.
  2. A less visible procedural change is that we have asked every element to remind submitters of the definition of authorship (as defined by IEEE). Accurately representing authors is part of academic integrity. At a time when the public’s trust in science and academia is decreasing, it is our responsibility to hold our ethics and trustworthiness to the highest standards.

Q: Any unexpected challenges with this year’s Technical Program?

TP: An interesting problem that we have had to deal with this year is the use of Large Language Models (LLMs) in paper (and review) writing. IEEE and ACM have started to draft policies, but it is still unclear how authors use these tools. This is a new area that is evolving quickly.

For Technical Papers and Workshops, we have added a field for authors to describe if and how they used LLMs in writing their work (per IEEE policy, this disclosure should also be incorporated into the accepted paper).

For SC24, we plan to study how these tools are being incorporated into the writing process; this is something that should be tracked over time.

Another challenge that we have faced is the increase in the number of Technical Paper submissions (+30%!). This has obvious implications for the number of volunteers needed to keep the quality of review and selection that SC strives for (you can apply here for SC25), and for the number of papers we will be able to accept. Luckily, the convention center in Atlanta is quite large, and we should be able to fit more paper sessions if needed. It also raises more complicated questions: has the community grown, and should we expect a much larger attendance in November (with direct implications for room sizes)?

Q: What weren’t you able to change? What’s next for the community?

TP: We have been thinking about open reviews for a long time. Open review is the process whereby the scientific discussion between (anonymous) expert reviewers and paper authors is made public to readers. This practice is growing in many scientific communities: NeurIPS, one of the major venues for ML, has been using it for some years, and Nature has published some review reports since 2020.

Open reviews bring extra information about a paper. They can help readers understand the limits of a contribution, which we believe is an important part of the scientific process. Given the current sad state of scientific publishing (see here for example), the quality and availability of reviews are an excellent indicator of the quality of a conference and its review process.

Open review also has side benefits: for example, it shows new reviewers what is expected of them when reviewing an SC paper, and helps new submitters see how a work is evaluated.

We had hoped to implement it this year, but couldn’t. Something left to do for our successors!