What We Publish in CiSE

It has been a few issues since I penned—okay, typed—the Editor-in-Chief’s message. Long ago, before becoming an academic and researcher for good, I learned during my days as a software developer the importance of building a great team. To this end, I asked each of my associate editors to share their perspectives on computational science and the publication itself with you, our dear readers. I’m blessed with an eclectic and talented group of associate editors, not to mention a delightful editorial board and department editors who represent the platinum standard when it comes to good content. I’m also fortunate to have the opportunity to work on ensuring that CiSE remains a vibrant publication in the years to come, not only in my role as EIC, but as an active volunteer in the IEEE Computer Society in general, where I remain involved in exciting initiatives such as Computing Now and the Educational Activities Board.

Our publication is in good health overall. The content continues to be strong. And we’re bringing emerging computational scientists and engineers to the board, with the recent additions of Matthew Turk (from Columbia University), Manish Parashar (from Rutgers University), and Hojjat Adeli (from the Ohio State University). It’s my intention to keep CiSE fresh, exciting, and vibrant through strategic appointments, especially in areas that could use more coverage. You’ll be seeing—or will have seen—various introductions of these board members, especially in the coming year.

The recent appointments cover areas that are important to CiSE in general. Matthew Turk is an astrophysicist with strong interests at the boundary of computing and astrophysics. Manish Parashar, who co-edited the Cloud Computing special issue with me, is an interdisciplinary computer scientist and former NSF program director on the computer engineering side, with strong interests in cyberinfrastructure and in embedding scientific/engineering applications in the cloud (and beyond). Hojjat Adeli is an engineer who works on large-scale numerical simulation and the emerging areas of computational intelligence, bringing new engineering expertise to our board (our title includes the word engineering, and our board should reflect it).

These recent appointments have also given me the opportunity to pause and think about what CiSE could become. I think the opportunity and promise are potentially unbounded. We’re one of the precious few titles positioned at the intersection of computer science, science, and engineering. This might not seem like a big deal to long-term readers of this title—but it is. At my university, I was recently named a fellow in a new center focused on interdisciplinary thinking. Today, many universities are realizing that students want more interdisciplinary programs and interaction in general, but they don’t really know how to go about it. There are certainly success stories. There are also plenty of the less-than-successful kind. I don’t have all the answers, but I do know that CiSE is doing something right, and it provides a quality place for interdisciplinary work at the aforementioned intersection to flourish.

What We Publish

This brings me to the challenge to you, our readers, and to those who want to publish in CiSE. A well-known newspaper that requires no introduction bears the slogan, “All the News That’s Fit to Print.” To this I say, if only we were so lucky. We’re a small title, and it would be nice if we could publish articles on every topic. When you think about it, if you took the words computing, science, and engineering and extracted all of the known topics on those subjects, you could publish on almost every topic in existence outside of the humanities. We obviously can’t do that, and I feel a need to clarify what we can publish.

First, I call on our readers to visit the “About” page for CiSE at http://computer.org/cise. You don’t have to go there now, because here’s what you’ll find as of 28 January 2014:

Physics, medicine, astronomy—these and other hard sciences share a common need for efficient algorithms, system software, and computer architecture to address large computational problems. And yet, useful advances in computational techniques that could benefit many researchers are rarely shared. To meet that need, Computing in Science & Engineering (CiSE) presents scientific and computational contributions in a clear and accessible format.

In the next editorial board meeting, I’ll be working with my editorial board to enhance/expand this description. Although it accurately reflects at a general level what CiSE does, it doesn’t give prospective authors a perfectly clear perspective on what we publish versus what might be better-suited for publication elsewhere. More importantly, it’s a bit unclear on the best way to present your work for publication in CiSE.

I’ve long believed that many prospective authors who submit general articles to CiSE (those not part of a special or theme issue) often misunderstand this description. We get a number of submissions that are otherwise good manuscripts, but are largely unresponsive to the general themes of our publication. As someone whose doctoral training is in computer science with interests in many other disciplines, I think one of the key issues I encounter involves contributions that are purely of interest to computer scientists. If it’s only interesting to me, I know there’s something wrong with the article. I also see contributions that are purely of interest to mathematicians or other scientific domains. What we’re looking for in CiSE is the application of computer science and mathematics to understanding or solving important scientific and engineering problems. Although a new sorting algorithm, networking protocol, or numerical method might excite me and other editorial board members, these are often not good topics for publication in CiSE. There are myriad other venues for presenting more fundamental/theoretical work, and an author won’t be well-served by publishing such articles here. So when submitting an article that uses a cool algorithm or data structure, please make sure it’s addressing a particular scientific or engineering domain. If it’s not, it likely belongs in a publication from one of our sponsoring societies (IEEE, IEEE Computer Society, or AIP) or sister societies (ACM).

Other Considerations

So what else should potential authors consider when publishing here? We’re a periodical (magazine) that uses full peer review for all regular and special issue submissions. We differ from many other magazines in that our peer review follows the standards and rigor of the transactions published by our sister societies. (In fact, some of our recent issues had acceptance rates of around 18%, which competes with some of the best conferences and journals out there.) But we’re not looking for articles written as journal articles. We’re looking for journal-quality articles that are presented for an audience that (we hope) reads our publication as if they’re reading a magazine. It’s a tough balancing act, but one that I think we do exceptionally well.

We have an editorial staff that takes great care to work with authors to ensure that the manuscript is ultimately as comprehensive and comprehensible as possible. If we were a journal, we probably wouldn’t focus on the article being comprehensible to a wider audience. So when writing for us, please keep in mind that reworking a (possibly failed) conference or journal submission elsewhere is probably not going to lead to an accepted article in CiSE.

Previous EICs of CiSE have used the term breezy to describe the type of contributions that do best. In the end, an article that has challenging ideas but is readable will have greater success and a larger impact in CiSE. To that point, as EIC I’m involved in annual meetings where we look at the articles that do best, especially in digital library downloads. The articles that do best, whether department columns or accepted articles, are those written to be understandable by intelligent readers who have an interest, but perhaps no formal training, in a particular domain.

Last, authors often don’t realize that editors are interested in seeing an abstract of your idea before we put the prospective paper through peer review (assuming we don’t reject it for being out of scope first). We’d rather hear about your idea and give you confidential, objective feedback. An email to me or other editorial board members is entirely appropriate, and doesn’t compromise the review process. It’s your idea and your paper. Given the choice, I’d like all of the papers I receive to be of acceptable quality. Of course, this will never happen, but if you ask my opinion, the hope is that I can put your paper in a better position to be accepted. Following the way the Computer Society (and every good organization) works, I won’t be reviewing your paper myself. Instead, I’ll find a cognizant member of my editorial board to handle the review, and this member will find at least two to three outside reviewers to independently evaluate your paper. We go out of our way to ensure fairness to authors.

I hope this sheds light on CiSE’s overall process. As your humble EIC, I want to keep the quality of this publication high (which is how I found it) and do whatever I can to make it better and improve our efficiency when it comes to reviewing articles. More importantly, I have at least one EIC’s perspective (mine) on what we consider fit to print, to which I can point prospective authors. The end result is a win-win scenario for the EIC, the editorial board, our subscribers, and other digital library readers who are increasingly drawn to CiSE content. If you find yourself reading this and wanting to write for CiSE, please feel free to send me an abstract. CiSE’s department editors (for example, for the Books, Computer Simulations, Education, Novel Architectures, Scientific Programming, Visualization Corner, and Your Homework Assignment departments) are also interested in publishing interesting work that’s perhaps not in need of full peer review. Please don’t hesitate to contact us. And thank you for reading!

-George K. Thiruvathukal, Editor in Chief, Computing in Science & Engineering

Posted in Bits and Bytes, EIC message

Meet the Editors: Jeffrey Carver

Meet Jeffrey C. Carver

Associate Professor, University of Alabama

  • In what “slice” of CiSE do you work?

My work focuses on understanding how best to apply software engineering principles to the development of computational software. This work encompasses both evaluating existing approaches and determining the new approaches and tailorings that are required.

  • What is the most exciting aspect about your work for the near future? The far future?

I see more openness and willingness to adopt appropriate, lightweight software engineering practices. I hope that as more appropriate techniques are developed and validated, this trend will continue.

  • Big Data… What’s more exciting or important (or is there anything more important)?

There are some interesting possibilities with Big Data. My biggest concern is that many in the field may be approaching their analysis in a less-than-scientific manner. Therefore, correlations or patterns identified may sometimes be of questionable value or generalizability.

  • What is the most important application of HPC/computational science/data visualization in your opinion? (Protein simulation, climate modeling, etc.) Why?

Medical informatics and drug discovery. The ability to harness the power of HPC/computational science/data visualization to understand patterns and trends in diagnosis and treatment of medical conditions has the potential to greatly improve both the overall health of the population as well as lower the overall cost of healthcare.

  • Conversely, what is the scariest?

Government surveillance. In theory some of these practices are benign, but put in the wrong hands or unchecked, these practices could lead to scary conclusions and impacts on the larger population.

  • Why do you do what you do?

As a software engineering researcher, working with scientists and engineers of various types is quite interesting. I get the opportunity to be exposed to problems and issues that I would be unlikely to encounter as a typical software engineering researcher. These encounters provide quite a different take on what I do every day.

Posted in Meet the Editorial Board

The Dayside : Tracking the dynamics of individual scientific impact

My title comes from the subtitle of a newly posted paper on arXiv by Raj Kumar Pan and Santo Fortunato of Finland’s Aalto University. In their paper, the two researchers introduce a new bibliometric index, which they call the author impact factor (AIF).

The traditional impact factor (IF) of a given journal, Pan and Fortunato remind their readers, is the average number of citations from papers published in year t to papers published in the journal in the two preceding years, t − 1 and t − 2. The AIF is similar in concept, but whereas the IF applies to journals, the AIF applies to individual researchers.

Your personal AIF is the average number of citations from papers published in year t to the papers that you published between year t − 1 and year t − 5. The longer evaluation window is needed to smooth year-to-year fluctuations.
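Pan and Fortunato’s definition translates readily into code. Here is a minimal Python sketch of the AIF calculation as described above; the data layout (a list of papers, each with a publication year and per-year citation counts) is my own invention for illustration, not something from their paper:

```python
def author_impact_factor(papers, t, window=5):
    """AIF for year t: the average number of citations received in year t
    by the author's papers published in years t-1 through t-window."""
    in_window = [p for p in papers if t - window <= p["year"] <= t - 1]
    if not in_window:
        return 0.0
    citations = sum(p["citations_by_year"].get(t, 0) for p in in_window)
    return citations / len(in_window)

# A toy publication record.
papers = [
    {"year": 2010, "citations_by_year": {2013: 4}},
    {"year": 2012, "citations_by_year": {2013: 2}},
    {"year": 2005, "citations_by_year": {2013: 100}},  # too old to count for 2013
]
print(author_impact_factor(papers, 2013))  # averages the 2010 and 2012 papers: 3.0
```

The traditional journal IF is the same calculation with a two-year window, applied to a journal’s papers rather than an individual’s.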

Why introduce a new metric? Pan and Fortunato point first to the IF’s shortcomings. Even in Nature, Physical Review Letters, and other prestigious journals, the median number of citations to a journal’s papers is significantly lower than the mean number of citations, thanks to a few papers that garner lots of citations.

Philip Anderson’s author impact factor spans six decades and features six prominent peaks. CREDIT: AIP Emilio Segrè Visual Archives, Physics Today Collection

For example, last year Science had an impact factor of 31.027. But a paper that contributed to that impressive figure, “Sex determination in the social amoeba Dictyostelium discoideum,” from the journal’s 10 December 2010 issue, has been cited just twice (I didn’t count the citation from the commentary in the same issue). To find that lightly cited paper, I didn’t have to look hard. It was the first one I clicked on in the issue’s table of contents.

Another popular metric, the h-index, has different shortcomings, Pan and Fortunato contend. A given researcher’s h-index is n if he or she has published n papers that have each garnered at least n citations. Unlike the IF, the h-index applies to individuals, but its cumulative nature masks the ups and downs of a researcher’s publishing career. What’s more, the h-index tends to change from year to year at a low and not especially illuminating rate.
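The h-index definition is simple enough to sketch in a few lines of Python: sort citation counts in descending order and find the largest n for which the nth paper still has at least n citations. (The example data are invented.)

```python
def h_index(citation_counts):
    """Largest n such that n papers each have at least n citations."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers have at least 4 citations each
```

Because citation counts only accumulate, the value can never decrease from one year to the next, which is precisely the cumulative behavior Pan and Fortunato criticize.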

According to Pan and Fortunato, their AIF has two principal advantages over the IF and the h-index. First, because the AIF is calculated over a limited time span, it tracks the evolution of a researcher’s impact. Second, because the AIF is an average, researchers would be motivated to publish papers that are likely to garner a high number of citations. Because authors would be penalized for papers that garner few or no citations, the proliferation of papers of marginal significance would be arrested.

To support the case for adopting the AIF, Pan and Fortunato calculated and plotted the AIF for 12 Nobel science laureates, including the physicists Philip Anderson, David Gross, Wolfgang Ketterle, and Steven Weinberg. Whereas the four physicists’ h-indices have more or less the same profile, their AIFs have different numbers of peaks.

Weinberg's plot

In Weinberg’s plot, three big peaks are apparent. The first peak, in the late 1960s, arises from two of his early papers, “Pion scattering lengths” (Physical Review Letters, volume 17, page 616, 1966) and “A model of leptons” (Physical Review Letters, volume 19, page 1264, 1967). The second peak, in the early 1980s, presumably arises from CERN’s detection in 1983 of the W and Z bosons, whose existence Weinberg had predicted.

I’m less sure of the origin of Weinberg’s third peak, in the mid 1990s, but it could be from his second most-cited paper, “The cosmological constant problem,” Reviews of Modern Physics, volume 61, page 1 (1989). Although the accelerating expansion of the universe, which can be attributed to a cosmological constant, was discovered a decade after the paper appeared, observations made in the early and mid 1990s by the Hubble Space Telescope and other observatories had revealed serious deficiencies in the prevailing cosmological paradigm. The earliest citations to Weinberg’s paper came from particle and gravitational theorists, but by the mid 1990s citations from cosmologists were in the majority.

Whether the AIF catches on remains to be seen. Thomson Reuters, which calculates the traditional IF, could presumably build a software tool that researchers could use to calculate their own AIFs, as could Google Scholar. Publishers could build such a tool, too.

I for one would like to see my own AIF. Wouldn’t you like to see yours?

 

This post was originally posted on The Dayside, Charles Day’s blog on Physics Today‘s website.

Posted in Bits and Bytes

Meet the Editors: Konrad Hinsen

Meet Konrad Hinsen
Research Scientist at the Centre de Biophysique Moléculaire
Associated Scientist at the Synchrotron SOLEIL

  • In what “slice” of CiSE do you work?

At CiSE, I take care of the Scientific Programming department, together with Matt Turk. My research work is about molecular biophysics, in particular proteins, with an emphasis on method development.

  • What sorts of changes have you seen in the field over the years?

I have witnessed the enormous growth of computing everywhere over the last 30 years: in science, at home, in business. Scientists got ever more computing power, but with the move from mainframes to commodity hardware they lost the help of the specialists at the computing centres. We had to become our own system administrators and programmers. Next came the transition to parallel computing, which led to a partial transfer of work back to the computing centres.

Concerning software, the possibility of using scripting languages and graphical user interfaces has led to a democratization of computing in science that has had its good and bad sides: easier access to advanced techniques, but also increased use of computing techniques by people who don’t fully understand their limits.

Finally, concerning the scientific practice in general, computing started out as a limited tool for well-defined subtasks (computing an integral, solving a simple differential equation), but then grew in importance to the point that today we have entire domains of research based on nothing but data and algorithms. Such research can easily get out of control, because it is very difficult to verify the results of enormous computations. Scientific validation requires comparing to observations and experiments in the real world, but today we compute many things that are not even accessible to experiment. Is that still science?

  • What is the most exciting aspect about your work for the near future? The far future?

In computational biophysics, virtual experiments are coming within reach. A virtual experiment is a simulation realistic enough to compute exactly what is being measured in a real experiment, including characteristics of the instrument and unwanted features of the sample. I hope that this will boost both simulations, which will be better validated and thus more reliable, and experiments, which can be better prepared through simulation.

In computational science in general, I am excited about the new technologies for collaboration, and for sharing and publishing computational methods and results in machine-readable form. In the long run this will lead to better science, but this also requires changes in the attitude of the scientific community.

  •  If you were to explain Computing in Science and Engineering (either the magazine or the field(s) it represents) to a five-year-old, what would you say?

It’s video games for grown-ups, who use them to explore the world through an enormous microscope, or inversely the world seen from a large distance.

  • Big Data… What’s more exciting or important (or is there anything more important)?

What’s important is doing good science. I think it’s too early to say if and how Big Data will help with that. Big Data techniques have led to some spectacular success stories in solving specific problems (Google services such as translation are perhaps the best-known applications) and it seems quite probable that they will revolutionize some fields of research, but we will have to be patient for a few more years before we know. In the meantime, these techniques are definitely exciting.

  • What is one thing that would fundamentally change the average person’s reality if he or she worked with you day to day and saw what you saw?

When talking to people who have no contact with research, I often notice that they have an idealized view of science, which they see as a source of ever-increasing, certain knowledge about the world. If they worked with me day to day, they’d see that scientific discovery is an erratic path along which every answer found raises a new question. They would probably have both more and less confidence in science from then on: more because it’s demystified, less because they’d realize it’s not perfect.

  • What is the most important application of HPC/computational science/data visualization in your opinion? (Protein simulation, climate modeling, etc.) Why?

I wouldn’t single out any one application as the most important. Protein simulations and climate modeling are important, but so are biological simulations on a larger scale (protein interaction networks, cells, cell colonies, …), simulations of new materials, or the analysis of social interactions.

  • Conversely, what is the scariest?

Again, I have more than one candidate. Massive surveillance, as practiced by the NSA, is one of them, but massive analysis of data published voluntarily on social networks could be just as scary: imagine what your bank or your health insurer could try to deduce from it. Personalized medicine could become scary as well. Do you really want a computer to tell you that you have a 40% chance of dying from a heart attack within ten years?

  • Why do you do what you do?

I like the intellectual challenge in both science and computing. And the idea of contributing, if only a bit, to a better future for everyone on this planet.

Posted in Meet the Editorial Board

Meet the Editors: Bruce Boghosian

Meet Bruce M. Boghosian, PhD
Professor, Department of Mathematics
Bromfield-Pearson Hall, Tufts University

  • In what “slice” of CiSE do you work?

I began my career in computational plasma physics at Lawrence Livermore National Laboratory. A move to industry at Thinking Machines Corporation broadened my interest in HPC, particularly as applied to fluid turbulence and quantum Monte Carlo. From there I moved to academia as a member of the faculty at Boston University’s Center for Computational Science, where I continued work on the dynamics of complex fluids. Finally, in 2000 I joined the faculty of Mathematics at Tufts University where I continue research on computational and theoretical fluid dynamics. Most recently, I have become interested in applications of kinetic theory to economics.

  • What sorts of changes have you seen in the field over the years?

In the 1970s and early 1980s, HPC was driven by scientific applications. Supercomputer manufacturers used to visit computational science working groups, for example at LLNL, to learn more about our needs and to help us use their hardware more effectively. Beginning in the late 1980s, HPC was driven by commercial applications, especially video games. While this may seem like a step backwards from a scientific standpoint, the seemingly inexorable march of Moore’s Law continues to make computational science a very exciting area of research. My cell phone is now more powerful than some of the computers that I worked on at LLNL in my early career. GPUs designed for video games are excellent SIMD parallel computers that excel at highly parallel applications, such as QCD and lattice Boltzmann models of fluids. Programming environments such as PETSc and Trilinos have made parallel scientific computing simple and transparent. The ability to mine large scientific databases will be an important development in the coming years. It is still an exciting space in which to work and I feel privileged to be part of it.

  • What is the most exciting aspect about your work for the near future? The far future?

As noted above, I have become very interested in applications of kinetic theory to economics. I hope to focus on that in the coming years.

  • If you were to explain Computing in Science and Engineering (either the magazine or the field(s) it represents) to a five-year-old, what would you say?

I would say: Scientists can describe the way that many things in nature behave. They know how fluids move, why the sky is blue and sunsets are red, and why you look a bit like your parents. The descriptions that scientists use, however, often involve lots of mathematical calculations. Most of the time, these calculations are just the kinds of things you are learning in school now—adding and multiplying numbers—but they have to do it huge numbers of times to describe real systems. So they use very powerful computers to do this.

  • Big Data… What’s more exciting or important (or is there anything more important)?

Big data is very exciting indeed. I just finished reading “Data Science for Business” by Provost and Fawcett, and enjoyed it very much. It made me realize that a good fraction of what is now called “data science” is material already well known to computational scientists. High-energy physicists were among the world’s first large-scale data miners, after all, and most of the methods used by data miners are either straight from statistics or numerical analysis textbooks, or they involve learning algorithms, such as neural networks and genetic algorithms. So computational scientists reading new books on data science will inevitably feel a bit like Molière’s Monsieur Jourdain, who was delighted to find that he had been speaking in prose all his life.

  • What is one thing that would fundamentally change the average person’s reality if he or she worked with you day to day and saw what you saw?

At the moment, I am taking a leave of absence from Tufts University to serve as the president of the American University of Armenia—an American-accredited affiliate of the University of California, located in Yerevan, Armenia. In fact, we have just initiated a new bachelor’s degree in computational science there, with an innovative curriculum that combines applied mathematics, computer science and numerical analysis.

American schools and hospitals in the developing world spread hope for economic growth and political reform far more effectively than any other US foreign policy endeavor, and I have directly seen the transformative effect of an AUA education on the lives of our graduates. So I think that most Americans would be amazed to learn that US funding for all American schools and hospitals abroad totals only $23.5 million per year — less than 1% of the cost of a single B2 bomber.

  • What is the most important application of HPC/computational science/data visualization in your opinion? (Protein simulation, climate modeling, etc.) Why?

We need to use HPC to develop and implement sustainable practices. We need sustainable engineering, sustainable economic policy, and sustainable business practices. Instead of the incessant drumbeat for growth, we should be focusing on closed-loop systems that do minimal damage to the environment, the climate and the economy. Hopefully, scientific modeling will allow us to develop alternative energy sources, economic modeling will guide us to better public policy, and big data will allow us to evaluate business practices not simply by the profit generated at the end of each quarter but also by their externalities and social costs.

  • Conversely, what is the scariest?

At this moment, I think that the scariest application of HPC is the use of data mining to create a pervasive surveillance state in which nothing is private. Of course, we know about that only because of a single whistleblower. I am confident that there are even scarier things going on of which we are unaware.

  • Why do you do what you do?

Because it’s fun and it’s useful.

  • Anything else you want to add?

To those contemplating a career in this field: Learn as much of the mathematics and the science as you can before you jump on the computer. The best mathematical modelers are people who knew real, complex, and functional analysis well before they tried to learn numerical analysis. Likewise, if your goal is to model sunspots, you should spend quite a bit of time learning about them, and all the open problems associated with them, before you try to simulate them on a computer. Computer simulation, after all, ought not to be an end in itself; just as with theory and experiment, it ought to be a means to an end.

Posted in Meet the Editorial Board, Uncategorized

Meet the Board: Rubin Landau

Meet Rubin Landau, one of our Education Department Editors

  • In what “slice” of CiSE do you work?

Education editor, along with Steve Bartlett.

  • What sorts of changes have you seen in the field over the years?

I was a basic researcher for the first half of my career, working on subatomic few-body systems, which took serious computing for the time. The second half of my career has been spent more on computational physics and science education developments, particularly book writing and the use of the Web for education (we did that in 1995).

The big changes have been the progress of QCD, the use of the Web and mobile devices, and massive data analysis for rare or hard-to-isolate events, but not the traditional disciplines’ embrace of computing as a serious part of their work.

  • What is the most exciting aspect about your work for the near future? The far future?

The work we did a decade ago on electronic textbooks is now coming of age. There is still a need for the executable paper. On a more scientific level, the combination of particle physics, astronomy, and computation.

  • If you were to explain Computing in Science and Engineering (either the magazine or the field(s) it represents) to a five-year-old, what would you say?

Don’t worry about this kind of stuff. Learn the basics properly and then you can do things right.

  • Big Data… What’s more exciting or important (or is there anything more important)?

Big data came about, in part, because the massively parallel machines being developed, along with the improved networks, could attack these problems. In many ways we had to find new problems to match the hardware.

  • What is one thing that would fundamentally change the average person’s reality if he or she worked with you day to day and saw what you saw?

The ability to do and understand math and how it makes understanding the world simpler.

  • What is the most important application of HPC/computational science/data visualization in your opinion? (Protein simulation, climate modeling, etc.)

I would agree with Charles concerning global warming (context here), but I’d add that the combo of particle physics, astronomy, and computing is a big one as far as how we understand the universe.

  • Conversely, what is the scariest?

The general population's lack of mathematical understanding, and the education community's acceptance of it (the suggestion that you can understand things without really understanding them).

  • Why do you do what you do?

I think it is important, I enjoy it, and I enjoy being creative.

Posted in Uncategorized

Nvidia and Supercomputing

Cutting through the dull roar of the Supercomputing Expo floor, you can hear all manner of informal talks and pitches, and if you listen carefully, you can catch the occasional nugget of "good stuff." That pales in comparison, though, to the sheer volume of worthwhile content that Nvidia is giving away at SC13. From Jack Wells talking about leadership computing to James Hack speaking on the transition to heterogeneous architectures, and from Jack Dongarra's presentation on emerging technologies for high-performance computing to Thomas Sterling speaking on tomorrow's exascale systems, they've got it on lockdown (in my humble opinion).

What’s even more interesting is that they’ve got the talks streaming for everyone to hear. Simply go here to watch any of their videos: http://www.nvidia.com/object/sc13-technology-theater.html

Posted in Bits and Bytes

Meet the Board: Charles Day

Meet Charles Day, one of CiSE's editorial board members.

 

  • In what “slice” of CiSE do you work?

I write the Last Word column and serve as the American Institute of Physics’s editorial liaison.

  • What sorts of changes have you seen in the field over the years?

As computers have become more powerful and databases bigger, scientists can now do more with more. Astronomers can simulate the early universe and compare the structures they create with the real, observed universe. Geneticists can deduce entire family trees from a few incomplete DNA sequences. Particle physicists can extract rare events from a background of billions and billions of events…

  • What is the most exciting aspect about your work for the near future? The far future?

As Physics Today’s online editor, I don’t do research. In the world of science communication, I’m most excited by the possibilities of combining media to tell stories online. As for the far future, it would be great if the stories I produce could be automatically and faithfully translated into other languages.

  • If you were to explain Computing in Science and Engineering (either the magazine or the field(s) it represents) to a five-year-old, what would you say?

Remember when you built a castle out of wooden blocks and it fell down? With a computer, you can design and build a castle that doesn’t fall down.

  • Big Data… What’s more exciting or important (or is there anything more important)?

The most important and exciting developments in big data entail developing algorithms to extract patterns from heterogeneous, unstructured data.

  • What is one thing that would fundamentally change the average person’s reality if he or she worked with you day to day and saw what you saw?

I hope they’d acquire the ability to scan through lots and lots of sources of scientific information and identify the few papers that are worth really paying attention to.

  • What is the most important application of HPC/computational science/data visualization in your opinion? (Protein simulation, climate modeling, etc.) Why?

For the sake of the planet and its inhabitants, it has to be climate modeling. Because early climate models were so crude, skeptics could plausibly argue that warnings of disaster were unfounded. Now, we’ve almost reached the point that the case for anthropogenic climate change can be made by appealing solely to observations, not models. Still, better models are very much needed to predict the local effects of climate change and to guide mitigation policies.

  • Conversely, what is the scariest?

Using HPC to predict the genetic mutations needed to turn, say, the flu virus into a deadly weapon or to make heroin more addictive.

  • Why do you do what you do?

Because I find it challenging, intellectually satisfying and fun.

  • Anything else you want to add?

If you like the Last Word column, you might also like my blog, the Dayside.

Posted in Meet the Editorial Board

Sep/Oct Issue on Machine Learning

CiSE Sep/Oct 2013 cover

Machine learning has come a long way since its beginnings in the '30s and '50s. Following advances in statistics, functional analysis, and computing capability, the last decade has seen a burgeoning of applications in the field. This issue of Computing in Science & Engineering presents articles discussing recent advances in the field and their applications to new areas, particularly to structured learning problems such as drug discovery and experiment design for systems biology. In addition to articles on climate modeling and space weather prediction, the theme articles highlight different application areas and different sets of tools, hinting at the future evolution of machine learning.

Link to the digital library coming soon.

Posted in New Issues

July/Aug Special Issue on Cloud Computing

CiSE July/Aug 2013 cover

 

Cloud computing has emerged as a dominant paradigm in industry, widely adopted by enterprises. Cloud services are also joining other infrastructures (grids, clusters, and high-performance computing) as viable platforms for scientific exploration and discovery. In CiSE's July/August special issue on cloud computing, guest editors George K. Thiruvathukal (also CiSE's Editor in Chief) and Manish Parashar of Rutgers discuss how cloud platforms and abstractions can be used to support real-world science and engineering applications, suggesting that many traditional HPC applications can make effective use of cloud services.

Link coming soon!

Posted in Uncategorized