16 May 2018
Complexity and network science
Researchers have applied the tools of complexity science to understanding social dynamics, from how online social networks form and grow to how big data can shed light on all sorts of social patterns.
SpringerOpen’s journals in complexity explore all of these challenges, and more.
From Applied Network Science:
by Jennifer M. Larson
From EPJ Data Science:
by Gianni Barlacchi, Christos Perentis, Abhinav Mehrotra, Mirco Musolesi and Bruno Lepri
From Computational Social Networks:
by Marco Cremonini and Francesca Casamassima
From Big Data Analytics:
Two dimensional smoothing via an optimised Whittaker smoother
by Sri Utami Zuliana and Aris Perperoglou
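The core of a Whittaker smoother is a penalized least-squares fit: find the series z minimizing ||y − z||² + λ||Dz||², where D is a difference matrix and λ controls smoothness, which reduces to one linear solve. The paper optimizes λ and extends the idea to two dimensions; the sketch below is only the standard 1D building block, not the authors' optimised implementation.

```python
import numpy as np

def whittaker_smooth(y, lam=10.0, d=2):
    """Whittaker smoother: minimise ||y - z||^2 + lam * ||D z||^2,
    where D is the d-th order difference matrix. Solving the normal
    equations gives z = (I + lam * D'D)^{-1} y."""
    n = len(y)
    D = np.diff(np.eye(n), n=d, axis=0)      # (n - d) x n difference matrix
    A = np.eye(n) + lam * D.T @ D
    return np.linalg.solve(A, np.asarray(y, dtype=float))

# Smooth a noisy sine wave; a larger lam gives a stiffer fit
rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 200)
noisy = np.sin(x) + rng.normal(0, 0.3, 200)
smooth = whittaker_smooth(noisy, lam=100.0)
```

For production use a sparse solver is preferable, since I + λD'D is banded; the dense solve here keeps the example short.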
Crowdsourcing the Robin Hood effect in cities
Socioeconomic inequalities in cities are embedded in space and result in neighborhood effects, whose harmful consequences have proved very hard to counterbalance efficiently by planning policies alone. Considering redistribution of money flows as a first step toward improved spatial equity, we study a bottom-up approach that would rely on a slight evolution of shopping mobility practices. Building on a database of anonymized card transactions in Madrid and Barcelona, we quantify the mobility effort required to reach a reference situation where commercial income is evenly shared among neighborhoods. The redirections of shopping trips preserve key properties of human mobility, including travel distances. Surprisingly, for both cities only a small fraction (∼5%) of trips need to be modified to reach equality situations, even improving other sustainability indicators. The method could be implemented in mobile applications that would assist individuals in reshaping their shopping practices, to promote the spatial redistribution of opportunities in the city.
by Thomas Louail, Maxime Lenormand, Juan Murillo Arias, and José J. Ramasco
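The accounting behind "how many trips must change" can be illustrated with a toy version of the problem: given a list of shopping-trip destinations, how many trips must be redirected so every neighborhood receives an equal share? This hypothetical sketch ignores the paper's realistic constraints (preserving travel distances and other mobility properties); it only shows the surplus-counting step.

```python
from collections import Counter

def min_redirections(trips):
    """Smallest number of trip redirections needed so each neighbourhood
    receives an (as equal as possible) share of the trips.
    `trips` is a list of destination-neighbourhood labels."""
    counts = Counter(trips)
    n, k = len(trips), len(counts)
    base, extra = divmod(n, k)
    # Give the target shares base+1 to the `extra` busiest neighbourhoods
    # so that only surplus trips above target need to move.
    ordered = sorted(counts.values(), reverse=True)
    targets = [base + 1] * extra + [base] * (k - extra)
    moved = sum(max(0, c - t) for c, t in zip(ordered, targets))
    return moved, moved / n

# 70/20/10 split across three neighbourhoods: 36 of 100 trips must move
moved, fraction = min_redirections(['A'] * 70 + ['B'] * 20 + ['C'] * 10)
```

The paper's striking result is that with realistic constraints the required fraction is far smaller (∼5%) than such a naive count suggests, because incomes, not raw trip counts, are being equalized.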
The many facets of community detection in complex networks
Community detection, the decomposition of a graph into essential building blocks, has been a core research topic in network science over the past years. Since a precise notion of what constitutes a community has remained elusive, community detection algorithms have often been compared on benchmark graphs with a particular form of assortative community structure and classified based on the mathematical techniques they employ. However, this comparison can be misleading because apparent similarities in their mathematical machinery can disguise different goals and reasons for why we want to employ community detection in the first place. Here we provide a focused review of these different motivations that underpin community detection. This problem-driven classification is useful in applied network science, where it is important to select an appropriate algorithm for the given purpose. Moreover, highlighting the different facets of community detection also delineates the many lines of research and points out open directions and avenues for future research.
by Michael T. Schaub, Jean-Charles Delvenne, Martin Rosvall, and Renaud Lambiotte
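To make the notion of "decomposing a graph into building blocks" concrete, here is one simple community-detection heuristic among the many the review classifies: label propagation, where each node repeatedly adopts the most frequent label among its neighbours. This is an illustrative sketch, not an algorithm from the paper; the tie-breaking rule (keep your own label if it is among the most frequent, else take the largest) is chosen here to make the run deterministic.

```python
from collections import Counter

def label_propagation(adj, max_iter=100):
    """Deterministic asynchronous label propagation on an adjacency dict
    {node: set_of_neighbours}. Returns a {node: community_label} dict."""
    labels = {v: v for v in adj}
    for _ in range(max_iter):
        changed = False
        for v in sorted(adj):
            if not adj[v]:
                continue
            freq = Counter(labels[u] for u in adj[v])
            top = max(freq.values())
            best = {lab for lab, c in freq.items() if c == top}
            if labels[v] not in best:
                labels[v] = max(best)   # deterministic tie-break
                changed = True
        if not changed:                 # labels are stable: converged
            break
    return labels

# Two triangles joined by a single bridge edge: two communities expected
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
       3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
communities = label_propagation(adj)
```

The review's point is precisely that "community" can mean assortative blocks (as here), but also roles, flows, or dynamical modules, each calling for a different algorithm.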
Predicting stock market movements using network science: an information theoretic approach
A stock market is considered one of the most complex systems, consisting of many components whose prices move up and down without a clear pattern. This complexity makes reliable prediction of future market movements challenging. In this paper, we aim to build a new method to forecast future movements of the Standard & Poor’s 500 Index (S&P 500) by constructing time-series complex networks of the S&P 500’s underlying companies, connecting them with links whose weights are given by the mutual information of 60-min price movements of pairs of companies over consecutive 5340-min price records. We show that changes in the strength distributions of the networks provide important information on the index’s future movements. We built several metrics using the strength distributions and network measurements such as centrality, and combined the two best predictors in a linear combination. We found that the combined predictor and the changes in the S&P 500 show a quadratic relationship, which allows us to predict the amplitude of the one-step future change in the S&P 500. The result showed significant fluctuations in the S&P 500 Index when the combined predictor was high. For making actual index predictions, we built ARIMA models with and without network measurements and compared their predictive power, finding that adding the network measurements improves model accuracy. These findings are useful for financial market policy makers, as an indicator on which to base intervention before the markets make a drastic change, and for quantitative investors seeking to improve their forecasting models.
by Minjun Kim and Hiroki Sayama
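The network-construction step described in the abstract (links weighted by the mutual information of paired price movements) can be sketched in a few lines. This toy version uses only the sign of each return as the discrete symbol; the paper works with 60-min movements of the actual S&P 500 constituents over 5340-min windows, and the ticker names below are made up.

```python
import numpy as np
from itertools import combinations

def mutual_information(x, y):
    """Mutual information (in bits) between two discrete symbol sequences."""
    x, y = np.asarray(x), np.asarray(y)
    mi = 0.0
    for a in np.unique(x):
        for b in np.unique(y):
            pxy = np.mean((x == a) & (y == b))   # joint probability
            if pxy > 0:
                px, py = np.mean(x == a), np.mean(y == b)
                mi += pxy * np.log2(pxy / (px * py))
    return mi

def mi_network(returns):
    """Fully connected weighted network: nodes are tickers, edge weights
    are the mutual information of the signs of their price movements."""
    signs = {t: np.sign(r) for t, r in returns.items()}
    return {(a, b): mutual_information(signs[a], signs[b])
            for a, b in combinations(sorted(returns), 2)}

# 'AAA' and 'BBB' move in lockstep; 'CCC' is independent of both
returns = {'AAA': [1, -1, 1, -1],
           'BBB': [1, -1, 1, -1],
           'CCC': [1, 1, -1, -1]}
weights = mi_network(returns)
```

A node's strength is then the sum of its incident edge weights; the paper's predictors are built from how the distribution of these strengths changes over successive windows.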
Instagram photos reveal predictive markers of depression
Using Instagram data from 166 individuals, we applied machine learning tools to successfully identify markers of depression. Statistical features were computationally extracted from 43,950 participant Instagram photos, using color analysis, metadata components, and algorithmic face detection. Resulting models outperformed general practitioners’ average unassisted diagnostic success rate for depression. These results held even when the analysis was restricted to posts made before depressed individuals were first diagnosed. Human ratings of photo attributes (happy, sad, etc.) were weaker predictors of depression, and were uncorrelated with computationally-generated features. These results suggest new avenues for early screening and detection of mental illness.
by Andrew G. Reece and Christopher M. Danforth
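Of the three feature families the abstract mentions, the color-analysis one is easy to illustrate: summarize a photo by simple statistics of its pixels in HSV space. The function below is a hypothetical sketch of that single piece, using only the standard library; the actual study also used metadata components and algorithmic face detection, and fed such features into trained models.

```python
import colorsys

def hsv_features(pixels):
    """Mean hue, saturation and value of an image given as a list of
    (r, g, b) tuples with channels in 0-255."""
    hs, ss, vs = [], [], []
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        hs.append(h)
        ss.append(s)
        vs.append(v)
    n = len(pixels)
    return {"hue": sum(hs) / n,
            "saturation": sum(ss) / n,
            "value": sum(vs) / n}

# A saturated pure-blue "photo" versus a dark grey one
blue = hsv_features([(0, 0, 255)] * 4)
grey = hsv_features([(40, 40, 40)] * 4)
```

In a real pipeline these per-photo features become one row of a design matrix, with the diagnosis label as the target.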
A roadmap for the computation of persistent homology
Persistent homology (PH) is a method used in topological data analysis (TDA) to study qualitative features of data that persist across multiple scales. It is robust to perturbations of input data, independent of dimensions and coordinates, and provides a compact representation of the qualitative features of the input. The computation of PH is an open area with numerous important and fascinating challenges. The field of PH computation is evolving rapidly, and new algorithms and software implementations are being updated and released at a rapid pace. The purposes of our article are to (1) introduce theory and computational methods for PH to a broad range of computational scientists and (2) provide benchmarks of state-of-the-art implementations for the computation of PH. We give a friendly introduction to PH, navigate the pipeline for the computation of PH with an eye towards applications, and use a range of synthetic and real-world data sets to evaluate currently available open-source implementations for the computation of PH. Based on our benchmarking, we indicate which algorithms and implementations are best suited to different types of data sets. In an accompanying tutorial, we provide guidelines for the computation of PH. We make publicly available all scripts that we wrote for the tutorial, and we make available the processed version of the data sets used in the benchmarking.
by Nina Otter, Mason A. Porter, Ulrike Tillmann, Peter Grindrod, and Heather H. Harrington
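A small taste of what PH computes: in dimension 0, the persistent features of a Vietoris–Rips filtration on a point cloud are its connected components, each born at scale 0 and dying at the distance where it merges into another, which is exactly a minimum-spanning-tree computation. The benchmarked implementations in the paper of course go far beyond this; the sketch below covers dimension 0 only, via Kruskal's algorithm with union-find.

```python
import math
from itertools import combinations

def h0_persistence(points):
    """Death times of 0-dimensional persistence classes for the
    Vietoris-Rips filtration of a point cloud. Every component is born
    at scale 0; one essential component never dies and is omitted."""
    parent = list(range(len(points)))

    def find(i):                      # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    edges = sorted((math.dist(p, q), i, j)
                   for (i, p), (j, q) in combinations(enumerate(points), 2))
    deaths = []
    for d, i, j in edges:             # Kruskal: process edges by length
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(d)          # a component dies at this scale
    return sorted(deaths)

# Two well-separated pairs: within-pair merges early, across-pair late
deaths = h0_persistence([(0, 0), (0, 1), (10, 0), (10, 1)])
```

The long-lived gap between the small death times and the large one is the "persistent" signal: at coarse scales the data has two components.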
Data-driven modeling of collaboration networks: a cross-domain analysis
We analyze large-scale data sets about collaborations from two different domains: economics, specifically 22,000 R&D alliances between 14,500 firms, and science, specifically 300,000 co-authorship relations between 95,000 scientists. Considering the different domains of the data sets, we address two questions: (a) to what extent do the collaboration networks reconstructed from the data share common structural features, and (b) can their structure be reproduced by the same agent-based model. In our data-driven modeling approach we use aggregated network data to calibrate the probabilities at which agents establish collaborations with either newcomers or established agents. The model is then validated by its ability to reproduce network features not used for calibration, including distributions of degrees, path lengths, local clustering coefficients and sizes of disconnected components. Emphasis is put on comparing domains, but also sub-domains (economic sectors, scientific specializations). Interpreting the link probabilities as strategies for link formation, we find that in R&D collaborations newcomers prefer links with established agents, while in co-authorship relations newcomers prefer links with other newcomers. Our results shed new light on the long-standing question about the role of endogenous and exogenous factors (i.e., different information available to the initiator of a collaboration) in network formation.
by Mario V. Tomasello, Giacomo Vaccario, and Frank Schweitzer
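The link-formation mechanism the abstract describes, agents choosing between newcomers and established partners with calibrated probabilities, can be caricatured in a few lines. This is a hypothetical minimal generator, not the authors' calibrated model: it fixes the initiator to be a newcomer and uses a single probability `p_newcomer` for choosing a newcomer partner, where the paper estimates such probabilities separately per domain from the data.

```python
import random

def grow_network(n_links, p_newcomer, seed=0):
    """Grow a collaboration network one link at a time. Each link is
    initiated by a newcomer, whose partner is another newcomer with
    probability p_newcomer, else a uniformly chosen established agent."""
    rng = random.Random(seed)
    established = []
    edges = []
    next_id = 0
    for _ in range(n_links):
        a = next_id                    # initiating newcomer
        next_id += 1
        if not established or rng.random() < p_newcomer:
            b = next_id                # partner is another newcomer
            next_id += 1
            established.append(b)
        else:
            b = rng.choice(established)   # partner is an established agent
        established.append(a)
        edges.append((a, b))
    return edges

# High p_newcomer mimics the co-authorship regime, low p_newcomer the
# R&D-alliance regime reported in the paper
coauthor_like = grow_network(100, 0.9)
alliance_like = grow_network(100, 0.1)
```

Validation in the paper works the other way around: the probabilities are calibrated on aggregated data, and the model is judged by reproducing degree, path-length, clustering, and component-size distributions it was not fitted to.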