Michele Coscia - Connecting Humanities

Michele Coscia. I am an associate professor at the IT University of Copenhagen. I mainly work on algorithms for the analysis of complex networks, and on applying the extracted knowledge to a variety of problems. My background is in Digital Humanities, i.e. the connection between the unstructured knowledge of the humanities and the coldness of computer science. I have a PhD in Computer Science, obtained in June 2012 at the University of Pisa. In the past, I visited Barabasi's CCNR at Northeastern University, and worked for 6 years at CID, Harvard University.

13 October 2014 ~ 1 Comment

In the Media

Another quick pause this month from my written blabbering about my research. Because it is time for some spoken blabbering about my research!

First and foremost: I was invited to record a 100-second audio segment for the Academic Minute. The Academic Minute is a radio program of WAMC Northeast Public Radio that gives scholars around the world a chance to present their work very briefly. My segment is going to air tomorrow at 7:34 AM (Eastern time) and, if you do not want to get up early, also at 3:56 PM. The segment is going to be about my work on memetics. If you do not have a radio (duh) you can live stream from their website. The live stream might also work if you are not in the US, but I haven't really checked. However, once it's done, you can probably download the podcast (although I am not really sure why somebody would go to so much trouble just to listen to my delirious thoughts for 100 seconds). A big thanks to Matthew Pryce, who organizes the program and was so kind as to invite me for a segment.

That is not the only way to hear about my work on memes. The paper that I recently published in Scientific Reports was also the subject of a lightning talk I gave at the Digital Humanities Forum at Kansas University (I talked about the Forum a couple of weeks ago). Brian Rosenblum was kind enough to upload a video of my talk to YouTube. So here it is:

One speaker had to cancel her presentation, and people were invited to fill the gap. So excuse my lack of fluency, but I didn’t know I was going to present until the day itself! This is it for now, I promise that I’ll write something more about this paper in the future.

25 September 2014 ~ 0 Comments

Digital Humanities @ KU: Report

Earlier this month I had the pleasure of being invited to hold a workshop with Isabel Meirelles on complex network visualization and analysis at the Digital Humanities 2014 Forum, held at Kansas University, Lawrence. I figured this was a good occasion to report on my experience, since it was very interesting and, being quite different from my usual venues, it adds a bit of diversity. The official page of the event is useful to get an overall picture of what was going on. It will also be helpful for everything I do not touch upon in this post.


I think that one of the main highlights of the event was the half of our workshop curated by Isabel, with the additional keynote that she gave. Isabel is extremely skilled both in the know-how and in the know-what about information visualization: she is not only able to create wonderful visualizations, but she also has a powerful critical sense of what works and what doesn’t. I think that the best piece of supporting evidence for this statement is her latest book, which you can find here. As for my part of the workshop, it was focused on a very basic introduction to the simplest metrics of network analysis. You can take a glance at the slides here, but if you are already somewhat proficient in network terminology do not expect your world to be shattered by it.

The other two keynotes were equally fascinating. The first one was from Steven Jones. His talk gravitated around the concept of the eversion of the virtual into reality. Many works of science fiction imagined human beings ending up in some more or less well defined "virtual reality", where everything is possible as long as you can program it. See for example Gibson's "Neuromancer" or, to pick a popular example that most people would know, "The Matrix". But what is happening right now, observes Jones, is exactly the opposite. We see more and more examples of virtual reality elements being introduced, mostly playfully, into reality. Think about qonqr, where teams of people have to physically "fight" to keep virtual control of an actual neighborhood. A clever artistic way to depict eversion is also:

The last keynote was from Scott Weingart. Scott is a smart guy and he is particularly interested in studying the history of science. In the (too few!) interactions we had during the forum we touched upon many topics also included in his talk: the ethical responsibility in using data about people, the influence of the perspective you use to analyze human activities and, an obligatory exchange between a historian of science and yours truly, trained as a scientist in Pisa, Galileo Galilei. I feel I cannot do justice to his very eloquent and thought-provoking keynote in this narrow space. So I redirect you to its transcript, hosted on Scott's blog. It's a good read.

Then, the contributed talks. Among all the papers you can explore from the official forum page, I'd like to focus particularly on two. The first is the Salons project, presented by Melanie Conroy. The idea is to map the cultural exchange happening in Europe during the Enlightenment years. A great role in this exchange was played by salons, where wealthy people were happy to give intellectuals a place to gather and discuss. You can find more information on the Salons project page. I liked it because it fits with the idea of knowledge creation and human advancement as a collective process, where an equal contribution is given by both intellect and communication. By basing themselves on richly annotated data, projects like these can help us understand where breakthroughs come from, or realize that there is no such thing as a breakthrough, only a progressive interconnection of ideas. Usually, we realize it only after the fact, and that's why we think it happened all of a sudden.

Another talk I really enjoyed was from Hannah Jacobs. Her talk described a visualization tool to explore the evolution of the concept of the "New Woman", one of the first examples of feminism. I am currently unable to find an online link to the tool. What I liked about it was the seamless way in which different visualizations are used to tell the various points of view on the story. The whole point of information visualization is that when there is too much data to show at the same time, one has to select what to highlight and what to discard. But in this framework, with a wise choice of techniques, one can jump between different magnifying glasses and understand one part of the story of the term "New Woman" at a time.

Many other things were cool, from the usage of the Unity 3D engine to recreate historic views, to the charming visualizations of "Enchanters of Men". But my time here is up, and I'm left with the hope of being invited to the 2015 edition of the forum too.

28 August 2014 ~ 0 Comments

The Curious World of Network Mapping

Complex networks can come in different flavors. As you know if you follow this blog, my signature dish is multilayer/multidimensional networks: networks with multiple edge types. One of the most popular flavors is the bipartite network. In bipartite networks, you have two types of nodes. For example, you can connect users of Netflix to the movies they like. As you can see from this example, in bipartite networks we allow only edges going from one type of node to the other. Users connect to movies, but not to other users, and movies can't like other movies (movies are notoriously mean to each other).


Many things (arguably almost everything) can be represented as a bipartite network. An occupation can be connected to the skills and/or tasks it requires, an aid organization can be connected to the countries and/or the topics it is interested in, a politician is connected to the bills she sponsored. Any object has attributes, and so it can be represented as an object-attribute bipartite network. However, most of the time you just want to know how similar two nodes of the same type are. For example, given a movie you like, you want to know a similar movie you might like too. This is called link prediction, and there are two ways to do it. You could focus on predicting a new user-movie connection, or focus instead on projecting the bipartite network to discover the previously unknown movie-movie connections. The latter is the path I chose, and the result is called a "Network Map".
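To make the object-attribute idea concrete, here is a minimal sketch in Python with networkx. The users, movies and edges are all made up for illustration, not the paper's actual data or code:

```python
import networkx as nx

# A toy object-attribute bipartite network: users (objects) connected to the
# movies (attributes) they like. All names and edges are made up.
B = nx.Graph()
users = ["u1", "u2", "u3"]
movies = ["Alien", "Blade Runner", "Amelie"]
B.add_nodes_from(users, bipartite=0)
B.add_nodes_from(movies, bipartite=1)

# Edges only run between the two node types: users like movies, users never
# connect to users, movies never connect to movies.
B.add_edges_from([
    ("u1", "Alien"), ("u1", "Blade Runner"),
    ("u2", "Blade Runner"), ("u2", "Amelie"),
    ("u3", "Alien"), ("u3", "Amelie"),
])

# The attribute set of an object is simply its neighborhood.
for u in users:
    print(u, "->", sorted(B[u]))
```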

It is clearly the wrong choice, as the real money lies in tackling the former challenge. But if I wanted to get rich I wouldn’t have chosen a life in academia anyway. The network map, in fact, has several advantages over just predicting the bipartite connections. By creating a network map you can have a systemic view of the similarities between entities. The Product Space, the Diseasome, my work on international aid. These are all examples of network maps, where we go from a bipartite network to a unipartite network that is much easier to understand for humans and to analyze for computers.


Creating a network map, meaning going from a user-movie bipartite network to a movie-movie unipartite network, is conceptually easy. After all, we are basically dealing with objects that have attributes. You just calculate a similarity between objects based on their attributes and you are done. There are many similarity measures you can use: Jaccard, Pearson, Cosine, Euclidean distances… the possibilities are endless. So, are we good? Not quite. In a paper recently accepted in PLoS One, Muhammed Yildirim and I showed that real world networks have properties that make the general application of any of these measures quite troublesome.
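Just to show how simple (and therefore how tempting) the naive recipe is, here is a toy sketch of the Jaccard version on made-up data; Pearson, cosine and the rest would slot in the same way:

```python
from itertools import combinations

# Toy object -> attribute sets: which users like each movie (made-up data).
likes = {
    "Alien":        {"u1", "u3"},
    "Blade Runner": {"u1", "u2"},
    "Amelie":       {"u2", "u3"},
}

def jaccard(a, b):
    """Jaccard similarity between two attribute sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

# The naive network map: connect two movies if their similarity clears a cutoff.
threshold = 0.3
for m1, m2 in combinations(likes, 2):
    score = jaccard(likes[m1], likes[m2])
    if score >= threshold:
        print(f"{m1} -- {m2} (jaccard = {score:.2f})")
```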

For example, bipartite networks have power-law degree distributions. That means that a handful of attributes are very popular. It also means that most objects have very few attributes. You put the two together and, with almost 100% probability, the many objects with few attributes will have the most popular attributes. This causes a great deal of problems. Most statistical techniques aren’t ready for this scenario. Thus they tend to clutter the network map, because they think that everything is similar to everything else. The resulting network maps are quite useless, made of poorly connected dense areas and lacking properties of real world networks, such as power-law degree distributions and short average path length, as shown in these plots:

[Plots: average shortest path length (top) and degree distributions (bottom) of the network maps produced by each projection method]

Of course sometimes some measure gets it right. But if you look closely at the pictures above, the only method that consistently gives the shortest paths (above, when the peak is on the left we are good) and the broadest degree distributions (below, the rightmost line at the end in the lower-right part of the plot is the best one) is the red line of "BPR". BPR stands for "Bipartite Projection via Random-walks" and it happens to be the methodology that Muhammed and I invented. BPR is cool not only because its network maps are pretty. It also achieves higher scores when using the network maps to predict the similarity between objects against a ground truth, meaning that it gives the results we expect when we already know the answers, which are made artificially invisible to test the methodology. Here are the ROC plots, where the highest line is the winner:

[Plot: ROC curves comparing the projection methods]

So what makes BPR so special? It all comes down to the way you discount the popular attributes. BPR does it in a "network intelligent" way. We unleash countless random walkers on the bipartite network. A random walker is just a process that starts from a random object of the network and then jumps from it to one of its attributes. The target attribute is chosen at random. Then the walker jumps back to an object possessing that attribute, again chosen at random. And so on. At some point, we start from scratch with a new random walk. We note down how many times two objects end up in the same random walk, and that's our similarity measure. Why does it work? Because when the walker jumps back from a very popular attribute, it could essentially go to any object of the network. This simple fact makes the contribution of the very popular attributes quite low.
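Here is a bare-bones toy version of that walk-and-count intuition. It is not the actual BPR implementation (the real code is linked below); it just shows how co-occurrence in short object-attribute-object walks can act as a similarity score, on made-up data:

```python
import random
from collections import defaultdict

# Toy bipartite data: objects and the attributes they possess (made up).
obj_attrs = {
    "o1": ["a1", "a2"],
    "o2": ["a1", "a3"],
    "o3": ["a2", "a3"],
    "o4": ["a1"],
}
attr_objs = defaultdict(list)  # reverse map: attribute -> objects possessing it
for o, attrs in obj_attrs.items():
    for a in attrs:
        attr_objs[a].append(o)

def walk_cooccurrence(n_walks=10000, walk_len=4, seed=0):
    """Count how often two objects show up in the same short random walk."""
    rng = random.Random(seed)
    cooc = defaultdict(int)
    objects = list(obj_attrs)
    for _ in range(n_walks):
        obj = rng.choice(objects)              # start from a random object
        visited = {obj}
        for _ in range(walk_len):
            attr = rng.choice(obj_attrs[obj])  # jump to one of its attributes
            obj = rng.choice(attr_objs[attr])  # jump back to an object with it
            visited.add(obj)
        # A popular attribute can send the walker almost anywhere, so it adds
        # little signal to any specific pair: that is the built-in discounting.
        for o1 in visited:
            for o2 in visited:
                if o1 < o2:
                    cooc[(o1, o2)] += 1
    return cooc

print(sorted(walk_cooccurrence().items(), key=lambda kv: -kv[1]))
```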

BPR is just the latest proof that random walks are one of the most powerful tools in network analysis. They solve node ranking, community discovery, link prediction and now also network mapping. Sometimes I think that all of network science is founded on just one algorithm, and that’s random walks. As a final note, I point out that you can create your own network maps using BPR. I put the code online (the page still bears the old algorithm’s name, YCN). That’s because I am a generous coder.

31 July 2014 ~ 0 Comments

The (Not So) Little Shop of Horrors

For this end of July, I want to report some juicy facts about a work currently under development. Why? Because I think that these facts are interesting. And they are a bit depressing too. So instead of crying I make fun of them, because as Panic! At the Disco would put it: I write sins, not tragedies.

So, a bit of background. Last year I got involved in an NSF project, with my boss Ricardo Hausmann and my good friend/colleague Prof. Stephen Kosack. Our aim is to understand what governments do. One could just pull out some fact sheets and budgets, but in our opinion those data sources do not tell the whole story. Governments are complex systems, and as complex systems we need to understand their emergent properties as collections of interacting parts. Long story short, we decided to collect data by crawling the websites of all public agencies for each US state government. As for why, you'll have to wait until we publish something: the aim of this post is not to convince you that this is a good idea. It probably isn't, at least in some sense.


It isn't a good idea not because that data does not make sense. Au contraire, we already see it is very interesting. No, it is a tragic idea because crawling the Web is hard, and it requires some effort to do it properly. Which wouldn't necessarily be a problem if there weren't an additional hurdle. We are not just crawling the Web. We are crawling government websites. (This is the moment when, in a bad horror movie, you would hear thunder nearby.)

Making you understand the horror of this proposition is exactly the aim of this post. First, how many government websites are really out there? How do you collect them? Of course I was not expecting a single directory for all US states. And I was wrong! Look at this beauty for yourselves: http://www.statelocalgov.net/. The "About" page is pure poetry:

State and Local Government on the Net is the onle (sic) frequently updated directory of links to government sponsored and controlled resources on the Internet.

So up to date that their footer only gets to 2010 and their news section only includes items from 2004. It also points to:

SLGN Notes, a weblog, [that] was added to the site in June 2004. Here, SLGN’s editors comment on new, redesigned or updated state and local government websites, pointing out interesting or fun features for professional and consumer audiences alike and occasionally cover related news.

Yeah, go ahead and click the link, that is not the only 404 page you’ll see here.


Enough compliments to these guys! Let's go back to work. I went straight to the 50 different state governments' websites and found an agency directory in all of them. Of course asking these directories to share the same structure and design is too much. "What are we? Organizations whose aim is to make citizens' lives easier, or governments?". In any case, from them I was able to collect the flabbergasting amount of 61,584 URLs. Note that this is six times as many as statelocalgov.net has, and it took me a week. Maybe I should start my own company 🙂

Awesome! So it works! Not so fast. Here we hit the first real wall of government technological incompetence. Out of those 61,584, only 50,999 actually responded to my pings. Please note that I already corrected all the redirects: if the link was outdated but the agency redirected you to the new URL, then that connection is one of the 50,999. Allow me to rephrase it in poetry: in the state government directories there are more than ten thousand links that are pure, utter, hopeless garbage nonsense. More than one out of six links in those directories will land you exactly nowhere.
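The check itself was nothing exotic. Here is a rough sketch of the idea with the Python requests library, on placeholder URLs (not the real directory data, and not necessarily the exact script I ran):

```python
import requests

# Placeholder URLs: the real input was the 61,584 links from the directories.
urls = ["http://example.gov/agency-one", "http://example.gov/agency-two"]

alive = []
for url in urls:
    try:
        # Follow redirects, so an outdated link that forwards to the new
        # address still counts as a good one.
        resp = requests.head(url, allow_redirects=True, timeout=10)
        # Some servers mishandle HEAD; a more careful crawler would retry with GET.
        if resp.status_code < 400:
            alive.append(resp.url)  # keep the final, post-redirect URL
    except requests.RequestException:
        pass  # dead link: DNS failure, timeout, connection refused...

print(f"{len(alive)} of {len(urls)} URLs responded")
```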


Oh, but let’s stay positive! Let’s take a look at the ones that actually lead you somewhere:

  • Inconsistent spaghetti-like design? Check. Honorable mention for the good ol' frameset web design of http://colecounty.org/.
  • Making your website an image and using the <area> tag for links? Check. That's some solid '95 school.
  • Links to websites left to their own devices and purchased by someone else? Check. Passed through Google Translate, one of them provides pearls of wisdom like: "To say a word and wipe, There are various wipe up for the wax over it because wipe from". Maybe I can get a half dozen haiku from that page. (I have more, if you want them)
  • ??? Check. That’s a Massachusetts town I do not want to visit.
  • Maintenance works due to finish some 500 days ago? Check.
  • Websites mysteriously redirected somewhere else? Check. The link should go to http://www.cityoflaplata.com/, but alas it does not. I’m not even sure what the heck these guys are selling.
  • These aren’t the droids you’re looking for? Check.
  • The good old “I forgot to renew the domain contract”? Check.

Bear in mind that this stuff is part of the 50,999 "good" URLs (these are scare quotes at their finest). At some point I even gave up noting this stuff down. I saw hacked webpages that had been there for years. I saw an agency providing a useful Google Map of their location, which according to them was the middle of the North Pole. But all those things will be lost in time, like tears in the rain.

26 June 2014 ~ 0 Comments

NetSci 2014 Report

NetSci, the top global conference about network science, never fails to be a tornado of ideas. Now that the dust has settled, it feels a bit easier to put this year's thoughts into this post. Yes, this is yet another conference report by yours truly.

Let's first get over the mandatory part of the report: an evaluation of the awesomeness of the Multiple Networks satellite I co-organized with my friends scattered around Europe. As I said, this year's edition was open to submissions and we received 17 of them. I think that, as a start, that is a good figure. Also, the attendance was more than satisfactory, and it appears scattered only because we got the largest room of the conference! Here's proof!

[Photo: the audience at the Multiple Networks satellite]

The overall event was a great success. The talks were very interesting and we had a great, unexpected bonus. One of our keynotes, as you might remember, was Mason Porter. Well, the guy actually got the Erdos-Renyi prize this year! The Erdos-Renyi prize was established in 2012 and goes to outstanding young researchers in network science. So, make a note of this: speaking at the Multiple Networks satellite will eventually get you some important awards. After all, everybody knows that correlation = causation.

My favorite satellite (besides the one I organized, obviously) continues to be the Arts, Humanities and Complex Networks symposium. This year it was a little bit tougher than usual, with a lot of qualitative stuff that not everybody can appreciate. However, their keynote by Lada Adamic was nothing short of outstanding. She is currently working at Facebook, a position that gives her a privileged vantage point over memes and viral events. You know that those things tickle my curiosity very strongly, and Lada's work is really great. She presented her work showing that meme evolution and mutation on Facebook closely follow the same mechanics of evolution and mutation we find in the biological world. Good news for my old paper, which was heading in the same direction!

Which brings me to the main conference, because one of the best talks I attended was from Jon Kleinberg, who collaborated with Lada on another memes-meet-Facebook work. In that case, there is less good news for me. My research plan is to use meme content to predict virality. However, the Kleinberg-Adamic dream team showed that content is actually a very weak factor! (Here’s a blog post about it).

There is still hope, though. My way to deal with content is fundamentally different than theirs. Plus the problem they are studying is slightly different from mine: they are analyzing memes that are already going viral and they want to know how popular they will get. I’m more focused on knowing if the meme is going to be popular at all, and I’m not that concerned about whether everybody will know it or only a niche group.

Virality of content was a very hot topic this year, because there were two other fantastic talks about it. One was by Sinan Aral, who talked about how much we are influenced by a post's popularity when we read it. Controlling for content (and believe me when I say that Sinan is one of the best experiment designers out there), if we know that a post is popular we are more likely to upvote it. This is so true that Reddit itself decided, for some subreddits, to hide the post score for the first few hours, so that really good content will eventually flow to the top once the discussion is settled.

On top of that, James Gleeson talked about a theoretical model that can account for the popularity distribution of memes. The model sounds simple. You just assume that a person has a box containing all the memes they saw in the past. With some probability, the person will either come up with something new or reshare a meme from their box. When resharing from the box, there is a memory effect by which more recent memes are more likely to be reshared. Whenever you share something, regardless of whether it is new or not, it ends up in your friends' boxes. Even if it looks simple, actually solving the model isn't simple at all, and James is so good he defies belief. And, at the end of the day, everything works like a charm. Again, this does not bother me too much, because it only predicts the distribution of popularity, not which memes are going to be popular, which is a different problem.
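Since the model sounds so simple, here is a toy simulation of the box mechanism as I understood it from the talk. This is my paraphrase, not Gleeson's actual model or code, and every parameter below is made up:

```python
import random
from collections import Counter

def simulate(n_people=200, n_steps=20000, p_innovate=0.01,
             memory_bias=2.0, seed=0):
    """Toy box-of-memes dynamics: innovate or reshare, with a recency bias."""
    rng = random.Random(seed)
    boxes = [["seed_meme"] for _ in range(n_people)]  # everyone starts with one meme
    shares = Counter()
    next_id = 0
    for _ in range(n_steps):
        person = rng.randrange(n_people)
        if rng.random() < p_innovate:
            meme = f"meme_{next_id}"  # come up with something new
            next_id += 1
        else:
            box = boxes[person]
            # memory effect: more recent memes (later in the box) weigh more
            weights = [(i + 1) ** memory_bias for i in range(len(box))]
            meme = rng.choices(box, weights=weights, k=1)[0]
        shares[meme] += 1
        # whatever is shared, new or not, lands in (randomly picked) friends' boxes
        for friend in rng.sample(range(n_people), k=5):
            boxes[friend].append(meme)
    return shares

popularity = simulate()
# Typically a handful of memes dominate while most barely spread at all.
print(popularity.most_common(5))
```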

Besides all this work on meme popularity, there were other very interesting talks. I mention:

  • The very elegant talk by Chris Moore on community discovery, which also has the by-product of providing witty one-liners for many occasions (for example "Physicists like to minimize functions because, you know, rocks fall");
  • The nice talk by Frank Schweitzer on the role of active individuals in collaboration networks, who have the side effect of making the networks more unstable and prone to breaking apart (damn you, hyper-active people!);
  • The usual fun of the lightning talks (they could not call them ignite talks because of copyright issues). My favorite this year was from Max Schich, with a really great panorama of the art market in London, Paris and Amsterdam from the Getty dataset. Aaron Clauset and Roberta Sinatra deserve to be mentioned too, with two great talks about climbing the greasy pole in academia (is it really worth it to shoot for big name universities? Short answer: no).

That's it! You can see that this year too there was a lot to see and to think about. I am already looking forward to next year!

22 May 2014 ~ 0 Comments

The NetSci Multiple Networks Menu

Friends, scientists, network fanatics, lend me your eyes: I come to announce the program of the Multiple Network Modeling, Analysis and Mining symposium, introduced some months ago on these pages. To give you a quick recap: this is a satellite event of the 2014 edition of NetSci, a major network science event of the year. The symposium will take place on Monday June 2nd, while the conference itself will start on June 4th and last until the end of the week. Unlike last year, we now have space for contributed talks, and I like the program we were able to set up. So, I'll boast about it here.

You can find the overview of the entire event on the official website, but let me give you the highlights.

We have four invited speakers: Frank Schweitzer, Renaud Lambiotte, Nitesh Chawla and Mason Porter. They come from different backgrounds (System Design, Mathematics and Computer Science), which is a great plus for the event. They are going to:

  • Tackle the mathematical foundations of multiple networks;
  • Describe models for multiple networks;
  • Analyse them, both in the flavour of bipartite temporal social networks and in the extension of the classic link prediction problem. Usually in link prediction we are interested in evaluating the likelihood of seeing “a” connection between two nodes. Since in multiple networks there are different types of connections, we are also interested in predicting “which” connection we will observe.

As for the contributed talks, we have a pretty good team, including (but not limited to) works signed by David Lazer from Northeastern University, Juyong Park from KAIST, Eugene Stanley from Boston University and many more. We had such a positive reaction to our call for papers that we had to increase the slots for contributed talks from 5 to 7, and we still had to reject presentations that we really wanted to see. Among my favourite works there are:

  • Multiple network applications to studying the productivity of countries and predicting their growth;
  • A study of the evolution of different relations among almost 2,000 students from 14 US universities;
  • A network-based approach for ranking the performances of sport teams;
  • A novel way to classify nodes in complex networks where multiple different relations are present;
  • … and more!

For completeness, here's the detailed schedule. I hope to see many of you there!

Session I

9.00 – 9.30 Registration / Set Up
9.30 – 9.50 Introduction: Welcome from the organizers, presentation of the program
9.50 – 10.30 Keynote I: Frank Schweitzer, Professor for Systems Design at ETH Zurich
Analysing temporal bipartite social networks
10.30 – 11.00 Coffee Break

Session II

11.00 – 11.40 Keynote II: Renaud Lambiotte, Associate Professor, Department of Mathematics at University of Namur
Non-Markovian Models of Networked Systems
11.40 – 12.00 Daniel Romero, Nina Mishra and Panayiotis Tsaparas
Estimating the Relative Utility of Networks for Predicting User Activities
12.00 – 13.30 Lunch

Session III

13.30 – 14.10 Keynote III: Nitesh Chawla, Associate Professor, Department of Computer Science & Engineering at the University of Notre Dame
Predicting links in heterogeneous social networks
14.10 – 14.30 Katherine Ognyanova, David Lazer, Michael Neblo, Brian Rubineau and William Minozzi
Ties that bind across contexts: personality and the evolution of multiplex networks
14.30 – 14.50 Neave O’Clery
A Multi-slice Approach to Understanding the Evolution of Industrial Complexity and Growth
14.50 – 15.30 Coffee Break

Session IV

15.30 – 16.10 Keynote IV: Mason Porter, Associate Professor at the Oxford Centre for Industrial and Applied Mathematics
Mathematical Formulation of Multilayer Networks
16.10 – 16.30 Seungkyu Shin, Sebastian Ahnert and Juyong Park
Degree-Neutralizing Weighted Random Walk Ranking in Competition Networks
16.30 – 16.50 Tomasz Kajdanowicz, Adrian Popiel, Marcin Kulisiewiecz, Przemysław Kazienko and Bolesław Szymański
Node classification in multiplex networks
16.50 – 17.10 Francesco Sorrentino
Stability of the synchronous solutions for networks with connections of different types
17.10 – 17.30 Andreas Joseph, Irena Vodenska, Eugene Stanley and Guangron Chen
MLR Fit-Networks: Global Balance of Payments

Conclusion and final announcements

17.30 – 18.00

24 April 2014 ~ 1 Comment

Data: the More, the Merrier. Right? Of Course Not

You need to forgive me for the infamous click-bait title I gave to this post. You literally need to, because you have to save your hate for the actual topic of the post, which is Big Data. Or whatever you want to call the scenario in which scientists are flooded with so much data that traditional approaches break, for one reason or another. I like to use the Big Data label just because it saves time. One of the advantages of Big Data is that it's useful. Once you can manage it, simple analysis will yield great profits. Take Google Translate: it does not need very sophisticated language models, because millions of native speakers will contribute better translations, and simple Bayesian updates make it work nicely.

Of course there are pros and cons. I am personally very serious about the pros. I like Big Data. Exactly because of that love, honesty pushes me to find the limits and scrutinize the cons of Big Data. And that's today's topic: "yet another person telling you why Big Data is not such a great thing (even if it is, sometimes)" (another very good candidate for a click-bait title). The occasion for such a shameful post is the recent journal version of my work on human mobility borders (click for the blog post where I presented it). In that work we analysed the impact of geographic resolution on mobility data when locating the real borders of human mobility. In this updated version, we also throw temporal resolution into the mix. The new paper is "Spatial and Temporal Evaluation of Network-Based Analysis of Human Mobility". So what does the prediction of human mobility have to do with my blabbering about Big Data?

Big Data is founded on the idea that more data will increase the quality of results. After all, why would you gather so much data that you don't know how to manage it, if it were not for the potential returns? However, sometimes adding data will actually decrease the research quality. Take the Google Translate example again: a non-native speaker could add noise by providing incorrect translations. In this case the example does not really hold, because it's likely that the vast majority of contributions come from people who are native speakers in one of the two languages involved. But in my research question about human mobility it still holds. Remember the technique in the paper: we have geographical areas and we consider them nodes in a network. We connect two nodes if people travel from one area to the other.

Let's start from a trivial observation. Weekends are different from weekdays. There's sun, there's leisure time, there are all those activities you dream about when you are stuck behind your desk Monday to Friday. We expect to find large differences between the networks of weekdays and the networks of weekends. Above you see three examples (click for larger resolution). The number of nodes and edges tells us how many areas are active and connected: there are far fewer of them during weekends. The number of connected components tells us how many "islands" there are, groups of areas with no flow of people between them. During weekends, there are twice as many. The average path length tells us how many connected areas you have to hop through, on average, to get from any area to any other area in the network: higher during weekdays. So far, no surprises.
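For reference, these are all standard measurements. Here is a minimal sketch of them with networkx, on a handful of made-up trips; the real analysis obviously runs on the actual mobility data:

```python
import networkx as nx

# Made-up trips between areas; the real data would be split into weekday and
# weekend trip lists.
trips_weekday = [("A", "B"), ("B", "C"), ("C", "A"), ("A", "D"), ("D", "B")]
trips_weekend = [("A", "B"), ("C", "D")]

def describe(trips):
    g = nx.Graph()
    g.add_edges_from(trips)  # nodes are areas, edges mean people travel between them
    largest = g.subgraph(max(nx.connected_components(g), key=len))
    return {
        "nodes": g.number_of_nodes(),
        "edges": g.number_of_edges(),
        "connected components": nx.number_connected_components(g),
        "avg path length (largest component)":
            nx.average_shortest_path_length(largest),
    }

print("weekdays:", describe(trips_weekday))
print("weekends:", describe(trips_weekend))
```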

If you recall, our objective was to define the real borders of the macro areas. In practice, this is done by grouping together highly connected nodes and saying that they form a macro area. The practical purpose of this grouping is to help us predict within which border an area will be classified: it's likely that it won't change much from one day to another. The theory is that during weekends, for all the reasons listed before (sun'n'stuff), there will be many more trips outside of a person's normal routine. By definition, these trips are harder to predict, therefore we expect to see lower prediction scores when using weekend data.
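As a side note, the grouping step itself is conceptually simple. Here is a toy sketch of grouping highly connected areas into macro areas; greedy modularity maximization is used here as a generic stand-in, not necessarily the border-detection method of the paper, and the flows are made up:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy flow network between areas, with edge weights as trip counts (made up).
g = nx.Graph()
g.add_weighted_edges_from([
    ("A", "B", 50), ("B", "C", 40), ("A", "C", 45),  # one tightly knit macro area
    ("D", "E", 60), ("E", "F", 55), ("D", "F", 50),  # another one
    ("C", "D", 2),                                   # a weak tie crossing the border
])

# Group highly connected areas together; each group is a candidate macro area.
macro_areas = greedy_modularity_communities(g, weight="weight")
for i, area in enumerate(macro_areas):
    print(f"macro area {i}: {sorted(area)}")
```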

The first part of our theory is proven right: there are indeed far fewer routine trips during weekends. Above we show the percentage of routine trips over all trips per day. The consequences for border prediction hold true too. If you use the whole week's data to predict the borders of the next week you get poorer prediction scores. Poorer than using weekday data to predict weekday borders. Weekend borders are in fact much more volatile, as you see below (the closer the dots to the upper right corner, the better the prediction, click for higher resolution):

In fact we see that the borders are much crazier during weekends, and this has a heavy influence on the whole-week borders (see maps below, click to enjoy their andywarholesque larger resolution). Weekends also make up a larger share of our data (2/7) than the noisy contributions in the Google Translate example.

[Maps: weekday, weekend, and whole-week mobility borders]

The conclusion is therefore a word of caution about Big Data. More is not necessarily better: you still need theoretical grounds when you add data, to be sure that you are not introducing noise. Piling on more data, in my human mobility study, actually hides results: the high predictability of weekday movements. It also hides the potential interest of more focused studies about mobility during different types of weekends or festivities. For example, our data covers the month of May, and May 1st is a special holiday in Italy. To re-ignite my Google Translate example: correct translations in some linguistic scenarios are incorrect in others. Think about slang. A naive Big Data algorithm could be caught in the middle of a slang war, with each faction claiming a different correct translation. A smarter, theory-driven algorithm will realize that there are different slangs, so it will reduce its data intake and solve the two tasks separately. Much better, isn't it?

20 March 2014 ~ 0 Comments

When Dimensions Collide

The literature about community discovery, which deals with the problem of finding related groups of nodes in a network, is vast, interesting and full of potential practical applications. However, if I had to give one critique of it, it would be about its self-referential character. Most community discovery papers I read in computer science and physics journals are mainly about finding communities. Not much time is spent thinking about what to do with them, or what they mean. My first post in this blog was about a community discovery algorithm. Recently, an extended version of that paper was accepted by a computer science journal. Since that first post, I (mainly) added some crucial modifications and features to the algorithm. I don't want to talk about those here: they are boring. I also didn't bring up this paper to boast about it. Okay, maybe a little. I did it because the paper touches upon the issue I am talking about here: it tries to do something with communities, it tries to explain something about them. Namely, it asks: why do communities overlap?

First of all: communities do overlap. When trying to detect them, many researchers realized that hard partitions, where each node can belong to one and only one community, are not always a good idea. Most of them found this a problem. Others were actually very happy: the problem gets harder! Nice! (Researchers are weird.) Blinded by their enthusiasm, they started developing algorithms to deal with this overlap. Not many asked the question I am trying to answer here: why do communities overlap? As a result, some of these algorithms detect this overlap, but using approaches that do not really mean anything in real life: it's just a mathematical trick. Others, instead, build the algorithm around a core hypothesis.

This hypothesis is nothing unheard of. Communities overlap because people have complex lives. Some of your college mates also attend your yoga class. And you know your significant other's colleagues, which puts you in their community. All these communities have you as a common member, and probably some more people too. The beauty of this is that it is not only intuitive: it works well in finding communities in real world social networks. So well that it is the assumption of my approach and of many other outstanding algorithms (this and this are the first two that pop into my mind, but there are probably many more). Another beautiful thing about it is that it is almost obvious, and so it is probably true. But here we hit a wall.

The fact that it is simple, reasonable and works well in practice proves nothing about it being true. There are things that are neither simple nor reasonable, but nevertheless true (hello quantum physics!). And there is practical knowledge that does not quite correspond to how things work (in my opinion, most computer science is a patch and nobody really knows why it works). Unless we test it, we cannot say that this nice practical principle actually corresponds to something happening in reality. So how do we go on and prove it? In the paper I proposed a first step.

This brings me back to another old love of mine: multidimensional networks. They are networks in which we put multiple relations in a cage together in mating season and see what happens (research is fun). The idea behind the paper is that multidimensional networks give us the perfect tool to test the hypothesis. In monodimensional networks you have no clue why two people are connected, besides the obvious "they know each other". In a multidimensional network, you know why they know each other: it's information embedded in the type of the relation. So, the hypothesis is that different types of relations are the cause of the community overlap, and with multidimensional networks we can look at how communities distribute over relations. First, let us take a look at what two overlapping communities look like in a multidimensional network.
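To fix ideas before the real data, here is a toy sketch of this kind of per-dimension comparison. Clique percolation stands in here for DEMON (which is what we actually used), and the nodes and edges are entirely made up:

```python
import networkx as nx
from networkx.algorithms.community import k_clique_communities

# Two dimensions over the same set of nodes, with made-up edges.
facebook = nx.Graph([(1, 2), (2, 3), (1, 3), (3, 4), (4, 5), (3, 5), (5, 6)])
foursquare = nx.Graph([(1, 2), (2, 4), (1, 4), (4, 5), (5, 6), (4, 6), (2, 3)])

def overlapping_communities(g):
    """Overlapping communities of one dimension (clique percolation, k=3)."""
    return [frozenset(c) for c in k_clique_communities(g, 3)]

def overlapping_nodes(comms):
    """Nodes sitting in more than one community: the overlap we care about."""
    return {n for i, c1 in enumerate(comms)
              for c2 in comms[i + 1:] for n in c1 & c2}

for name, g in [("Facebook", facebook), ("Foursquare", foursquare)]:
    comms = overlapping_communities(g)
    print(name, "communities:", [sorted(c) for c in comms],
          "overlap:", overlapping_nodes(comms))
```

Even on this toy example, the node shared between the two communities changes from one dimension to the other, which is exactly the kind of behavior described below.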

We collected a multidimensional social network putting together relationships between users in Facebook, Twitter and Foursquare. We then used DEMON to extract overlapping communities from each dimension. Finally, we took two communities with extensive overlap in the Facebook dimension (picture below).

We then looked at the very same set of nodes, but now in the Foursquare network. In the picture below, we kept the edges, and the node positioning, of the Facebook network to make the comparison easier, but keep in mind that the edges in the Foursquare dimension are different, and they are the ones that decide to which community the nodes belong.

Very interesting. The communities look a lot alike, although the shared (and non-shared) nodes are slightly different. Now node 7369 is shared (it wasn't in Facebook) while node 8062 isn't (whereas it was before). Let's put another nail in the coffin and see which communities these nodes belong to in Twitter (same disclaimer applies):