Deep work: the ability to focus without distraction on a cognitively demanding task. Our jobs place ever greater cognitive demands on us. Gone are the days of manual, repetitive drudgery, the hazardous physical work that we implicitly associate with the very word work. Entering the information era, we enter a market of possibilities, but most of them are information work: the office job, the sales job, the service job, even the student's work. What these have in common is a focus on cognition. All this is obvious. What is less obvious is that merely performing a cognitive task is not what brings success. According to Newport, there are two types of cognitive tasks, shallow and deep, and only deep work propels us forward. Deep work cannot be multitasked and cannot be performed distractedly. What’s worse, the whole world is changing in a way that makes deep work harder than ever before. The rise of the internet, instant messaging, and smartphones all contribute to a decreased attention span, with a distraction machine available a swipe away. Therefore, to succeed today we must hone our ability to do deep work, and do it well, at the very time it is getting ever harder. The book is broadly separated into two parts. First, it defines deep work and convinces you of its importance. Then it identifies the types of shallow work that keep us busy but are not worth the time: […]
In early 2015, we formed a team, Biolab Ljubljana, to enter a competition on predicting the odor of molecules. Given 4000+ features describing the chemical structure of a molecule, the task was to predict its intensity, pleasantness, and 19 semantic odor categories ranging from garlic and fishy to spicy and musky. Our team created an ensemble of different machine learning methods, including gradient-boosted trees, ridge regression, and random forests. We achieved 3rd place, and the final aggregated model came close to the theoretical limits of prediction (compared to an individual’s test-retest variance). The report was published in Science, where you can find more information about the task. I can now say I’ve published in Science! (although you’ll have to dig into the supplemental material to find me listed as one of the additional authors). Link to the full paper: http://science.sciencemag.org/content/sci/early/2017/02/17/science.aal2014.full.pdf
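To illustrate the ensembling idea (a minimal sketch, not the actual competition code; the data here is synthetic and the models use default settings), averaging the predictions of the three model families looks roughly like this:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge

# Toy stand-in for the molecular feature matrix (the real task had 4000+ features).
X, y = make_regression(n_samples=200, n_features=50, noise=10.0, random_state=0)
X_train, X_test = X[:150], X[150:]
y_train, y_test = y[:150], y[150:]

models = [
    GradientBoostingRegressor(random_state=0),
    Ridge(alpha=1.0),
    RandomForestRegressor(n_estimators=100, random_state=0),
]

# Simple unweighted ensemble: fit each model, then average their predictions.
preds = []
for model in models:
    model.fit(X_train, y_train)
    preds.append(model.predict(X_test))
ensemble_pred = np.mean(preds, axis=0)
print(ensemble_pred.shape)  # one averaged prediction per test molecule
```

A real entry would tune and weight the models (e.g. by cross-validated error), but even a plain average often beats each individual model.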
Original post on Zemanta’s blog, reproduced here for posterity: It’s every advertiser’s worst nightmare: advertising on a seemingly legitimate site only to realize that the traffic and/or clicks from that site are not the coveted genuine human interest in the ad. Instead they find fake clicks, unintentional traffic, or just plain bots. In a never-ending quest for more ad revenue, website publishers scramble for ways to impersonate their more successful counterparts. However, not all approaches are as respectable as improving readability and SEO. One pernicious tactic is sharing traffic between two or more sites. Of course, almost all websites share some of their visitors, but this percentage is small. Moreover, as a site accumulates more visitors, the probability of a large overlap occurring by chance becomes infinitesimal. This tactic is commonly used by botnets, so the sites employing this traffic can also be unwitting targets of such schemes. For example, a botnet can add several well-known and respected websites among the suspicious ones, so that the apparent credibility of the malicious sites is artificially boosted. The question is thus: can we identify these traffic-sharing websites? And if so, how? The answer to the first question is yes, and to the second is this blog post. Our problem lends itself nicely to a network approach called a covisitation graph. We will construct a graph in which sites that share traffic are tightly connected, especially when visitors are shared between several sites, as is usually the case. We can […]
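To make the construction concrete, here is a minimal sketch of a covisitation graph, with a made-up browsing log and an arbitrary threshold (the real pipeline and cutoffs differed): count shared visitors per pair of sites and keep only the pairs that share suspiciously many.

```python
import itertools
from collections import defaultdict

# Hypothetical browsing log: visitor id -> set of sites they visited.
visits = {
    "u1": {"news-a.com", "blog-b.com", "shady-c.com"},
    "u2": {"news-a.com", "blog-b.com", "shady-c.com"},
    "u3": {"news-a.com", "blog-b.com", "shady-c.com"},
    "u4": {"news-a.com", "portal-d.com"},
}

# Count how many visitors each pair of sites shares.
shared = defaultdict(int)
for sites in visits.values():
    for pair in itertools.combinations(sorted(sites), 2):
        shared[pair] += 1

# Keep only pairs sharing suspiciously many visitors; the resulting
# adjacency list is the covisitation graph.
THRESHOLD = 3
graph = defaultdict(set)
for (a, b), count in shared.items():
    if count >= THRESHOLD:
        graph[a].add(b)
        graph[b].add(a)

print(sorted(graph))  # ['blog-b.com', 'news-a.com', 'shady-c.com']
```

The three sites that share all their visitors end up in one tightly connected component, while the one-off overlap with portal-d.com falls below the threshold; at scale the threshold would be set relative to each site's total traffic.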
I recently finished a 10-week research visit at Stanford, working under Prof. Jure Leskovec. Here’s a short summary of my visit, and you’ll soon be able to read more about the research I did there. You can also check out my Facebook photo album. After settling in on campus in the graduate residences, I went around to look at everything Stanford has to offer. Its architecture is unifying and gives it a very distinguished look. Some of its most beautiful buildings are the Huang Engineering Quad, the Oval, the church, and the Main Quad. One of our main excursions was a visit to San Francisco. Vid, Jose, Klemen Kotar, and I gathered at Uber headquarters for a workshop, after which we toured all around the city: walking along the famous Market St with its abundance of skyscrapers, through the financial district, and all along the coast on the Embarcadero, passing by the seals, from where we could see the infamous Alcatraz and even get a glimpse of the Golden Gate Bridge from afar, shrouded in the characteristic San Franciscan fog that envelops the tall buildings even at midday. We also visited Lombard St, the “crookedest” street in SF, as well as the Ghirardelli chocolate factory, where we got some free chocolate! The week after, Vid and I went with France and Mia Rode to a picnic at Twin Pines Park in Belmont, where many 1st-, 2nd-, and 3rd-generation Slovenians gathered for an afternoon of pleasant company and good food. […]
The Kangaroo Math Competition is a well-known international math competition designed for kids up to and including high school. This year, however, Slovenia’s math and physics society hosted one for college students for the first time. The competition consisted of two rounds: a regional level, which served as a qualifier, and a state level (covering the whole of Slovenia). Luckily, I managed to get a bronze medal in the regionals and got to go to the finals. Even more luckily, I snagged a silver medal at the state level, even though I probably solved half as many questions! In any case, both competitions were a really great experience, and I got to practice my math and logic skills, which were getting rusty in a computer science major /s.
My first journal submission just got accepted! It is the final, improved and polished version of the segmentation work I presented at ERK last year. The arXiv preprint is available for now; the final version will be published when the paper appears in this year’s Elektrotehniški vestnik (Journal of Electrical Engineering and Computer Science). The main differences from the conference paper are improved accuracy, additional pre-processing algorithms, and an overall more polished method.
Data Scientist: The Sexiest Job of the 21st Century. Now that I have your attention… It seems like everyone and their manager wants a data scientist in their company to boost profits and use #bigdata, yet there does not seem to be a good definition of what a data scientist is supposed to do, or even what kind of knowledge and expertise they must possess. Definitions range from Drew Conway’s famous Venn diagram, which probably oversimplifies things, to a recent lengthy discussion on CrossValidated, the aptly-named Stack Exchange for statisticians, which probably overcomplicates them. I will not try to present a succinct yet encompassing definition that would just get lost in this sea of failed attempts. But we can at least enumerate the plethora of interdisciplinary skills that data scientists are expected to have. The degree requirements alone showcase the versatility of this position: a degree in any of the following — Computer Science, Statistics, Applied Math, Physics, Engineering, or basically any quantitative field — and it can be a BSc, MSc, or PhD in any of these areas. Turning to the skills, we can split them into a few broad areas of expertise, and the more of them a candidate possesses, the better. So basically, you’re expected to be familiar with every concept described below. Computer Science R & Python – You want a scripting language for fast prototyping, and these two are equipped with excellent data manipulation (numpy, pandas) and visualization (ggplot, matplotlib) libraries, in addition to machine […]
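As a taste of why these libraries make prototyping fast, the load-aggregate-inspect loop can be this short (a toy example with random data, not any particular analysis):

```python
import numpy as np
import pandas as pd

# Tiny example of the fast-prototyping loop: build a frame, aggregate, inspect.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group": rng.choice(["a", "b"], size=100),
    "value": rng.normal(size=100),
})

# One line to get a per-group summary table.
summary = df.groupby("group")["value"].agg(["mean", "count"])
print(summary)

# And one more line away from a plot (commented out to keep this headless):
# df.boxplot(column="value", by="group")
```

The equivalent in a compiled language would take an order of magnitude more code, which is exactly the point of reaching for R or Python first.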
This is one of those books that completely change your outlook on a topic, and the topic of this book was none other than evolution. I had no idea this was Dawkins’s most famous book before reading it. And I had no idea he was such an accomplished biologist. The book is best described as one long and incredibly detailed account of how the core of evolution is the replication of genes. Interestingly, it was written as both an original academic text and a popularization of the same theory, so it is probably as close as we can get to reading actual research in this area. Even though it is almost 40 years old, it does not feel dated at all. As with Steven Pinker’s book on the decline of violence, the whole text serves to hammer home a single important point: here, that the main replicators are the genes (and not the species, as commonly imagined). Dawkins conjures up all possible criticisms of this theory, far beyond simple strawmen, and then addresses each one. This is one of those books that really stick with you, and I definitely recommend it.
This year I received an award given to the students with the highest grade average of the past year. I was honored to receive the award, of course, but I was also delighted about the book I got (with the certificate placed as its first page). In an incredible coincidence, the book awarded — Richard Dawkins’s “The Selfish Gene” — was exactly the book I was reading on my e-book reader at the time. Although I was already halfway through, it was a joy to have the rare experience of reading a book in paper format. A full review of the book is coming soon.
I presented a short paper on machine learning algorithms at this year’s Information Society multiconference. It was a continuation of a project for my Machine Learning course. Prof. Bosnić and I looked at which feature selection techniques and which machine learning algorithms work best for gene microarray data, which has very few observations and many features (genes). The most interesting finding was that genes predictive of one cancer were also predictive on other data sets with different types of cancer. Our paper can be found in the proceedings under the Intelligent Systems section (Volume A, p. 17). More info on my research page.
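A minimal sketch of the wide-data setting (synthetic data and default models, not our actual experiments): with far more features than samples, a univariate filter such as scikit-learn's SelectKBest can whittle thousands of "genes" down to a handful before a classifier is fit.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Synthetic stand-in for microarray data: few samples, many "genes".
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2000))
y = rng.integers(0, 2, size=40)
# Shift a handful of genes so selection has something real to find.
X[:, :5] += y[:, None] * 2.0

# Univariate ANOVA F-test keeps the top-k features before fitting the
# classifier, which helps when features vastly outnumber observations.
model = Pipeline([
    ("select", SelectKBest(f_classif, k=20)),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X, y)

kept = model.named_steps["select"].get_support()
print(kept.sum())  # 20 features survive the filter
```

Putting the selector inside the pipeline matters: selecting features on the full data before cross-validation would leak information and inflate the accuracy, a classic pitfall with microarray-style data.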