The Global Source for Social Media Researchers

SMRA Blog 

  • 15 Mar 2019 8:49 AM | Kathy Doering (Administrator)

    Dark Social is an often-overlooked source of data for many brands. We can learn a lot from brands like Starbucks that go beyond the public social media threshold to find unique ways to engage with their customers. #socialmediaresearch 


    The coffee giant is trying to bring its marketing and product development closer together so it can jump on trends faster and get better insight on what resonates.

    By Sarah Vizard 14 Mar 2019 1:40 pm


    Starbucks is exploring how it can use private groups and accounts on social media to better engage with consumers around product development and testing as it looks to evolve its social media strategy.

    Speaking at an event held yesterday (13 March) by social media consultancy The Social Element, Reuben Arnold, Starbucks’ vice-president of marketing and product in EMEA, said the moves are helping it have “deeper conversations” with some customers and bring product and marketing closer together.

    “What I’m most excited about [on social media] is some of the possibilities around private groups and private accounts on social media channels,” he said. “When we think about the crossover between product and marketing, it really allows us to have a much deeper conversation with certain customers who really do care about our brand, who can then get much more involved in things like product development and testing, and we can use the audience in a much more meaningful way.”

    The groups are mostly on Facebook at the moment, although Arnold told Marketing Week Instagram is also “starting to take shape”. And while they won’t replace more traditional market research practices, Starbucks hopes this can offer a more natural way of engaging and enable it to go deeper with consumers.

    “Because I look after marketing and product, that really is quite an exciting opportunity,” he explained. “Like a lot of brands, we use trend analysis but social is a great channel to look at where those trends are starting to gain traction. Then through our own audiences it’s about how we create a highly engaged audience.

    The crossover between product and marketing allows us to have a much deeper conversation with customers who really do care about our brand, who can then get much more involved in things like product development.

    Reuben Arnold, Starbucks

    “We do that with online panels but [that] is a less natural way for people to communicate and it’s not so interactive. You can do conjoint analysis on an online panel but I find that when it’s in more of a conversational environment, it allows us to be more dynamic with our questioning and explore what resonates.”

    Starbucks is also using social media to listen not just to what customers are saying about the brand and its products but also trends in the market. For example, with dairy-free alternatives, social media allowed Starbucks to see that the conversation was shifting from a focus around intolerance to one around health and wellness, and then taste.

    “That really helped us think, we need to get ahead of this, particularly given some of our development timelines,” he said, adding: “Listening outside our own channels is important.”

    Measuring the ROI of social

    While investment in social media advertising is increasing, reaching 13.8% of marketers’ overall budgets according to the latest CMO Survey, marketers are struggling to measure its impact. Just a quarter (24.7%) of marketers say they can prove the impact of social activity quantitatively, while 39.3% say they are unable to show any impact at all.

    Yet Starbucks won a silver IPA Effectiveness award last year for its social media strategy, which it calculates returned almost £4 in additional profit for every £1 spent.

    “What we were trying to demonstrate is that as a brand it’s about building an audience and building a connection,” said Arnold. “It wasn’t just a one-off big campaign, it was about building a connection with a particular segment audience and then doing interesting and different things each year to continue to engage with that audience and grow that audience base.”

    Starbucks uses econometrics in its more mature markets, including the UK, to judge success, saying it can see a “very direct correlation” between social media activity and footfall in-store and ladder that up to how many products it sold versus how much was invested.

    It also measures brand equity, through follower counts in new markets and sentiment and engagement in more mature ones. The key, said Arnold, is to set very clear KPIs by market and to stay focused on who is being targeted.

    “When we launch in a new market like Italy or South Africa, followers is quite an interesting link to brand awareness and brand affinity, whereas somewhere like the UK it’s going to be much more about things like mentions, sentiment, engagement,” he commented.

    “The answer is setting very clear KPIs by market and being very clear on who it is you are trying to target, rather than having broad or general metrics around how many likes, shares or follows.”

    Staying in control of social media

    The scale of social media can make measuring impact difficult. According to a survey of 60 senior marketers by The Social Element, 55% cite a lack of resource versus the size of the challenge as a big concern, while a third are concerned about controlling and engaging with content and 29% feel they have too many channels to manage.

    At Starbucks, this challenge is made greater by the fact the EMEA region covers 45 markets, each at different stages of maturity and with different local needs. To manage this, Starbucks uses a combination of guidelines and guidance, providing local social media managers with content in a toolkit format that can be edited, in a controlled way, for local language and relevance.

    “Our challenge is, how do we show up as a globally consistent brand but also show up in a market in a relevant way? Given most of our communication will be through digital and social media channels, that comes with a lot of risk as we hand over responsibility for activations in markets that aren’t directly operated by ourselves [but by franchisees],” he said.

    “Being clear on what is on-brand but also what is not on-brand is really important.”

    Our challenge is, how do we show up as a globally consistent brand but also show up in a market in a relevant way?

    Reuben Arnold, Starbucks

    For Shell, which also spoke at the event, holding evaluation sessions is key to ensuring social media is doing the job it is meant to do and ladders up to the business strategy.

    “It’s not about having a social media strategy, it’s about having social support your business strategy and what you are trying to do. People often get confused by the two,” said Lee Goodger, social media business strategist at Shell.

    “Sometimes there’s an expectation that social should be the answer to everything; it can do many things but it can’t do everything.”

    Having a clear strategy, added Tamara Littleton, CEO at The Social Element, can mean brands understand they don’t have to do everything on social and instead focus on what is important to them.

    “People think there is a nirvana of all brands should be monitoring everything that’s being said, jumping into the conversation and being witty and doing positive responses, analysing what’s being said, creating amazing content. All of that is really important but you shouldn’t be doing it all the time,” she concluded.

    “By having that control that links back into the strategy you can be really clear on when is the best time to create the most compelling content, how do you then engage on the back of that, how you use social listening to identify if that was successful. We need to try and slow down and to do that you need good governance and strategy.”


  • 5 Mar 2019 3:36 PM | Kathy Doering (Administrator)
    We had to share this one! One never knows what they will discover when monitoring social media for brand mentions!

    The rocker recently posted an ad for Vanity Fair napkins to Instagram, after missing out on the other Vanity Fair's Oscar party

    By Olivia Raimonde. Published on March 4, 2019.

    Credit: johnmayer via Instagram

    If John Mayer isn't receiving as much love as he'd like from Vanity Fair magazine, at least he is from the napkin brand of the same name.

    Mayer reportedly threw his own "fake Vanity Fair" Oscars party after he was unsure whether he had an invite to the famous annual shindig. Now it looks like Mayer has parlayed that into a new side hustle: inspiring creative for Vanity Fair napkins.

    On Monday, Mayer posted an ad for the napkin brand, which the company created after he discussed the idea on his Instagram Live talk show, Current Mood.

    Apparently, Vanity Fair napkins, which is not affiliated with the Condé Nast magazine or the lingerie brand, liked it so much that it produced the commercial and posted it to its Instagram account with the caption: "Brilliant idea courtesy of #NapkinInfluencer and Creative Director, @johnmayer."

    The animated ad features a character, who resembles a bearded John Mayer, asking a woman for her number. When she offers to write it on a Vanity Fair napkin, he laments "If you're going to deface Vanity Fair napkins, I don't think I want it anymore."

    Melissa Kinard, communications manager for Vanity Fair and a personal fan of John Mayer, had seen the video of his faux Vanity Fair Oscars party and his idea for the ad while watching Current Mood.

    "It's obviously very silly. But we thought that would be a really fun opportunity for us to make that come to life," she says. "For us as a brand and as a company, this kind of personalization is a key strategy for how we want to communicate with consumers of any kind, so this was a really fun opportunity for us to really take that personalization to another level."

    If Mayer keeps pitching gems like this, then maybe he'll finally earn that official invite to next year's Oscar party. Until then, the Napkin Influencer title will have to do.



  • 1 Mar 2019 8:47 AM | Kathy Doering (Administrator)

    Article published in Greenbook Blog by our member, Michalis Michael, Chief Executive Officer, DigitalMR Ltd


    Michalis Michael

    Monday 21 January 2019, 7:00 am

    Editor’s Note: The latest GRIT findings, previewed in these pages earlier this month, show that use of social media analytics continues to grow, especially among client-side practitioners. Despite this continuing growth, optimistic forecasts from earlier this decade suggested that its use would be even more widespread by this point. Michalis Michael discusses some of the reasons for this state, and argues that growth has turned a corner. The advancements in the science underpinning social analytics have been key. Given these, can we finally consider social media analytics mainstream?

    I think every market research professional will agree that it is unthinkable for a company to be selling products in major retailer chains and not be tracking sales, volume and value shares, pricing and distribution, by subscribing to a retail measurement report on the relevant product category. Isn’t it therefore only natural that social media listening reports ‒ for brand share of voice ‒ will one day be as important as retail measurement reports ‒ for brand market share ‒ to the Marketing Director of a CPG company?

    It took a bit longer than expected but 2018 was a good year in terms of progress. Fellow agencies and end clients alike started to dabble in online data sources and high precision analytics using machine learning, on the path to insights discovery, campaign evaluation, brand reputation management, even micro-influencer identification or lead generation. Social media listening & analytics traction… at last!

    Humans tend to think in a linear way. We also tend to overestimate the short term and as a result we underestimate the long term. Case in point, I “bought” Joan Lewis’ (then SVP of Global Consumer and Market Knowledge at P&G) prediction when at a 2011 ARF conference she said:

    “Survey research will decline dramatically in importance by 2020, with social media listening replacing much of it and adding new dimensions.”

    What I extrapolated from this statement back then was that by 2020 at least 30%-40% of market research budgets would be spent on social listening & analytics, moving away from surveys and other traditional MR methods. So here we are today, just under two years from the end of 2020, with the social analytics market worth – according to Reuters – only US$3.4 billion in 2017. The total market research market was US$76 billion in the same year, with social listening accounting for a mere 4.5%.
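    That 4.5% share is simple arithmetic; a quick back-of-envelope check in Python, using only the Reuters dollar figures cited above:

    ```python
    # Market sizes for 2017, in billions of US dollars (figures cited from Reuters).
    social_analytics = 3.4
    total_market_research = 76.0

    # Social listening & analytics as a share of total market research spend.
    share_pct = social_analytics / total_market_research * 100
    print(f"Social listening share of MR spend: {share_pct:.1f}%")  # ~4.5%
    ```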

    Maybe you have heard of the 10-years-to-overnight-success pattern; it became a thing after Jeff Bezos first said it, and social analytics looks like another case that proves it. In the graph below you can see the flat linear growth between 2010 and 2015, followed by the 2020 forecast of US$9 billion (by DigitalMR in January 2017) and the 2023 forecast of US$16 billion (by Reuters in February 2018). It looks like we are finally on the path to the exponential growth we had been expecting year after year.

    There are many reasons the market research industry has been so slow to embrace social listening & analytics, and fear of the unknown may actually be a part of it, but I think it was mainly due to:

    Loss of trust ‒ buyers of market research tried social media monitoring tools developed by pure technology companies a few years back, found the accuracy of the analysis to be low, and decided the approach was simply inappropriate for the purposes of market research.

    Existing subscriptions ‒ client-side market research departments sometimes use the social media monitoring tool their media agency or digital marketing department subscribes to, but only in a very superficial way, as they believe that this discipline sits somewhere else – in another silo. As in point 1, they believe that it has limited applicability to them as insights professionals.

    Where are the insights? ‒ whether through ad-hoc reports from their media agency or analysis received directly from the aforementioned DIY tools, MR buyers have been left underwhelmed in the past by reports from social – even if we ignore the accuracy problems. Technology can only go so far; you need someone with consumer insights expertise to make the most of the analysed posts.

    Let’s stick to what we know ‒ potentially related to long experience and years spent working with traditional MR methods, there are still colleagues out there who refuse to let go of notions such as “sampling” and “respondents”; they find it too risky to accept social intelligence as a new and valid market research method that can complement, and sometimes completely replace, existing methodologies.

    Most social media monitoring tools achieve 60% accuracy at best (many far lower than that) on any of the important metrics, i.e. brand relevance, sentiment or topic. This is not due to inexperience or lack of knowledge or options, but purely a choice made by companies to maximise profits. It takes a lot of effort to create custom machine learning models ‒ which is still the only way to reach accuracies of over 80% if done right ‒ but the extra effort pays off in the end. Even machine learning can get it wrong if the model is trained on low-quality (or the wrong) data! In a 2013 face-off of market research versus four traditional social media monitoring tools, with Carlsberg as the client, we found that over 90% of the online posts analysed by the four tools were not even about the brand and the digital campaign being evaluated. In another face-off, for SABMiller, 80% of posts classified by the social media monitoring tool as negative were found to be positive, and 56% of the posts classified as neutral were in fact negative. A third party appointed by the client found the overall sentiment accuracy of the market research approach to be 87%, versus 44% for the social media monitoring tool approach.

    Growth will only continue to happen if the social listening and analytics vendors out there can prove to non-users that there is real value to their business, by demonstrating that they can accurately interpret what customers are saying on social media and translate it into actionable insights. On the buyer side of things, there needs to be more open dialogue between the various departments of an organisation. There needs to be an alignment of interests across the business as a whole, and a realisation that everyone can benefit from such solutions more or less equally, just in a different way. Social listening & analytics is not just for the marketing department, it’s not just for PR crises, and not only about monitoring the success of corporate social media posts; social insights can serve many purposes, and a “one stop shop” that works for all departments is better for the company as a whole.

    Beyond ‘social’ as a standalone source of data and insights, there is ample evidence that integrating social listening with surveys and real consumer behaviour such as purchases or website visits brings impressive value to the table, particularly when it comes to discovering correlations between social sentiment and sales. As Schweidel and Moe suggest in their paper Listening in on Social Media: A Joint Model of Sentiment and Venue Format Choice, published in the American Marketing Association’s Journal of Marketing Research in 2014, “examination of this sentiment could provide guidance to brands on which domains are most critical to monitor and actively engage social media contributors”. In an R&D study conducted in 2017 and presented at the ESOMAR MENAP conference, it was impressive to discover that the beta coefficient for positive sentiment and sales was double that of negative sentiment and sales! Of course, all the effort and resource you put into social analytics and data integration is a complete waste if the relevance of the data and the accuracy of annotating the posts with sentiment and topic of conversation are lower than they should be.

    Progress has been painfully slow up to now, and according to the forecasts described above “listening” will only account for around 20% of the total market research market by 2023, but hey, much better than the 4.5% it was in 2017.


  • 18 Feb 2019 12:05 PM | Kathy Doering (Administrator)

    In today’s world of what’s trending now, Instagram is leading the charge. But what many users and businesses have found is that building an audience and trying to connect with people on Instagram can be difficult. People are definitely more selective on Instagram.

    When Facebook and Twitter were new, you accepted friend requests from everyone. You sent a friend request to all the new people you met at a party and you followed thousands of profiles on Twitter in the hopes that they'd follow you back. While you can build large audiences this way, what also happens is that you end up with a feed full of throwaway information and a steady flow of random updates from folks you barely remember.

    On Instagram, people don't follow as readily. This poses a new challenge for brands, making them work harder for attention. But once you do win over new followers, they're far more likely to become customers, because they're following based on actual interest in what you post.

    So how do you win over more resistant Instagram audiences and build an engaged following?

    Instagram has added half a billion new users within the last two years, while Instagram Stories, launched in August 2016, just hit a new milestone of 500 million daily users. It's clear that Instagram is getting more attention, but a particularly relevant finding of the study for marketers is that many users are actually going to Instagram to connect with brands.

    "When we asked people what they associated with Instagram, some of their top responses were that the platform allows interaction with celebrities and influencers. Additionally, 2 in 3 respondents (66%) told us that Instagram is a platform that allows interaction with brands. These interactions might take the form of a snack company using polls in Instagram Stories to let fans vote on a new chip flavor, or a fashion brand reposting photos of the chic outfit of the day."

    While Facebook has been actively working to boost interaction with friends and family members in order to increase in-platform engagement, the findings for Instagram almost reflect the opposite. Users are coming to see content from influencers and brands, and welcoming that within their feed - when they choose to follow those profiles.

    So what do Instagram users expect to see from brands on the platform and how can you reach those users?

    Create short and well-targeted posts and focus on eye-catching content. Familiarize yourself with what works, and not just in terms of posting like-baiting inspirational quotes, but in regards to creating content that both works to promote your business, and connects with what your target audience responds to.

    Do you know the importance of using Instagram Stories?

    The newest trend on Instagram that businesses need to incorporate is the use of Instagram Stories. Instagram Stories is a feature that lets users post photos and videos that vanish after 24 hours.

    The feature feels much like Snapchat Stories, a Snapchat feature that was introduced in 2013 and a pivotal part of the company’s growth. And like Snapchat, the photos and videos shared in your Instagram Story are ephemeral and can’t be viewed once 24 hours have elapsed.

    With Stories, brands have a chance to take their followers on a journey and tell the story behind the posts in their feed. Live video is extremely engaging, and though Instagram Stories won’t allow for a long, uninterrupted broadcast like Facebook Live or Periscope, it could allow brands to make their Instagram accounts the place to go for live, interactive content.

    Conclusion

    Given these shifts in user habits, building an audience on Instagram is not easy; it's not as simple as liking every post from every person who engages with a certain hashtag, or using the old 'follow back' approach to build up your numbers. Research the platform, look at what others in your industry are posting and what their audiences are responding to, and scan through the relevant hashtag lists to see what people are posting. Once you've got a handle on what you should be sharing, stick to a schedule and build up response data.

    It takes time, but with the right focus, you can tap into the rising Instagram trend.


  • 12 Feb 2019 6:46 AM | Kathy Doering (Administrator)

    As published this week in Forbes,  Kalev Leetaru gives an interesting point of view regarding how social media data fits in with big data for researching purposes. 

    Kalev Leetaru

    Contributor

    AI & Big Data. I write about the broad intersection of data and society.


    Social media has become synonymous with “big data” thanks to its widespread availability and stature as a driver of the global conversation. Its massive size, high update speed and range of content modalities are frequently cited as a textbook example of just what constitutes “big data” in today’s data drenched world. However, if we look a bit closer, is social media really that much larger than traditional data sources like journalism?

    We hold up social media platforms today as the epitome of “big data.” However, the lack of external visibility into those platforms means that nearly all of our assessments are based on the hand picked statistics those companies choose to report to the public and the myriad ways those figures, such as “active users,” are constantly evolved to reflect the rosiest image possible of the growth of social media as a whole.

    Much of our reverence for social platforms comes from the belief that their servers hold an unimaginably large archive of global human behavior. But is that archive that much larger than the mediums that precede it like traditional journalism?

    Facebook announced its first large research dataset last year, consisting of “a petabyte of data with almost all public URLs Facebook users globally have clicked on, when, and by what types of people.” Despite its petabyte stature, the actual number of rows was estimated to be relatively small: the dataset was projected to contain just 30 billion rows when announced and, once completed, to grow at a rate of just 2 million unique URLs across 300 million posts per week.

    To many researchers, 30 billion rows sounds like an extraordinary amount of data that they couldn’t possibly analyze in their lifetime. By modern standards, however, 30 billion records is a fairly tiny dataset and the petabyte as a benchmark of “big data” is long passé.

    In fact, my own open data GDELT Project has compiled a database of more than 85 billion outlinks from worldwide news outlet homepages since March 2018, making it 2.8 times larger than Facebook’s dataset in just half the time.

    Compared to news media, social media isn’t necessarily that much larger. It is merely that we have historically lacked the tools to treat news media as big data. In contrast, social media has aggressively marketed itself as “big data” from the start, with data formats and API mechanisms designed to maximize its accessibility to modern analytics.

    In its 13 short years Twitter has become the de facto face of the big data revolution when it comes to understanding global society. Its hundreds of billions of tweets give it “volume,” its hundreds of millions of tweets a day give it “velocity” and its mix of text, imagery and video offers “variety.”

    Just how big is Twitter anyway?

    The company itself no longer publishes regular reports of how many tweets are sent per day or how many tweets have been sent since its founding and it did not immediately respond to a request for comment on how many total tweets have been sent in its history. However, extrapolating from previous studies we can reasonably estimate that if trends have held there have been slightly over one trillion tweets sent since the service’s founding 13 years ago.

    At first glance a trillion tweets sounds like an incredibly large number, especially given that each of those trillion tweets consists of a JSON record with a number of fields.

    However, tweets are extremely small, historically maxing out at just 140 characters of text. This means that while there are a lot of tweets, each of those tweets says very little.

    In reality, few tweets come anywhere near Twitter's historical 140-character limit. The average English tweet is around 34 characters while the average Japanese tweet is 15 characters, reflecting the varying information conveyed by a single character in each language.

    Moreover, while raw Twitter data can be quite large (a month of the Decahose was 2.8TB in 2012), just 4% of a Twitter record is the tweet text itself. The remaining 96% is a combination of all of the metadata Twitter provides about each tweet and JSON’s highly inefficient storage format.

    Since most Twitter analyses focus on the text of each tweet, this means the actual volume of data that must be processed to conduct common social analytics is quite small.

    Assuming that all one trillion tweets were the maximum 140 characters long, that would yield just 140TB of text (the actual number would be slightly higher accounting for UTF8 encoding).

    In 2012 the average tweet length Twitter-wide was 74 bytes (bytes, unlike characters, account for the additional length of UTF8 encoding of non-ASCII text), which would mean those trillion tweets would consume just 74TB of text: a large, but hardly unmanageable collection.

    If we extrapolate from the 2012-2014 Twitter trends to estimate that somewhere in the neighborhood of 35% of all trillion tweets have been retweets (assuming no major changes in retweet behavior), then using that 74-byte average length would yield just 48TB of unique text.

    Of course, this is before the hyperlinks found in roughly a third of tweets are removed (again, assuming trends have held since 2014). It also ignores the prevalence of “@username” references in tweets that do not contribute to their analyzable text.
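    The back-of-envelope arithmetic above is easy to reproduce. A short Python sketch, using the article's own assumptions (one trillion tweets, a 74-byte average length, and roughly 35% retweets):

    ```python
    TWEETS = 1e12          # ~1 trillion tweets since 2006 (article's estimate)
    MAX_LEN = 140          # historical character limit (bytes, if pure ASCII)
    AVG_LEN = 74           # average tweet length in bytes (2012 figure)
    RETWEET_SHARE = 0.35   # estimated share of retweets (2012-2014 trends)
    TB = 1e12              # bytes per terabyte (decimal)

    upper_bound = TWEETS * MAX_LEN / TB                        # ~140 TB
    realistic = TWEETS * AVG_LEN / TB                          # ~74 TB
    unique_only = TWEETS * (1 - RETWEET_SHARE) * AVG_LEN / TB  # ~48 TB

    print(upper_bound, realistic, round(unique_only, 1))
    ```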

    For comparison, the 2010 Google Books NGrams collection representing 4% of all books ever published totaled 500 billion words (361 billion English words) and was estimated to be around 3TB in size. That would make it 25 times smaller than the totality of Twitter. The Internet Archive’s collection of English language public domain books totals around 450GB of text, making it around 86 times smaller than Twitter.

    The Google and Internet Archive digitized book collections include only a single copy of each book, making it unfair to compare them against Twitter with its myriad retweets. Filtering out retweets, we find that Twitter is just 16 times larger than the Google Books NGrams source collection, while the Internet Archive’s public domain books collection is around 54 times smaller.

    It is a remarkable commentary on the digital era that just 13 years of tweets is larger than the two centuries of digitized books available to researchers today.

    Partially this is due to the fact that such a small portion of our history has been digitized (less than 4% of known published books are represented in the Google Books NGrams dataset). In essence we are comparing the totality of 13 years of Twitter against just a 4% sample of two centuries of books.

    A bigger factor is the fundamentally altered economics of publishing in the digital age. Through the two centuries of printed books in the two collections above, the cost of publishing a book was so substantial that very few authors were rewarded for their efforts with published volumes and every word of a book mattered.

    In contrast, in the Twitter era one’s publishing volume is limited only by the speed one can type (or have a bot type on one’s behalf).

    This means that to truly compare Twitter’s size to other datasets we should compare it against a similar born digital collection. Given that the news dataset above ended up being almost three times larger than the equivalent Facebook dataset in just half the time, how does Twitter stack up against traditional journalism?

    Over the period November 2014 to present, the GDELT Project monitored roughly 3TB of news article text (counting just the article text itself, not the hundreds of terabytes of surrounding HTML, CSS, JavaScript and imagery).

    Over that same time period, we can estimate based on previous trends that Twitter likely published in the neighborhood of 600 billion tweets, of which 330 billion were not retweets (assuming trends have held with retweet volume increasing over time).

    This would work out to around 84TB of text during that period if every single tweet were the maximum 140 characters, or around 44TB using the 74-byte average tweet length. Excluding retweets this would fall to just 24TB of text, assuming an average tweet length.

    News content can contain syndicated wire stories that are republished by multiple outlets, but the volume of such republication as a percent of the totality of daily journalistic output is unlikely to come close to the prominence of retweeting.

    Counting all trillion tweets sent 2006-present and assuming all of them were the maximum 140 characters, the Twitter archive would be just 47 times larger than global online news output 2014-present as monitored by GDELT. Using the more realistic average tweet length, Twitter would be just 25 times larger and removing retweets it would be just 16 times larger.

    Of course, those numbers compare a 13 year stretch of Twitter to just 4 years of news.

    Comparing the two over the same four-year period, we find that Twitter was around 15 times larger than news, but just 8 times larger if retweets are removed.

    Thus, if one had access to the complete Twitter firehose 2014-present, the total volume of text would likely be only around 8 times larger than the total volume of online news content over the same time period.

    Seen in this light, Twitter is large, but it isn’t that much larger than global journalism, reminding us of just how much news is published each day across the world.

    Precious few researchers have access to the full firehose, so the largest academic research is typically conducted with the Twitter Decahose, which contains around 10% of daily tweets.

    The total Decahose output 2014-present is just 1.5 times larger than news. Removing retweets, the situation is reversed and news is actually 1.2 times larger than Twitter’s Decahose.

    Few universities have the financial resources to subscribe to the Twitter Decahose, so the overwhelming majority of academic Twitter research is conducted either with Twitter’s search API or its 1% streaming API that makes available roughly 1% of daily tweets.

    News is actually 6.7 times larger than the Twitter 1% stream over this period. If retweets are removed, news rises to 12.2 times larger than Twitter.

    Thus, in terms of the 1% data that most academics work with, Twitter over the last four years is actually several times smaller than worldwide online news output over the same time period. Those academics lucky enough to work with the Decahose still have less content than they would get from news. Yet, even if one had the entire firehose at one’s disposal, the totality of that content would be just 8 times larger than news content. Filtering out all of the hyperlinks and username references would drop that number even further.
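For concreteness, the back-of-envelope arithmetic behind the figures above can be sketched in a few lines of Python. Every input is the article's own estimate (the tweet counts, the 74-character average), not an authoritative measurement:

```python
# Rough reproduction of the article's tweet-volume arithmetic.
# All inputs are the article's estimates, not authoritative figures.
NEWS_TB = 3.0          # GDELT-monitored news text, Nov 2014 - present
TWEETS = 600e9         # estimated tweets over the same period
NON_RETWEETS = 330e9   # estimated non-retweet tweets
AVG_LEN = 74           # assumed average tweet length (characters)
MAX_LEN = 140          # historical maximum tweet length
TB = 1e12              # bytes per (decimal) terabyte; ~1 byte per character

max_tb = TWEETS * MAX_LEN / TB            # ~84 TB if every tweet were maximal
avg_tb = TWEETS * AVG_LEN / TB            # ~44 TB at the average length
no_rt_tb = NON_RETWEETS * AVG_LEN / TB    # ~24 TB excluding retweets

print(round(avg_tb / NEWS_TB))            # firehose vs news: ~15x
print(round(no_rt_tb / NEWS_TB))          # without retweets: ~8x
print(round(avg_tb * 0.10 / NEWS_TB, 1))  # Decahose (10%) vs news: ~1.5x
```

Small differences from the article's published ratios come from rounding the intermediate estimates.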

    In short, Twitter is certainly a large dataset, but in terms of the actual textual tweet contents that most analyses focus on, we see that a trillion tweets don’t actually work out to that much text due to their tiny size. In many ways, Twitter is more akin to behavioral messaging data than a traditional content-based platform, especially with the way its retweet behavior corresponds to the “like” and “engagement” metrics of other platforms.

    Most importantly, we see that even at the full firehose level, Twitter isn’t actually that much larger than the traditional contemporary datasets that precede it like news media. Twitter may be faster but it isn’t that much larger. In terms of the Decahose and 1% products that most researchers work with, news media actually offers a larger volume of analyzable content with far better understood provenance, stability and historical context.

    Putting this all together, it has become the accepted wisdom of the “big data” era that the social media giants reign supreme over the data landscape, their archives forming the very definition of what it means to work with “big data.” Yet, as we’ve seen here, a trillion tweets quickly become just a few tens of terabytes of actual text, reminding us that high velocity small message streams like Twitter may consist of very large record counts, but very little actual data that is relevant to our analyses.

    Just as importantly, we see that traditional data sources like news media are actually just as large as the social archives we work with, reminding us of the immense untapped data sources beyond the glittering novelty of social media.

    Twitter certainly meets all of the definitions of “big data” but if we look closely, we find that good old traditional journalism is not far behind. The difference is that social media has aggressively marketed itself as “big data” while journalism has failed to rebrand itself for the digital era.

    In the end, rather than mythologize social media as the ultimate embodiment of “big data,” perhaps the biggest lesson here is that we should think creatively about how to harness the untapped wealth of data that surrounds us and bring it into the big data era.




  • 7 Feb 2019 2:08 PM | Kathy Doering (Administrator)


    The Social Intelligence Lab needs your help to define the future of Social Media Intelligence. This starts with understanding what you are doing with social data today. If you analyse or interpret social data, or action social insights, we'd love for you to complete our survey. All responses are anonymous and will be collated into the final results.

    Please take some time to share your input and insights, and pass the survey along to others in the industry - it's valuable research that will provide great insight into the social intelligence and research industry.


    Take the survey now

  • 2 Feb 2019 7:28 AM | Kathy Doering (Administrator)


    This is a great tool for B2B marketers to engage with and learn from. A great new LinkedIn feature. 

    By: Andrew Hutchinson @adhutchinson

    Jan. 31, 2019

    With LinkedIn currently seeing 'record high levels' of engagement (LinkedIn recently reported that the number of updates viewed in LinkedIn feeds has jumped 60% year-over-year), the professional social network has this week launched a new initiative to help marketers maximize their content performance, by highlighting the most popular, most engaging LinkedIn Publisher posts each month.

    Called the 'Water Cooler', the initiative will see LinkedIn publish a new listing of the most popular Publisher posts from the previous month, starting with the top posts in December, which are as follows:

    1. Be the Spark By Diane Fennig
    2. 5 Books I Loved in 2018 By Bill Gates
    3. Your Most Important Assets Aren’t Your Clients; It’s Your Employees By Brigette Hyacinth
    4. Black Woman Named Deputy Director of NASA’s Johnson Space Center, Making History By Teddy Grant, Black Press USA
    5. Happy Employees Are More Productive. As Simple as That! Agree? By Oleg Vishnepolsky
    6. The World’s Most Successful People Don’t Actually Start Work at 4 a.m. They Wake and Work Whenever the (Heck) They Decide, By Jeff Haden, Inc.
    7. What I Learned at Work This Year By Bill Gates
    8. Warren Buffet Says You Should Hire People With These 3 Traits, but Only 1 Will Point to a Truly Successful Employee By Marcel Schwantes, Inc.
    9. When People Ask How You Are, Stop Saying ‘Busy’ By Robert Glazer
    10. Leading From Hurt Versus Leading From Heart By Brene Brown

    Along with the listing, LinkedIn will also include notes on what marketers can take away from the listings to help guide their own on-platform content approach. 

    "The articles attracting the attention of LinkedIn members have two common threads. First, LinkedIn members engaged with articles about company culture. The top article in December, “Be the Spark,” examined the power of gratitude at work. Other articles in the top 10 showcased employees as crucial assets, celebrated diversity and pondered empathic leadership. Second, LinkedIn members engaged with articles featuring advice from Bill Gates and Warren Buffett, which indicates business professionals are always looking for ways to improve themselves and the companies they work for."

    The latter point is probably less useful - two of the top ten most engaged-with posts were written by Bill Gates, and every post Bill Gates writes will likely garner a lot of attention. You can't get Bill Gates to write for your company - though the 'lessons from Warren Buffet' post (number 8 on the list) points to a potential opportunity to utilize the knowledge of leaders within your own content in order to boost attention.

    As noted, LinkedIn engagement is increasing, and with the platform's recent change to its algorithm to ensure more voices are heard, the opportunities to reach your LinkedIn audience are greater than ever. That's not to say all your LinkedIn Publisher posts are going to 'go viral' (remember when LinkedIn first launched its Publisher platform and every post saw massive reach?). You will need to stick with it, and likely weather a few average performers as you go about posting. But by using the Water Cooler as a guide, and implementing a consistent LinkedIn posting strategy, you could reap significant benefits.    

  • 23 Jan 2019 12:35 PM | Kathy Doering (Administrator)

    Psychology Today published an article a few years back titled "Facebook Personalities. Which One Are You?" In the article the author identifies the following personality types: Voyeurs, Informers, Me Mees and Evangelists. The study was based on what people posted and the frequency of posts. It did not include comments made on friends' posts, what they "liked" or how many friends they had. While this study is certainly a glimpse of human behavior, I would go one step further and say that a person's friend list can be even more revealing. Many times it reveals much more than most people think it does.

    In fact, a new study from researchers at the University of Vermont and the University of Adelaide found that they could predict a person's posts on social media with 95 percent accuracy -- even if that person never had an account to begin with. When you think about it, it makes sense. No one ever asks you whether they can post about you on social media; they just post.

    But let's dig even deeper. If a friend of yours uses a Facebook app that allows them to upload their contact list, then your data may be part of that upload, even if you don't have a profile. So there are some real privacy concerns within the social network, as we saw in the media last year. These loopholes have allowed for a lot of social media targeting for marketing and advertising purposes. This is something we hear about all the time, and most people understand they are going to see sponsored content based on their profile demographics. This is why consumers don't pay for subscriptions. We get that.

    However, we are getting to a point where more and more people are searching Facebook to learn more about a new neighbor, a volunteer, a potential employee or someone they just started to date. To make it more interesting, let's use the dating scenario. If I were single and began dating someone, I would look them up on social media. Not only would I look at his profile, I would also take a good look at who his friends are. Many times who you hang out with in your free time says a lot about you. Do you enjoy heavy drinking every weekend? Are you religious? Are you a Democrat or a Republican? What do they stand for? You get the idea! We have all become very good at "stalking".

    Social media investigations have become very important in tracking down criminals of all kinds over the years. With the help of robust software platforms, one can identify a lot about a person and even zero in on their whereabouts.

    On the flip side, criminals can steal your online identity as well. Entrepreneur magazine just published, "Why Googling Yourself Is Not Just for Fun Anymore." It reveals some startling statistics:

    "There are real-world consequences to what’s out there about you online, even if you had nothing to do with it. It may surprise you to learn:

    33 percent of Google search results are influenced by other individuals of the same name

    20 percent of people find outdated or flat out inaccurate information

    12 percent are “unpleasantly surprised” by what they find, though it may not be necessarily incorrect

    8 percent unfortunately find embarrassing or reputation damaging information"

    So reputation management is not just for business anymore. We must all take an active role in our own reputation management. Have you ever had online identity theft? Do you Google yourself on a regular basis?



  • 14 Jan 2019 9:49 AM | Kathy Doering (Administrator)

    8th of January 2019

    Michalis Michael | The Future of MR



    As one would expect, social media intelligence (or just social intelligence) came up as a subject at the “Social Intelligence World” conference in London back in November 2018. More specifically, it came up in the context: how does it differ from social media listening?

    This question took me back several years, to when we published our first eBook about "web listening" - our label of choice at a time when the buzzword's most popular version was "social media monitoring". Social media intelligence did not come up at all back then, although in hindsight it is odd that it didn't. I am not sure how we missed it, but now, when someone asks what the difference between intelligence and listening is, the answer seems quite obvious!

    Social media listening, or social media monitoring, is simply about harvesting online posts and perhaps annotating them with a topic and/or sentiment. If the annotation is accurate, it answers questions like 'what are people talking about online?' or 'how do they feel about my brand?'. Social intelligence, on the other hand, is about understanding the deeper meaning of what people choose to post - although sometimes there isn't one - and linking it to a business question. Notice how the term 'actionable insights' has not come up yet? It is another buzzword that is overused in the market research sector, and another one for which we have published numerous blog posts with our own - very concrete - definition of what it really is!

    When we say 'social media' in this context we don't just mean social media platforms, but rather any public online source of text and images which might express consumer or editorial opinions and/or facts. A side note: things would be a lot easier if we meant what we say literally. People who coin phrases, titles or headings tend to take a lot of liberties at the altar of "crispness" or "snappy creativity"!

    listening247 - an aspiring state-of-the-art DIY SaaS platform - looks at the world of social intelligence through four lenses:

    Source (verb)

    Annotate

    Analyse

    Visualise

    We would be remiss if we didn’t mention text and image analytics as a standalone discipline when the source is not social media or other online sources. In such a case the only difference is that the source is not the online web but any other source of text and images. Perhaps if the source is not the online web it should just be called Business Intelligence, which is an old and very familiar discipline within organisations.

    Back to the four modules: they have the power to generate intelligence from unstructured data, which make up 80-90% of the human knowledge produced since the beginning of time. Structured data, which are effectively numbers in tables or graphs, only account for 10-20% of all our knowledge as a species.

    Source

    Unstructured data can be harvested from the web, and if we want to stay out of jail we will stick to public data (as opposed to private conversations or personal data). They can be harvested through APIs that the sites containing the data make available, for a fee or for free, and through scrapers which crawl a website to find specific consumer or editorial posts. Responses to open-ended questions in surveys, transcripts of focus groups and even call centre conversations are also great sources of opinions and facts (i.e. unstructured data).
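As an illustration of the scraper route, here is a minimal sketch using only Python's standard library. The `div class="post"` markup is hypothetical; every real site needs its own extraction rules (and its terms of service checked):

```python
# Illustrative scraper step: extracting post text from already-fetched HTML.
# The markup below is invented for the example; real sites differ.
from html.parser import HTMLParser

class PostExtractor(HTMLParser):
    """Collects the text content of <div class="post"> elements."""
    def __init__(self):
        super().__init__()
        self.in_post = False
        self.posts = []

    def handle_starttag(self, tag, attrs):
        if tag == "div" and ("class", "post") in attrs:
            self.in_post = True

    def handle_endtag(self, tag):
        if tag == "div" and self.in_post:
            self.in_post = False

    def handle_data(self, data):
        if self.in_post and data.strip():
            self.posts.append(data.strip())

html = '<div class="post">Great coffee!</div><div class="ad">Buy now</div>'
extractor = PostExtractor()
extractor.feed(html)
print(extractor.posts)  # ['Great coffee!'] - the ad div is ignored
```

In practice the HTML would come from an HTTP fetch or an official API rather than an inline string.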

    Annotate

    In order to make sense of big unstructured data, machine learning is a good place to start. Supervised machine learning requires humans to annotate a big enough sample of the available data. The annotated data-set is then used to train a machine learning algorithm, creating a model that does one specific job really well; the aim is to exceed 80% relevance, precision and recall. Unsupervised machine learning is making great strides but cannot currently replace the supervised approach.
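A toy illustration of the supervised approach, assuming a hand-annotated training sample (the texts and labels below are invented): a tiny Naive Bayes model learns word-sentiment associations from the annotated examples and then labels unseen posts. Production systems train on thousands of annotated posts with much stronger models:

```python
# Toy supervised sentiment model: Naive Bayes with add-one smoothing,
# trained on a hand-annotated sample. Texts and labels are invented.
from collections import Counter, defaultdict
import math

train = [
    ("love this coffee", "pos"),
    ("great service and taste", "pos"),
    ("terrible queue this morning", "neg"),
    ("cold and bitter never again", "neg"),
]

# Count word frequencies per label from the annotated sample
word_counts = defaultdict(Counter)
label_counts = Counter()
for text, label in train:
    label_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for counts in word_counts.values() for w in counts}

def predict(text):
    """Return the most likely label under Laplace-smoothed Naive Bayes."""
    scores = {}
    for label in label_counts:
        total = sum(word_counts[label].values())
        score = math.log(label_counts[label] / sum(label_counts.values()))
        for w in text.split():
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(predict("love the taste"))      # 'pos'
print(predict("terrible morning"))    # 'neg'
```

The precision/recall targets mentioned above would be measured by holding out part of the annotated sample for evaluation rather than training on all of it.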

    Analyse

    Once we have a trained model and our data-set we need to process the latter and annotate it in its entirety. The data can be filtered and navigated in many ways. Structured data can be produced in the form of tables, making the analysis of the data-set possible. The goal here is of course to enable human analysts to uncover actionable insights - since machines are not there yet.
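A minimal sketch of that step: once each post carries its annotations, plain counting turns the unstructured records into a small topic-by-sentiment table that an analyst can filter and navigate (the records below are invented):

```python
# Turning annotated posts into a simple structured table
# (topic x sentiment counts). The records are invented for illustration.
from collections import Counter

annotated = [
    {"topic": "service", "sentiment": "pos"},
    {"topic": "service", "sentiment": "neg"},
    {"topic": "price",   "sentiment": "neg"},
    {"topic": "service", "sentiment": "pos"},
]

table = Counter((r["topic"], r["sentiment"]) for r in annotated)
for (topic, sentiment), n in sorted(table.items()):
    print(f"{topic:10} {sentiment:4} {n}")
# price      neg  1
# service    neg  1
# service    pos  2
```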

    Visualise

    Data visualisation is typically done on dashboards or PPT presentations. The most appropriate types are drill-down and query dashboards. There are multiple delivery mechanisms and use cases, e.g.

    Annotated data via an API

    Access to an online or offline dashboard to interrogate the data

    Executive summaries and periodic reports

    Email alerts

    Fixed dashboards for war-rooms

    Social media intelligence has multiple use cases for multiple departments as shown in the list below, annotated as multipurpose ‘intelligence’ or specific ‘actions’:

    Market research: find out how customers think and feel about products and campaigns (intelligence)

    Advertising: use positive posts as testimonials or ideas for Ads (action)

    PR: amplify positives and appear to have good answers to negative comments, brand ambassador communities (action)

    Customer Care: respond to online comments (action)

    Operations: fix customer reported product issues (action)

    Product development: discover new product trends, missing product features (intelligence)

    Sales: identify sales leads based on expressed purchase intent (action)

    The many departments involved and the many use cases ultimately create confusion as to who the owner should be within an organisation. Maybe Social Intelligence should simply be part of the Business Intelligence or Market Research department, offering custom user interfaces that give the various action players only the information they specifically need to take action.

    Having a Business Intelligence or Market Research Department is a privilege reserved only for large organisations. For small and medium enterprises (SMEs or SMBs) that do not have a business intelligence department a different approach and possibly nomenclature should be employed; but this is the stuff for another blog post. In the meantime let us know where you stand on all this by emailing us or tweeting to @DigitalMR.

     


  • 2 Jan 2019 4:49 PM | Kathy Doering (Administrator)

    By: Barney Cotton, Author at Business Leader Magazine 

    At the SMRA we have discussed how social media data can be used in business due diligence prior to an acquisition. I find this article interesting in that the same data may also be used, in part, for investment decision making.   


    Alternative data is becoming the new normal for investors, with 79% of traders currently using it for investment decision making – that’s according to Dataminr’s ‘The New Data Paradigm for Traders’ report.

    Social media is a prime example of an alternative data source, one which gives investors a competitive edge allowing them to capitalise on the insights.

    But with a plethora of alternative data sources and the use of social media data, how can traders marry together data sources to find the right real-time insights that will set them apart from competitors?

    With that in mind, Business Leader spoke with Dataminr VP Ed Oliver about the report and the sector as a whole.

    CAN YOU GIVE ME AN OVERVIEW OF THE COMPANY?

    Dataminr is a real-time information discovery company that generates alerts on activity across multiple, publicly available sources, such as social media, blogs, information sensors and the dark web. Relied on 24/7 by thousands of clients in over 70 countries worldwide, it is providing individuals with insights and context on breaking events, giving them the opportunity to make more informed decisions.

    As a company, Dataminr has pioneered a groundbreaking Artificial Intelligence and Machine Learning technology that identifies, classifies and determines the significance of real-time information, and delivers relevant alerts to clients across the Finance, Corporate Security, PR/Communications, News and the Public Sector.

    HOW HAS THE SECTOR CHANGED OVER THE LAST FEW YEARS?

    We live in an always-on world, dominated by social media and driven by data. This means anyone with a smartphone can report real-time information to a global audience, breaking major news before anyone else. This is the biggest change the sector has seen in recent years.

    However, the public information landscape does have its limitations; it’s difficult to pinpoint where or when relevant news will break, and targeted keyword searches or well-curated feeds can easily miss the mark. The key is the use of a trusted source that can push relevant information through a sea of online noise and provide early, reliable and differentiated insights.

    WHAT DOES THE FUTURE HOLD FOR THE COMPANY?

    Looking to the future, Dataminr will continue to evolve with the opportunities that real-time information brings to our clients, whilst we continue to expand our global client footprint. We will continue exploring public data sets from multiple sources to discover breaking events across the world. Ultimately, we’d like to remain at the front line of real-time information awareness, ensuring that people have the knowledge they need to act with confidence.

    HOW DO YOU DEFINE ALTERNATIVE DATA, AND WHY SHOULD TRADERS BE PAYING ATTENTION?

    By definition, alternative data is any set of data that is outside of the traditional data stack for financial services. Traditional data is market data, pricing data, volume data for different asset classes, earnings reports, press releases, economic data that comes out on a scheduled basis, etc. Another way to think about traditional data is data that is effectively scheduled or is announced on a regular basis.

    For traders, alternative data really adds an additional layer of increasingly critical information to their workflow, providing a new level of understanding of what’s going on in the market. Very often, alternative data can explain movements in the market. In other cases, alternative data can effectively give you a heads up before the market moves.

    HOW CAN ALTERNATIVE DATA HELP TRADERS STAY AHEAD OF THE MARKET?

    There are a few ways, but broadly alt data allows traders to see a more complete picture. You’re not just seeing pricing data. You’re not just seeing information that’s being sent by official sources — via press releases, by corporations, by a government department releasing data — but you’re also seeing information coming from a variety of unique and differentiated sources.

    Asymmetry of information creates advantage. The magic is in being able to connect the dots among unique pieces of information to identify the moments that matter, derive insight, and then make better decisions. Alternative data can inform a real-time trade or a strategy. Often traders will have a thought or a view in the market and will build a strategy based on that view. By leveraging alternative data, traders gain a new layer of information and can confirm or adjust strategies in real time, with the right tools.

    Finally, alternative data can help traders mitigate risk and more fully understand the impact of events as they occur. For example, on February 14, 2018, South African President Jacob Zuma resigned when faced with a motion of no confidence in Parliament. The timeline of events that ultimately led to Zuma’s resignation began as early as February 7th, when Dataminr alerted clients to rumours of the pending resignation, and again when Zuma confirmed his resignation nearly seven days later.

    WHAT ARE THE DIFFERENT TYPES OF ALTERNATIVE DATA SETS? AND WHAT’S THE VALUE FOR EACH?

    There are many. There’s social media. There are other types of web-based data, like blogs or even retail sites. There are information sensors — essentially the output of devices that detect and respond to some type of input from the physical environment. There is satellite information and credit card data. The range is pretty broad. The value depends on the use case. Not every type of alternative data set is right for every investor. And often, it’s the combination of data points that leads to great insights, and where the ability to truly differentiate trading strategies comes into play.

    WHAT ARE THE CURRENT CHALLENGES FOR TRADERS AND FIRMS WHEN LEVERAGING OR INCORPORATING ALTERNATIVE DATA INTO THEIR EXISTING PROCESSES?

    I think the biggest challenge is that there’s so much data out there right now. It’s almost unlimited, and it’s hard for traders and firms to, first, capture it all and, second, gather insights in that massive sea of data. Creating some structure to these generally unstructured data sets is not always easy, and then once it’s structured you still have to make it useful and find what matters.

    Another challenge is integrating this information into existing workflows. A particular data set may seem interesting or may seem valuable on the surface, but without the ability to incorporate that data into the existing workflow the value is more limited.

    HOW ARE TRADERS INCORPORATING ALTERNATIVE DATA INTO THEIR WORKFLOW?

    It really depends on the type of data. Dataminr provides a tool that alerts our clients to information that they care about from within complex, unstructured data sets, by effectively providing a “tap on the shoulder.”

    We do that via our web-based platform, real-time emails, pop-ups on your computer, as well as a mobile app. We offer a number of different ways so clients can pick the best delivery mechanism for themselves and to also make sure that they don’t miss information at critical times.

    HOW HAVE YOU SEEN THE ADOPTION OF ALTERNATIVE DATA CHANGE IN THE PAST FEW YEARS?

    Three years ago, the conversation really centred around “what is alternative data”. Now people understand the variety of alt data sets, and the dialogue has shifted to how to use alt data effectively.

    For example, recent research from EY showed that over three-quarters of hedge funds currently use, or expect to use, non-traditional data to support investments. Furthermore, research from Greenwich Associates has shown that alternative data spending is expected to increase in 2018 for both hedge funds and asset managers, with 74% of hedge funds planning to boost their alt data spending in 2018.

    Over the last few years, there has been a wave of large hedge funds hiring teams of data scientists and developers to find value in alt data. I think that’s really key right now. Firms are exploring how to use these data sets for alpha, for edge, for situational awareness and for risk mitigation.

    CAN YOU TELL ME ABOUT A TIME WHEN ALTERNATIVE DATA SOURCES BROKE NEWS THAT DIRECTLY AFFECTED MARKETS BEFORE THE MEDIA WERE ABLE TO PICK UP ON IT?

    One of the best examples that we see is in the energy space. We’ve had a number of cases where a group of individuals saw a refinery, or some sort of gas installation, have a major disruption, like an explosion or fire.

    They saw it, then they posted about it on Twitter, often with pictures or videos. A great example of this is the Husky Energy oil refinery explosion in April 2018. Roughly 20 people were injured after a tank exploded at the Superior, Wisconsin oil refinery. Dataminr first alerted clients to the blast via an eyewitness who shared a photo from on the ground.

    These posts were surfaced by Dataminr and delivered to our clients ahead of major news coverage, giving users a time advantage before the event affected the markets. We also see a lot of macro stories being broken on social media. After Turkish President Recep Tayyip Erdogan appointed his son-in-law, Berat Albayrak, as chief of the country’s new treasury and finance ministry in July 2018, the Turkish lira fell nearly 4%.

    Dataminr alerted clients to the event ahead of major news by surfacing a scoop by a local reporter. Social media and other public information platforms can be very powerful ways to broadcast an event.


Copyright 2017, Social Media Research Association. All rights reserved
