Monitoring and logging: the eyes and ears of security
Why do monitoring and logging matter?
Although this is a foundational question in cybersecurity and networking, National Cybersecurity Awareness Month makes this a great time to (re)visit important topics.
Monitoring and logging are very similar to security cameras and home alarm systems. They help you keep an eye on what’s happening in your applications and systems. If – when – something unusual occurs, analysts can leverage information from monitoring and logging solutions to respond and manage potential issues.
In this blog, I explore some tips from my experience as a DevOps and Systems Engineer.
10 tips for effective monitoring and logging:
Set up alerts for unusual activity
Use monitoring tools to set up alerts for machine or human behaviors that don’t seem right. This could be, for example, a user with multiple failed login attempts or a server with a sudden spike in traffic. This way, you can prioritize and quickly investigate suspicious activity.
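As an illustration, the failed-login case boils down to a simple sliding-window threshold rule. The sketch below is a minimal, hypothetical example; the five-attempt threshold and five-minute window are assumptions you would tune for your own environment, and a real deployment would route the alert into your SIEM rather than return a string.

```python
import time
from collections import Counter, deque

class FailedLoginAlerter:
    """Alert when a user exceeds a failed-login threshold within a sliding window.

    threshold and window_seconds are illustrative defaults, not recommendations.
    """

    def __init__(self, threshold=5, window_seconds=300):
        self.threshold = threshold
        self.window = window_seconds
        self.events = deque()  # (timestamp, username) pairs, oldest first

    def record_failure(self, user, now=None):
        now = time.time() if now is None else now
        self.events.append((now, user))
        # Drop events that have aged out of the sliding window
        while self.events and now - self.events[0][0] > self.window:
            self.events.popleft()
        counts = Counter(u for _, u in self.events)
        if counts[user] >= self.threshold:
            return f"ALERT: {user} had {counts[user]} failed logins in {self.window}s"
        return None
```

In practice the same pattern generalizes to the traffic-spike case: count events per entity in a window and compare against a baseline or fixed threshold.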
If it’s important, log it
Adversaries are becoming clever in hiding their tracks. This makes logging key events, such as user logins, changes to business-critical data, and system errors, important. The information gleaned from logs can help shed light on a bad actor’s trail.
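One lightweight way to make those key events easy for downstream tools to parse is to log them as structured JSON rather than free-form text. The sketch below is a minimal Python example; the event names and fields are illustrative assumptions, not a prescribed schema.

```python
import json
import logging

# One line of JSON per audit event is much easier for a SIEM to parse
# than free-form log text.
logging.basicConfig(level=logging.INFO, format="%(message)s")
audit = logging.getLogger("audit")

def log_event(event_type, user, **details):
    """Emit one structured audit record per key event (login, data change, error)."""
    record = {"event": event_type, "user": user, **details}
    audit.info(json.dumps(record))
    return record

log_event("user_login", "alice", source_ip="203.0.113.7", success=False)
log_event("record_update", "bob", table="payroll", record_id=42)
```

Consistent field names across events (here `event` and `user`) are what make later correlation and review practical.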
Regularly review logs
Don’t just collect logs—make it a habit to review them regularly. Collaborate with your team and experts to capture and understand details from logs. Look for patterns or anomalies that could indicate a security issue.
Leverage SIEMs
Security Information and Event Management (SIEM) tools are great to collect and analyze log data from different sources, helping you detect security incidents more efficiently.
Retain logs for digital forensics
Your industry regulations may already require this, but storing your logs will not only keep you compliant but can also support security investigations. SIEMs can be expensive depending on your organization’s data throughput. Security data fabrics, such as DataBee, can help you decouple storage and federate security data to a centralized location like a data lake, making it easier to search through raw logs or optimized datasets to catch important information.
Establish a response plan
Ideally before a security event occurs, your team should have a plan in place to respond to an incident. This should include who to contact and the steps to contain any potential threats.
Educate your team
Make sure everyone on your team understands the importance of monitoring and logging. Training can help them recognize potential security threats and respond appropriately.
Keep your tools updated
Regularly update your monitoring and logging tools to ensure you’re protected against the latest threats. Outdated tools might miss important security events.
Test your monitoring setup
Running tabletops can help you test your monitoring systems and response plans to ensure they’re working correctly. Simulate incidents to see if your alerts trigger as expected.
Stay informed
Keep up to date with the latest security trends and threats. This knowledge can help you improve your monitoring and logging practices continuously.
By following these tips, you can enhance your organization's security posture and respond more effectively to potential threats. Monitoring and logging might seem like technical tasks, but they play a vital role in keeping your systems safe!
How Continuous Controls Monitoring (CCM) can make cybersecurity best practices even better
Like many cybersecurity vendors, we like to keep an eye out for the publication of the Verizon Data Breach Investigations Report (DBIR) each year. It’s been a reliable way to track the actors, tactics and targets that have forced the need for a cybersecurity industry and to see how these threats vary from year to year. This information can be helpful as organizations develop or update their security strategy and approaches.
All of the key attack patterns reported on in the DBIR have been mapped to the Critical Security Controls (CSC) put out by the Center for Internet Security (CIS), a community-driven non-profit that provides best practices and benchmarks designed to strengthen the security posture of organizations. According to the CIS, these Controls “are a prescriptive, prioritized and simplified set of best practices that you can use to strengthen your cybersecurity posture.”
Many organizations rely on the Controls and Safeguards described in the CIS CSC document to guide how they build and measure their security program. Understanding this, we thought it might be useful to map the Incident Classification Patterns described in the 2024 DBIR report, to the guidance provided in the CIS Critical Security Controls, Version 8.1, and then to the CSC Controls and Safeguards that DataBee for Continuous Controls Monitoring (CCM) reports on. As you’ll see, CCM – whether from DataBee or another vendor (😢) – is a highly useful way to measure progress toward effective controls implementation.
The problem, proposed solutions, and how to measure their effectiveness
The 2024 DBIR identifies a set of eight patterns for classifying security incidents, with categories such as System Intrusion, Social Engineering, and Basic Web Application Attacks leading the charge. Included in the write-up of each incident classification is a list of the Safeguards from the CIS Critical Security Controls that describe “specific actions that enterprises should take to implement the control.” These controls are recommended for blocking, mitigating, or identifying that specific incident type. CIS Controls and Safeguards recommended to combat System Intrusion, for example, include: 4.1, Establish and Maintain a Secure Configuration Process; 7.1, Establish and Maintain a Vulnerability Management Process; and 14, Security Awareness and Skills Training. Similar lists of Controls and Safeguards are provided in the DBIR for other incident classification patterns.
Continuous Controls Monitoring (CCM) is an invaluable tool for measuring the implementation of cybersecurity controls, including many of the CIS Safeguards. These measurements might include the level of deployment for a solution within a population of assets, e.g., is endpoint detection and response implemented on all end user workstations and laptops? Or has a task been completed within the expected timeframe, such as the remediation of a vulnerability, closure of a security policy exception, or completion of secure code development training? While reporting on these tasks individually may seem easy enough, CCM takes it to the next level by reporting on a large set of controls through a single interface, rather than requiring users to access a different interface for every distinct control. Additionally, CCM supports automated data collection and report refreshes, keeping the reported data current with significantly less effort.
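At its core, the deployment-style measurement described above is simple arithmetic: coverage is the share of in-scope assets where the control is in place, compared against a target. The sketch below is a hypothetical illustration only — the asset names and the 95% KPI target are assumptions, and any real CCM product computes this across many controls and data sources at once.

```python
def control_coverage(in_scope_assets, compliant_assets):
    """Percentage of in-scope assets on which the control is implemented."""
    in_scope = set(in_scope_assets)
    if not in_scope:
        return 100.0  # vacuously covered: nothing is in scope
    covered = in_scope & set(compliant_assets)
    return 100.0 * len(covered) / len(in_scope)

def kpi_status(coverage_pct, target_pct=95.0):
    """Compare a measured coverage percentage against a KPI target."""
    return "met" if coverage_pct >= target_pct else "needs work"

# Example: EDR deployment across a small laptop fleet (hypothetical data)
fleet = ["lap-01", "lap-02", "lap-03", "lap-04"]
with_edr = ["lap-01", "lap-02", "lap-04"]
pct = control_coverage(fleet, with_edr)  # 75.0
```

The same ratio-versus-KPI pattern applies to time-bound tasks too, e.g., the percentage of vulnerabilities remediated within their SLA window.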
Doing (and measuring) “the basics”
The CIS CSCs are divided into three “implementation groups.” The CIS explains implementation groups this way: “Implementation Groups (IGs) are the recommended guidance to prioritize implementation of the CIS Critical Security Controls.” The CIS defines Implementation Group 1 (IG1) as “essential cyber hygiene and represents an emerging minimum standard of information security for all enterprises.” In the CIS CSCs v8.1, there are 56 Safeguards in implementation group 1, slightly more than a third of the total Safeguards. Interestingly, most of the Safeguards listed by Verizon in the DBIR are from implementation group 1, the Safeguards for essential cyber hygiene, that is, “the basics.”
Considering “the basics,” a few years ago, the 2021 Data Breach Investigations Report made this point:
“The next time we are up against a paradigm-shifting breach that challenges the norm of what is most likely to happen, don’t listen to the ornithologists on the blue bird website chirping loudly that “We cannot patch manage or access control our way out of this threat,” because in fact “doing the basics” will help against the vast majority of the problem space that is most likely to affect your organization.” (page 11)
Continuous controls monitoring is ideally suited to help organizations measure their progress when implementing essential security controls. That is, those controls that will help against “the vast majority of the problem space.” These essential controls are the necessary foundation on which more specialized and sophisticated controls can be built.
Moving beyond the basics
Of course, CCM is not limited to reporting on the basics. As Verizon notes, the CIS Safeguards listed in the 2024 DBIR report are only a small subset of those which could help to protect the organization, or to detect, respond to, or recover from an incident. Any control which lends itself to measurement, especially when expressed as a percentage of implementation, is a viable candidate for CCM. Additionally, the measurement can be compared against a target level of compliance, a Key Performance Indicator (KPI), to assess if the target is being met, exceeded, or if additional work is needed to reach it.
The Critical Security Controls from CIS provide a pragmatic and comprehensive set of controls for organizations to improve their essential cybersecurity capabilities. CCM provides a highly useful solution for measuring progress toward effective implementation of the controls, both at the organization level and at each level of management within it.
Mapping incident classification patterns to CIS controls & safeguards to DataBee for Continuous Controls Monitoring dashboards
DataBee’s CCM solution provides consistent and accurate dashboards that measure how effectively controls have been implemented, and it does this automatically and continuously. Turns out, it produces reports on many of the Controls and Safeguards detailed in the CIS CSC. Here are some examples:
The DBIR recommends Control 04, "Secure Configuration of Enterprise Assets and Software," as applicable for several Incident Classification Patterns, namely System Intrusion and Privilege Misuse. The Secure Configuration dashboard for DataBee for Continuous Controls Monitoring reports on this CSC Control and many of its underlying Safeguards.
Control 10, “Malware Defenses,” is also listed as a response to System Intrusion in the DBIR. The Endpoint Protection dashboard supports this control. It shows the systems protected by your endpoint detection and response (EDR) solutions and compares them to assets expected to have EDR installed. DataBee reports on the assets missing EDR and which consequently remain unprotected.
“Security Awareness and Skills Training,” Control 14, is noted in the DBIR as a response to patterns System Intrusion, Social Engineering, and Miscellaneous Errors. The DataBee Security Training dashboard can provide status on training from all the sources used by your organization.
In addition to supporting the controls and safeguards listed in the DBIR, the DataBee dashboards also report on CSC controls such as Control 01, “Inventory and Control of Enterprise Assets.” While the DBIR does not list Control 01 explicitly, the information reported by the Asset Management dashboard in DataBee underpins Secure Configuration, Endpoint Protection, and the other dashboards that do support the CIS controls listed in the DBIR.
With the incident patterns in the 2024 Verizon Data Breach Investigations Report mapped to the Critical Security Controls and Safeguards provided by the Center for Internet Security, security teams are given a great start – or reminder – of the best practices and tools that can help them avoid falling ‘victim’ to these incidents. Continuous controls monitoring bolsters an organization’s security posture even more by delivering dashboards that report on the performance of an organization’s controls; reports that provide actionable insights into any security or compliance gaps.
If you’d like to learn more about how DataBee for Continuous Controls Monitoring supports the Controls and Safeguard recommendations provided in the CIS CSC, be in touch. We’d love to help you get the most out of your security investments.
Status Update: DataBee is now an AWS Security Competency Partner
We are proud to announce that DataBee is recognized as one of only 35 companies to achieve the AWS Security Competency in Threat Detection and Response. We have worked diligently to help customers gain faster, better insights from their security data, making today meaningful to us as a team. This exclusive recognition underscores the value and impact that DataBee’s advanced capabilities bring to customers.
Achieving an AWS Security Competency requires us to have deep technical AWS expertise. Inspired by Comcast's internal CISO and CTO organization, the DataBee platform connects disparate security data sources and feeds, enabling customers to optimize their AWS resources. Our AWS Competency recognition validates our ability to leverage our internal, technical AWS knowledge so that customers can achieve the same proven-at-scale benefits.
More importantly, earning this badge is a testament to the success we have achieved in partnering with our customers, and validates that DataBee has enabled customers to transform vast amounts of security data into actionable insights for threat detection and response.
Our continued collaboration with AWS reflects our dedication to driving innovation and delivering high-quality security solutions that meet the evolving needs of our customers. We are proud to be recognized for our efforts and remain committed to helping our customers achieve their security goals efficiently and effectively.
Live sports video: A winning playbook for content delivery and management
For streaming, broadcasting, or advertising, live sports represent some of the most exciting and most complex events to manage — and some of the most lucrative, particularly when it comes to breaking into new markets.
But when you’re only live once, the pressure to deliver flawless real-time content is immense, and the coordination required between technology, vendors, and teams is absolutely critical to bring home the win. As part of our Content Circle of Life series, Bill Calton, VP of Operations for Comcast Technology Solutions, sat down with David Wilburn, Vice President of TVE & VOD Software Engineering, Peacock, and Jorge de la Nuez, Head of Technology and Operations at the Olympic Channel, to discuss some of the key aspects of delivering/managing live sports content.
Read the highlights below or watch the full conversation here for expert perspectives on some of the most critical topics in live sports media, including:
Vendor/technology partner selection
AI in sports
Infrastructure challenges
The role of social media and advertising
The importance of vendor selection in live sports video
Bill Calton, VP of Operations, Comcast Technology Solutions: We’ve just been through the 2024 Summer Olympic Games in Paris, where over 3 billion viewers consumed some portion of this event’s content globally. And we all know that when we prepare for live sporting events, like the Olympics, we often work without a net.
What is your process for getting a vendor in place prior to these events?
Jorge de la Nuez, Head of Technology and Operations, Olympic Channel:
Vendor selection for events like the Olympics is a complex and lengthy process. It often takes years to build relationships with trusted vendors, particularly in such high-stakes environments.
Auditing and competitive bidding processes are key elements, but personal relationships and knowledge of a vendor's track record are just as important.
Preparing for an event like the Olympics requires testing for load capacity and auditing the entire ecosystem to ensure no part of the infrastructure can fail.
David Wilburn, Vice President of TVE & VOD Software Engineering, Peacock:
Vendor relationships are critical, but so is ensuring they are true partners who understand the needs of the business and adapt to them.
It’s important to choose partners who are innovators and are evolving alongside the business, not just following their own roadmaps.
Partnerships should be flexible and scalable, with real-time communication that allows for quick issue resolution.
The role of AI in live sports content management and delivery
The possibilities seem endless in terms of how people consume sports content, from where they're consuming it to what type of device they're consuming the content on. And we're starting to see AI tools be leveraged to support this.
How have you seen AI play a role in the live sports media arena?
David Wilburn, Peacock:
AI is beginning to assist in automating sub-clipping for highlights and identifying players using jersey numbers or facial recognition. However, these systems are still evolving, so it will be interesting to see how this continues to develop. One of the challenges in leveraging AI for live video streaming is latency, especially in high-stakes environments like sports betting, where near-zero latency is a must.
Jorge de la Nuez, Olympic Channel:
AI has been integrated into Olympic content since Tokyo 2020, particularly in highlight creation and video content management.
AI tools are really useful for generating short-form video content for social media and search platforms.
On the back end, AI helps home in on user profiling and content recommendation, which is driven by data analytics.
Infrastructure to ensure reliability for sports video
When the lights come on and fans tune in, your infrastructure needs to be ready. When you’re only live once, you need to effectively deal with redundancies, both from a cloud or on-premises perspective.
What do you do to ensure that these events go off without any interruption?
Jorge de la Nuez, Olympic Channel:
Managing infrastructure for the Olympics is a blend of cloud-based and on-premises systems. While the cloud offers flexibility, there are still challenges with cost and multi-vendor coordination.
Keeping the infrastructure simple when possible and relying on trusted partners for redundancy helps us manage complexity when the stakes are high.
David Wilburn, Peacock:
Having active-active multi-region services and auto-detection of issues is critical for high uptime, particularly during events like the Olympics or NFL games.
Chaos and load testing are essential to ensure that the systems can handle the massive demand of live events. Multi-CDN strategies also ensure that content delivery is never interrupted.
Social media, branding, and advertising for live sports content
People are grabbing sports content and leveraging it on their own social media.
Are you seeing any trends in advertising related to live sports that you think would be relevant?
Jorge de la Nuez, Olympic Channel:
The International Olympic Committee has integrated social media platforms like YouTube into its distribution strategy. It’s a balancing act between delivering free content and driving audiences to their own platforms.
At the same time, branded content is becoming more important than traditional ads, particularly on digital and social platforms.
David Wilburn, Peacock:
Advertising is changing. We’re going to see much more use of "squeeze-back" ads, where video continues to play while ads are displayed on the screen. This allows viewers to stay engaged with live sports while still being exposed to relevant content from advertisers.
As the live sports media landscape continues to evolve, so will the technologies that support it. At the same time, the importance of strong partnerships and well-prepared teams cannot be overstated. If you’re looking for a teammate to help you adapt, innovate, and maintain excellence in sports content delivery and management, connect with us here.
Mastering DORA compliance and enhancing resilience with DataBee
Recently, DataBee hosted a webinar focused on the Digital Operational Resilience Act (DORA), a pivotal piece of EU legislation that is set to reshape the cybersecurity landscape for financial institutions. The talk featured experts Tom Schneider, Cybersecurity GRC Professional Services Consultant at DataBee and Annick O'Brien, General Counsel at CybSafe, who delved into the intricacies of DORA, its implications, and actionable strategies for compliance.
5 Key Takeaways for mastering DORA compliance and enhancing resilience:
In an effort to open dialogue and help organisations that need to comply with DORA, we are sharing the takeaways from our webinar.
The Essence of DORA: DORA is not just another cybersecurity regulation; it addresses the broader scope of operational risk in the financial sector. Unlike frameworks that focus solely on specific cybersecurity threats or data protection, DORA aims to ensure that organisations can maintain operational resilience, even in the face of significant disruptions. This resilience means not just preventing breaches but also being able to recover swiftly when they occur.
Broad Applicability: DORA's reach extends beyond traditional banks, capturing a wide array of entities within the financial ecosystem, including insurance companies, reinsurance firms, and even crowdfunding platforms. The act emphasizes that any organisation handling financial data needs to be vigilant, especially as DORA becomes fully enforceable in January 2025.
Third-Party Risks: A significant portion of the webinar focused on the risks associated with third-party service providers, particularly cloud service providers. DORA places the onus on financial institutions to ensure that their third-party vendors are compliant with the same rigorous standards. This includes having robust technical and operational measures, conducting regular due diligence, and ensuring these providers can maintain operational resilience.
Concentration of Risk: DORA introduces the concept of concentration risk, which refers to the potential danger when an entire industry relies heavily on a single service provider. The webinar highlighted recent incidents, such as the CrowdStrike and Windows issues, underscoring the importance of not only identifying these risks but also diversifying to mitigate them.
Principles-Based Approach: Unlike prescriptive regulations, DORA is principles-based, focusing on the outcomes rather than the specific methods organisations must use. This approach requires financial institutions to continuously assess and update their operational practices to ensure resilience in a rapidly evolving technological landscape.
Moving Forward:
As the January 2025 deadline approaches, organisations are urged to review their existing compliance frameworks and identify how they can integrate DORA's requirements without reinventing the wheel. Many of the principles within DORA overlap with other frameworks like GDPR and NIST, providing a foundation that organisations can build upon.
For those grappling with the complexities of DORA, the webinar emphasized the importance of preparation, regular testing, and continuous improvement. By leveraging existing policies and procedures, financial institutions can align with DORA's objectives and ensure they are not only compliant but also resilient in the face of future challenges.
DataBee can significantly enhance compliance with DORA by streamlining the management of information and communication technology (ICT) assets. The DataBee for Continuous Controls Monitoring (CCM) offering weaves together data across multiple sources, enabling organisations to automate the creation of a reliable asset inventory. By providing enriched datasets and clear entity resolution, DataBee reduces the complexity of managing and monitoring ICT assets, improves auditability, and ensures that compliance and security measures are consistently met across the enterprise, ultimately supporting the resilience and security of critical business operations.
Request a demo today to discover how DataBee can help you become DORA compliant.
Market opportunities in MENA for streaming, advertising, and sports
With a subscription video on demand (SVOD) services market expected to surpass $1.2 billion by the end of this year, it’s no wonder that entertainment and media companies are looking at the Middle East and North Africa (MENA) with increased focus. While the region presents unique opportunities for expansion and greater content monetization, reaching this diverse and often fragmented audience presents distinct challenges.
At our 2024 MENA Monetization Summit in Dubai, industry leaders discussed the innovative strategies they’ve used to thrive in this dynamic market. Read on to learn the drivers behind their success and gain strategic insights for effective content monetization in this rapidly evolving region.
MENA’s streaming and advertising market: highlights to know
Opportunities in MENA are evolving rapidly, driven by a young, tech-savvy population and increasing digital penetration. Consider the following statistics from global analysts Omdia:
MENA’s SVOD services market generated over $1 billion in revenues in 2023 and is expected to surpass $1.2 billion in 2024.
Online video advertising in MENA is expected to grow by 67% in revenue by 2028 while online video subscription is expected to grow by 19%.
Already, the Free Ad-Supported TV (FAST) market in MENA has topped $7 million — with the potential to quadruple in the next five years.
Saudi Arabia sees the highest consumption of YouTube videos globally.
So, how can businesses capitalize on these burgeoning markets? There are a few key considerations when evaluating opportunities for content monetization in MENA:
First, underserved sports content is a great avenue to explore. The popularity of sports such as cricket, rugby, mixed martial arts, and fighting sports in the region opens up significant opportunities to stake out new territory. Since these sports are not as heavily contested by major players, new market entrants can quickly and effectively carve out a niche.
When expanding into MENA, localized content will be particularly important for success. Content tailored to local tastes, cultural norms, and preferences is crucial. This means finding opportunities to produce and broadcast local sports, creating region-specific reality shows, and emphasizing local celebrities and events. Platforms that offer a mix of local productions and international content stand a better chance of engaging the audience.
Similar to findings from other parts of the world, FAST channels are becoming increasingly popular in MENA, providing an alternative to traditional pay-TV. These channels attract large audiences by offering free content supported by advertising. This CTS webinar dives deeper into FAST channel technology.
FAST channel revenues in MENA reached $7.2 million in 2023 and are projected to quadruple in the next 5 years.
Success stories from MENA
Several companies are successfully navigating the MENA market challenges by leveraging specific strategies and focusing on underserved segments. STARZ PLAY Arabia and Shahid together make up approximately 40% of the total over-the-top (OTT) services market in the region according to Q4 2023 Omdia data.
Shahid and STARZ PLAY lead the MENA streaming video market.
STARZ PLAY: With over 3.5 million subscribers and growing, this highly successful SVOD service has seen tremendous success by focusing on underserved sports and licensed Hollywood content. Here’s the strategy at a glance:
Securing sports rights including UFC, Cricket World Cup, ICC tournaments
Enhancing the user experience with sport-specific UI features
Shahid, part of the MBC Group, has established itself as a leading platform in the MENA region by:
Leveraging its extensive library of premium Arabic content
Growing both its ad-supported video on demand (AVOD) and SVOD services in tandem but with a heavier emphasis on advertising
Key trends to watch across the region
Shahid, STARZ PLAY, AWS, FreeWheel, and other media companies that have seen considerable success in MENA have tapped into key trends in the region.
Shifting toward hybrid models: In the Middle East, pay-TV is still important, but online advertising is growing rapidly. Giant entertainment companies such as Netflix and Amazon Prime are exploring ad-supported content. The expected growth in subscriptions combined with the increasing importance of advertising revenue in the region highlight this trend.
Importance of data and personalization: AI is revolutionizing content monetization in MENA by enhancing personalization and operational efficiency. AI is giving content providers a deeper understanding of user behavior and preferences, allowing for highly targeted and contextual advertising. By using AI to analyze vast amounts of data, companies can predict churn behaviors, personalize content recommendations, and optimize advertising strategies. Employing AI in content production processes, such as automated subtitling in multiple languages, is becoming a cost-effective way to make content more accessible and widen reach.
Rise of sports: The growing popularity of esports and niche sports presents a lucrative opportunity. The addition of new sports in the Olympic program could lead to increased engagement and monetization.
Looking to grow in MENA? Here’s your roadmap.
To thrive in the competitive MENA market, industry leaders recommend adopting the following strategies:
Identify and focus on areas underserved by major players, such as specific sports or localized content. This could mean using unique entertainment formats, such as live sports events, to attract audiences, foster brand loyalty, and maximize monetization potential.
Leverage partnerships: form strategic alliances with local telecom operators and device manufacturers, and explore managed channel origination, to boost visibility and distribution opportunities.
Invest in robust technology and infrastructure that can handle high concurrent user traffic. In particular, cloud services and scalable solutions are essential for maintaining a seamless user experience and meeting the expectations of the region’s audiences.
Enhance fan engagement by leveraging AI and data analytics to create personalized and interactive experiences. It’s a great way to boost real-time engagement during live events, fantasy sports integration, key moments and highlights, and tailored content recommendations.
Diversify revenue streams by combining subscription services with advertising, and explore opportunities in branded content and sponsorships to maximize revenue potential. The expected growth in subscriptions and the increasing importance of advertising revenue both point to hybrid models as the future of content monetization.
There is immense potential for success in MENA provided that companies can navigate its complexities and leverage its unique opportunities. Understanding local market dynamics and tailoring strategies accordingly will be key to capitalizing on the opportunities at hand.
Looking for a trusted partner to help support your strategies? Contact us today.
Bee sharp: putting GenAI to work for asset insights with BeeKeeper AI™
Artificial intelligence (AI) and music are a lot alike. When the right components come together, like patterns in melodies and rhythms, music can be personal and inspire creativity. In my experience working on projects that develop AI for IT and security teams, data can similarly reveal patterns in day-to-day activities and frustrations that can then be enhanced or automated.
I started working in AI technology development nearly a decade ago. I loved the overlaps between music and programming. Both begin with basic rules and theory, but it is the human element that brings AI (and music) to life.
Recently, we launched BeeKeeper AI™ from DataBee, a generative AI (genAI) tool that uses patent-pending entity resolution technology to find and validate asset and device ownership. It was inspired by our own internal cybersecurity and operations teams’ struggles with chasing down ownership, which sometimes added up to 20+ asset owner reassignments; we knew there was a better way forward. Through integrations with enterprise chat clients like Teams, BeeKeeper AI uses your data to speak with your end users, replacing the otherwise arduous manual process of confirming or redirecting asset ownership.
What’s the buzz about BeeKeeper AI from DataBee?
Much like a good song speaks to the soul, BeeKeeper AI’s innovative genAI approach is tuned to leverage ownership confidence scores that prompt it to proactively reach out to end users. Now, IT admins and operations teams don’t have to spend hours each day reaching out to asset owners who often become frustrated over having their day interrupted. Further, by using BeeKeeper AI to ‘fill in the blanks’ for unclaimed or newly discovered assets, you have an improved dataset of who to reach out to when security vulnerabilities and compliance gaps appear.
BeeKeeper AI, part of DataBee for Security Hygiene and Security Threats, uses entity resolution technology to identify potential owners for unclaimed assets and devices based on factors such as authentication log analysis.
BeeKeeper AI is developed with a large language model (LLM) that features strict guardrails to keep conversations on track and hallucinations at bay when engaging these potential owners. This means that potential asset owners can simply respond “yes” or suggest someone else and move on with their day.
Once users respond, BeeKeeper AI can do the rest – including looking for other potential owners, updating the DataBee platform, and even updating the CMDB, sharing its learnings with other tools.
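To make the ownership-scoring idea concrete, here is a minimal sketch of how candidate owners might be ranked from authentication history. This is purely illustrative and is not DataBee's actual entity resolution algorithm; the function name, threshold, and data shape are assumptions for the example.

```python
from collections import Counter

def rank_potential_owners(auth_events, min_confidence=0.5):
    """Rank candidate owners for an unclaimed asset by login frequency.

    `auth_events` is a list of usernames seen authenticating to the asset.
    Returns (username, confidence) pairs, where confidence is the share of
    logins attributable to each user. Candidates below `min_confidence` are
    dropped unless no one clears the bar, in which case the top candidate
    is kept so there is always someone to contact.
    """
    if not auth_events:
        return []
    counts = Counter(auth_events)
    total = sum(counts.values())
    ranked = [(user, count / total) for user, count in counts.most_common()]
    confident = [(u, c) for u, c in ranked if c >= min_confidence]
    return confident or ranked[:1]

# Example: one user dominates the asset's authentication history.
candidates = rank_potential_owners(["asmith", "asmith", "asmith", "jdoe"])
# -> [("asmith", 0.75)]
```

In practice a real system would weigh many more signals (device enrollment records, HR data, network location), but the core idea is the same: convert observed activity into a confidence score that decides who the chatbot reaches out to first.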
Automatic updates to improve efficiency and collaboration
Most IT admins and operations teams heave a sigh every time they have to manually update their asset inventories. If you’ve been using spreadsheets to maintain a running, cross-referenced list of unclaimed devices and potential owners, then you’re singing the song of nearly every IT department globally.
This is where BeeKeeper AI harmonizes with the rest of your objectives. When BeeKeeper AI automatically updates the DataBee platform, everyone across the different teams has a shared source of data, including:
IT
Operations
Information security
Compliance
Unknown or orphaned assets are everyone’s responsibility as they can become a potential entry point for security incidents or create compliance gaps. BeeKeeper AI can even give you insights from its own activity, allowing you to run user engagement reports to quantify issues like:
Uncooperative users
Total users contacted and their responses
Processed assets, like validated and denied assets
Since it automatically updates the DataBee platform, BeeKeeper AI makes collaboration across these different teams easier by ensuring that they all have the same access to cleaner and more complete user and asset information that has business context woven in.
Responsible AI for security data
AI is a hot topic, but not all AI is the same. At DataBee, we believe in responsible AI with proper guardrails around the technology’s use and output.
As security professionals, we understand that security data can contain sensitive information about your people and your infrastructure. BeeKeeper AI starts from your clean, optimized DataBee dataset and works within your contained environment. Tuned to each organization’s data, BeeKeeper AI’s guardrails prevent sensitive data from leaking.
This is why BeeKeeper AI sticks to what it knows, even when someone tries to take it off task. Our chatbot isn’t easily distracted and redirects off-topic attempts back to its sole purpose: identifying and finding the right asset owners.
Making honey out of your data with BeeKeeper AI
BeeKeeper AI leverages your security data to proactively reach out to users and verify whether they own assets. With DataBee, you can turn your security data into analytics-ready datasets to get insights faster. Let BeeKeeper AI manage your hive so you can focus on making honey out of your data.
If you’re ready to reduce manual, time-consuming, collaboration-inhibiting processes, request a custom demo to see how DataBee for Security Hygiene can help you sing a sweeter tune.
Read More
DataBee: Who do you think you are?
2024 has been a big “events” year for DataBee as we’ve strived to raise awareness of the new business and the DataBee Hive™ security, risk and compliance data fabric platform. We’ve participated in events across North America and EMEA including Black Hat USA, the Gartner Security & Risk Management Summits, FS-ISAC Summit, Snowflake Data Cloud Summit and AWS re:Inforce, and of course, the RSA Conference. At RSA, we introduced to the community our sweet (haha) and funny life-size bee mascot, who ended up being a big hit among humans and canines alike.
Participation in these events has been illuminating on many important fronts. For the DataBee “hive” it’s been invaluable, not only for the conversations and insights we gain from real users across the industry, but also for the feedback we receive as we share the story of DataBee’s creation and how it was inspired by the security data fabric that Comcast’s Global CISO, Noopur Davis, and her team developed. In general, we’ve been thrilled with the response that DataBee has received, but consistently, there’s one piece of attendee feedback that really gives us pause:
“Why would Comcast Technology Solutions enter the cybersecurity solutions space?”
In other words, “what the heck is Comcast doing here?”
The question makes it pretty clear: Comcast might be synonymous with broadband, video, media, and entertainment services and experiences, but it may be less associated with cybersecurity.
But it should be. While Comcast and Xfinity may not be immediately associated with cybersecurity, Comcast Business, a $10 billion business within Comcast, has been delivering advanced cybersecurity solutions to businesses of all sizes since 2018. With our friends at Comcast Business, the DataBee team is working hard to change perceptions and increase awareness of Comcast’s rich history of innovation in cybersecurity.
Let’s take a quick look at some of the reasons why the Comcast name should be synonymous with cybersecurity.
Comcast Business
Comcast Business is committed to helping organizations adopt a cybersecurity posture that meets the diverse and complex needs of today’s cybersecurity environment. Comcast Business’ comprehensive solutions portfolio is specifically engineered to tackle the multifaceted challenges of the modern digital landscape, with advanced capabilities spanning real-time threat detection and response. Whether through Unified Threat Management systems that simplify security operations, cloud-based solutions that provide flexible defenses, or DDoS mitigation services that help preserve operational continuity, Comcast Business is a trusted partner in cybersecurity, providing the depth, effectiveness, and expertise necessary to enhance enterprise security posture through:
SecurityEdge™
Offering advanced security for small businesses, SecurityEdge™ is a cloud-based Internet security solution that helps protect all connected devices on your network from malware, phishing scams, ransomware, and botnet attacks.
SD-WAN with Advanced Security
Connect users to applications securely, both onsite and in the cloud.
Unified Threat Management (UTM)
Delivered by industry leading partners, UTM solutions provide an integrated security platform that combines firewall, antivirus, intrusion prevention, and web filtering to simplify management and enhance visibility across the network.
DDoS Mitigation
Protection against disruption caused by Distributed Denial of Service attacks, helping to identify and block anomalous spikes in traffic while preserving the desired functionality of your services.
Secure Access Service Edge (SASE)
Integrating networking and security into a unified cloud-delivered service model, our SASE framework supports dynamic secure access needs of organizations, facilitating secure and efficient connectivity for remote and mobile workers.
Endpoint Detection and Response (EDR)
Helps safeguard devices connected to your enterprise network, using AI to detect, investigate, remove, and remediate malware, phishing, and ransomware.
Managed Detection and Response (MDR)
Extends EDR capabilities to the entire network to detect advanced threats, backed by 24/7 monitoring from a team of cybersecurity experts.
Vulnerability Scanning and Management
Helps identify and manage security weaknesses in network and software systems, a proactive approach that helps close potential entry points for threat actors.
Comcast Ventures
Did you know that Comcast has a venture capital group that backs early-to-growth stage startups that are transforming sectors like cybersecurity, AI, healthcare, and more?
Some of the innovative cybersecurity, data and AI-specific companies that Comcast Ventures has invested in include:
BigID
SafeBase
HYPR
Resemble AI
Bitsight
Uptycs
Recently, cybersecurity investment and advisory firm NightDragon announced a strategic partnership with Comcast Technology Solutions (CTS) and DataBee that also included Comcast Ventures. As a result of this strategic partnership, CTS, Comcast Ventures and DataBee will gain valuable exposure to the new innovations coming from NightDragon companies.
Comcast Cybersecurity
As I write this, Comcast Corporation is ranked 33 on the Fortune 500 list, so – as you might guess – it has an expansive internal cybersecurity organization. With $121 billion+ in annual revenues, over 180,000 employees around the globe, and a huge ecosystem of consumers and business customers and partners, Comcast takes its security obligations very seriously.
Our cyber professionals collectively hold and are awarded multiple patents each year. We lead standards bodies, and we participate and provide leadership in multiple policy forums. Our colleagues contribute to open-source communities where we share our security innovations. We are an integral part of the global community of cybersecurity practitioners – we present at conferences, learn from our peers, hold multiple certifications, and publish in various journals. We are a contributing member of the Communications ISAC and the CISA Joint Cyber Defense Collaborative. A sampling of internal research and development efforts within Comcast’s cybersecurity organization includes:
One-time secure secrets sharing
Security data fabric (Note: the inspiration for DataBee®)
Anomaly detection
AI-based secrets detection in code
AI-based static code analysis for privacy
Crypto-agility risk assessment
Machine-assisted security threat modeling
Scoping of threats against AI/ML apps
Persona-based privacy threat modeling
PKI and token management systems
Certificate lifecycle management and contribution to industry IoT stock
R&D for BluVector Network Detection and Response (NDR) product
The Comcast Cyber Security (CCS) Research team “conducts original applied and fundamental cybersecurity research.” Selected projects the team is working on include research on security and human behavior, security by design, and emerging technologies such as post-quantum cryptography. CCS works with technology teams across Comcast to identify and explore security gaps in the broader cyber ecosystem.
The Comcast Cybersecurity team’s work developing and implementing a security data fabric platform was the inspiration for what has become DataBee. Although the DataBee team architected and built its commercial DataBee Hive™ security, risk and compliance data fabric platform from scratch (so to speak), it was Comcast’s internal platform, and the great results it has delivered and continues to deliver, that proved such a solution could be a game-changer, especially for large, complex organizations. While DataBee Hive has been designed to address the needs and scale of any type of enterprise or IT architecture, we were fortunate to tap into the learnings that came from years and countless person-hours of building Comcast’s internal security data fabric platform and then operating it at scale.
DataBee Cybersecurity Suite
Besides being home to the DataBee Hive security data fabric platform and products, the DataBee business unit of Comcast Technology Solutions is also home to BluVector, an on-premises network detection and response (NDR) platform. Comcast acquired BluVector, which was purpose-built to protect critical government and enterprise networks, in 2019. BluVector continues to deliver AI-powered NDR with visibility across networks, devices, users, files, and data to discover and hunt skilled and motivated threats.
Comcast and cybersecurity? Of course.
So, the next time you come across DataBee, from Comcast Technology Solutions, and you think to yourself “why is Comcast in the enterprise security market with DataBee?!” – think again.
From small and mid-size organizations to large enterprises and government agencies; from managed services to products and solutions; and from on-premises to cloud-native… Comcast’s complete cybersecurity “portfolio” runs the gamut.
Want to connect with someone to determine what’s right for your organization? Contact us, and in “Comments”, let us know if you’d like to evaluate solutions from both DataBee and Comcast Business. We’ll look forward to exploring options with you!
Read More
Compliance Takes a Village: Celebrating National Compliance Officer Day
If the proverb is that it takes a village to raise a child, then the corollary in the business world is that it takes a village to get compliance right. And in this analogy, compliance officers are the mayors of the village. They schedule audits, coordinate activities, oversee processes, and manage documentation. They are the often-unsung heroes whose work forms the foundation of your customers’ trust, helping you achieve certifications and mitigate risk.
While your red teamers and defenders get visibility because they sit at the frontlines, your compliance team members are strategizing and carving paths to reduce risk and enable programs. For this National Compliance Officer Day, we salute these mayors of the compliance village in their own words.
Feeling Gratitude
There is a great amount of pride when compliance officers are able to help you build trust with your customers, but there is also an immense amount of gratitude from compliance teams for the internal relationships built within the enterprise.
Yasmine Abdillahi, Executive Director of Security Risk and Compliance and Business Information Security Officer at Comcast, expressed gratitude for executive leader Sudhanshu Kairab, whose ability to grasp core business fundamentals has allowed Comcast to implement robust compliance frameworks that mitigate risks and support growth and trust.
“[Sudhanshu] consistently demonstrates a keen awareness of industry trends, enabling us to stay ahead of emerging challenges and opportunities. His ability to sustain and nurture a strong network, both internally and externally, has proven invaluable in fostering collaboration and ensuring we remain at the forefront of GRC best practices. His multifaceted approach to leadership has not only strengthened our risk posture but has also positioned our GRC function as a key driver of innovation and business growth.”
Compliance professionals rely on their strategic internal business partners to succeed. When enterprise leaders empower the GRC function, compliance and risk managers can blossom into their best business enabling selves.
In return, compliance leaders allow the enterprise to provide customers with the assurance they need. In today’s “trust but verify” world, customers trust the business when the compliance function can verify the enterprise security posture.
Collaboration, Communication, and Education
At its core, your compliance team acts as the communications glue that binds together the various cybersecurity functions.
For Tom Schneider, who is a part of the DataBee team as a Cybersecurity GRC Professional Services Consultant, communication has been essential to his career. When working to achieve compliance with a control, communicating clearly and specifically is critical, especially when cybersecurity is not someone’s main responsibility. Clear communication educates both sides of the compliance equation.
“Throughout my career, I have learned from the many people I’ve worked with. They have included management, internal and external customers, and auditors. I’ve learned from coworkers that were experts in some specific technology or process, such as vulnerability management or identity management, as well as from people on the business side and how things appear from their perspective.”
GRC’s cross-functional nature makes compliance leaders some of the enterprise’s most impactful teachers and learners. Compliance officers collaborate across different functions - security, IT, and senior leadership. As they learn from their internal partners, they, in turn, educate others.
Compliance officers are so much more than the controls they document and the checklists they review. They facilitate collaboration because they can communicate needs and build a shared language.
Compliance Officers: Keeping It All Together
A compliance officer’s role in your organization goes far beyond their job descriptions. They are cross-functional facilitators, mentors, learners, leaders, enablers, and reviewers. They are the ones who double check the organization’s cybersecurity work. Every day, they work quietly in the background, but for one day every year, we have the opportunity to let them know how important they are to the business.
DataBee from Comcast Technology Solutions gives your compliance officer a way to keep their compliance and business data together so they can communicate more effectively and efficiently. Our security data fabric empowers all three lines of defense - operational managers, risk management, and internal audit - so they can leave behind spreadsheets and point-in-time compliance reporting, relics of the past. By leveraging the full power of your organization’s data, compliance officers can implement continuous controls monitoring (CCM) with accurate compliance dashboards and reports for measuring risk and reviewing controls’ effectiveness.
From our Comcast compliance team to yours, thank you for all you do. We see you and appreciate you - today and every day.
Read More
Best practices for PCI DSS compliance...and how DataBee for CCM helps
When planning for compliance with the Payment Card Industry Data Security Standard (PCI DSS), the PCI Security Standards Council (SSC) supplies a document that provides excellent foundational advice for both overall cybersecurity and PCI DSS compliance. Organizations may already be aware of it, but regardless, it is a useful resource, and it is interesting to read with Continuous Controls Monitoring (CCM) in mind.
The document lists 10 best-practice recommendations that are useful not just for PCI DSS compliance, but for overall security and compliance with organizational policies as well as the frameworks and regulations to which the entity is subject. The best practices place a strong emphasis on ongoing, continuous compliance. That is, for organizations “to protect themselves and their customers from potential losses or damages resulting from a data breach, they must strive for ways to maintain a continuous state of compliance throughout the year rather than simply seeking point-in-time validation.”
While the immediate goal may be to attain a compliant Report on Compliance (ROC), that immediate goal, and the longer-term viability of the security program, are aided by establishing a program around continuous compliance and the ability to measure it.
Here are the SSC’s 10 Best Practices for Maintaining PCI DSS Compliance:
Develop and Maintain a Sustainable Security Program
Develop Program, Policy, and Procedures
Develop Performance Metrics to Measure Success
Assign Ownership for Coordinating Security Activities
Emphasize Security and Risk Management to Attain and Maintain Compliance
Continuously Monitor Security Controls
Detect and Respond to Security Control Failures
Maintain Security Awareness
Monitoring Compliance of Third-Party Service Providers
Evolve the Compliance Program to Address Changes
Some detail around the 10 recommendations
The first recommendation, “Develop and Maintain a Sustainable Security Program” is short, but notes that, “Any cardholder data not deemed critical to business functions should be removed from the environment in accordance with the organization’s data-retention policies… In addition, organizations should evaluate business and operating procedures for alternatives to retaining cardholder data.” Outsourcing the processing of cardholder data to entities that specialize in this work is an option that many organizations take. When that is not a viable option, minimizing the amount of data collected, and securely deleting it as specified in the organization’s data retention policy is the next best option.
“Develop Program, Policy, and Procedures” is the second recommendation. Along with developing and maintaining these documents, accountability must be assigned “to ensure the organization's sustainable compliance.” Additionally, PCI DSS v4.0 has a requirement under each of the twelve principal requirements stating that “Roles and responsibilities for performing activities” for each principal requirement “are documented, assigned, and understood.” If this role does not already exist, something for organizations to consider would be designating a “compliance champion” for each business unit. The compliance champions could work with their management to assume accountability for the control compliance for assets and staff assigned to the business unit.
“Develop Performance Metrics to Measure Success” follows. This recommendation includes “Implementation metrics” (which measure the degree to which a control has been implemented, and are usually described as percentages), and “Efficiency and Effectiveness Measures” (which evaluate attributes such as completeness, consistency, and timeliness). These metrics show if a control has been implemented over the expected range of the organization’s assets, if it has been implemented consistently, and is being executed when expected. These metrics play a key role in assessing compliance in a continuous way.
Measurement of implementation metrics and effectiveness metrics for completeness and consistency are core components of DataBee for CCM. For example, in the case of Asset Management, users can see if assets in scope for PCI DSS are flagged as in scope correctly, if the asset owner is accurate, and if other data points such as physical location are present. The ability to see continuously refreshed data on a CCM dashboard, as opposed to having to create a point-in-time report or having the knowledge to access the data through a product-specific portal, makes it practical for teams to see accurate metrics efficiently.
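An implementation metric of the kind the SSC describes is, at heart, simple arithmetic: the share of in-scope assets where a control or data point is fully in place. A minimal sketch, with hypothetical field names (not DataBee's or PCI DSS's actual schema):

```python
def implementation_metric(assets, required_fields=("owner", "location")):
    """Percentage of in-scope assets with all required fields populated.

    `assets` is a list of dicts; an asset counts as compliant only when
    every required field is present and non-empty.
    """
    in_scope = [a for a in assets if a.get("pci_in_scope")]
    if not in_scope:
        return 0.0
    complete = sum(
        1 for a in in_scope if all(a.get(f) for f in required_fields)
    )
    return 100.0 * complete / len(in_scope)

inventory = [
    {"pci_in_scope": True, "owner": "jdoe", "location": "DC-1"},
    {"pci_in_scope": True, "owner": None, "location": "DC-2"},
    {"pci_in_scope": False, "owner": "asmith", "location": "DC-1"},
]
# Two in-scope assets, one fully populated -> 50.0 percent implemented
```

The value of continuous monitoring is that a number like this is recomputed automatically as inventory data changes, rather than assembled by hand before each assessment.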
The fourth recommendation is to “Assign Ownership for Coordinating Security Activities.” An “individual responsible for compliance (a Compliance Manager)” is the main point of this recommendation. However, the recommendation notes that Compliance Manager should be “given adequate funding and resources… and granted the proper authority to effectively organize and allocate such resources.” The effective organization of resources could include delegating tasks throughout the organization to managers over units within the larger organization. This recommendation ends by noting that the organization must ensure that “the goals and objectives of its compliance program are consistently achieved despite changes in program ownership (i.e., employee turnover, change of management, organization merger, re-organization, etc.). Best practices include proper knowledge transfer, documentation of existing controls and the associated responsible individual(s) or team(s).”
Using the DataBee for CCM dashboards to assign accountability for assets and staff to the appropriate business units helps with this recommendation. It clarifies the delegation of responsibility for assets and staff to the business unit’s management, and it helps drive the effective achievement of the compliance program’s objectives during transitions in the Compliance Manager role. Delegating control compliance to business unit management enables them to continue their tasks while a new Compliance Manager is hired and adjusts to the role.
“Emphasize Security and Risk Management to Attain and Maintain Compliance,” the fifth recommendation asserts that “PCI DSS provides a minimum set of security requirements for protecting payment card account data…,” and that “Compliance with industry standards or regulations does not inherently equate to better security.”
This point cannot be emphasized strongly enough: “A more effective approach is to focus on building a culture of security and protecting an organization’s information assets and IT infrastructure and allow compliance to be achieved as a consequence.” The ongoing measurement of control implementation by CCM supports a culture of security. Organizations can use the information provided by DataBee for CCM not only to enable continuous reporting, but through it to support continuous remediation of control failures.
The next recommendation, “Continuously Monitor Security Controls,” describes how “the use of automation in both security management and security-control monitoring can provide a tremendous benefit to organizations in terms of simplifying monitoring processes, enhancing continuous monitoring capabilities, and minimizing costs while improving the reliability of security controls and security-related information.”
Ongoing monitoring of data that is frequently refreshed can be a core component for ongoing compliance. Ultimately, implementing a continuous controls monitoring program will help reduce extra workload as the PCI DSS assessment date approaches. DataBee for CCM is a tool that supports the necessary continuous monitoring.
The seventh recommendation, “Detect and Respond to Security Control Failures,” applies to two situations:
controls which have failed, but with no detectable consequences, and
control failures that escalate to security incidents.
PCI SSC notes that, “The longer it takes to detect and respond to a failure, the higher the risk and potential cost of remediation.” Continuous monitoring can help the organization to reduce the time it takes to detect a failed control.
Recommendation eight, “Maintain Security Awareness,” speaks to the need to train the workforce, especially regarding how to respond to social engineering. Security training, both for staff in general and role-based training for specific teams, is one of the requirements that DataBee for CCM reports on through its dashboards.
Recommendation nine is “Monitoring Compliance of Third-Party Service Providers,” and ten is “Evolve the Compliance Program to Address Changes.” A robust compliance program that is in place throughout the year can be more capable of evolving and adapting to change than an assessment-focused program that allows controls to drift out of compliance between assessments. Continuous monitoring is key for combating compliance drift once an assessment has been completed.
After the ten recommendations, the main body of the document concludes with a section about the “Commitment to Maintaining Compliance.” Two of the key actions for maintaining continuous compliance are, “Assigning responsibility for ensuring the achievement of their security goals and holding those with responsibility accountable,” and “Developing tools, techniques, and metrics for tracking the performance and sustainability of security activities.” DataBee for CCM enables both these tasks.
The main theme of the “Best Practices for Maintaining PCI DSS Compliance” is that continuous compliance with PCI DSS that is maintained throughout the year is the goal. Ultimately, this helps improve the overall security posture of the organization. Making the required compliance activities business as usual tasks that are continuous throughout the year can also help with the specific goal of achieving a compliant result for a PCI DSS assessment when it comes due.
How DataBee for CCM fits in
We envisioned DataBee for CCM as a natural fit for an evolving compliance program. Using the DataBee dashboards, with their continuously updated information accessible to everyone who needs it, helps free up time for GRC and other teams to focus on the evolution of the cybersecurity program. Given the rapid change in the cyber-threat landscape and the frequent changes in security controls and regulatory requirements, turning report creation over to CCM to give time back to your people for higher-value work is a win for your organization.
DataBee for CCM helps by providing consistent data to all teams (GRC, executive management, business management, IT, etc.) so that everyone is working from the same information. This helps delegate control compliance and clearly identify accountable and responsible parties. Furthermore, DataBee for CCM shows executives, GRC, business managers, and others content for multiple controls, from many different tools, through a single interface, as opposed to GRC needing to create multiple reports, or business managers and others having to create their own, possibly erroneous, reports. Additional dashboards can be created to report on other controls in scope for PCI DSS, such as secure configuration, business continuity, and monitoring the compliance of third-party service providers. Any control for which data is available to create useful dashboard content is a candidate for a DataBee for CCM dashboard.
Read More
Enter the golden age of threat detection with automated detection chaining
During my time as a SOC analyst, triaging and correlating alerts often felt like solving a puzzle without the box or knowing if you had all the pieces.
My days consisted of investigating alerts in an always-growing incident queue. An investigation could start with a single high or critical alert and then hunt through other log sources to piece together what happened. I had to ask myself (and my team) whether one alert had any identifiable relationship or pattern with the others we had investigated that day, even when the alerts looked unrelated on their own. Most investigations inevitably relied on institutional knowledge to find the pieces of the puzzle, searching by IP in one data source and by computer name in another. Finding the connections between low-and-slow attacks in near real-time was a matter of chance and often happened only through threat-hunting efforts, with attacks slipping through the cracks of security operations. This isn’t an uncommon story, and it’s not new either: these are the same problems faced during the 2013 Target breach and the 2024 National Public Data breach.
That’s why we launched automated detection chaining as part of the DataBee for Security Threats solution. Using a patent-pending approach to entity resolution, the security data fabric platform can chain together alerts from disjointed tools that could be potentially tied to an advanced persistent threat, insider threat, or compromised asset. What I like to call a “super alert” is presented in DataBee EntityViews™, which aggregates alerts into a time-series, or chronological, view. Now it’s easier to find attacks that span security tools and the MITRE ATT&CK framework. With our out-of-the-box detection chain, you can automatically create a super alert before the adversary reaches the command-and-control phase.
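The chaining idea can be sketched in a few lines: resolve each alert to an entity, order that entity's alerts chronologically, and surface entities whose alerts span multiple ATT&CK tactics. This toy version assumes entity resolution has already happened (each alert carries an `entity` key) and is not DataBee's actual implementation:

```python
from collections import defaultdict

def chain_alerts(alerts, min_tactics=2):
    """Group alerts by resolved entity and return chronological 'chains'
    for entities whose alerts span at least `min_tactics` ATT&CK tactics."""
    by_entity = defaultdict(list)
    for alert in alerts:
        by_entity[alert["entity"]].append(alert)

    chains = {}
    for entity, items in by_entity.items():
        items.sort(key=lambda a: a["time"])  # time-series view
        tactics = {a["tactic"] for a in items}
        if len(tactics) >= min_tactics:
            chains[entity] = items  # candidate "super alert"
    return chains

alerts = [
    {"entity": "laptop-42", "time": 3, "tactic": "lateral-movement", "src": "edr"},
    {"entity": "laptop-42", "time": 1, "tactic": "initial-access", "src": "email-gw"},
    {"entity": "srv-db-1", "time": 2, "tactic": "discovery", "src": "ndr"},
]
chains = chain_alerts(alerts)
# Only laptop-42 chains: two tools, two tactics, ordered by time.
```

The hard part in production is the first step, reliably resolving an IP in one tool's logs and a hostname in another's to the same real-world device, which is exactly what entity resolution addresses.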
Break free from vendor-specific detections with Sigma Rules
Once a security tool is fully deployed in the network and environment, it becomes nearly impossible to change vendors without significant operational impact. The impact goes beyond replacing the existing solution; it also means updating every upstream and downstream integration point, such as custom detection content or log parsers. This leads to potential coverage gaps between the tooling deployed and the tooling desired. The alternative: standardize logging to a vendor-agnostic schema, then apply an open-source detection framework on top.
The DataBee Platform automates migrating to the Open Cybersecurity Schema Framework (OCSF), which has become increasingly popular with security professionals and is gaining adoption in some tools. Its vendor-agnostic approach standardizes disparate security logs and data feeds, giving SOC teams the ability to use their security data more effectively. Active detection streams in DataBee apply Sigma-formatted rules over security data that is mapped to a DataBee-extended version of OCSF, integrating into the existing security ecosystem with minimal customization. DataBee handles the translation from the Sigma taxonomy to OCSF, lowering the level of effort needed to adopt it and supporting organizations on their journey to vendor-agnostic security operations. Sigma-formatted detections are imported and managed via GitHub, enabling teams to treat detections as code. By breaking free of proprietary formats, teams can more easily use vendor-agnostic Sigma rules to gain security insights from across all their tools, including data stored in security data lakes and warehouses.
The accidental insider threat
Accidental insider threats often begin with a phishing attack containing a malicious link or download that tricks the user. The malware is too new, or has morphed too much, for your endpoint detection to catch it. Then it spreads to whatever other devices it can authenticate to. Detecting the scope of the malware's lateral movement is challenging because there is so much noise to search through. With DataBee EntityViews, SOC teams can easily review the historical information connected to the organization’s real-world people and devices, giving them a way to trace the progression of events.
Looking at a user’s profile shows relevant business contexts that can aid the investigation:
Job Title to hint at what is normal behavior
Manager to know who to go to for questions or if action needs to be taken
Owned assets that may be worth investigating further
The Event Timeline shows the various types of OCSF findings associated with the user.
By scrolling through the list of findings, a SOC analyst can quickly identify several potential issues, including malware present within the workstation. Most notably, the MITRE ATT&CK detection chain has triggered. In this instance, multiple data sources alerted on different parts of the ATT&CK chain, producing a super alert. The originating events are maintained as evidence and are easily accessible to the analyst:
EntityViews bring in events from devices the current user owns, simplifying the process of pulling together the whole story. In our example, the device is the user’s laptop, so it's likely that all of the activity was carried out by the user:
The first thing of note is the unusual number of authentication attempts to devices that seem atypical for a developer, such as a finance server. As we continue to scroll through the user’s timeline, reviewing events from a variety of data sources, we finally come across our smoking gun. In this instance, we can see the phishing email whose link the user clicked, which was our initial point of compromise:
It’s clear the device has malware on it, and the authentication attempts imply that the malware was looking to spread further in the network. To visualize this activity, we can leverage the Related Entities graphical view in the Activity section of EntityViews. SOC analysts can use a graphical representation and animation of the activity to visualize the connections between the compromised user and the organization. The graph displays other users and devices that appear in security findings, authentication, and ownership events. In our example, we can see that the user has attempted to authenticate to some atypical devices, such as an HR system:
Filtering enables more targeted investigations, like focusing on only the successful authentication attempts:
Visualizations such as this in DataBee enable more accurate, timely and complete investigations. From this view, the SOC analysts can select any entity to see their EntityView with the activity associated with the related users and devices. Rather than pivoting between multiple applications or waiting for data to be reprocessed, they have real-time access to information in an easy to consume format.
Customizing detection chains to achieve organizational objectives
Detection chains are designed to enable advanced threat modeling in a simple solution. They can be created in the DataBee platform leveraging all kinds of events that flow through the security data fabric. DataBee ships with two detection chains to get you started:
MITRE ATT&CK Chain: Detect advanced low and slow attacks that span the MITRE ATT&CK chain before reaching Command & Control.
Potential Insider Threat: Detect insider threats who are printing out documents, emailing personal accounts, and messing with files in the file share.
These chains serve as a starting point. The intent is that organizations add and remove chains based on their specific needs. For example, you may want to extend the potential insider threat rule to include more potential email domains or limit file share behavior to accessing files that contain trade secrets or sales information.
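As a rough sketch of what such a customization might look like if a chain were expressed as configuration, consider the snippet below. The structure, stage names, and thresholds are hypothetical, not DataBee's actual rule format.

```python
# Hypothetical representation of the shipped insider-threat chain as data.
insider_threat_chain = {
    "name": "Potential Insider Threat",
    "stages": [
        {"event": "print_job", "min_pages": 50},
        {"event": "outbound_email",
         "to_domains": ["gmail.com", "outlook.com"]},
        {"event": "file_share_access", "paths": ["/shares/**"]},
    ],
}

# Customization from the text above: watch an extra personal email domain
# and narrow file-share behavior to sensitive paths only.
insider_threat_chain["stages"][1]["to_domains"].append("proton.me")
insider_threat_chain["stages"][2]["paths"] = [
    "/shares/trade-secrets/**",
    "/shares/sales/**",
]
```

Treating the chain as data rather than code is what makes this kind of per-organization tuning cheap to do and easy to review.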
Automated detection chains are nearly infinitely flexible. By chaining together detections from different data sources that align to different parts of the attack chain, specific to a user or device, DataBee enables building advanced security analytics for hunting elusive APTs and getting ahead of pesky ransomware attacks.
Building a better way forward with DataBee
Every organization is different, and every SOC team has unique needs. DataBee’s automated detection chaining feature gives SOC analysts a faster way to investigate complex security incidents, enabling them to rapidly and intuitively move through vast quantities of historical data.
If you’re ready to gain the full value of your security data with an enterprise-ready security, risk, and compliance data fabric, request a custom demo to see how DataBee for Security Threats can turn static detections into dynamic insights.
Read More
You've reduced data, so what's next?
Organizations often adopt data tiering to reduce the amount of data that they send to their analytics tools, like Security Information and Event Management (SIEM) solutions. By diverting data to an object store or a data lake, organizations are able to manage and lower costs by minimizing the amount of data that their SIEM stores. Although they achieve this tactical objective, the process creates data silos. While people can query the data in isolation, they often fail to glean collective insights across the silos.
Think of the problem like a large building with cameras across its perimeter. The organization can monitor each camera’s viewpoint, but no individual camera has the full picture, as any spy movie will tell you. Similarly, you might have different tools that see different parts of your security picture. Although SIEMs were originally intended to tie all security data together into a composite, cloud applications and other modern IT and cybersecurity tool stacks generate too much data to make this cost-effective.
As organizations balance saving money with having an incomplete picture, a high-quality data fabric architecture can enable them to build more sustainable security data strategies.
From default to normalized
When you implement a data lake, the diverted data remains in its default format. When you try to paint a composite picture across these tools, you rely on what an individual data set understands or sees, leaving you to pick out individual answers from these siloed datasets.
Instead of asking a question once, you need to ask fragments of the question across different data sets. In some cases, you may have a difficult time ensuring that you have the complete answer.
With a security data fabric, you can normalize the data before landing it in one or more repositories. DataBee® from Comcast Technology Solutions uses extract, transform, and load processes to automatically parse security data, then normalizes it according to our extended Open Cybersecurity Schema Framework (OCSF) so that you can correlate and understand what’s happening in the aggregate picture.
By normalizing the data on its way to your data lake, you optimize compute and storage costs, eliminating some of the constraints arising from other data federation approaches.
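A minimal sketch of the normalize-on-ingest idea: parse a raw vendor event and map it onto a handful of OCSF-style attributes before it lands in the lake. The vendor names, raw formats, and target fields here are simplified assumptions, not DataBee's full schema.

```python
# Map two hypothetical vendor formats onto a shared, OCSF-style shape.
def normalize(raw: dict, vendor: str) -> dict:
    """Parse a raw vendor record into a small set of normalized attributes."""
    if vendor == "firewall_a":
        return {
            "class_name": "Network Activity",
            "src_endpoint": {"ip": raw["src"]},
            "dst_endpoint": {"ip": raw["dst"], "port": raw["dport"]},
            "time": raw["ts"],
        }
    if vendor == "auth_b":
        return {
            "class_name": "Authentication",
            "actor": {"user": {"name": raw["username"]}},
            "status": "Failure" if raw["result"] == "FAIL" else "Success",
            "time": raw["timestamp"],
        }
    raise ValueError(f"no parser for {vendor}")

event = normalize(
    {"username": "jdoe", "result": "FAIL", "timestamp": 1700000000},
    "auth_b",
)
```

Once every record passes through a step like this on its way into storage, downstream queries never need to know which vendor produced the data.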
Considering your constraints
Federation reduces storage costs, but it introduces limitations that can present challenges for security teams.
Latency
When you move data from one location to another, you introduce time lags. Some providers limit how often, or at what times of day, you can transfer data. For example, if you want data in a specific format, some repositories may only perform that transfer once per day.
Meanwhile, if you want to stream the data into a different format for collection, the reformatting can also create a time lag. A transformation and storage process may take several minutes, which can impact key cybersecurity metrics like mean time to detect (MTTD) or mean time to respond (MTTR).
When you query security datasets to learn what happened over the last hour, a (near) real-time data source will contribute to an accurate picture, while a delayed source may not have yet received data for the same period. As you attempt to correlate the data to create a timeline, you might need to use multiple data sources that all have different lag times. For example, some may be mostly real-time while another sends data five minutes later. If you ask the question at the time an event occurred, the system may not have information about it for another five minutes, creating a visibility gap.
Such gaps can create blind spots as you scale your security analytics strategy. The enterprise security team may be asking hundreds of questions across the data system, and the time delay can create a large gap between what you can see and what happened.
Correlation
Correlating activities from across your disparate IT and security tools is critical. Data gives you facts about an event while correlation enables you to interpret what those facts mean. When you ask fragments of a question across data silos, you have no way to automate the generation of these insights.
For example, a security alert will give you a list of events including hundreds of failed login attempts over three minutes. While you have these facts, you still need to interpret whether they describe malicious actors using stolen credentials or a brute force attack.
To improve detections and enable faster response times, you need to weave together information like:
The IP address(es) involved over the time the event occurred
The user associated with the device(s)
The user’s geographic location
The network access permissions for the user and device(s)
You may be storing this data in different repositories without correlation capabilities. For example, you may have converged all DNS, DHCP, firewall, EDR, and proxy data in one repository while combining user access and application data in another. To get a complete picture of the event, you need to make at least two single-silo queries, and likely more.
While you may have reduced data storage costs, you have also increased the duration and complexity of investigating incidents, giving malicious actors more time in your systems and making the threat harder to locate and contain.
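The two-silo problem can be illustrated with a toy join: network facts in one repository, identity context in another, and an insight only once they are woven together. Dataset shapes here are hypothetical.

```python
# Silo 1: network repository (facts about the event).
network_events = [
    {"ip": "10.0.0.5", "event": "failed_login", "count": 312},
]

# Silo 2: identity repository (context about who is behind the IP).
identity_records = [
    {"ip": "10.0.0.5", "user": "jdoe", "location": "remote-vpn"},
]

def correlate(network_events, identity_records):
    """Join network facts with identity context on the shared IP key."""
    by_ip = {r["ip"]: r for r in identity_records}
    return [{**event, **by_ip.get(event["ip"], {})}
            for event in network_events]

picture = correlate(network_events, identity_records)
```

Only the joined record tells you that 312 failed logins trace back to a specific remote user, which is the kind of interpretation a silo query alone cannot produce.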
Weaving together federated data with DataBee
Weaving together data tells you what and when something happened, enabling insights into activity rather than just a list of records. With a fabric of data, you can interpret it to better understand your environment or gain insights about an incident. With DataBee, you can focus on protecting your business while achieving tactical and strategic objectives.
At the tactical level, DataBee fits into your cost management strategies because it focuses on collecting and processing your data in a streamlined, affordable way. It ingests security and IT logs and feeds, including non-traditional telemetry like organizational hierarchy data, from APIs, on-premises log forwarders, AWS S3 buckets, or Azure Blobs, then automatically parses and maps the data to the OCSF. You can use one or more repositories, aligning with cost management goals. Simultaneously, data users can access accurate, clean data through the platform to build reliable analytics without worrying about data gaps.
The platform enriches your dataset with business policy context and applies patent-pending entity resolution technology so you can gain insights based on a unified, time-series dataset. This transformation and enrichment process breaks down silos so you can efficiently and effectively correlate data to gain real-time insights, empowering operational managers, security analysts, risk management teams, and audit functions.
Read More
The value of OCSF from the point of view of a data scientist
Data can come in all shapes and sizes. As the “data guy” here at DataBee® (and the “SIEM guy” in a past life), I’ve worked plenty with logs and data feeds in different formats, structures, and sizes delivered using different methods and protocols. From my experience, when data is inconsistent and lacks interoperability, I’m spending most of my time trying to understand the schema from each product vendor and less time on showing value or providing insights that could help other teams.
That’s why I’ve become involved in the Open Cybersecurity Schema Framework (OCSF) community. OCSF is an emerging but highly collaborative schema that aims to standardize security and security-related data to improve consistency, analysis, and collaboration. In this blog, I will explain why I believe OCSF is the best choice for your data lake.
The problem of inconsistency
When consuming enterprise IT and cybersecurity data from disparate sources, most of the concepts are the same (like an IP address or a hostname or a username) but each vendor may use a different schema (like the property names) as well as sometimes different ways to represent that data.
Example: How different vendors represent a username field
Vendor | Raw Schema Representation
Vendor A (Firewall) | user.name
Vendor B (SIEM) | username
Vendor C (Endpoint) | usr_name
Vendor D (Cloud) | identity.user
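In code, that variation forces a per-vendor field map before any cross-source analysis is possible. A small sketch, using the vendor labels from the table as hypothetical keys:

```python
# Per-vendor location of the "username" concept, taken from the table above.
USERNAME_FIELD = {
    "vendor_a": "user.name",
    "vendor_b": "username",
    "vendor_c": "usr_name",
    "vendor_d": "identity.user",
}

def get_username(record: dict, vendor: str):
    """Resolve a flat or dotted vendor field into the canonical username."""
    value = record
    for part in USERNAME_FIELD[vendor].split("."):
        value = value[part]
    return value
```

Every consumer that skips this mapping step ends up re-implementing it ad hoc, which is exactly the inconsistency a shared standard removes.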
Even if the same property name is used, sometimes the range of values or classifications might vary.
Example: How different vendors represent “Severity” with different value ranges
Vendor | Raw Schema Representation | Possible Values
Vendor A (Firewall) | severity | low, medium, high
Vendor B (SIEM) | severity | 1 (critical), 2 (high), 3 (medium), 4 (low)
Vendor C (Endpoint) | severity | info, warning, critical
Vendor D (Cloud) | severity | 0 (emergency) through 7 (debug)
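One way to reconcile these scales is to map each vendor's values onto a single ordinal scale. The target scale (0 = informational through 4 = critical) and the specific mappings below follow the table but are otherwise a judgment call, not an official standard.

```python
# Per-vendor severity translation onto one ordinal scale: 0 (info) .. 4 (critical).
SEVERITY_MAPS = {
    "vendor_a": {"low": 1, "medium": 2, "high": 3},
    "vendor_b": {1: 4, 2: 3, 3: 2, 4: 1},  # vendor counts down from critical
    "vendor_c": {"info": 0, "warning": 2, "critical": 4},
    # syslog-style 0 (emergency) .. 7 (debug), inverted onto our scale
    "vendor_d": {0: 4, 1: 4, 2: 4, 3: 3, 4: 2, 5: 1, 6: 0, 7: 0},
}

def normalize_severity(vendor: str, value):
    """Translate a vendor-specific severity value onto the shared scale."""
    return SEVERITY_MAPS[vendor][value]
```

After this step, a single "severity >= 3" filter means the same thing no matter which product emitted the event.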
In a non-standardized environment, these variations require custom mappings and transformations before consistent analysis can take place. That’s why data standards can be helpful to govern how data is ingested, stored, and used, maintaining consistency and quality so that it can be used across different systems, applications, and teams.
How can a standard help?
In the context of data modeling, a "standard" is a widely accepted set of rules or structures designed to ensure consistency across systems. The primary purpose of a standard is to achieve normalization: ensuring that data from disparate sources can be consistently analyzed within a unified platform like a security data lake or a security information and event management (SIEM) solution. From a cybersecurity standpoint, this becomes evident in at least a few common scenarios:
Analytics: A standardized schema enables the creation of consistent rules, models, and dashboards, independent of the data source or vendor. For example, a rule to detect failed login attempts can be applied uniformly, regardless of whether the data originates from a firewall, endpoint security tool, or cloud application.
Threat Hunting - Noise Reduction: With normalized fields, filtering out irrelevant data becomes more efficient. For instance, if every log uses a common field for user identity (like username), filtering across multiple log sources becomes much simpler.
Threat Hunting - Understanding the Data: Having a single schema instead of learning multiple vendor-specific schemas reduces cognitive load for analysts, allowing them to focus on analysis rather than data translation.
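To make the analytics scenario concrete: once every source is normalized, one failed-login rule covers all of them. A toy sketch with illustrative field names (not a specific schema's exact attributes):

```python
# A single detection over normalized events, regardless of which vendor
# produced them. Field names are illustrative.
def failed_login_alert(events, threshold=3):
    """Return users whose normalized authentication failures meet the threshold."""
    counts = {}
    for event in events:
        if event["class_name"] == "Authentication" and event["status"] == "Failure":
            user = event["username"]
            counts[user] = counts.get(user, 0) + 1
    return [user for user, n in counts.items() if n >= threshold]

# Four failures and one success for the same user, possibly from a firewall,
# an endpoint tool, and a cloud app -- the rule doesn't care.
events = (
    [{"class_name": "Authentication", "status": "Failure", "username": "jdoe"}] * 4
    + [{"class_name": "Authentication", "status": "Success", "username": "jdoe"}]
)
alerts = failed_login_alert(events)
```

Without normalization, this one function would instead be several vendor-specific rules that must be written, tested, and maintained separately.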
For log data, several standards exist. Some popular ones are: Common Event Format (CEF), Log Event Extended Format (LEEF), Splunk's Common Information Model (CIM), and Elastic’s Common Schema (ECS). Each has its strengths and limitations depending on the use case and platform.
Why existing schemas like CEF and LEEF fall short
Common Event Format (CEF) and Log Event Extended Format (LEEF) are widely used schemas, but they are often too simplistic for modern data lake and analytics use cases.
Limited Fields: CEF and LEEF offer a limited set of predefined fields, meaning most log data ends up in custom fields, which defeats the purpose of a standardized schema.
Custom Fields Bloat: In practice, most data fields are defined as custom, leading to inconsistencies and a lack of clarity. This results in different interpretations of the same data types, complicating analytics.
Overloaded Fields: Without sufficient granularity, crucial data gets overloaded into generic fields, making it hard to distinguish between different event types.
Example: Overloading a single field like “message” to store multiple types of information (e.g., event description, error code) creates ambiguity and reduces the effectiveness of automated analysis.
The limits of CIM and ECS: vendor-specific constraints
Splunk CIM and Elastic ECS are sophisticated schemas that better address the needs of modern environments, but they are tightly coupled to their respective ecosystems.
Proprietary Optimizations:
CIM: Although widely used within Splunk, CIM is proprietary and lacks an open-source community for contributions to the schema itself. Its design focuses on Splunk’s use cases, which can be limiting in broader environments.
ECS: While open-source, ECS remains heavily influenced by Elastic’s internal needs. For instance, ECS optimizes data types for Elastic’s indexing and querying, like the distinction between keyword and text fields. Such optimizations can be unnecessary or incompatible with non-Elastic platforms.
Field Ambiguity:
CIM uses fields like src and dest, which lack precision compared to more explicit options like source.ip or destination.port. This can lead to confusion and the need for additional context when performing cross-platform analysis.
Vendor-Centric Design:
CIM: The field definitions and categories are tightly aligned with Splunk’s correlation searches, limiting its relevance outside Splunk environments.
ECS: Data types like geo_point are unique to Elastic’s product features and capabilities, making the schema less suitable when integrating with other tools.
How OCSF addresses these challenges
The OCSF was developed by a consortium of industry leaders, including AWS, Splunk, and IBM, with the goal of creating a truly vendor-neutral and comprehensive schema.
Vendor-Neutral and Tool-Agnostic: OCSF is designed to be applicable across all logs, not just security logs. This flexibility allows it to adapt to a wide variety of data sources while maintaining consistency.
Open-Source with Broad Community Support: OCSF is openly governed and welcomes contributions from across the industry. Unlike ECS and CIM, OCSF’s direction is not controlled by a single vendor, ensuring it remains applicable to diverse environments.
Specificity and Granularity: The schema’s granularity aids in filtering and prevents the overloading of concepts. For example, OCSF uses specific fields like identity.username and network.connection.destination_port, providing clarity while avoiding ambiguous terms like src.
Modularity and Extensibility: OCSF’s modular design allows for easy extensions, making it adaptable without compromising specificity. Organizations can extend the schema to suit their unique use cases while remaining compliant with the core model.
In DataBee’s own implementation, we’ve extended OCSF to include custom fields specific to our environment, without sacrificing compatibility or requiring extensive custom mappings. For example, we added the assessment object, which can be used to describe data around third-party security assessments or internal audits. This kind of log data doesn’t come from your typical security products but is necessary for the use cases you can achieve with a data lake.
Having shared some data points from my own experience with the industry’s most common schemas, it’s natural to visualize them in a comparison matrix of OCSF and two leading alternatives.
OCSF Schema Comparison Matrix
Aspect | OCSF | Splunk CIM | Elastic ECS
Openness | Open-source, community and multi-vendor-driven | Proprietary, Splunk-driven | Open-source, but Elastic-driven
Community Engagement | Broad, inclusive community, vendor-neutral | Limited to Splunk community and apps | Strong Elastic community, centralized control
Flexibility of Contribution | Contributable, modular, actively seeks community input | No direct community contributions | Contributable, but Elastic makes final decisions
Adoption Rate | Early but growing rapidly across multiple vendors | High within Splunk ecosystem | High within Elastic ecosystem
Vendor Ecosystem | Broad support, designed for multi-vendor use | Splunk-centric, limited outside of Splunk | Elastic-centric, some third-party integrations
Granularity and Adaptability | Structured and specific but modular; balances adaptability with detailed extensibility | Moderately structured with more generic fields; offers broad compatibility but less precision | Highly granular and specific with tightly defined fields; limited flexibility outside Elastic environments
Best For | Flexible, vendor-neutral environments needing both detail and adaptability | Broad compatibility in Splunk-centric environments | Consistent, detailed analysis within Elastic environments
The impact of OCSF at DataBee
In working with OCSF, I have been particularly impressed with the combination of how detailed the schema is and how extensible it is. We can leverage its modular nature to apply it to a variety of use cases to fit our customers' needs, while re-using most of the schema and its concepts. OCSF’s ability to standardize and enrich data from multiple sources has streamlined our analytics, making it easier to track threats across different platforms and ultimately helping us deliver more value to our customers. This level of consistency and collaboration is something that no other schema has provided, and it’s why OCSF has been so impactful in my role as a data analyst.
If we have ideas for the schema that might be usable for others, the OCSF community is receptive to contributions. The community is already brimming with top talent in the SIEM and security data field and is there to help guide us in our mapping and schema extension decisions. The community-driven approach means that I’m not working in isolation; I have access to a wealth of knowledge and support, and I can contribute back to a growing standard that is designed to evolve with the industry.
Within DataBee as a product, OCSF enables us to build powerful correlation logic which we use to enrich the data we collect. For example, we know we can track the activities of a device regardless of whether the event came from a firewall or from an endpoint agent, because the hostname will always be device.name.
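As a simplified illustration of that consistency (the events here are stand-ins, not full OCSF records): because the hostname sits at the same place for every source, one filter tracks a device across all of them.

```python
# Events from different sources, all carrying the hostname at device.name.
events = [
    {"source": "firewall", "device": {"name": "lt-042"}, "action": "blocked"},
    {"source": "edr",      "device": {"name": "lt-042"}, "action": "quarantine"},
    {"source": "edr",      "device": {"name": "lt-099"}, "action": "scan"},
]

def device_activity(events, hostname):
    """Collect a device's activity across sources via the shared field."""
    return [e for e in events if e["device"]["name"] == hostname]

trail = device_activity(events, "lt-042")
```

One line of filtering replaces a per-vendor lookup table, which is the practical payoff of agreeing on a schema up front.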
Whenever our customers have any questions about how our schema works, the self-documenting schema is always available at ocsf.databee.buzz (which includes our own extensions). This helps to enable as many users as possible to gain security and compliance insights.
Conclusion
As organizations continue to rely on increasingly diverse and complex data sources, the need for a standardized schema becomes paramount. While CEF, LEEF, CIM, and ECS have served important roles, their limitations—whether in scope, flexibility, or vendor-neutrality—make them less ideal for a comprehensive data lake strategy.
For me as a Principal Cybersecurity Data Analyst, OCSF has been transformative and represents the next evolution in standardization. With its vendor-agnostic, community-driven approach, OCSF offers the precision needed for detailed analysis while remaining flexible enough to accommodate the diverse and ever-evolving landscape of log data.
Read More
Political Advertising: the race to every screen moves fast
If there’s one thing that’s self-evident in this U.S. presidential election year, it’s this: political advertising moves fast.
“For agencies representing the vast number of campaigns across state and national races, it’s not just about the sheer volume, but also about the need to respond quickly,” explains Justin Morgan, Sr. Director of Product Management for the CTS AdFusion team. “Just think about how the news cycle – in just the last few days alone – has necessitated rapid changes in messaging that need to be handled accurately across every outreach effort.”
According to GroupM, political advertising for 2024 is expected to surpass $16 billion – over four percent of the entire ad revenue for the year. As consumers we’re certainly aware that campaign messaging is constant, and constantly changing, but what does that mean behind the scenes from a technology standpoint?
“Political advertisers have to maintain a laser-focus on messaging and results,” explains Morgan. “This makes accuracy and agility even more crucial, so that agencies can make changes fast and trust that every spot is not just placed accurately, but also represents their candidate(s) in the best possible quality, no matter where or how that ad gets seen.”
For the AdFusion team, it’s important to make ad management simple and streamlined so our partners can focus on the most important goal: winning the race. That’s why we focus on three areas to help our clients get better, faster, and smarter:
Better: Precisely target the right voters with the right message by leveraging program-level automated traffic and delivery
Faster: Quickly shift campaign strategy as the race evolves by delivering revised ads and traffic in minutes
Smarter: Drive compliance with one-hour invoicing and in-platform payments.
Learn more about how AdFusion improves the technology foundation for driving political campaigns here.
Read More
The challenges of a converged security program
It’s commonplace these days to assume we can learn everything about someone from their digital activity – after all, people share so much on social media and over digital chats. However, advanced threat actors are more careful with their digital footprints. To catch them, combining insights from their actual day-to-day activities in the world with their digital communications and activity can provide a better sense of whether there’s an immediate and significant threat that needs to be addressed.
Let’s play out an insider threat scenario. While this scenario is set in the financial services sector, a security analyst can readily imagine its applicability to other sectors. An investment banking analyst, Sarah, badges into a satellite office on a Saturday at 7 pm. Next, she logs onto a workstation and prints 200 pages of materials. These activities alone could look innocuous. But taken together, could there be something more going on?
As it turns out, Sarah tendered her resignation the prior Friday with 14 days’ notice. She leaves that Saturday night with two paper bags of confidential company printouts in tow, taking them to her next employer, a competing investment bank, to give her an edge.
A complete picture of her activity can be gleaned with logs from a few data sources:
HR data showing her status as pending termination, from a system like Workday or SAP
Badge reader logs
Sign-in logs
Print logs
Video camera logs, from the entry and exit way of the building
While seemingly simple, piecing all this information together and taking steps to stop the employee’s actions or even recover the stolen materials is non-trivial. Today, companies are asking themselves, what type of technology is required to know that her behavior was immediately suspicious? And what type of security program can establish the objectives and parameters for quickly catching this type of insider threat?
What is a converged security program?
In the above scenario, sign-in logs and print logs alone aren’t necessarily suspicious. The suspicion level materially increases when you consider her employment status combined with her choice of day and time to badge into the office. Converged security dataset analysis brings together physical security data points, such as logs from cameras or badge readers in the above example, and digital insights from activity on computers, computer systems, or the internet. If these insights are normalized into the same dataset, with clear consistency across user and device activity, physical security or cybersecurity analysts can analyze them for faster threat detection. Furthermore, such collaboration can lead physical and cybersecurity practitioners to establish a converged set of policies and procedures for incident response and recovery.
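To make the idea concrete, here is a toy risk score over converged signals for the scenario above. All records, weights, and thresholds are illustrative assumptions, not a production detection.

```python
# Converged dataset: HR context plus physical and digital events for one user.
hr = {"sarah": {"status": "pending_termination"}}
events = [
    {"user": "sarah", "type": "badge_in", "hour": 19, "weekend": True},
    {"user": "sarah", "type": "sign_in",  "hour": 19, "weekend": True},
    {"user": "sarah", "type": "print", "pages": 200},
]

def converged_risk(user, events, hr):
    """Score risk from combined HR, physical, and digital signals.

    Each signal alone looks innocuous; the score only climbs when
    they co-occur for the same resolved user."""
    mine = [e for e in events if e["user"] == user]
    score = 0
    if hr.get(user, {}).get("status") == "pending_termination":
        score += 2  # departing employee: strongest single signal
    if any(e["type"] == "badge_in" and e.get("weekend") for e in mine):
        score += 1  # off-hours physical access
    if any(e["type"] == "print" and e.get("pages", 0) >= 100 for e in mine):
        score += 1  # bulk printing
    return score

risk = converged_risk("sarah", events, hr)  # all three signals fire
```

A CISO's tooling alone sees only the print and sign-in rows, and a CSO's only the badge row; neither reaches an alertable score without the shared dataset.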
In his book, Roland Cloutier describes three important attributes of a converged security function:
One Table: everyone sitting together to discuss issues and create a sense of aligned missions, policies, and procedures for issue detection and response.
Interconnected Issue Problem Solving: identifying the problem as a shared mission and connecting resources in a way that resolves problems faster.
Link Analysis: bringing together data points about an issue or problem and correlating them to gain insights from data analytics.
Challenges of bringing together physical and information security
In today’s environment, the challenges of intertwining physical and digital security insights are substantial. Large international enterprises have campuses scattered across the world and a combination of in-office and remote workers. They may face challenges when employee data is fragmented across different physical and digital systems. Remote workers often don’t have physical security log information associated with their daily activity because they work from the confines of their homes, out of reach of corporate physical monitoring.
The modern workforce model further complicates managing physical and digital security as organizations contend with the:
Rise of remote work
Demise of the corporate network
Usage of personal mobile devices at work
Constant travel of business executives
A worker can no longer be tracked by movement in the building and on the corporate network. Instead, the person’s physical location and network connections change throughout the day. Beyond the technical challenges, organizations face hierarchical structure and human element challenges.
Many companies separate physical security from cybersecurity. One reason for this is that a different skillset may be required to stop the threats. Yet there is value in the two security leaders developing an operating model for collaboration that centers on a global data strategy with consistent and complete insights from both physical security and cybersecurity tools.
Consider a model for doing so that revolves around three principles:
Common data aggregation and analysis across physical and cybersecurity toolsets
Resource alignment for problem solving and response across the physical security organization and the cybersecurity group
A common set of metrics for accountability across the converged security discipline
Diverse, disconnected tools
Cybersecurity already faces this problem; converged security faces it on a wider scale. Each executive purchases tools for monitoring within their purview, and those tools produce data.
However, the teams either fail to gain insights from it, or the insights they do achieve are limited to the problem each technology solves.
Access is a good example:
Identity and Access Management (IAM): sets controls that limit and manage how people interact with digital resources.
Employee badges: set control for what facilities and parts of facilities people can physically access.
Returning to the insider threat that Sarah Smith poses: the CISO’s information security organization has visibility into sign-in and print logs but would have to collaborate with the CSO’s physical security group for badge logs. This process takes time and, depending on organizational politics, potentially requires convincing.
The siloed technologies could potentially create a security gap when the data remains uncorrelated:
The sign-in and print logs alone may not sufficiently draw attention to Sarah’s activities in the CISO’s organization.
Badge-in logs in the CSO’s organization may not draw an alert.
HR data, such as employment/termination status, may not be correlated in either the CISO’s or CSO’s available analytical datasets.
Without weaving the various data sources together into one story, Sarah’s behavior has a high risk of going completely undetected – by anyone.
In a converged program, IAM access and badge access would be correlated to improve visibility. In a converged program with high security data maturity, the datasets would provide a more complete picture with insights that correlate HR termination status, typical employee location, and more business context.
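The correlation a converged program performs can be sketched in a few lines. The record layouts, usernames, and field names below are hypothetical, simplified stand-ins for what IAM, badge, and HR systems would actually emit; the point is only to show how joining the three sources surfaces activity that no single source flags on its own:

```python
from datetime import datetime

# Hypothetical, simplified records; real logs come from IAM, badge,
# and HR systems, each with its own vendor-specific schema.
hr_status = {"ssmith": {"status": "terminated", "effective": datetime(2024, 3, 1)}}

badge_events = [
    {"user": "ssmith", "time": datetime(2024, 3, 9, 10, 12), "door": "HQ-East"},
]
signin_events = [
    {"user": "ssmith", "time": datetime(2024, 3, 9, 10, 20), "host": "WS-4471"},
]

def flag_insider_risk(hr, badges, signins):
    """Flag any physical or digital access by a user HR marks as terminated."""
    alerts = []
    for event in badges + signins:
        record = hr.get(event["user"])
        if record and record["status"] == "terminated" and event["time"] >= record["effective"]:
            alerts.append(event)
    return alerts

print(flag_insider_risk(hr_status, badge_events, signin_events))
```

Each event taken alone looks routine; only the join against HR termination status turns both the badge swipe and the workstation sign-in into alerts.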
Resource constraints
The challenge of resource alignment often begins by analyzing constraints. Both physical and digital security costs money. Many companies view these functions as separate budgets, requiring separate sets of technologies, leadership, and resources.
Converged security contemplates synergies where possible overlaps can potentially reduce costs. For example:
Human Resources data: identifying all workforce members who should have physical and digital access.
IT system access: determining user access where HR is the source underlying Active Directory or IGA birthright provisioning and automatic access termination.
Building access: badge systems provisioning and terminating physical access according to HR status
The HR system, sign on system, and badge-in system each serve a separate recordation purpose, which can then provide monitoring functionality. However, by keeping insights from daily system usage separate, the data storage and analysis can grow redundant. As Cloutier notes, “siloed operations tend to drive confusion, frustration, and duplicative work streams that waste valuable resources and increase the load on any given functional area.” (24)
Instead, imagine if these diverse recordation systems output data to a single location that parsed, correlated, and enriched it to create user profiles and timelines that cross-functional teams with an interdisciplinary understanding of threat vectors could analyze. In such an organization, this solution could:
Reduce redundant storage.
Eliminate manual effort in correlating data sources from different systems.
Save analysts time by having all the data already in one spot (no need to gather it in the wake of an incident).
Allow for more rapid detection and response.
Metrics and accountability
Keeping physical security separate from cybersecurity can create the risk of disaggregated metrics and a lack of accountability. People must “compare notes” before making decisions, and the data may have discrepancies because everyone uses different technologies intended to measure different outcomes.
These data, tool, and operations silos can create an intricate, interconnected set of overlapping “blurred lines” across:
Personal/Professional technologies
Physical/Digital security functions
In the wake of a threat, the last thing people want to do is increase the time making decisions or argue over accountability, which can quickly spiral into conversations of blame.
Imagine, instead, a world in which the enterprise can make security a trackable metric. Being able to track an end goal – such as security, whether physical or digital – makes it easier to
Hold people accountable.
Make clear decisions.
Take appropriate action.
A trackable metric is only as good as the data that can back it up. Converged security centers around the concept of a global security data strategy that provides an open architecture for analyses that answer different questions while using a commonly accessed, unified dataset that diverse security professionals accept as complete, valid, and the closest thing they can get to the “source of truth”.
Weaving together data for converged security with DataBee®
DataBee by Comcast Technology Solutions fuses together physical and digital security data into a data fabric architecture, then enriches it with additional business information, including:
Business policy context
Organizational hierarchy
Employment status
Authentication and endpoint activity logs
Physical badge and entrance logs
By weaving this data together, organizations achieve insights using a transformed dataset mapped to the DataBee-extended Open Cybersecurity Schema Framework (OCSF). DataBee EntityViews™ uses patent-pending entity resolution software that automatically unifies disparate entity pieces across multiple sources of information. This enables many analytical use cases at speed and low cost. One poignant use case is insider threat monitoring with a comprehensive timeline of user and device activity, both inside a building and when connected to networks.
The DataBee security data fabric architecture solves the Sarah problem by weaving together, on one timeline:
Her HID badge record from that Saturday’s office visit
The past several months of HR records from Workday showing her termination status
Her Microsoft user sign-in to a workstation in the office
The HP print logs associated with her network ID and a timestamp
DataBee empowers all security data users within the organization, including compliance, security, operations, and management. By creating a reliable, accurate dataset, people have fast, data-driven insights to answer questions quickly.
Read More
Vulnerabilities and misconfigurations: the CMDB's invasive species
“Knowledge is power.” Whether you attribute this to Sir Francis Bacon or Thomas Jefferson, you’ve probably heard it before. In the context of IT and security, knowing your assets, who owns them, and how they’re connected within your environment are fundamental first steps in understanding your environment and the battle against adversaries. You can’t place security controls around an asset if you don’t know it exists. You can’t effectively remediate vulnerabilities to an asset without insight into who owns it or how it affects your business.
Maintaining an up-to-date configuration management database (CMDB) is critical to these processes. However, manually maintaining the CMDB is unrealistic and error-prone for the thousands of assets across the modern enterprise; cloud technologies, complex networks, and devices distributed across in-office and remote workforce users complicate the process. To add excitement to these challenges, the asset landscape is ever-changing for entities like cloud assets, containers, and virtual machines, which can be ephemeral and become lost in the noise generated by the organization’s hundreds of security tools. Additionally, most automation fails to link business users to assets, and many asset tools struggle to prioritize assets correlated to security events, meaning that companies can easily lose visibility and the ability to prioritize asset risk.
Most asset management, IT service management (ITSM), and CMDBs focus on collecting data from the organization’s IT infrastructure. They ingest terabytes of data daily, yet this data remains siloed, preventing operations, security, and compliance teams from collaborating effectively.
With a security data fabric, organizations can break down data silos to create trustworthy, more accurate analytics that provide them with contextual and connected security insights.
The ever-expanding CMDB problem
The enterprise IT environment is a complex ecosystem consisting of on-premises and cloud-based technologies. Vulnerabilities and misconfigurations are an invasive species of the technology world.
In nature, a healthy ecosystem requires a delicate balance of plants and organisms who all support one another. An invasive species that disrupts this balance can destroy crops, contaminate food and water, spread disease, or hunt native species. Without controlling the spread of invasive species, the natural ecosystem is at risk of extinction.
Similarly, the rapid adoption of cloud technologies and remote work models expands the organization’s attack surface by introducing difficult-to-manage vulnerabilities and misconfigurations. Traditional CMDBs and their associated tools often fail to provide the necessary insights for mitigating risk, remediating issues, and maintaining compliance with internal controls.
In the average IT environment, the enterprise may combine any of the following tools:
IT Asset Management: identify technology assets, including physical devices and ephemeral assets like virtual machines, containers, or cell phones
ITSM: manage and track IT service delivery activities, like deployments, builds, and updates
Endpoint Management: manage and track patches, operating system (OS) updates, and third-party installed software
Vulnerability scanner: scan networks to identify security risks embedded in software, firmware, and hardware
CMDB: store information about devices and software, including manufacturer, version, and current settings and configurations
Software-as-a-Service (SaaS) configuration management: monitor and document current SaaS settings and configurations
Meanwhile, various people throughout the organization need access to the information that these tools provide, including the following teams:
IT operations
Vulnerability management
Security
Compliance
As the IT environment expands and the organization collects more security data, the delicate balance between existing tools and people who need data becomes disrupted by newly identified vulnerabilities and cloud configuration drift.
Automatically updating the CMDB with enriched data
In nature, limiting an invasive species’ spread typically means implementing protective strategies for the environment that contain and control the non-native plant or organism. Monitoring, rapid response, public education, and detection and control measures are all ways that environmentalists work to protect the ecosystem.
In the IT ecosystem, organizations use similar activities to mitigate risks and threats arising from vulnerabilities and misconfigurations. However, the time-consuming manual tasks are error-prone and not cost-efficient.
Connect data and technologies
A security data fabric ingests data from security and IT tools, automating and normalizing the inputs so that the organization can gain correlated insights from across a typically disconnected infrastructure. With a vendor-agnostic security data platform connecting data across the environment, organizations can break down silos created by various schemas and improve data integrity.
Improve data quality and reduce storage costs
By applying extract, transform, and load (ETL) pipelines to the data, the security data fabric enables organizations to store and load raw and optimized data. Flattening the data can reduce storage costs since companies can land it in their chosen data repository, like a data lake or warehouse. Further, the data transformation process identifies and can fix issues that lead to inaccurate analytics, like:
Data errors
Anomalies
Inconsistencies
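The normalization step described above can be illustrated with a toy transform. The two vendor record layouts and field names below are invented for illustration; a real pipeline would map many more fields onto a standard schema such as OCSF:

```python
# Two hypothetical vendor log formats normalized into one flat schema,
# loosely in the spirit of OCSF-style field mapping.
raw_events = [
    {"src": "vendor_a", "usr": "jdoe", "ts": "2024-05-01T12:00:00", "act": "login"},
    {"source": "vendor_b",
     "payload": {"user_name": "jdoe", "event": "LOGIN", "when": "2024-05-01T12:05:00"}},
]

def normalize(event):
    """Map vendor-specific fields onto a single flattened schema."""
    if "src" in event:  # vendor A: flat record, lowercase action
        return {"user": event["usr"], "activity": event["act"].lower(), "time": event["ts"]}
    payload = event["payload"]  # vendor B: nested payload, uppercase action
    return {"user": payload["user_name"], "activity": payload["event"].lower(), "time": payload["when"]}

flattened = [normalize(e) for e in raw_events]
```

Once both feeds land in the same flat shape, the analytics layer can query one table instead of reconciling two schemas, which is where the storage and accuracy gains come from.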
Enrich CMDB with business information
Connecting asset data to real-world users and devices enables organizations to assign responsibility for configuration management. Organizations need to correlate their CMDB data with asset owners so that they can assign security issue remediation activities to the right people. By correlating business information, like organizational hierarchy data, with device, vulnerability scan, and ITSM data, organizations can streamline remediation processes and improve metrics.
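As a minimal sketch of that enrichment step, the join below attaches a hypothetical owner record to each scan finding so remediation can be routed; the asset names, CVE IDs, and field names are all invented for illustration:

```python
# Hypothetical asset-owner enrichment: join scan findings with an
# org-hierarchy lookup so each finding routes to a responsible team.
asset_owners = {"WS-4471": {"owner": "jdoe", "manager": "asmith", "team": "Finance IT"}}

scan_findings = [
    {"asset": "WS-4471", "cve": "CVE-2024-0001", "severity": "critical"},
    {"asset": "DB-0099", "cve": "CVE-2024-0002", "severity": "low"},
]

def enrich_findings(findings, owners):
    """Attach owner, manager, and team to each finding; unmatched assets
    are flagged as unassigned so they can be triaged separately."""
    enriched = []
    for finding in findings:
        owner = owners.get(finding["asset"],
                           {"owner": "unassigned", "manager": None, "team": "unknown"})
        enriched.append({**finding, **owner})
    return enriched
```

The unmatched asset (`DB-0099`) surfaces as "unassigned", which is itself a useful signal: it marks gaps in CMDB ownership data that would otherwise stall remediation.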
Gain reliable insights with accurate analytics
Configuration management is a critical part of an organization’s compliance posture. Most security and data protection laws and frameworks incorporate configuration change management and security patch updating. With clean data, organizations can build analytics models to help improve their compliance outcomes. To enhance corporate governance, organizations use their business intelligence tools, like Power BI or Tableau, to create visualizations so that senior leadership teams and directors can make data-driven decisions.
Maintain Your CMDB’s delicate ecosystem with DataBee®
DataBee from Comcast Technology Solutions is a security data fabric that ingests data from traditional sources and feeds, then supplements it with business logic and people information. The security, risk, and compliance platform engages early and often throughout the data pipeline, leveraging metadata to adaptively collect, parse, correlate, and transform security data to align it with the vendor-agnostic DataBee-extended Open Cybersecurity Schema Framework (OCSF).
Using Comcast’s patent-pending entity resolution technology, DataBee suggests potential asset owners by connecting asset data to real-world users or devices so organizations can assign security issue remediation actions to the right people. With a 360-degree view of assets and devices, vulnerability and remediation management teams can identify critical and low-priority entities to help improve mean time to detect (MTTD) and mean time to respond (MTTR) metrics. The User and Device tables supplement the organization’s existing CMDB and other tools, so everyone who needs answers has them right at their fingertips.
Read More
Continuous controls monitoring (CCM): Your secret weapon to navigating DORA
Financial institutions are a critical backbone of local, regional, and global economies. As such, the financial services industry is highly regulated and often faces new compliance mandates and requirements. Threat actors target the industry because it manages and processes valuable customer personally identifiable information (PII) such as account, transaction, and behavioural data.
Maintaining consistent operations is critical, especially in an interconnected, global economy. To standardise processes for achieving operational resilience, the European Parliament passed the Digital Operational Resilience Act (DORA).
What is DORA?
DORA is a regulation passed by the European Parliament in December of 2022. DORA applies to digital operational resilience for the financial sector. DORA entered into force in January of 2023, and it applies as of January 17, 2025.
Two sets of rules, or policy products, provide the regulatory and implementation details of DORA. The first set of rules under DORA was published on January 17, 2024, and consists of four Regulatory Technical Standards (RTS) and one Implementing Technical Standard (ITS). It is worth noting that not all the RTSes contain controls that financial entities need to implement. For example, JC 2023 83, the “Final Report on draft RTS on classification of major incidents and significant cyber threats,” provides criteria for entities to determine if a cybersecurity incident would be classified as a “major” incident according to DORA. The public consultation on the second batch of policy products is completed, and the feedback is being reviewed prior to publishing the final versions of the policies. Based on the feedback received from the public, the finalised documents will be submitted to the European Commission on July 17, 2024.
What is Continuous Controls Monitoring (CCM), and how can it help?
DORA has a wide-ranging set of articles, many of which require the implementation and monitoring of controls. Organisations can use a continuous controls monitoring (CCM) solution, which is an emerging governance, risk and compliance technology, to automate controls monitoring and reduce audit cost and stress. When choosing a CCM solution for DORA, consider a data fabric platform that brings together data from enterprise IT and cybersecurity tools and enriches it with business data to help organisations apply data analytics for measuring and reporting on the effectiveness of internal controls and conformance to laws and regulations. The following are examples of how CCM could be used to support DORA compliance.
Continuous Monitoring:
Article 9 of DORA, Protection and prevention, explains that to adequately protect Information and Communication Technologies (ICT) systems and organise response measures, “financial entities shall continuously monitor and control the security and functioning of ICT systems.” Similarly, Article 16, Simplified ICT risk management framework, requires entities to “continuously monitor the security and functioning of all ICT systems.”
Additionally, Article 6 requires financial entities to “minimise the impact of ICT risk by deploying appropriate strategies, policies, procedures, ICT protocols and tools.” It goes on to require setting clear objectives for information security that include Key Performance Indicators (KPIs), and to implement preventative and detective controls. Reporting on the implementation of multiple controls, combining compliance data with organizational hierarchy, and reporting on KPIs are all tasks that CCM excels at. When choosing a CCM solution for DORA, consider one that supports uninterrupted oversight of multiple controls by automating the ingestion of data, formatting it, and then presenting it to users through the business intelligence solution of their choice.
The Articles of JC 2023 86, the “Final report on draft RTS on ICT Risk Management Framework and on simplified ICT Risk Management Framework,” contain many ICT cybersecurity requirements that are a natural fit to be measured by CCM. Here are some examples of these controls:
Asset management: entities must keep records of a set of attributes for their assets, such as a unique identifier, the owner, business functions or services supported by the ICT asset, whether the asset is or might be exposed to external networks, including the internet, etc.
Cryptographic key management: entities need to keep a register of digital certificates and the devices that store them and must ensure that certificates are renewed prior to their expiration.
Data and system security: entities must select secure configuration baselines for their ICT assets as well as regularly verifying that the baselines are in place.
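The cryptographic key management control above lends itself to a simple automated check. The certificate register below is a hypothetical, minimal stand-in for the register the RTS requires; a CCM pipeline would run a check like this on a schedule and surface the results on a dashboard:

```python
from datetime import date, timedelta

# Hypothetical certificate register; a CCM check flags certificates
# that expire within a renewal window, per the key-management control.
cert_register = [
    {"cn": "api.example.internal", "device": "LB-01",
     "expires": date.today() + timedelta(days=20)},
    {"cn": "mail.example.internal", "device": "MX-02",
     "expires": date.today() + timedelta(days=200)},
]

def expiring_certs(register, window_days=30):
    """Return certificates expiring within the renewal window."""
    cutoff = date.today() + timedelta(days=window_days)
    return [cert for cert in register if cert["expires"] <= cutoff]
```

Running the same logic against asset inventories or configuration baselines gives the other two controls the same continuous, dashboard-ready treatment.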
A CCM solution that is built on a platform that correlates technical and business data supports security, risk, and compliance teams for building accurate, reliable reports to help measure compliance. It provides consistent visibility into control status across multiple teams throughout the organisation. This reduces the need for reporting controls in spreadsheets and in multiple dashboards, helping business leaders make more immediate and data-driven governance decisions about their business.
Executive Oversight:
Financial entities are required to have internal governance to ensure the effective management of ICT risk (Article 5, Governance and organisation). CCM solutions that integrate with business intelligence solutions, like Power BI and Tableau, to build executive dashboards and data visualizations can provide an overview of multiple controls through a single display.
Roles and Responsibilities:
DORA Article 5(2) requires management to “set clear roles and responsibilities for all ICT-related functions and establish appropriate governance arrangements to ensure effective and timely communication, cooperation and coordination among those functions.” A CCM solution that combines organisational hierarchy with control compliance data makes roles and responsibilities explicit, which helps improve accountability across risk, management, and operations teams. That is, a manager using CCM does not have to guess which assets or people belonging to their organisation are compliant with corporate policy or regulations. Instead, they can easily view their compliance status.
CCM dashboards and detail views provide the specifics about any non-compliant assets such as the asset name, and details of the controls for which the asset is non-compliant. Similarly, CCM can communicate details about compliance for a manager’s staff, such as if mandatory training has been completed by its due date, or who has failed phishing simulation tests.
Coordination of multiple teams:
As the FS-ISAC DORA Implementation Guidance notes, “DORA introduces increased complexity and requires close cross-team collaboration. Many DORA requirements cut across teams and functions, such as resilience/business continuity, cybersecurity, risk management, third-party and supply chain management, threat and vulnerability management, incident management and reporting, resilience and security testing, scenario exercising, and regulatory compliance. As a result, analysing compliance and checking for gaps is challenging, particularly in large firms.”
CCM helps with cross-team collaboration by providing a common, accurate, and consistent view of compliance data, which can reduce overall compliance costs. GRC teams are no longer tasked with creating and distributing multiple reports for various teams while trying to keep the reports consistent and timely. Business teams are no longer responsible for pulling their own reports, which avoids the inconsistent or inaccurate reporting that comes from inexperience with the reporting product, reports run with different parameters or on different dates, or other differences and errors. CCM resolves these issues by making the same content, built from consistent source data at the same point in time, available to all users.
5 ways DataBee can help you navigate DORA
The requirements for DORA are organised under these five pillars. How does DataBee help enterprises to comply with each of the five?
1. Information and Communication Technologies (ICT) risk management requirements (which include ICT Asset Management, Vulnerability and patch management, etc.)
DataBee’s Continuous Controls Monitoring (CCM) delivers continuous risk scores and actionable risk mitigation, helping financial entities to prioritize remediation for at-risk resources.
2. ICT-related incident reporting
DORA identifies what qualifies as a “major incident” that must therefore be reported to competent authorities. This is interesting compared to cybersecurity incident reporting requirements from the U.S. Securities and Exchange Commission (SEC), which are based on materiality but do not provide details about what is or is not material. DORA includes criteria to determine if the incident is “major.” Some examples are if more than 10% of all clients or more than 100,000 clients use the affected service, or if greater than 10% of the daily average number of transactions are affected. Additionally, if a major incident does need to be reported, DORA includes specific information that financial entities must provide. These include data fields such as the date and time the incident was detected, the number of clients affected, and the duration of the incident. A security data fabric such as DataBee can help to provide many of the measurable data points needed for the incident report.
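As an illustration of how measurable those thresholds are, the check below applies the two example criteria mentioned above. This is a sketch of only those illustrative thresholds, not the full classification logic in the RTS, and all the figures are invented:

```python
def is_major_incident(clients_affected, total_clients, txns_affected, avg_daily_txns):
    """Apply two of DORA's illustrative 'major incident' thresholds:
    >10% of all clients or >100,000 clients affected, or >10% of the
    daily average number of transactions affected. The actual RTS
    defines additional criteria not modeled here."""
    client_test = (clients_affected > 0.10 * total_clients
                   or clients_affected > 100_000)
    txn_test = txns_affected > 0.10 * avg_daily_txns
    return client_test or txn_test

# 120,000 affected clients out of 2M is only 6%, but it crosses the
# absolute 100,000-client threshold, so the incident is "major".
print(is_major_incident(120_000, 2_000_000, 5_000, 90_000))  # True
```

Because each input is a countable data point, a fabric that already centralizes client and transaction telemetry can compute the classification inputs rather than leaving them to post-incident estimation.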
3. ICT third-party risk
DataBee for CCM provides dashboards to report on the controls used for the management and oversight of third-party service providers. These controls are implemented to manage and mitigate risk due to the use of third parties.
4. Digital operational resilience testing (Examples include, vulnerability assessments, open-source analyses, network security assessments, physical security reviews, source code reviews where feasible, end-to-end testing or penetration testing.)
DORA emphasizes digital operational resilience testing. DataBee supports this by aggregating and simplifying the reporting for control testing and validation. DataBee’s CCM dashboards provide reporting for multiple controls using an interface that is easily understood, and which business managers can use to readily assess their unit’s compliance with controls required by DORA.
5. Information sharing
As with incident reporting, the data fabric implemented by DataBee supports information sharing. DataBee can economically store logs and other contextual data for an extended period. DataBee makes this data searchable, providing the ability to locate and, at the organization’s discretion, exchange cyber threat information and intelligence with other financial entities.
Read More
Channels are changing: Cloud-based origination and FAST
Linear television isn’t dead, but it certainly isn’t the same as it was even five years ago. As content delivery and consumption shift toward digital platforms, content owners and distributors find themselves navigating a rapidly evolving landscape. The rise of streaming services has further intensified competition for viewership, challenging traditional linear TV models to adapt or risk becoming obsolete.
In response, content owners and operators are rethinking their distribution strategies and exploring innovative approaches to engage audiences and adapt to end-user preferences. However, thriving in this new ecosystem demands more than just reactivity — it requires innovation, agility, and a keen understanding of emerging technologies and industry trends. Read on to learn the role that Managed Channel Origination (MCO) plays in the evolution of content delivery or watch experts Gregg Brown, Greg Forget, David Travis, and Jon Cohen dive deeper in the full webinar here.
Global reach in the digital media landscape
MCO, which leverages cloud-based technologies and advanced workflows to streamline and scale content delivery, is at the forefront of the content delivery revolution. With cloud infrastructure, service providers can seamlessly scale up or down based on demand, optimizing resource allocation and adapting to evolving consumer preferences and distribution platforms.
Thanks to the cloud, barriers and complexities that once hindered global distribution are either significantly reduced — or no longer exist at all — allowing businesses to operate at a global scale and deliver content across multiple regions and availability zones. This newfound flexibility enables providers to deploy channels easily in any region, eliminating the need for physical infrastructure and simplifying the distribution process.
Beyond flexibility, cloud solutions can also lower the total cost of ownership, accelerate deployment speed and time to market, and lower security and support costs. When this is considered with the greater levels of flexibility and scalability offered by cloud-based origination, it’s clear that with the right MCO solution, content providers can tap into emerging markets, capitalize on growing viewership trends, and drive revenue growth on a global scale.
Unlocking the value of live content
Live content, such as sporting events and concerts, remains a cornerstone of linear television, engaging audiences with unique viewing experiences and a sense of immediacy that other forms of media just can’t quite replicate. Yet, operators must strike a balance between the expense of acquiring live content, effectively monetizing it through linear channels, and delivering it to global markets. Cloud-based origination makes it easier to achieve this crucial balance and highlights the importance of collaboration and strategic partnerships in scalable, cost-effective delivery.
Within the context of MCO, live content translates to greater opportunities for regionalization and niche content catering to diverse viewer preferences. As the industry continues to evolve, leveraging technological advancements and embracing innovation will be essential for content owners to navigate the complexities of global distribution, advertising, and monetization while delivering compelling content experiences to audiences around the world.
FAST forward: Transforming the perception of ad-supported channels
FAST channels represent another way that linear is undergoing a transformation. These free, ad-supported channels offer content to viewers over the internet and cater to the growing demand for alternative viewing options. Yet content owners must carefully consider the role of FAST channels within their overall offerings.
The current state of the FAST market feels like halftime in a major sporting event, prompting stakeholders to reassess their strategies and make necessary adjustments to adapt to shifting viewer preferences and navigate the complexities of content distribution. By leveraging advanced technologies, cloud-based channel origination can act as a catalyst for the creation, curation, and monetization of FAST channels.
The simplicity of onboarding onto FAST platforms with pre-enabled implementations has made entry into the market more accessible for content owners. This accessibility, coupled with the growing consumer demand for high-quality content, underscores the importance of FAST channels meeting broadcast-quality standards to remain competitive and reliable.
There's a noticeable transition in the FAST channels landscape, where channels previously seen as lower value are now transforming to increase their sophistication and content quality. This shift reflects a growing demand for higher-quality FAST channels, signaling a departure from the perception of FAST channels as merely supporting acts to traditional linear TV.
With many content providers expanding their reach beyond regionalized playouts to cater to international markets, a cloud-based approach can help navigate the opportunities and challenges associated with expanding FAST channels into emerging markets, particularly concerning advertising revenue and economic viability in new territories.
Advertising within cloud-based origination is still evolving. Currently, distributors wield substantial data and control, managing ad insertion, networks, and sales. At the same time, FAST channels represent the next evolution of monetization, particularly with technologies like server-side ad insertion (SSAI) and AI making it easier to leverage data to create more personalized experiences. As the flow of open data increases, and the use of AI and ML expands, it will only serve to foster greater efficiency and innovation and ensure that ads reach the right global users at the right time.
Crafting a customized channel origination strategy
While channel origination offers content owners an innovative and cost-effective way to launch and manage linear channels more efficiently, it can’t be approached as a one-size-fits-all solution. For instance, does the business want to add a new channel or expand distribution? Is the content file-based, or does it include live events? If it's ad-supported, are there graphic requirements, or is live data needed?
To address these variabilities, content owners must create a scalable strategy that is flexible enough to adapt to regional requirements. By leveraging cloud-based solutions, content owners can effectively manage fluctuations in demand and adapt their channel offerings to meet the needs of global audiences.
In a marketplace where entertainment options abound, cloud-based origination helps businesses make the most impact, not just by increasing the number of channels, but also by ensuring that they have the right offerings in the right places. By adopting a holistic approach to channel origination that incorporates internet-based delivery services, content owners can maximize their reach, enhance viewer engagement, and position themselves for success in an ever-changing landscape.
Learn how Comcast Technology Solutions’ Managed Channel Origination can help your business expand your offerings or break into new markets in Europe, the Middle East, Africa, and beyond.
Read More
100% exciting, 100 people strong (and growing)
When you join an organization the size of Comcast, it’s easy for outsiders to see only “Fortune 29 Company!” or “huge enterprise!” But tell that to the DataBee team – we’ve been in start-up land since October 2022. And it’s been an exciting ride!
DataBee® is a security, risk and compliance data fabric platform inspired by a platform created internally by Comcast’s global CISO, Noopur Davis, and her cybersecurity and GRC teams. They got such amazing results from the platform that they built and operated for 5 years—from cost savings to faster threat detection and compliance answers, to name a few—that Comcast executives saw the potential to create a business around this emerging technology space. What could provide a better product market fit than real business outcomes from a large, diversified global enterprise data point?
That was back in 2022, and the beginning of the DataBee business unit, created within Comcast Technology Solutions to bring this security data fabric platform to market. Initially funded with just enough to begin testing the market, it didn’t take long to realize that the opportunity for DataBee was substantial, and Comcast’s CEO provided significant additional financing in May 2023. One year later, I’m proud to say that DataBee has passed the 100-team-member milestone (and growing). I believe it's one of the most exciting cybersecurity start-ups out there. Times ten. (Ok, that’s my enthusiasm spilling over 😉.)
As challenging as fundraising is, staffing a team as large and talent-diverse as DataBee has been no small feat, and we’ve done it relatively quickly, across three continents and five countries, no less. Part of that challenge has been overcoming preconceived notions of who and what Comcast is, especially when you’re recruiting people from the cybersecurity industry. “Wait, what??? Comcast is in the enterprise security business??!!” Yes!
Here's what I’ve learned about hiring on the scale-up journey:
Focus on the mission. Your mission can be the most attractive thing about your business, especially when you’ve zeroed in on addressing an unmet need in the market. In the case of DataBee, our mission has been to connect security data so that it works for everyone. So many talented people on the DataBee team have been compelled by the idea of solving the security data problem – too much data, too dispersed, in too many different formats to provide meaningful insights quickly to anyone who needs them. To play a role in fixing this problem? It’s intoxicating.
Look for people who seek the opportunity for constant learning and creativity. “Curiosity” appears at the top of the list of essential qualities in a successful leader, and I understand why. I think of curiosity as a hunger for constant learning and creativity, as well as the courage to explore uncharted territory, and these have been qualities I’ve been looking for as I staff up the DataBee team. These traits aren’t reserved for leaders only; I see them in practice across the whole team, from individual contributors all the way up to department leaders, and I know it’s having a big impact on our ability to rapidly innovate and build solutions that matter. (High energy doesn’t hurt either!)
Be a kind person. The number one thing I hear from my new hires is that they are taken aback by what a nice team I have. This makes me very happy and a little sad all at the same time: happy because I love knowing that the “be a kind person” rule is being put into practice across the DataBee team, but sad because it means that some of our new hires have not experienced kindness in previous jobs. Kindness and competency are not mutually exclusive things, and an environment of benevolence can foster creativity and success.
The funding we’ve received, and the strength, size, and rapid growth of the DataBee team is a testament to the fact that we’ve identified and are working to solve a very real problem being faced by many organizations – data chaos. It’s a problem that exists especially in security, but the rabid interest in artificial intelligence (AI) reminds us that we need to look beyond security data to all data, to ensure that good, clean, quality data is feeding AI systems. For AI to really work its magic, it needs quality data training so that it can deliver amazing experiences.
DataBee is 100 strong and growing, and our security data fabric platform, which is already helping customers manage costs and drive efficiencies, keeps evolving so it can help organizations continue the “shift left” to the very origins of their data. The goal is that quality data woven together, enriched, and delivered by DataBee will ultimately fuel AI systems and help business and technology leaders across the spectrum glean the insights they need for security, compliance, and long-term viability.
Read More
DataBee's guide to sweetening your RSAC experience
Noopur Davis, Comcast’s Chief Information Security Officer recently talked about bringing digital transformation to cybersecurity, where she stated, “The questions we dare to ask ourselves become more audacious each day.” In cybersecurity, this audacity is the spark that ignites innovation. By embracing bold questions answered by connected security data, we can unlock new ideas that transform how we defend our digital world.
This is particularly fitting, as the theme for this year's RSA Conference is “The Art of Possible.” With data so abundant, immediate and continuous insights are essential for making data-directed business decisions, staying ahead of today’s threats, and foreseeing tomorrow’s challenges.
Organizations, however, still struggle to pull together and integrate security data into a common platform. This year, Techstrong Research surveyed cybersecurity professionals for a Pulsemeter report that offers a glimpse into security data challenges and how security data fabrics herald a new era in security data management. According to the survey, 44% of respondents have integrated only 0-50% of their security data. This can result in:
Lack of visibility into security and compliance programs
Data explosions and silos
Challenges with evolving regulations and mandates
Difficulties performing analytics at scale
Exorbitant data storage and computing costs
DataBee® from Comcast Technology Solutions can help overcome these challenges and assist in regaining control over your security data. The DataBee Hive Platform has brought technology to the market that can help with automating data ingestion, validating data quality, and lowering processing costs. This can lead to:
Improved collaboration by working on the same data set
More immediate and connected security data insights
More consistent and accurate compliance dashboards
Better data to power your AI initiatives and capabilities
DataBee is excited to participate in RSAC 2024 and help you achieve these outcomes for your organization.
Where to meet DataBee in Moscone Center
We're thrilled to showcase the DataBee Hive: a security, risk, and compliance data fabric platform designed to deliver connected security data that works for everyone. Buzz by Moscone North Hall and visit our booth #5278. We’ll show you how to:
Cost-optimize your SIEM
Reduce security audit stress
Make your CMDB more complete
Get a 360-degree activity view of any asset or user
And more!
Is the expo floor too busy or noisy? You can meet us one-on-one at Marriott Marquis by reserving a more personal conversation with the team.
Make your RSA Conference attendance a reality
Here's the deal: we want you there! DataBee has two fantastic options to help you attend, claim your ticket here at the RSA Conference website:
🎟️ Free RSA Conference Expo Pass: 52ECMCDTABEXP
🎟️ Discounted Full RSA Conference Pass - Save $150 with code: 52FCDCMCDTABE
Snowflake, DataBee, and Comcast
But wait, there's more! Join us for an evening of conversations, networking, and relaxation at our reception on May 8th, co-hosted with Comcast Ventures, Comcast Business, and Snowflake.
Date: Wednesday, May 8th, 2024
Time: 5:00 pm - 7:00 pm
After you register, keep an eye out for your confirmation email, which will include all the insider details on where to find us. This is your chance to unwind, network with fellow attendees, and forge valuable connections in a more relaxed setting. Register here to secure your spot and receive the location details.
Let's make your RSAC experience the best yet!
The DataBee team is eager to meet you and explore ways we can contribute to your organization's security data maturity journey. We invite you to visit us at booth #5278, network with us at the private reception, grab a photo with our mascot, and snag some bee-themed giveaways! We look forward to seeing you there!
Read More
All SIEMs Go: stitching together related alerts from multiple SIEMs
Today’s security information and event management (SIEM) platforms do more than log collection and management. They are a critical tool used by security analysts and teams to run advanced security analytics and deliver unified threat detection and search capabilities.
A SIEM’s original purpose was to unify security monitoring in a single location. Security operation centers (SOCs) could then correlate event information to detect and investigate incidents faster. SIEMs often require specialized skills to fine-tune logs and events for correlation and analysis, typically written in vendor-specific, proprietary query languages and schemas.
For some enterprises and agencies, it is not uncommon to see multiple SIEM deployments that help meet unique aspects of their cybersecurity needs and organizational requirements. Organizations often need to:
federate alerts
connect related alerts
optimize their SIEM deployments
Managing multiple SIEMs can be a challenge even for the most well-funded and skilled security organizations.
The business case for multiple SIEMs
For a long time, SIEMs worked. However, cloud adoption and high-volume data sources like endpoint detection and response (EDR) tools threw traditional, on-premises SIEMs a curveball, especially when considering the cost of ingestion-based pricing and on-premises SIEM storage.
While having one SIEM to rule them all (your security logs) is a nice-to-have, organizations often find themselves managing a multi-SIEM environment for various reasons, including:
Augmenting an on-premises SIEM with Software-as-a-Service (SaaS) solutions to manage high-volume logs.
Sharing a network infrastructure with a parent company managing its SIEM while needing visibility into subsidiaries managing their own.
Acquiring companies through mergers and acquisitions (M&A) that have their own SIEMs.
Storing log data’s personally identifiable information (PII) in a particular geographic region to comply with data sovereignty requirements.
Multiple SIEM aggregation with DataBee
Complexity can undermine security operations by causing missed alerts or security analyst alert fatigue. DataBee® from Comcast Technology Solutions ingests various data sources, including directly from SaaS and on-premises SIEMs, stitching together related event context or alerts to help streamline the identification and correlation process. Alerts are enriched with additional logs and data sources, including business context and asset and user details. With a consistent and actionable timeline across SIEMs, organizations can optimize their investments and mature their security programs.
Normalization of data
An organization with SaaS and on-premises SIEM deployments may want to federate alerts and access to the data across the multiple SIEMs, but aggregating notable events during investigations can be difficult.
DataBee can receive data from multiple sources including SaaS and on-premises SIEMs. DataBee normalizes the data and translates the original schema into an extended Open Cybersecurity Schema Framework (OCSF) format before sending it to the company’s chosen data repository, like a security data lake. By standardizing the various schemas across multiple SIEMs, organizations can better manage their detection engineering resources by writing rules once to cover multiple environments.
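To make the idea of schema standardization concrete, here is a minimal sketch of mapping two hypothetical SIEM export formats into one flat, OCSF-style record. The field names and source labels are illustrative assumptions, not DataBee's actual mappings or the official OCSF schema:

```python
# Hypothetical field mappings from two SIEM export formats to a
# simplified, OCSF-style record. Names are illustrative only.
FIELD_MAPS = {
    "siem_a": {"user": "user.name", "src": "src_endpoint.ip", "ts": "time"},
    "siem_b": {"account_name": "user.name", "source_ip": "src_endpoint.ip", "event_time": "time"},
}

def normalize(event: dict, source: str) -> dict:
    """Translate a vendor event into one flat, OCSF-like record."""
    mapping = FIELD_MAPS[source]
    out = {"metadata.product": source}
    for vendor_key, ocsf_key in mapping.items():
        if vendor_key in event:
            out[ocsf_key] = event[vendor_key]
    return out

a = normalize({"user": "jdoe", "src": "10.0.0.5", "ts": 1700000000}, "siem_a")
b = normalize({"account_name": "jdoe", "source_ip": "10.0.0.5", "event_time": 1700000000}, "siem_b")
assert a["user.name"] == b["user.name"]  # one detection rule now covers both feeds
```

Once both feeds land in the same shape, a detection rule written against `user.name` or `src_endpoint.ip` applies to either environment without rework.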
Enhanced visibility and accountability
With proprietary and patent-pending entity resolution technology, DataBee EntityViews ties together the various identifiers for entities, like users and devices, then uses a single identifier for enrichment into the data lake. With entity resolution, organizations can automatically correlate activity across numerous sources and use it both in entity timelines and in UEBA models.
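DataBee's entity-resolution technology is proprietary, but the general idea of linking identifiers can be sketched with a simple union-find structure. The identifiers and log sources below are hypothetical:

```python
# A minimal union-find sketch of entity resolution: tie together the
# different identifiers (hostname, IP, asset tag) observed for the same
# device, then resolve every event to one canonical entity.
class EntityResolver:
    def __init__(self):
        self.parent = {}

    def _find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path compression
            x = self.parent[x]
        return x

    def link(self, a, b):
        """Record that identifiers a and b belong to the same entity."""
        self.parent[self._find(a)] = self._find(b)

    def canonical(self, x):
        return self._find(x)

r = EntityResolver()
r.link("host:laptop-42", "ip:10.0.0.5")   # e.g., from a DHCP log
r.link("ip:10.0.0.5", "asset:A-1001")     # e.g., from a CMDB record
assert r.canonical("host:laptop-42") == r.canonical("asset:A-1001")
```

Any event carrying any one of the linked identifiers can then be stamped with the same canonical ID, which is what makes a single cross-source timeline possible.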
Cost optimization for high-volume data sources
The adoption of cloud technologies and endpoint detection and response (EDR) have considerably impacted SIEM systems, creating a notable challenge due to the sheer volume of data generated. This surge in data stems from EDR's comprehensive coverage of sub-techniques, as outlined in the MITRE ATT&CK framework. While EDR solutions offer heightened visibility into endpoint activities, this influx of data overwhelms traditional SIEM architectures. This leads to performance degradation and the inability to effectively process and analyze events. However, leveraging the cost-effective storage solutions offered by data lakes presents a viable solution to this conundrum.
By utilizing data lakes, organizations can store vast amounts of EDR data at scale. This approach not only alleviates the strain on SIEM systems but also facilitates deeper analysis and correlation of security events. This empowers organizations to extract actionable insights and bolster their cyber defense strategies. Thus, integrating EDR with data lakes emerges as a promising paradigm for managing the deluge of security data while maximizing its utility in threat detection and response.
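The tiered-storage pattern described above can be sketched as a simple router: every event gets a full-fidelity copy in the data lake, and only what analysts must triage reaches the SIEM. The event categories and severity threshold are illustrative assumptions, not a prescribed policy:

```python
# Sketch of tiered routing for high-volume telemetry: send everything to
# cheap data-lake storage, forward only alert-worthy events to the SIEM.
HIGH_VOLUME = {"process_activity", "dns_query", "file_activity"}

def route(event: dict) -> list[str]:
    destinations = ["data_lake"]            # full-fidelity copy, always
    if event.get("severity", 0) >= 7 or event["category"] not in HIGH_VOLUME:
        destinations.append("siem")         # only what analysts must triage
    return destinations

assert route({"category": "dns_query", "severity": 2}) == ["data_lake"]
assert route({"category": "dns_query", "severity": 9}) == ["data_lake", "siem"]
```

Because the lake retains everything, a low-severity event that later becomes relevant to an investigation is still searchable, even though it never touched the SIEM's ingestion meter.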
Real-Time detection streams
DataBee uses vendor-agnostic Sigma rules that allow organizations to get alerts for data, including forwarded data like DNS records. If the SOC team wants to receive alerts without having to store the data in the SIEM, the detections can be output into the data lake or any SIEM or security orchestration, automation, and response (SOAR) solution, including the original SIEM forwarding the data. By building correlated timelines across user activity between multiple SIEMs, organizations can gain flexibility across SIEM vendors so they can swap between them or trade them out, choosing the best technologies for their use cases rather than the ones that integrate better with current deployments.
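To illustrate why a vendor-neutral rule can run over data from any SIEM, here is a toy evaluator for a simplified, Sigma-style detection. Real Sigma rules are YAML with richer semantics (logsource, conditions, field modifiers) and are typically compiled to backend queries; the rule, fields, and domain below are invented for the example:

```python
import fnmatch

# A simplified, Sigma-style detection: a 'selection' of field -> pattern
# pairs matched against normalized events. Illustrative only.
rule = {
    "title": "Suspicious DNS TXT lookup",
    "selection": {"query_type": "TXT", "query": "*.example-c2.net"},
}

def matches(event: dict, selection: dict) -> bool:
    """True if every selection pattern matches the event's field value."""
    return all(
        fnmatch.fnmatch(str(event.get(field, "")), pattern)
        for field, pattern in selection.items()
    )

events = [
    {"query_type": "TXT", "query": "beacon.example-c2.net"},
    {"query_type": "A", "query": "www.example.com"},
]
alerts = [e for e in events if matches(e, rule["selection"])]
assert len(alerts) == 1
```

Because the rule references normalized field names rather than any vendor's query language, the same detection logic can be applied to forwarded data regardless of which SIEM originally collected it.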
Ready to bring together related alerts from multiple SIEMs? Let's talk!
Read More
Seeing the big (security insights) picture with DataBee
The evolution of digital photography mimics the changes that many enterprise organizations face when trying to understand their cybersecurity controls and compliance posture. Since the late 1990s, technology has transformed photograph development from an analog, manual process into a digital, automated field. These images hold our memories, storing points in time that we can look back on and learn from. Cybersecurity, in turn, is experiencing a similar transformation.
When you consider the enterprise data pipeline problem that DataBee® from Comcast Technology Solutions aims to solve through the everyday lens of creating, storing, managing, and retrieving personal photos, the platform’s evolutionary process and value makes more sense.
Too many technologies generating too much historical data
Portable, disposable Kodak cameras were all the rage in the 1990s, but it could be days or weeks before you could see what you snapped because film needed to be sent out for processing.
Over the 2000s, however, these processes increasingly turned digital, accelerating results dramatically. While high-quality, professional-grade digital cameras aren’t in danger of becoming obsolete, once cell phones with integrated cameras hit the market, they became an easy on-the-go way to capture life’s historical moments, with no roll of film to drop off and develop days or weeks later (or forget about entirely). Today, people use their smartphones to instantly capture everything from family gatherings to vacation selfies, with ever greater depth and quality.
At Comcast, we faced a similar enterprise technology and security data problem. Just as people handle different kinds of images and the technologies that produce them, we have vast amounts of technologies that generate security data. It’s a fragmented, complicated environment that needs to handle rapidly expanding data.
Across the enterprise, Comcast stores and accesses increasingly larger amounts of data, including:
8,000 month-by-month scans
1.7 million IPs targeted monthly for vulnerability scanning
7 cloud and hybrid cloud environments
10 petabytes of data in our cybersecurity data lake
109 billion monthly application transactions
Finding the right moment in time
Let’s play out a scenario: You let your friend in on a stage of your life when you had bright red hair. Their response? “Pics or it didn’t happen.” Tracking down the historic photo takes immense effort:
Figure out the key context about where and when it was captured.
Find the source of where the photo could be – is it on a hard drive? In a cloud photo album? A tagged image in your social media profile?
Identify the exact photo you need within the source (especially if it is not labeled).
Comcast faced a similar data organization and correlation problem in its audits and threat hunting. While we were drowning in data, we were starved for insights. We were trying to connect relevant data to build a timeline of activity for a user or device, but as the data kept growing and security tools kept changing, we found the data was incomplete or took weeks of work to normalize and correlate.
We faced many challenges when trying to answer questions, and fractured data sources compounded the problem. Some of the questions we were asking: Do all employees have their EDR solution enabled? Which user has the highest number of security severities associated with them across all their devices? Several obstacles stood in the way of answering these questions quickly and accurately, such as:
People maintaining spreadsheets that become outdated as soon as they’re pulled.
People building Power BI or Tableau reports without having all the necessary data.
Reports that could only be accessed from inside an application’s console, limiting the ability to connect them to other meaningful security telemetry and data.
Auditing complex questions can be unexpectedly expensive and time consuming because data is scattered across vast, siloed datasets.
Getting security insights
Going back to the scenario where pictures are stored on all these disparate devices, it initially seems like a reasonable solution to just consolidate everything on an external hard drive. But, to do that, you must know each device’s operating system and how to transfer the images over. They differ in file size, filetype, image quality, and naming convention. While one camera dates a photo as “Saturday January 1, 2000,” another uses “1 January 2000.” In some cases, the images contain more specific data, like hour, minute, and second. Consolidating the pictures in cloud-based storage platforms only solves the storage issues – you still have to manage the different file formats and attached metadata to organize them by the actual date a picture was taken rather than date a batch of photos were uploaded.
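The date-format problem in the analogy can be sketched in a few lines: the same day written three ways by three devices, normalized to one canonical form. The formats are illustrative assumptions:

```python
from datetime import datetime

# Normalize the same date written three different ways (per the photo
# analogy) into one ISO-8601 form. Formats are illustrative.
FORMATS = [
    "%A %B %d, %Y",        # "Saturday January 1, 2000"
    "%d %B %Y",            # "1 January 2000"
    "%Y-%m-%d %H:%M:%S",   # "2000-01-01 09:30:00"
]

def normalize_date(raw: str) -> str:
    for fmt in FORMATS:
        try:
            return datetime.strptime(raw, fmt).date().isoformat()
        except ValueError:
            continue
    raise ValueError(f"unrecognized date: {raw!r}")

assert normalize_date("1 January 2000") == "2000-01-01"
assert normalize_date("2000-01-01 09:30:00") == "2000-01-01"
```

Once every photo (or log record) carries the same canonical timestamp, sorting by when it actually happened, rather than when it was uploaded, becomes trivial.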
Translating this to the security data problem, many organizations find that they have too much data, in too many places, created at too many different times. And said data are in different file types, unique formats, and other proprietary ways of saying the same thing. Consolidating and sorting data becomes chaotic.
As a security, risk, and compliance data fabric platform, DataBee ingests, standardizes, and transforms the data generated by these different security and IT technologies into a single, connected dataset that’s ready for security and compliance insights. Much like adding a picture to your “Favorites” folder for easy access, this gives organizations what they need to answer questions about their security and compliance quickly and accurately.
The objective at Comcast was to solve the challenge of incomplete and inaccurate insights caused by siloed data stores. DataBee provides the different security data consumers access to analytics-derived insights. The end result enables consistent, data-driven decision making across teams that need accurate information about data security and compliance, including:
Chief Information Security Officer (CISO)
Chief Information Officer (CIO)
Chief Technology Officer (CTO)
Chief Data Officer (CDO)
Governance, Risk, and Compliance (GRC) function
Business Information Security Officer (BISO)
While those people need the insights derived from the platform, we also recognized that the regular users would inhabit many roles:
Threat hunters
Data engineers
Data scientists
Security analytics and engineering teams
To achieve these objectives, we started looking at the underlying issue: the data, its quality, and its accessibility. At its core, DataBee delivers ready-to-use content and is a transformation engine that ingests, autoparses, normalizes, and enriches security data with business context. DataBee’s ability to normalize the data and land it in a data lake enables organizations to use their existing business intelligence (BI) tools, like Tableau and Power BI, to leverage analytics and create visualizations.
Transforming data creates a common “language” across:
IT tools
Asset data
Organizational hierarchy data
Security semantics aren’t easy to learn – it can take years of hands-on experience with a variety of toolsets. DataBee has the advantage of leveraging learnings from Comcast to create proprietary technology that parses security data, mapping columns and values to references in the Open Cybersecurity Schema Framework (OCSF) while also extending that schema to fill in currently existing gaps.
Between our internal learnings and working with customers, DataBee delivers pre-built dashboards that accelerate the security data maturity journey. Meanwhile, customers who already have dashboards can still use them for their purposes. For example, continuous controls monitoring (CCM) dashboards aligned to the Payment Card Industry Data Security Standard (PCI DSS) and the National Institute of Standards and Technology Cybersecurity Framework (NIST CSF) offer a “quick start” for compliance insights.
DataBee can help customers achieve various security, compliance, and operational benefits, including:
Reduced security data storage costs by using Snowflake and Databricks
Insights and economic value gained by leveraging a time-series dataset
Real-time active detection streams with Sigma rules that optimize SIEM performance
Asset discovery and inventory enrichment to identify and suggest appropriate ownership
Weave together data for security and compliance with DataBee
Want to see DataBee in action and how we can help you supercharge your security, operations, and compliance initiatives? Request a custom demo today.
Read More
Bridging the GRC Gap
Misconceptions and miscommunications? Not with continuous controls monitoring from DataBee. Learn how you can reduce audit costs and improve security outcomes.
Read More
The DataBee Hive hits Orlando for the 2024 Gartner® Data & Analytics Summit
In a great blog on bringing digital transformation to cybersecurity, Comcast CISO Noopur Davis wrote, “Data is the currency of the 21st century — it helps you examine the past, react to the present, and predict the future.” Truer words were never written.
No doubt that’s why we think Gartner created the Gartner Data & Analytics Summit — to bring together data and analytics leaders “aspiring to transform their organizations through the power of data, analytics, and AI.”
DataBee® from Comcast Technology Solutions is all about facilitating that transformation. The DataBee Hive™, a cloud-native security, risk, and compliance data fabric platform, helps organizations transform how they collect, correlate, and enrich data to deliver the deep insights needed to drive efforts to remain competitive, secure, and compliant. It works by connecting disparate data sources and feeds to merge them with an organization’s data and business logic to create a shared and enhanced dataset that delivers continuous compliance assurance, proactive threat detection with near-limitless hunts, and improved AIOps — all while optimizing data costs.
This is why we’re incredibly excited to be participating in the 2024 Gartner Data & Analytics Summit for the first time. We’re looking forward to the conference sessions but even more to engaging with attendees to hear firsthand what their data, analytics, and AI challenges are and to share what we’ve learned on the path to creating the DataBee security data fabric platform.
Below are details on DataBee’s involvement in the conference — please pay us a visit at the show.
Speaking, exhibiting, and giving out the cutest plushie at the show
The Gartner Data & Analytics Summit takes place March 11‒13, 2024, in Orlando at the Walt Disney World Swan and Dolphin Resort in Lake Buena Vista.
The Comcast Speaking Session
Comcast DataBee: Democratizing AI With a Foundation in Enterprise & Security Data
March 12 at 12:45‒1:05 pm
Location: Theater 4, Exhibit Showcase
Speaker: Rick Rioboli, Executive Vice President & Chief Technology Officer, Comcast Cable
Session description:
For more than a decade, Comcast has incorporated AI into our products and operations, going beyond personalized content to increase customer accessibility and protect against cyberattacks. To keep pace with industry disruptors and drive more AI innovations, Comcast launched AI Everywhere. The initiative aims to reduce friction when developing safe, secure, and scalable AI solutions throughout the business. Part of that is getting data AI-ready.
Join this session to learn three key components of bringing AI to more people and how combining security data can drive more value across the enterprise.
DataBee exhibit
Be sure to visit the Comcast Technology Solutions DataBee exhibit. We can be found in Booth 618, in the Data Management Tools and Technology Village. We’ll have some of our top “bees” on hand to talk data, analytics, AI, and how a security data fabric platform can help you connect security and compliance data for insights that work for everyone in your organization.
Plus, who doesn’t love a cute plushie to take home? We’ll be giving these away at our booth.
To learn more about DataBee in just a few short minutes, check out this quick and fun explainer video or download the DataBee Hive datasheet.
If you’d like to schedule a specific time to meet during the event, we’d love it. Contact us, and we’ll follow up right away to book a time.
We hope to see you in Orlando!
GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved. While Gartner is hosting the Gartner Conference, Gartner is not in any way affiliated with Exhibiting Company or this promotion, the selection of winners or the distribution of prizes. Gartner disclaims all responsibility for any claims that may arise hereunder.
Read More
Our 2024 forecast for media and advertising trends
As we approach the halfway point of the decade, events of the past year underscore just how appropriate the name the “Roaring 20s” really is for the current era of media and entertainment. From dramatic advances in AI and ML technology and crumbling cookie-first ad strategies, to the rise of ad-supported channels and labor strikes across the industry, 2023 was truly a year of disruption to the status quo. Are these trends passing fads or seismic shifts? Read on for insights into how these trends will impact media and advertising in the coming years, and watch the full webinar here.
Sports content and the quest to overcome fragmentation
Sports rights have become a crucial factor in driving consumer interest and platform sign-ups. At the same time, fans continue to change how they find and consume content. With the rise of streaming services and social media platforms, we now have access to a vast array of sports content at our fingertips.
Highlights:
Sports streaming services expected to reach $22.6 billion in 2027
Consumer access to content now spread across multiple streaming services
Shifting reliance on regional sports networks
This abundance of choice has created a new challenge: discoverability.
Viewers currently navigate a deeply fragmented landscape of different platforms and services, each with its own exclusive content and user interface.
This challenge becomes particularly pronounced as the regional sports network (RSN) model undergoes forced adaptation in the face of shifting consumer preferences. Moreover, the financial dynamics of sports rights are undergoing scrutiny, with tier two platforms reassessing the viability of their investments in sports content.
So, how can sports platforms and traditional entertainment provide seamless discoverability and ensure that subscribers can easily find the sports content they want without navigating multiple platforms and user interfaces?
It requires:
Collaboration between different stakeholders, including rights holders, operators, content providers, and advertisers
Providing opportunities for compelling, customizable, and immersive viewing experiences
One way to do this is with FAST channels.
Life in the FAST channel lane
FAST (Free Ad-Supported Streaming TV) channels are a type of streaming service that offers free access to a wide range of content, supported by advertisements. Unlike traditional cable or satellite TV, FAST channels are delivered over the internet, allowing viewers to watch when and how they want. These new channels, such as World Rally Championship’s 24x7 channel for rally fans, can feature a mix of live and on-demand content, creating incredibly immersive and personalized viewing experiences.
In the past year, these channels have become increasingly popular thanks to their accessibility, convenience, and affordability. They offer viewers a way to access a wide range of content without subscription fees, making them an attractive option for cord-cutters and budget-conscious consumers. Additionally, the targeted advertising capabilities of streaming platforms allow advertisers to reach specific audiences more effectively, making FAST channels a critical advertising platform and revenue generator.
Yet, even though FAST channels represent a growing segment of the streaming market — and are expected to continue expanding — market saturation will pose an ongoing challenge. Personalizing the FAST channel experience will be crucial to maintain viewer engagement and drive higher CPMs.
Bundle up: Indirect wholesale SVOD to exceed stand-alone SVOD
Research shows that the average U.S. household has eight+ streaming services. Content may still be king, but the way viewers access it is certainly changing.
As aggregation increases, content wars are giving way to bundling wars, signifying yet another fundamental shift in how entertainment and media are consumed and accessed. Bundling models, where multiple services or channels are packaged together for a single subscription fee, have emerged as a powerful strategy for attracting consumers. Convenient and cost-effective, bundling will continue to capture viewers, especially as the landscape becomes increasingly fragmented and streaming services proliferate.
The shift of streaming wars to bundling wars means:
More subscription video on demand (SVOD) sign-ups will come from the bundled segment than from the direct-via-streamer segment.
Super-aggregation bundles are emerging with operators and broadband providers, most notably in EU markets.
Stand-alone SVOD will still dominate, but by net additions, indirect wholesale SVOD subscriptions will, for the first time, exceed direct retail subscriptions.
Aggregators play a central role in these bundling wars, and those that effectively leverage data analytics to understand consumer preferences and behavior are better equipped to “own the customer” by curating tailored content suggestions for their individual users. By offering personalized recommendations, savvy aggregators enhance the value proposition of bundled services, helping consumers discover new content that aligns with their interests and preferences. This personalized approach not only improves the overall consumer experience but also increases engagement and retention rates within bundled ecosystems.
Ad revenue continues to rise. Advanced advertising strategies support success with streamers.
Advertising is expected to reach 21% of revenue for hybrid models. Our experts predict that services will strive to reach more than 50% of their revenue through advertising to match the traditional pay-TV model.
With this trend in mind, how can brands ensure they get the most out of their advertising efforts within advertising video on demand (AVOD)/hybrid models, driving more targeted and focused advertising while also improving the viewer experiences?
Enter unified data management and contextual advertising. By unifying data management and analyzing metadata, brands can strategically place ads that align with the viewer’s interests and intent at that moment — all while staying on the right side of privacy regulations.
While we don’t expect AVOD to surpass SVOD in the near future, it is catching up. And as AVOD continues growing in popularity and contributes a bigger chunk of revenue for streamers than we've seen before, we can expect AVOD and SVOD to coexist alongside pay-TV.
AI acceleration: Use cases and business needs
Though we’ve only scratched the surface of its capabilities, the pace of advancement in generative AI is accelerating rapidly and shaping the next chapter of media and entertainment. Already, AI applications are unlocking new opportunities for greater personalization and targeting. We’re quickly shifting from “wanting” to incorporate AI to “needing” to incorporate the technology to keep up.
While many brands express apprehension around brand safety, the potential of AI to drive efficiencies and enhance creativity is undeniable. As content providers, operators, and advertisers seek ways to navigate the complexities of a fragmented media landscape, generative AI offers a promising solution by streamlining production workflows and enabling the creation of personalized, thematic content.
Already we’re seeing generative AI produce synopses and segments with expected growth over the next five years within these top use cases:
Audio generators: +60% growth
Video generators: +61% growth
Image generators: +66% growth
Code generators: +73% growth
What does this mean?
Companies that treat AI as table stakes will come out on top in the coming years. From generating clips and metadata to detecting sentiment and creating hyper-specific ad campaigns, AI will only continue to revolutionize how content is conceptualized, produced, and distributed.
To 2030 and beyond…
No matter what the next part of the decade holds, innovation and adaptation will continue to be key to accelerating growth in the media and entertainment landscape. Industry players must continue navigating evolving consumer preferences and embracing technological advancements to overcome challenges and embrace new opportunities to deliver compelling, personalized experiences to audiences.
3 Hot Takes on Cybersecurity: SIEMs, GenAI, and more
Missed our Cybersecurity Hot Takes webinar? Lucky for you, you can watch the on-demand recording now!
We dove into cybersecurity’s burning questions with Roland Cloutier, former CISO at TikTok and ADP, Edward Asiedu, Senior Principal of Cybersecurity Solutions at DataBee, and Amy Heng, Director of Product Marketing at DataBee®. They shared their hot takes and insights related to tools, data, and people in the cybersecurity landscape. As they indulged in hot wings, the discussions ranged from the relevance of Security Information and Event Management (SIEM) tools to the transformative potential of Generative Artificial Intelligence (Gen AI).
Here’s what you can expect to learn more about:
The Decline of SIEM Tools: Roland predicted the decline of traditional SIEM tools. He argued that the current SIEM solutions are becoming obsolete due to the evolving nature of cybersecurity. The complexity of cyber risks and privacy issues demands a more sophisticated approach that goes beyond the capabilities of conventional SIEM tools. Instead, Roland emphasized the need for a data fabric platform to fill the void left by traditional SIEMs.
Vendor-Specific Query Languages: To Learn or Not to Learn? Edward highlighted the drawbacks of forcing engineers to learn multiple query languages for different security tools. Both Edward and Roland expressed a preference for standardized approaches like SQL or Sigma rules, which make detecting threats faster and help close the cybersecurity skills gap.
Gen AI and how to operationalize it for cybersecurity: Gen AI is a rising star in the cybersecurity landscape. Contrary to the fear of a robot revolution, Roland envisioned Gen AI as a powerful tool for enhancing security risk and privacy operations. He emphasized its potential to automate tasks such as incident response, analytics, and investigation, significantly reducing the time and resources required for these activities.
Besides the hot wings being spicier with each question, it’s clear that the cybersecurity landscape is evolving rapidly. Traditional tools like SIEM are on the decline, making way for more adaptable solutions that bring security data insights to more people. The shift towards standardized query languages and the integration of AI, particularly Gen AI, promises a future where cybersecurity operations are more efficient, automated, and capable of handling the ever-growing volume of data.
Why not hear more hot takes? Catch the webinar on-demand today.
Squeaky Clean: Security Hygiene with DataBee
Enterprises have an ever-growing asset and user population. As organizations become more complex thanks to innovation in technology, it becomes increasingly difficult to track all assets in the environment. It's difficult to secure a user or device you may not be aware of. DataBee can augment and complement your configuration management database (CMDB) by enhancing the accuracy, relevance, and usability of your asset, device, and user inventory, unlocking 3 primary use cases for security teams:
Security Hygiene
Owner & Asset Discovery
Insider Threat Hunting
Security Hygiene
DataBee for Security Hygiene can help deliver more accurate insights into the assets in your environment while automatically keeping your asset inventory up to date and contextualized. Bringing more clarity to the users and devices in your environment enables your business to enhance its security coverage, reduce manual processes, increase alert accuracy, and respond to incidents more rapidly.
Security hygiene uses entity resolution, a patent-pending technology from Comcast, to create a unique identifier from a single data source or across multiple data sources that refer to the same real-world entity, such as a user or device. This technique, performed by DataBee, helps reduce analysts’ manual entity-correlation efforts, and the unique identifiers can be used to discover assets and suggest missing owners.
DataBee supports ingestion from multiple data sources to help keep your security hygiene in check, including a variety of traditional sources for asset management such as your CMDB, directory services, and vulnerability scanner. It also can learn about your users and devices from non-traditional data sources such as network traffic, authentication logs, and other data streamed through DataBee that contains a reference to a user or device.
DataBee can also exclude data sources and feeds that provide little fidelity for entity-related context from entity resolution, and it supports feed-source prioritization. These priorities are applied when there is a collision or conflicting information. For example, if a device is first seen in network traffic, preliminary information about the device, such as its hostname and IP, is added to DataBee. If your CMDB later updates that device’s IP, DataBee will overwrite the associated IP with the CMDB value, provided the CMDB feed is prioritized over the network traffic information.
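The feed-prioritization behavior described above can be sketched in a few lines. This is a minimal illustration only; the feed names, priority values, and attribute shapes below are hypothetical and not DataBee's actual configuration:

```python
# Sketch of priority-based attribute merging during entity resolution.
# Feed names and priorities are illustrative, not DataBee's real setup.
FEED_PRIORITY = {"cmdb": 2, "network_traffic": 1}  # higher priority wins conflicts

def merge_observations(observations):
    """Merge records that refer to the same real-world device into one entity.

    Each observation is (feed, {attribute: value}); on conflicting values,
    the attribute from the higher-priority feed overwrites the lower one.
    """
    entity = {}
    provenance = {}  # attribute -> feed that supplied the current value
    for feed, attrs in observations:
        for key, value in attrs.items():
            current = provenance.get(key)
            if current is None or FEED_PRIORITY[feed] >= FEED_PRIORITY[current]:
                entity[key] = value
                provenance[key] = feed
    return entity

# A device first seen in network traffic, later updated by the CMDB:
device = merge_observations([
    ("network_traffic", {"hostname": "lab-07", "ip": "10.0.0.15"}),
    ("cmdb", {"ip": "10.0.0.99", "owner": "jdoe"}),
])
```

Here the CMDB's higher priority lets its IP update win the conflict, while the hostname learned from network traffic is kept, since the CMDB never contradicted it.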
Owner and Asset Discovery
Organizations often struggle to assign responsibility for resolving security issues on assets or devices, many times due to unclear ownership caused by gaps in the asset management process, such as Shadow IT and orphaned devices. DataBee can provide a starting point for identifying and validating the owner of a device. When a new device is discovered, or its owner is not indicated in the CMDB, DataBee leverages the events streaming through the platform to suggest potential owners of the asset. The system tracks who logs into the device for seven days after discovery and uses statistical analysis to suggest up to the three most likely owners.
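The owner-suggestion idea can be illustrated with a minimal sketch that counts interactive logins in the seven-day window and proposes the top three users. The event shape and field names here are hypothetical, and DataBee's actual statistical analysis is more involved than a simple count:

```python
from collections import Counter
from datetime import datetime, timedelta

# Illustrative only: count logins in the seven days after a device is
# first seen, then suggest the most frequent users as potential owners.
def suggest_owners(first_seen, login_events, window_days=7, top_n=3):
    cutoff = first_seen + timedelta(days=window_days)
    counts = Counter(
        e["user"] for e in login_events
        if first_seen <= e["time"] < cutoff
    )
    return [user for user, _ in counts.most_common(top_n)]

first_seen = datetime(2024, 3, 1)
events = [
    {"user": "alice", "time": datetime(2024, 3, 2)},
    {"user": "alice", "time": datetime(2024, 3, 3)},
    {"user": "bob", "time": datetime(2024, 3, 4)},
    {"user": "carol", "time": datetime(2024, 3, 20)},  # outside the window
]
owners = suggest_owners(first_seen, events)
```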
The Potential Owners are listed in the Entity Details section of the Entity View page, which can be expanded to quickly validate ownership. Once security analysts validate the owner, the system of record can be updated and DataBee will receive the update.
As organizations become more complex, orphaned assets can remain unknown and often unpatched, creating unmonitored and increasing risk, while Shadow IT grows as cloud-based solutions become more easily accessible. DataBee provides User and Device tables to supplement existing tools: entity resolution maintains an inventory of known assets in your organization, enabling continuous discovery, based on events streamed through DataBee, of assets that would otherwise slip through the cracks.
Insider Threat Hunting
Insider threats are people, such as employees or contractors, with authorized access to or knowledge of an organization’s resources, who can cause harm through purposeful or accidental misuse of that access. This can negatively affect the integrity, confidentiality, and availability of the organization, its data, personnel, or facilities. Insider threats are often able to hide in plain sight for several reasons: cross-correlating logs of an individual's activities across various tools and products is complex, and the data is siloed, making it difficult for security analysts to see the full picture. DataBee’s security, risk, and compliance data fabric platform weaves together events as they stream through the platform with the business context needed to identify insider threats.
DataBee leverages entity resolution to create an authoritative and unique entity ID using data from across your environment that is mapped to real people and devices to enable hunting for insider threats in your organization. Entity resolution aggregates information from multiple data sources, merges duplicate entries, and suggests potential owners for devices and assets. By correlating and enriching the data before storing it in your data lake, DataBee creates an entity timeline, associating each event with the correct entity at the time of its activity.
These views enable insider threat hunting by allowing security analysts to see the activities conducted by a user and the related business context in a single view to identify potential malicious behavior. The interactive user experience is intended to make leveraging the Open Cybersecurity Schema Framework (OCSF) formatted logs more accessible to all security professionals. Within the Event Timeline, security analysts can filter the events based on type to focus investigations. Clicking on an event shows the mission-critical fields needed to decide whether the event is interesting, and clicking on the magnifying glass icon allows you to inspect the full event. DataBee enables one-click pivoting to related entities to simplify diving deeper into the investigation. The views are powered by the data in the data lake; the data is therefore available in OCSF format for threat hunters to continue their investigations in familiar tools like Jupyter notebooks, meeting hunters where they are with their data.
Take a look under the hood of DataBee v2.0 and DataBee for Security Hygiene
Are you ready for an enterprise-ready security, risk, and compliance data fabric? Request a custom demo to see how DataBee uses a unique identifier to augment your CMDB and help deliver more accurate insights into the assets in your environment.
Finding security threats with DataBee from Comcast Technology Solutions
Last week, DataBee® announced the general availability of DataBee v2.0. Alongside a new strategic technology partnership with Databricks, we released new cybersecurity and Payment Card Industry Data Security Standard (PCI DSS) 4.0 reporting capabilities.
In this blog, we’ll dive into the new security threat use cases that you can unlock with a security, risk, and compliance data fabric platform.
DataBee for security practitioners and analysts
In security operations, detecting incidents in a security information and event management (SIEM) tool is often described as looking for a needle in a haystack of logs. Another fun (or not-so-fun) SIEM metaphor is a leaky bucket.
In an ideal world, all security events and logs would be ingested, parsed, normalized, and enriched into the SIEM, and the events would then be cross-correlated using advanced analytics. In other words, logs would stream into your bucket, the SIEM, and every breach would be detected.
In reality, there are holes in the bucket that allow for undetected breaches to persist. SIEMs can be difficult to manage and maintain. Organization-level customizations, combined with unique and ever-changing vendor formats, can lead to detection gaps between tools and missed opportunities to avert incidents. Additionally, for cost-conscious organizations, there are often trade-offs for high-volume sources that leave analysts unable to tap into valuable insights. All these small holes add up.
What if we could make the security value of data more accessible and understandable to security professionals of all levels? DataBee makes security data a shared language. As a cloud-native security, risk, and compliance data fabric platform, DataBee engages early in the data pipeline to ingest data from any source, then enriches, correlates, and normalizes it to an extended version of the Open Cybersecurity Schema Framework (OCSF) to your data lake for long-term storage and your SIEM for advanced analytics.
Revisiting the haystack metaphor, if hay can be removed from the stack, a SIEM will be more efficient and effective at finding needles. With DataBee, enterprises can efficiently divert data, the “hay,” from an often otherwise cost-prohibitive and overwhelmed SIEM. This enables enterprises to manage costs and improve the performance of mission-critical analytics in the SIEM. DataBee uses active detection streams to complement the SIEM, identifying threats through vendor-agnostic Sigma rules and detections. Detections are streamed with necessary business context to a SIEM, SOAR, or data lake. DataBee takes to market a platform inspired by security analysts to tackle use cases that large enterprises have long struggled with, such as:
SIEM cost optimization
Standardized detection coverage
Operationalizing security findings
SIEM cost optimization
Active detection streams from DataBee provide an easy-to-deploy solution that enables security teams to send their “needles” to their SIEM and their “hay” to a more cost-effective data lake. Data that would often otherwise be discarded can now be analyzed en route. Enterprises need only retain in the SIEM the active detection stream findings and security logs needed for advanced analytics and reporting. By removing the “hay,” enterprises can reduce their SIEM operating costs.
The optimized cloud architecture enables security organizations to gain insights into logs that are too high volume or contain limited context to leverage in the SIEM. For example, DNS logs are often considered too verbose to store in the SIEM. They contain a high volume of low-value logs due to limited information retained in each event. The limited information makes the DNS logs difficult to cross-correlate with the disparate data sources needed to validate a security incident.
Another great log source example is Windows Event Logs. There are hundreds of validated open-source Sigma detections for Windows Event Logs to identify all kinds of malicious and suspicious behavior. Leveraging these detections has traditionally been difficult due to the scale required both for the number of detections and volume of data to compare it to. With DataBee’s cloud-native active detection streams, the analytics are applied as the data is normalized and enriched, allowing security teams new insights into the potential risks facing their organization. DataBee’s power and scale complement the SIEM’s capabilities, plugging some of the holes in our leaky bucket.
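The routing idea behind active detection streams can be sketched as a simple loop over a normalized event stream: everything lands in cheap storage, while only rule matches become findings bound for the SIEM. The event shape and the rule below are toy stand-ins, not real DataBee or Sigma content:

```python
# Minimal sketch of "needles to the SIEM, hay to the data lake" routing.
# The selection mimics a Sigma-style match on normalized fields; names
# and logic are illustrative only.
def selection_matches(event):
    # e.g., flag encoded PowerShell invocations in process-creation events
    cmd = event.get("cmd", "").lower()
    return (
        event.get("class") == "process_activity"
        and "powershell" in cmd
        and "-enc" in cmd
    )

def route(events):
    data_lake, siem_findings = [], []
    for event in events:
        data_lake.append(event)          # everything lands in cheap storage
        if selection_matches(event):     # only detections stream to the SIEM
            siem_findings.append({"rule": "encoded_powershell", "event": event})
    return data_lake, siem_findings

lake, findings = route([
    {"class": "process_activity", "cmd": "powershell.exe -enc SQBF..."},
    {"class": "dns_activity", "cmd": ""},
])
```

The key design point is that detection happens in-stream, so the SIEM only ever pays ingestion costs for the findings, not the full log volume.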
Analyst fatigue can be lessened by suppressing security findings for users or devices that reduce the reliability of a finding. With DataBee’s suppression capability, you can filter and take action on security findings based on the situation. Selecting the “Drop” action ignores the event, which is ideal for events known to be false positives in your organization. Alternatively, the “Informational” action reduces the severity and risk level of the finding to Info while still allowing it to be tracked for historical purposes; this level is ideal for tuning that requires long-term auditability. The scheduling option gives you a way to account for recurring known events, such as change windows, that might otherwise fire alerts or lead to false positives.
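The difference between the two suppression actions can be shown with a small sketch. The rule shapes, match fields, and device names here are invented for illustration and are not DataBee's suppression syntax:

```python
# Sketch of finding suppression: "drop" discards known false positives,
# while "informational" keeps the finding for audit history at reduced
# severity. Rule shapes and field names are hypothetical.
SUPPRESSIONS = [
    {"match": {"device": "scanner-01"}, "action": "drop"},
    {"match": {"rule": "after_hours_login"}, "action": "informational"},
]

def apply_suppressions(finding):
    for rule in SUPPRESSIONS:
        if all(finding.get(k) == v for k, v in rule["match"].items()):
            if rule["action"] == "drop":
                return None                          # ignore entirely
            if rule["action"] == "informational":
                return {**finding, "severity": "Info"}  # keep, but demote
    return finding

kept = apply_suppressions(
    {"rule": "after_hours_login", "device": "wks-42", "severity": "High"})
dropped = apply_suppressions(
    {"rule": "port_scan", "device": "scanner-01", "severity": "High"})
```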
By applying the analytics and tuning to the enriched logs as they are streamed to more cost-effective long-term storage in the data lake, security teams can detect malicious behavior like PowerShell activity or DNS tunneling. Additionally, DataBee’s Entity Resolution not only enriches the logs but learns more about your organization from them, discovering assets that may be untracked or unknown in your network.
Standardized detection coverage
With the ever-evolving threat landscape, detection content is constantly updated to stay relevant, and security organizations have taken on a key role in managing content across solutions. Spurred by the popularity of Sigma-formatted detections among both security researchers and vendors, many large enterprises are beginning to migrate their existing custom detections to open-source formats managed via GitHub. DataBee imports and manages Sigma detection rules from GitHub to quickly operationalize detection content, so security organizations can centralize and standardize content management for all security solutions, not just DataBee.
Active detection streams apply Sigma rules, an open-source signature format, over security data that is mapped to a DataBee-extended version of OCSF to integrate into the existing security ecosystem with minimal customizations. DataBee handles the translation from Sigma to OCSF to help lower the level of effort needed to adopt and support organizations on their journey to vendor-agnostic security operations. With Sigma-formatted detections leveraging OCSF in DataBee, organizations can swap out security vendors without needing to update log parsers or security detection content.
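The translation step can be pictured as a field-name mapping: a Sigma selection written against vendor log fields is rewritten against OCSF-style attribute paths, so the same rule runs over normalized data from any source. The mapping entries below are simplified examples, not DataBee's actual translation tables:

```python
# Illustrative mapping from Sigma field names (as used in Windows
# process-creation rules) to OCSF-style attribute paths. Entries are
# examples only, not a complete or authoritative mapping.
SIGMA_TO_OCSF = {
    "Image": "process.file.path",
    "CommandLine": "process.cmd_line",
    "User": "actor.user.name",
}

def translate_selection(selection):
    """Rewrite a Sigma selection's keys to OCSF paths, passing through
    any field the mapping doesn't know about."""
    return {SIGMA_TO_OCSF.get(field, field): value
            for field, value in selection.items()}

ocsf_selection = translate_selection({
    "Image": r"C:\Windows\System32\whoami.exe",
    "User": "SYSTEM",
})
```

Because only the mapping layer knows vendor- or schema-specific names, swapping a log source means updating the mapping, not the detection content.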
Operationalizing security findings
One of DataBee’s core principles is to meet you where you are with your data. The intent is to integrate into your existing workflows and tools and avoid amplifying the “swivel chair” effect that plagues every security analyst. In keeping with the vendor-agnostic approach, DataBee security findings generated by active detection streams can be output in OCSF format to S3 buckets, where they can be configured for ingestion and immediate use in major SIEM providers.
Leveraging active detection streams with Entity Resolution in DataBee enables organizations to identify threats using vendor-agnostic detections, with all the necessary business context, as the data streams toward its destination. Used in conjunction with the SIEM, DataBee gives security teams out-of-the-box visibility into the potential risks facing their organization, without the noise.
DataBee and Databricks: Business-ready datasets for security, risk, and compliance
In today's fast-paced and data-driven world, businesses are constantly seeking ways to gain a competitive edge. One of the most valuable assets these businesses have is their data. By analyzing and deriving insights from their data, organizations can make informed decisions, manage organizational compliance, optimize resource allocation, and improve operational efficiency.
Better together: DataBee and Databricks
As part of DataBee® v2.0, we’re excited to announce a strategic partnership with Databricks that gives customers the flexibility to integrate with their data lake of choice.
DataBee is a security, risk, and compliance data fabric platform that transforms raw data into analysis-ready datasets, streamlining data analysis workflows, ensuring data quality and integrity, and fast-tracking organizations’ data lake development. In the medallion architecture, businesses and agencies organize their data in an incremental and progressive flow that allows them to achieve multiple advanced outcomes with their data. From the bronze layer, where raw data lands as is, to the silver layer, where data is minimally cleansed for some analytics, to the gold layer, where advanced analytics and models can be run on data for outcomes across the organization, let DataBee and Databricks get your data to gold.
In the past, creating gold-level datasets was a challenging and time-consuming process. Extracting valuable insights from raw data required extensive manual effort and expertise in data aggregation, transformation, and validation. Organizations had to invest significant resources in developing custom data processing pipelines and dealing with the complexities of handling large volumes of data. Lastly, legacy systems and traditional data processing tools struggled to keep up with the demands of big data analytics, resulting in slow and inefficient data preparation workflows. This hindered organizations' ability to derive timely insights from their data and make informed decisions.
DataBee's integration with Databricks empowers customers to take their gold-level datasets up a notch by leveraging advanced data transformation capabilities and sophisticated machine learning algorithms within Databricks. Regardless of whether the data is structured, semistructured, or unstructured, Databricks' unique lakehouse architecture provides organizations with a robust and scalable infrastructure to store and manage vast amounts of data and insights in SQL and non-SQL formats. The lakehouse architecture from Databricks allows businesses to leverage the flexibility of a data lake and the analysis efficiency of a data warehouse in a unified platform.
The integration between DataBee and Databricks involves two key components: the Databricks Unity Catalog and the Auto Loader job.
The Databricks Unity Catalog is a unified governance solution for data and AI assets within Databricks that serves as a centralized location for managing data and its access.
The Auto Loader automates the process of loading data from Unity Catalog-governed sources to the Delta Lake tables within Databricks. The Auto Loader job monitors the data source for new or updated data and copies it to the appropriate Delta Lake tables. This ensures that the data is always up to date and readily available for analysis within Databricks. When integrating DataBee with Databricks, the data is loaded from the Databricks Unity Catalog data source using the Auto Loader, ensuring that it is easily accessible and can be leveraged for analysis.
This seamless integration, combined with DataBee's support for major cloud platforms like AWS, Google Cloud, and Microsoft Azure, enables organizations to easily deploy and operate Databricks and DataBee in their preferred cloud environment, ensuring efficient data processing and analysis workflows.
Connecting security, risk, and compliance insights faster with DataBee
It’s time to start leveraging your security, risk, and compliance data with DataBee and Databricks.
DataBee joins large security and IT datasets and feeds close to the source, correlating with organizational data such as asset and user details and internal policies before normalizing it to the Open Cybersecurity Schema Framework (OCSF). The resulting integrated, time-series dataset is sent to the Databricks Data Intelligence Platform where it can be retained and accessible for an extended period. Empower your organization with DataBee and Databricks and stay ahead of the curve in the era of data-driven decision-making.
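The enrichment step described above, joining events with organizational context close to the source, can be sketched simply: each raw event picks up asset and owner attributes before it is normalized and stored, so downstream analytics need no join at query time. The table contents and field names below are invented for illustration:

```python
# Sketch of enriching a raw event with organizational context (asset
# owner, user department) before normalization. Lookup tables and field
# names are hypothetical, not DataBee's schema.
ASSETS = {"wks-42": {"owner": "alice", "criticality": "high"}}
USERS = {"alice": {"department": "finance"}}

def enrich(event):
    enriched = dict(event)
    # attach asset details keyed by hostname
    asset = ASSETS.get(event.get("hostname"), {})
    enriched.update({f"asset_{k}": v for k, v in asset.items()})
    # attach details about the asset's owner, if known
    user = USERS.get(asset.get("owner"), {})
    enriched.update({f"owner_{k}": v for k, v in user.items()})
    return enriched

record = enrich({"hostname": "wks-42", "activity": "file_read"})
```

Enriching in-stream like this is what makes the resulting time-series dataset immediately analysis-ready once it lands in the data lake.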
Five Benefits of a Data-Centric Continuous Controls Monitoring Solution
For as long as digital information has needed to be secured, security and risk management (SRM) leaders and governance, risk and compliance (GRC) leaders have asked: Are all of my controls working as expected? Are there any gaps in security coverage, and if so, where? Are we at risk of not meeting our compliance requirements? How can I collect and analyze data from across all my controls faster and better?
From “reactive” to “proactive”
Rather than assessing security controls at infrequent points in time, such as while preparing for an audit, a more useful approach is to implement continuous monitoring. However, it takes time to manually collect and report on data from a disparate set of security tools, making “continuous” a very challenging goal. How can SRM and GRC teams evolve from being “reactive” at audit time, to “proactive” all year long? Implement a data-centric continuous controls monitoring (CCM) solution.
According to Gartner®, CCM tools are described as follows:
“CCM tools offer SRM leaders and relevant IT operational teams a range of capabilities that enable the automation of CCM to reduce manual effort. They support activities during the control management life cycle, including collecting data from different sources, testing controls’ effectiveness, reporting the results, alerting stakeholders, and even triggering corrective actions in the event of ineffective controls or anomalies. Furthermore, the automation they support enables SRM leaders and IT operational teams to gain near real-time insights into controls’ effectiveness. This, in turn, improves situational awareness when monitoring security posture and detecting compliance gaps.” Gartner, Inc., Innovation Insight: Cybersecurity Continuous Control Monitoring, Jie Zhang, Pedro Pablo Perea de Duenas, Michael Kranawetter, 17 May 2023
The use of a CCM solution offers significant advantages over point-in-time reviews of multiple data sources and reports. This blog identifies five of the key benefits of using a CCM solution.
Share the same view of the data with all teams in the three lines of defense.
A shared and consistent view of data facilitates better coordination between operations teams that are accountable for compliance with organizational security policy, the process owners who manage the tools and data used to measure compliance, and the GRC team that oversees compliance.
A set of CCM dashboards can provide that common view. Without a shared view of compliance status, teams may be looking at different reports, or reports created similarly, but at different points in time, resulting in misunderstandings; in effect, a cybersecurity “Tower of Babel.” Consistent reporting based on a mutually recognized source of truth for compliance data is an essential first step.
Furthermore, without a consistent view of compliance data, it will be challenging to have a productive conversation about the quality of the data and its validity. If operations teams are pulling their own reports, or even if they are consuming reports provided by the process owners or GRC team, inconsistencies in data are likely to be attributed to differences in report formats, or differences in the dates when the reports were run. If all the teams are looking at the same set of CCM dashboards displaying the same data, it is easier to resolve noncompliance issues that may be assigned to the wrong team, or to find other errors, such as missing or incorrect data, that need to be fixed.
Bring clarity to roles and responsibilities.
Job descriptions may include tasks such as, “Ensure compliance with organizational cybersecurity policy.” But ultimately, what does that mean, especially to a business manager for whom cybersecurity is not their primary responsibility? In contrast, a set of CCM dashboards that an operations level manager can access to see what specifically is compliant or noncompliant for their department provides an easily understood view of that manager’s responsibilities. Managers do not need to spend unproductive time trying to guess what their role is, or trying to find the team that can provide them with information about what exactly is noncompliant for the people and assets in their purview.
Compliance documents and frameworks typically include requirements for documenting “roles and responsibilities,” for example, the n.1.2 controls (e.g., 1.1.2, 2.1.2, etc.) and 12.1.3 in PCI-DSS v4.0. Similarly, the “Policy and Procedures” controls, such as AC-01, AT-01, etc. in NIST SP 800-53 state that the policy “Addresses… roles, [and] responsibilities.”
Ultimately, roles and responsibilities for operations managers and teams can be presented to them in an understandable format by displaying compliant and noncompliant issues for the people and assets that they manage. This is not to say that cybersecurity related roles and responsibilities should not be listed in job descriptions. However, a display of what is or is not compliant for their department will complement their job description by making the manager’s responsibilities less abstract and more specific.
Making compliance and security a shared responsibility
Cybersecurity is Everyone’s Job according to the National Initiative for Cybersecurity Education (NICE), a subgroup on Workforce Management at the National Institute of Standards and Technology (NIST). At the operations level, a manager’s primary responsibility for the business may be to produce the product that the business sells, to sell the product, or something related to these objectives. But the work of the business needs to be done with cybersecurity in mind. Business operations managers and the staff that report to them have a responsibility to protect the organization’s intellectual property, and to protect confidential data about the organization’s customers. So, even if cybersecurity is not someone’s primary job responsibility, cybersecurity is in fact everyone’s job.
At times, business managers may take a stance that “cybersecurity is not my job,” and that it is the job of the CISO and their team to “make us secure.” Or business managers may accept that they do have cybersecurity responsibilities, but then struggle to find a team or a data source that can provide them with the specifics of what their responsibilities are.
A CCM solution can give business managers a clear understanding of what their cybersecurity “job” is without requiring them to track down the information about the security measures they should be taking, as the data alerts them to security gaps they need to address.
Enhance cybersecurity by ensuring compliance with regulations and internal policies
Compliance may not equal security, but the controls mandated by compliance documents are typically foundational requirements that, if ignored, are likely to leave the organization both noncompliant and insecure. An organization with good coverage of basic cybersecurity hygiene is likely to be in a much better position to achieve compliance with any regulatory mandates to which it is subject. Conversely, if an organization has gaps in its existing cyber hygiene, working toward compliance with its regulatory requirements, or with an industry-recognized set of security controls, will provide a foundation on which it can build a more sophisticated, risk-based cybersecurity program.
The basics are the basics for a reason. Using a CCM tool to achieve consistent coverage for the basics when it comes to both compliance and cybersecurity provides a more substantial foundation for the cybersecurity program.
Creating a progressive and positive GRC feedback loop using CCM
A CCM solution does not take the place of or remove the need for a GRC team and a GRC program. But it is a tool that, if incorporated into a GRC program, can save the time formerly spent manually creating reports, and can facilitate coordination and cooperation by giving teams a consistent view of their compliance “source of truth.” Implementing a CCM solution may uncover gaps in data (missing or erroneous data), or gaps in communication between teams, such as between the business teams that are accountable for compliance and the process owners who manage the tools and data used to track compliance. Uncovering any such gaps provides the opportunity to resolve them and to improve the program. As gaps in data, policy, or processes are uncovered and resolved, the organization is positioned to continuously improve its compliance posture.
If there are aspects of the organization’s current GRC program that have not achieved their intended level of maturity, a CCM solution like DataBee can help by providing a consistent view of compliance data that all teams can reference. CCM can be the focus that teams use to facilitate discussions about the current state, and how to move forward to a more compliant state. Over time, the organization can draw on additional sources of compliance data and display it through new dashboards to continue to build on their compliance and cybersecurity maturity.
Get started with DataBee CCM
For more insights into how a CCM solution can ease the burden on GRC teams while improving an organization’s security, risk and compliance posture, read the recent interview with Rob Rose, Sr. Manager on the Cybersecurity and Privacy Compliance team here at Comcast. Rob and the Comcast GRC team use the internally developed data fabric platform that DataBee is based on, and they’ve achieved some remarkable results.
The DataBee CCM offering delivers the five key benefits described here and more. If your organization would like to evolve its SRM and GRC programs from being “reactive” to “proactive” with continuous, year-round controls monitoring, get in touch and let us show you how DataBee can make a difference.
Download the CCM Solution Brief to learn more, or request a personalized demo.
GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.
DataBee is a Leader in Governance, Risk, and Compliance in Snowflake's The Next Generation of Cybersecurity Applications
Today, Snowflake recognized DataBee, part of Comcast Technology Solutions, as a Leader in the Governance, Risk & Compliance (GRC) Category in Snowflake’s The Next Generation of Cybersecurity Applications. As the Director of Strategic Sales and Go-to-Market Strategy, I am proud to help joint customers achieve fast, accurate, and data-driven compliance answers and resolutions that measure risks and controls effectiveness.
The inaugural, data-backed report identified five technology categories that security teams may consider when building their cybersecurity strategy. In addition to the GRC category, the other categories included: Security Information and Event Management (SIEM), Cloud Security, Data Enrichment and Threat Intelligence, and Emerging Segments.
DataBee puts your data at the center for dynamic, detail-rich compliance metrics and reports. The cloud-native security, risk and compliance data fabric platform weaves together security data sources with asset owner details and organizational hierarchy information, breaking down data silos and adding valuable context to cyber-risk reports and metrics.
By being a connected application Powered by Snowflake partner, DataBee makes continuous controls monitoring (CCM) a reality by enabling customers to securely and quickly access large, historical datasets in Snowflake while driving down costs and maintaining high performance. DataBee’s robust analytics enables teams across the organization to leverage the same dataset for high fidelity analysis, decisioning, response and assurance outcomes without worrying about retention limits. From executives to governance, risk and compliance (GRC) analysts, DataBee on Snowflake delivers a dynamic and reliable single source of truth.
Thank you to Snowflake for partnering with DataBee! As Nicole Bucala mentioned in our press release, DataBee makes it faster, easier and more cost effective for GRC teams to combine and share the security and business data and insights that their constituents need to stay compliant and mitigate risk. Our strategic partnership with Snowflake is an essential part of our solution, providing a powerful, flexible, and fully managed cloud data platform for our customers’ data regardless of the source.
DataBee appoints Ivan Foreman for EMEA expansion leadership
You may be surprised to know this, but the security data issues that challenge US-based security teams are issues felt ’round the world. Of course, I’m kidding: These challenges have been worked on, talked about, and written about for years and continue to eat up news cycles because it’s still too hard to correlate and analyze all of the security data generated by the tools and technologies that live in most enterprise security stacks.
DataBee® from Comcast Technology Solutions (CTS) has been created to help bring order, ease, and clarity to security data chaos in the enterprise. Having focused this first year of business on our US home turf, we’re very happy now to expand into EMEA with a cybersecurity veteran at the helm — Ivan Foreman. Based out of London with work and life experience in Israel and South Africa, Ivan brings to his new role as Executive Director and Head of EMEA Sales, DataBee a deep understanding of the unique needs of customers and partners from across EMEA, and a true passion for ensuring the security of both people and organizations.
Let’s learn a little bit more about Ivan and his background…
[LC]: Ivan, you’ve joined CTS DataBee to lead Sales and Business Operations in EMEA. Talk to us about your charter there and what you hope to accomplish in your first several months.
[IF]: I’m very excited to join CTS DataBee and the opportunity to build the business in EMEA. My first month was spent learning as much as possible about Comcast, the value a security data fabric has brought to the organization, and the commercialization of this innovation through DataBee. I was fortunate enough to travel to HQ in Philadelphia and meet the leadership team who have been building DataBee for the past year. My next couple of months will be spent building the DataBee brand in the EMEA region and helping organizations there get more from their security data. So far, the response has been fantastic, and almost everyone I’ve spoken to about DataBee is keen to learn more and seems to have an interest in putting me in touch with their colleagues.
[LC]: How did you learn about the opportunity with CTS DataBee, and what ultimately attracted you to this position?
[IF]: I worked with Nicole Bucala (VP & GM of DataBee) at a previous company, and during the summer, I saw her post on LinkedIn about DataBee and was intrigued. As I was in the process of looking for a new role, I reached out to learn more.
There were ultimately three things that attracted me to the role:
The company: DataBee is part of Comcast Technology Solutions, whose parent company, Comcast, is one of the largest companies in the US. Having worked for small cybersecurity startups in the past, I understand the challenges of building a brand from scratch; here, I have the support of the Comcast brand and an amazing internal use case that validates how well equipped we are to solve the enterprise security data problem.
The product: Listening to the story of how Noopur Davis, Comcast’s CISO, built a security data fabric internally to help her answer the very difficult questions asked by Comcast’s board and regulators, whilst at the same time saving money and improving the company’s security posture, was very compelling.
The people: It has always been very important for me to work with people I like and respect. Nicole has done an amazing job of hiring some of the best and brightest talent in the industry. Throughout my interviews, I was impressed by the quality of the people I met and was excited by the prospect of so many successful people all working together.
[LC]: What is it about DataBee the product that excites you and that you think will resonate with enterprises in EMEA?
[IF]: What is resonating is our Continuous Controls Monitoring (CCM) offering — the ability to see real-time dashboards relating to the company’s security, risk, and compliance posture. Every single company has different data sources and security metrics that they need to monitor, and our CCM capability provides both standard and customizable dashboards that make it possible for an organization to track their specific security controls and compliance requirements.
In EMEA in particular, keeping up with regulations is such a challenge, and every industry and country seems to have different cybersecurity-related regulations to adhere to. In the UK, the Network and Information Systems (NIS) Regulations are the main framework to look out for. Telecommunications companies, or Telcos, are grappling with the UK’s Telecommunications (Security) Act, whose requirements need to be in place for Tier 1 Telcos by March 2024. PCI DSS 4.0, which applies globally to any organization that processes payment cards, is another one to review. DataBee can ease the challenge of keeping up with these and other regulations by giving CISOs and GRC teams an easier way to continuously monitor their controls and keep ahead of their annual audits.
[LC]: Tell us a bit about your background in cybersecurity — you’ve been in this industry for most, if not all, of your career, correct? Share with us some of your experiences and what’s kept you hooked on the security space.
[IF]: I grew up in South Africa and graduated from university in Durban, but my professional cybersecurity experience began when I went to live in Israel. There, I worked for a company called Aladdin, which, at that time, was the market leader in combatting software piracy.
Eventually I moved to the UK, and other roles there included:
Business development manager for Softwrap, which had a very innovative secure envelope solution for helping to securely distribute software online (long before there were App Stores).
Progressive channel management and leadership roles at ScanSafe, a pioneer in SaaS web security. I was one of the first employees and helped the company grow and expand until it was sold to Cisco in 2009 for $183 million. I stayed with Cisco for another four years and was promoted to lead the company’s security business in the UK, selling its full security portfolio (firewalls, IDS, email security, web security, VPN, identity services, etc.).
VP of sales EMEA and VP of sales Asia Pacific for Wandera, where I was reunited with the original founders of ScanSafe, who were focused this time on enterprise mobility security and data management.
VP of sales EMEA for Illusive Networks, an Israeli deception security company. I started as their first EMEA hire, helping them grow and expand the business there.
Senior director of global channel sales for Nozomi Networks (an OT security company), where I led their global channel business and was purely focused on developing relationships with hundreds of partners around the world.
Working in the security industry is not just about selling products; you actually feel as if you are positively contributing to society by helping to keep companies and people safe from bad actors. I guess that’s what has really kept me interested in this space and why I believe I’ll probably stay in cybersecurity until the end of my career.
[LC]: What are some of the key data and/or cybersecurity challenges that are unique to enterprises in EMEA?
[IF]: One of the most interesting challenges I have seen in the UK specifically is the very short tenure of CISOs. I recently read a Forrester report which highlighted that the average tenure for a UK CISO (working for the FTSE 100 companies) was 2.8 years. This means they are not likely to invest in long-term projects, focusing instead on short-term wins before they move on to a new challenge. Understanding where a CISO is in their tenure is therefore a key success factor that can make or break a potential sale.
[LC]: Looking ahead into 2024, what are some of your security and/or security business predictions for the year ahead in EMEA? Any threats/challenges/opportunities you see on the near-term horizon?
[IF]: No doubt AI and ML are going to play a huge role in 2024 and beyond. Ensuring these technologies are used correctly and ethically is going to be a huge challenge, as bad actors and malicious hackers can also use them to attack enterprises and states.
The other challenge I see is finding skilled cybersecurity professionals who are available to help implement policies and keep companies safe. As reported by ISC2 in its 2023 Cybersecurity Workforce Study, there are roughly four million unfilled cybersecurity positions in companies and organizations globally. Those of us who work in the industry need to find a way to ensure children at school learn about the importance of these jobs and are encouraged to consider careers in this field.
[LC]: Will you be working to build channel partnerships in EMEA? If so, what types of partners are you hoping to create relationships with?
[IF]: Yes, DataBee is a channel-friendly organization, and we love working with our partners to help our customers achieve fast time-to-value. The only way to really grow and scale our business quickly throughout EMEA is to embrace the channel. I believe, however, that it is critical to focus on a few key strategic partners; it’s not quantity, it’s quality, and ensuring that there’s a good overlap of our target customers and the customers served by our partners.
I’ve already started discussions with a few strategic partners who have expertise in this space and see the value of what DataBee is bringing to market. The most critical element from my perspective is finding partners who can help deliver the professional services that will ensure a successful DataBee implementation and faster time-to-value.
[LC]: Who is resonating with the DataBee story and value proposition right now?
[IF]: Initially, anyone involved in security, risk, and compliance management. Our CCM solution is ideal for CISOs and GRC and compliance executives because of the real-time reporting it can provide, and it’s great for GRC analysts and audit teams who need that ‘single source of truth’ — connected and enriched data.
We have an aggressive product roadmap for the DataBee data fabric platform that we hope will make it very relevant and important to other cybersecurity, privacy, data management, and business intelligence roles early in 2024 and beyond. Within Comcast, the data fabric platform that DataBee is based on is being used by everyone from the Comcast CISO and CTO to GRC analysts, data scientists, data engineers, threat hunters, security analysts, and more.
[LC]: Where are you based, and what’s the easiest way for people to reach you?
[IF]: I’m based in London. It is best to reach me via email ivan_foreman@comcast.com or via LinkedIn.
Additional information
We believe that DataBee is truly unique, providing a comprehensive approach to bringing together security and enterprise IT data in a way that improves an organization’s security, risk, and compliance posture.
As Comcast CISO Noopur Davis has said, “Data is the currency of the 21st century — it helps you examine the past, react to the present, and predict the future.” It is a universal currency that all organizations should be able to use, whether for deep security insights and improved protection, or to propel the business forward with a better understanding of customer needs.
Learn how your organization can take full advantage of its security data by requesting a personalized demo of DataBee or reaching out to Ivan. He can’t wait to talk to you.
Putting your business data to work: threat hunting edition
Detectives, bounty hunters, investigative reporters, threat hunters. They all share something in common: When they’re hot on a scent, they’re going to follow it. In the world of cybersecurity, threat hunters can use artifacts left behind by a bad actor or even a general hunch to start an investigation. Threat hunting, as a practice, is a proactive approach to finding and isolating cyberthreats that evade conventional threat detection methods.
Today’s threat hunters are technologists. They are using an arsenal of tools and triaging alerts to pinpoint nefarious behaviors. However, technology can also be a barrier. Pivoting between tools, deciphering noisy datasets and duplicative fields, assessing true positives from alerts, and waiting to access cold data repositories can slow down hunts during critical events.
Threat hunters that I have worked with here at Comcast and at other organizations have shared that data, when enriched and connected, can be a crucial advantage. Data helps paint a picture about users, devices, and systems, and the expansive lens enables threat hunters to have a more accurate investigation and response plan. However, data is expensive to store long term, and large, disparate datasets can be overwhelming to sift through to find threat signals.
Threat hunting in the AI age
The broad adoption of artificial intelligence (AI) and machine learning (ML) opens the door to data-centric threat hunting, where a new generation of hunters can execute more comprehensive and investigative hunts based on the continuous, automated review of massive data. Threat hunters can collaborate with data engineers and analysts to build AI/ML models that can quickly and intelligently inspect millions of files and datasets with the accuracy, scale, and pace that manual efforts cannot match.
When companies are generating terabytes and petabytes of data every day, using AI/ML can help security teams:
Collect data from multiple security tools and aggregate it with non-security insights.
Scrutinize network traffic data and logs for indicators of compromise.
Detect unknown threats or stealthy attacks, including the exploitation of zero-day vulnerabilities and lateral movement activities.
Alert on multiple failed log-in attempts or brute force activity and identify unauthorized access.
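As a concrete illustration of the last capability, the sketch below shows what failed-login alerting might look like in code. This is a minimal, illustrative example: the record fields (`user`, `status`, `timestamp`) and the threshold and window values are assumptions, not any particular product’s schema or defaults.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def find_bruteforce_candidates(events, threshold=5, window=timedelta(minutes=10)):
    """Flag users with `threshold` or more failed logins inside a time window.

    `events` is a list of dicts with hypothetical fields:
    user (str), status ("success"/"failure"), timestamp (datetime).
    """
    failures = defaultdict(list)
    for event in events:
        if event["status"] == "failure":
            failures[event["user"]].append(event["timestamp"])

    flagged = set()
    for user, times in failures.items():
        times.sort()
        # Slide over sorted failure times; flag if any `threshold`
        # consecutive failures fall within the window.
        for i in range(len(times) - threshold + 1):
            if times[i + threshold - 1] - times[i] <= window:
                flagged.add(user)
                break
    return flagged
```

In production, an ML model would typically replace or augment the fixed threshold with a per-user behavioral baseline, but the core idea of windowed aggregation over normalized log records is the same.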
At Comcast, having clean, integrated data allows AI/ML to improve operational efficiency and fidelity. For the cybersecurity team, operationalizing AI/ML to scrub large datasets led to a 200% reduction in false positives; for the IT team, AI/ML highlighted single-use and point solutions that could be reduced or eliminated, leading to a $10 million cost avoidance.
Creating more effective threat hunting programs with your data
Threat hunters want access to data and logs — the more the merrier. This is because clever malware developers delete or modify artifacts, such as clearing the Windows Event Log or deleting files, to evade detection. Fortunately for us, threat hunters know packets don’t lie.
Analyzing all that data can quickly become a challenging task. DataBee® takes on the security data problem early in the data pipeline to give data engineers and security analysts a single source of truth with cleaner, enriched time-series datasets that can accelerate AI operations. This enables them to utilize their data to build AI/ML models that can not only automate and augment the review of data but also achieve:
Speed and scale: Security data from different tools that have duplicative information and no common schemas can now be analyzed quickly and at scale. DataBee parses and deduplicates multiple datasets before analysis. This gives data engineers clean data to build effective AI/ML models directly sourced from the business, increasing visibility and early detection across the threat landscape.
Business context: Threat hunting needs more than just security data. Security events without business context require hours of event triaging and prioritization. DataBee weaves security data with business context, including org chart data and asset owner details, so data engineers and threat hunters can create more accurate models and queries. For Comcast, employing this model has led to more informed decision-making and fewer false positives.
Data and cost optimization: The time between when a security event is detected and when a bad actor gains access to the environment may be days, months, or years. This makes data retention important — but expensive. Traditional analytical methods and SIEMs put tremendous pressure on CIO and CISO budgets. DataBee optimizes data, retaining its quality and integrity, so it can be stored long term and cost-effectively in a data lake. Data is highly accessible, allowing threat hunters to conduct multiple compute-intensive queries on demand that can better protect their organization.
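To make the parse-and-deduplicate idea above concrete, here is a minimal sketch of normalizing events from two tools into one common shape and dropping duplicates on a stable key. The tool names and field mappings are hypothetical illustrations, not DataBee’s actual schema or pipeline:

```python
def normalize(event, source):
    """Map a raw event from a hypothetical tool into one common shape."""
    if source == "tool_a":
        return {"time": event["ts"], "host": event["hostname"], "action": event["act"]}
    if source == "tool_b":
        return {"time": event["event_time"], "host": event["device"], "action": event["action"]}
    raise ValueError(f"unknown source: {source}")

def dedupe(events):
    """Keep the first occurrence of each (time, host, action) tuple."""
    seen, unique = set(), []
    for event in events:
        key = (event["time"], event["host"], event["action"])
        if key not in seen:
            seen.add(key)
            unique.append(event)
    return unique
```

Doing this early in the pipeline means every downstream consumer, whether an analyst query or an ML model, works from one clean dataset instead of reconciling tool-specific formats on every read.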
Bad actors are evolving. They’re using advanced methods and AI/ML to improve their success rates. But cybersecurity teams are smart. Advanced threat hunters are expanding outside of generic out-of-the-box detections and using AI/ML to improve their success rate and operational efficiency. Plus, using AI/ML effectively also saves money by enabling threat hunting teams to scale, doing more hunts within the same set of resources in the same time frame.
Take your interest into practice and download the data-centric threat hunting guide that was created through interviews and insights shared by Comcast’s cybersecurity team.
Trick or threat: 5 tips to discovering — and thwarting — lateral movement with data
We know ghouls and ghosts aren’t the only things keeping you up this spooky season. Bad actors are getting smarter with their attacks, using tactics and techniques that baffle even the most seasoned cyber professionals.
Discovering — and thwarting — lateral movement can be particularly difficult because established but disjointed security tools cannot always identify unwarranted access or privilege escalation. Many behaviors, like pivoting between computer systems, devices and applications, can appear to come from a legitimate user, allowing bad actors to go undetected in environments.
Threat hunters are critical to exposing lateral movement activities. But much like hunting monsters in the dark, threat hunting with manual detection processes against large datasets is a scary task — one that is time-consuming and tedious. With the help of advanced tools like AI and machine learning (ML), hunters can analyze massive amounts of data quickly to pick up the faintest signals of nefarious activities. Organizations that use some form of AI/ML in their practice have been shown to have data breach lifecycles up to 108 days shorter than those that do not.1
Best practices for using AI/ML to detect lateral movement
At the end of the day, your threat hunters can still have the advantage. No one knows your environment better than you do. By building AI/ML models fueled by data from your environment, your threat hunters can detect — and ultimately thwart — lateral movement before the bad actors escalate further in the cyber kill chain.
Models, processes, and procedures are often bespoke, but a few time-tested best practices can accelerate threat detections and response. For lateral movement, this might look like using data about your users, their assets, and their business tool access to identify activities that indicate data exfiltration and espionage. Let's take a look at these best practices in the context of a lateral movement use case:
Store as much relevant data as possible for as long as possible. Investigating and finding evidence of lateral movement may require analyzing months or years of data because adversaries can be present but undetected for days, months — or even years. Raw and processed data, which has been deduplicated and contextualized, should be stored in an accessible, cost-effective data storage repository for threat hunters to run their queries.
Create baselines based on business facts and historical actions. Data scientists who work with business data should collaborate with threat hunters to develop and define baselines based on the hypothesis for a given use case. Typically, this means describing the environment or situation ‘right now’ and searching for deviations to indicate malicious activity. Creating proper baselines requires expertise to know what attributes and data points to use and how to use them. Regarding lateral movement, baselines should be based on factual and historical data reflecting business goals, past scenarios, hypotheses or triggers, and infrastructure conditions. Baselines created without context are meaningless.
Use the data with the best tools. Even with AI/ML, human interaction and judgment are still required. But data analysis doesn’t happen by itself. Data is often compiled and aggregated in a data lake, only to be ignored or underutilized. SIEMs can provide short-term storage and analysis of security data, but when you are threat hunting, you need more than just noisy security data. To get the best of both worlds, data transformation needs to be performed early in the pipeline so threat hunters have clean, enriched data they can trust and tools they are familiar with.
Produce accurate, data-driven reports. Producing meaningful KPIs and reports helps executive sponsors find value in threat hunting activities and encourages ongoing program investment. KPIs also help validate the efficacy of hunts even if nothing is found. For example, investigating a suspected lateral movement breach may turn up no bad actor activity; proper reporting validates that the hunt was conducted soundly and was backed by the baselines and KPIs.
Allocate a budget. Threat hunting can be an expensive and active cyber defense activity. When a trail is hot, hunters want to follow it. It’s important to allocate a budget for data storage, internal and outsourced resources, and multiple, compute-intensive queries. Creating a budget ensures that security teams have the resources they need when they need them most. “After the fact” prioritization once a breach or lateral movement has been detected will not only leave the organization at risk but will likely be a slow process or yield inaccurate findings. So, planning, as with any cybersecurity initiative, pays off.
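To illustrate the baseline practice above, here is a minimal sketch of baseline-deviation detection for lateral movement: build a baseline of which hosts have historically talked to which, then flag recent connections to destinations never seen before. The connection-record fields (`src`, `dst`) are assumptions for illustration, and a real hunt would layer business context and time windows on top of this:

```python
def build_baseline(history):
    """Map each source host to the set of destinations it has contacted."""
    baseline = {}
    for conn in history:
        baseline.setdefault(conn["src"], set()).add(conn["dst"])
    return baseline

def flag_deviations(baseline, recent):
    """Return recent connections whose destination is new for that source.

    A burst of such deviations from one host can indicate pivoting.
    """
    return [
        conn for conn in recent
        if conn["dst"] not in baseline.get(conn["src"], set())
    ]
```

A new destination is not malicious by itself; the point of the practice is that deviations are only meaningful against a baseline built from factual, historical data, which the hunter then triages with business context.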
Expert Insights: GRC and the Role of Data
Since I joined Comcast Technology Solutions (CTS) and the DataBee® team back in late March, I’ve been awed and challenged by how many different roles the DataBee data fabric platform is relevant to. Is it for security analysts? Threat hunters? Data scientists? GRC professionals? Yes, yes, yes and yes… essentially, DataBee is relevant to anyone in an organization who needs data to understand, protect and evolve the business.
Let’s get to know some of the amazing people who rely on data every day to do their jobs and help their organization be successful!
Governance, Risk & Compliance (GRC)
The function of GRC is critical – when it is correctly implemented, it is a business enabler and revenue-enhancer; when it is poorly managed or even non-existent, it can be a business inhibitor, leaving an organization vulnerable to compliance violations and increased cyberthreats.
GRC programs are a set of policies and technologies that align IT, privacy, and cybersecurity strategies with business objectives.
I recently had the good fortune of meeting Rob Rose, a Manager on the Cybersecurity and Privacy Compliance team here at Comcast, and I enjoyed my conversation with him so much that I immediately hit him up to be one of our spotlight experts.
Working to achieve a more secure privacy and cybersecurity risk posture
[LC] Rob, tell us about what you do as Manager, Cybersecurity and Privacy Compliance here at Comcast.
[RR] At the highest level, my role at Comcast is to help the company achieve a more secure privacy and cybersecurity risk posture. This takes shape as leading an initiative called ‘Controls Compliance Framework’ (CCF) which is broken down into two sister programs: ‘Security Controls Framework’ (SCF) and ‘Privacy Controls Framework’ (PCF). The team I lead creates a continuous controls monitoring (CCM) product that business units across Comcast can use to monitor their adherence to privacy and cybersecurity-related controls.
Creating and maintaining this product requires collaboration with multiple teams across the company, starting with the process owners of each control activity that helps to mitigate risk (e.g., the Corporate User Access Review team):
Collaboration with process owners: My team first works with the process owners to understand how the process is designed and should operate, and what actions the various Business Units across Comcast are expected to complete.
Document requirements: We then learn from process owners how and where they store the data to support their control (e.g., Oracle Databases, ServiceNow, etc.) and from there we document requirements to bring to our development team.
Report on privacy and security posture: Once our development team has ingested all the disparate data from multiple process owners and developed our product based on the requirements we documented, we bring this product to Business Units, with the goal of providing them a single pane of glass view into their overall privacy and security posture.
GRC Challenges
[LC] What would you say are the 3 biggest challenges faced by GRC and compliance experts?
[RR] There are a few key challenges that we run into as a GRC function, and fortunately, compliance data leveraged in the CCF program can address them.
Clean Data – The first issue is the cleanliness and usability of the data. As a GRC function, we rely on process owners throughout the company to provide us with data. Frequently, however, we see issues with both the cleanliness and the ownership of the data provided to us. When we bring this data to Business Units, who work day in and day out with the assets they own, they often provide feedback that something is amiss in the data. This can be turned into a positive: we can bring that feedback back to the process owners, who can then clean up the data on their end, improving data quality for the entire company.
Awareness – Oftentimes, when we alert Business Units of compliance actions that need their attention, we discover that they were unaware of these requirements. This discovery prompts ‘knowledge transfers’ in which we explain both the need for the actions and their importance, which helps to increase the overall cybersecurity and privacy awareness of the Company.
Prioritization – Akin to the previous point, since Business Units are often not aware of the privacy and security related actions they need to take, they have not planned the resource capacity into their roadmaps to complete those activities. Helping to prioritize which actions are ‘must do’ right now, as opposed to which actions can wait, has been something we’ve been working on with our program. To help with this, we’ve been driving towards risk-based compliance, noting that while all systems and assets need to meet cybersecurity requirements, certain systems and assets are a higher priority based on the types of data they utilize.
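Risk-based prioritization like this can be sketched as a simple scoring pass. The data categories, weights, and asset fields below are illustrative assumptions for the sake of example, not Comcast's actual model:

```python
# Illustrative sketch of risk-based compliance prioritization.
# Categories, weights, and asset fields are hypothetical.

DATA_SENSITIVITY = {
    "customer_data": 3,      # highest priority: customer/personal data
    "proprietary_data": 2,   # internal company-confidential data
    "public_data": 1,        # lowest priority
}

def priority_score(asset: dict) -> int:
    """Weight each open compliance action by the sensitivity of the data the asset handles."""
    sensitivity = DATA_SENSITIVITY.get(asset["data_type"], 1)
    return sensitivity * len(asset["open_actions"])

def prioritize(assets: list[dict]) -> list[dict]:
    """Return assets ordered from 'must do now' to 'can wait'."""
    return sorted(assets, key=priority_score, reverse=True)

assets = [
    {"name": "billing-app", "data_type": "customer_data", "open_actions": ["UAR overdue"]},
    {"name": "wiki", "data_type": "public_data", "open_actions": ["patch", "scan"]},
]
ranked = prioritize(assets)
```

Even a coarse weighting like this gives Business Units an ordered worklist rather than an undifferentiated backlog.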
The rewards of the job
[LC] What are some of the most rewarding aspects of your job?
[RR] We’ve had some real success stories over the past few quarters where Business Units have gone from a non-compliant state to a highly compliant state. This has come from a combination of presenting Business Units with insights through data so they are aware of where they have gaps, and from helping them understand what actions they need to take. Seeing a Business Unit make this transition from non-compliant to compliant is incredibly rewarding, as we know that we’ve helped make the company more secure.
Collaboration
[LC] What other teams or roles do you interact with the most as you go about your job day-to-day?
[RR] As a GRC professional, we’re in a unique position where we interact with individuals at all levels of the business. This includes the first line of defense (Business Units), the second line of defense (process owners, Legal, other teams within Comcast Cybersecurity), and the third line of defense (Comcast Global Audit). We have the benefit of working with both very technical teams of developers and system architects, as well as with very process driven teams who define what requirements the company should be meeting. In addition, in our position, our team works with top level executives from a reporting standpoint, as well as the front line workers who are actually implementing changes to systems and applications to make the company more secure!
The role of data in GRC
[LC] How do you use data in your job, and what type of data do you rely on?
[RR] As a GRC professional, the ability to have clean data provided to us is the keystone to our success. The Controls Compliance Framework product my team and I work on is dependent on data coming in from disparate data sources so we can cleanse and aggregate it to provide meaningful insights to Business Units.
At the highest level, you can break down the types of data that we need into a few different categories:
Application data: What applications exist in the environment, who owns those applications, and what level of risk does the application present -- e.g., does the application use customer data, proprietary Comcast data, etc.? Does the application go through User Access Reviews on a set frequency, and has it been assessed to confirm it was developed in a secure way?
Infrastructure: What underlying assets or infrastructure support those applications? What servers are used to run the application? Are they in the cloud or on-prem? What operating system is the server running? Is the server hardened, scanned for vulnerabilities, and does the server have the necessary endpoint agents on it (e.g., EDR)?
Vendor information: What vendors do we engage with as an aspect of our business, what data do we share with the vendor, and what assessments have been completed to confirm the vendor will handle that data securely?
For each of these types of data, we also need data to support the completion of the processes around that data. Have all assessments been done on the application, asset, vendor, and do they meet all the required controls?
As you can imagine, these data sources are all in different locations. One of the key aspects of the program we run is to pull all of this data into one location and provide it in a single pane of glass view for Business Units, so they have one easy place to go to understand their risk posture.
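Pulling disparate sources into one view is, at its core, a join on a shared key. The sketch below assumes hypothetical record shapes and an `app_id` key purely for illustration:

```python
# Hypothetical sketch: merging application, infrastructure, and vendor
# records into one "single pane of glass" row per application.

applications = [{"app_id": "A1", "owner": "team-x", "uses_customer_data": True}]
infrastructure = [{"app_id": "A1", "server": "srv-01", "edr_installed": True}]
vendors = [{"app_id": "A1", "vendor": "AcmeCloud", "assessment_complete": False}]

def index_by(records: list[dict], key: str) -> dict:
    """Build a lookup table keyed on the shared identifier."""
    return {r[key]: r for r in records}

def single_pane(apps, infra, vend):
    """Merge each application's infrastructure and vendor context into one row."""
    infra_idx = index_by(infra, "app_id")
    vend_idx = index_by(vend, "app_id")
    view = []
    for app in apps:
        row = dict(app)
        row.update(infra_idx.get(app["app_id"], {}))
        row.update(vend_idx.get(app["app_id"], {}))
        view.append(row)
    return view

view = single_pane(applications, infrastructure, vendors)
```

In practice this join happens at data-warehouse scale, but the shape of the problem is the same: one row per application, enriched with every source that knows something about it.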
The GRC questions that data helps answer
[LC] Can you give us an example of the kinds of questions you’re looking to answer [or problems you’re looking to solve] with data?
[RR] The biggest question I’m looking to answer as a risk professional is what the overall risk posture of the company is. Do we have a lot of unmitigated risk that could expose us to issues, or are we generally covered? This question is most easily answered with the data that was described above (the role of data in GRC).
GRC vs. Compliance
[LC] How would you describe the difference between GRC and compliance? Or are they one-and-the-same?
[RR] The ‘C’ in ‘GRC’ stands for Compliance. Compliance is a huge piece of what I do as a GRC professional. Making sure that the company is complying with the necessary standards and policies (compliance) and lessening the exposure we have as a company and the impact of that exposure (risk) is pretty much what my entire day is filled with! We then present these insights to executives so they can be aware and take action to adhere to standards and policies as needed (governance).
A random data point about our GRC expert
[LC] Rob, if you could go back in time and meet any historical figure, who would it be and why?
[RR] Wow, this is a tough one – I think it could change day by day. I recently took a trip to Rome, and we visited the Sistine Chapel while we were there. I was in such awe of the work that Michelangelo did. I think I’d like to meet someone like Michelangelo, as the arts are an area that is so foreign to me and how I think. I’d love to pick his brain to find out if he knew the work he was creating would be revered and visited by millions of people for centuries to come, to understand his artistic thought process, and to learn more about his life.
Parting words of wisdom
[LC] Any other words of wisdom to share?
[RR] To my other GRC colleagues out there, I am sure you run into the same struggles of trying to get Business Units to comply with internal company policies and standards. The implementation of data-driven tools has made it significantly easier to assist BUs in becoming compliant. The feedback we’ve gotten is that utilizing data, and putting it in a simple and straightforward tool, has helped to ‘make compliance easy’. Let’s all keep striving to make it easy for people whose main job is not compliance to fit compliance into their work!
DataBee for GRC
As Rob mentions, the Comcast GRC team is charged with pulling all kinds of data – application, infrastructure and vendor data -- into one location, essentially providing a “single pane of glass view” to Business Units, making it easy for these Business Units to see and understand their risk posture. The GRC team uses the internally developed data fabric platform that DataBee is based on to do this, and its use has helped to drive those improved compliance rates that Rob mentioned, as well as other improvements in Comcast’s overall compliance and security posture. (An aside – I think that’s pretty cool.)
DataBee v1.5 was recently launched, and with it a continuous controls monitoring (CCM) capability that provides deep insights into the performance of controls across the organization, identifying control gaps and offering actionable remediation guidance. Check out the CCM Solution Brief to learn more.
Thanks to Rob for a great interview!
Terrestrial Distribution for MVPDs?
Allison Olien, Vice President and General Manager for Comcast Technology Solutions (CTS), is witnessing firsthand how the changes, challenges, and new opportunities for MVPDs and content providers are evolving the industry faster than ever. She took some time to reflect on how the 2020’s have impacted companies, and on how new technological approaches are opening new doors to growth and profitability.
The 2020s have already proven to be a decade of sweeping change for MVPDs and cable operators around the U.S., starting with the repurposing of C-band spectrum for 5G services. We’re three years in – what’s it like now?
(AO) Well, the C-band reallocation had a definite material impact for satellite-based services; I mean, how could it not? Many of them had to physically transition to a new satellite - for example, our services that had operated from the SES-11 satellite had to migrate to the SES-21 satellite in response. That said, it wasn’t the only thing changing for providers; it was more of a catalyst for a hard look at a) how technology was evolving, b) how device improvements, high-def video and subscription models were changing the competitive landscape, and c) how to compete more effectively in a more dynamic market.
Are these the reasons why CTS recently introduced a new terrestrial distribution model?
(AO) Precisely. MVPDs don’t operate in a bubble; they’ve paid attention to the ways the competitive landscape has changed – changes that have happened on both sides of the screen. For CTS, these are partners we’ve worked with for a long, long time. Satellite delivery is trusted and reliable, but it has limitations and takes a lot of equipment – and, most importantly, it used to be the only option for many of them. Managed Terrestrial Distribution, or MTD, replaces most (or all) satellite feeds to a cable plant with broadband IP networking. MTD uses a system’s existing internet connection and doesn’t require a dedicated point-to-point connection to any specific PoP. This not only liberates MVPDs from the limitations of a fixed transponder bandwidth, but also paves the way for the entire content-to-consumer pathway to evolve into full IP delivery.
So, the net result essentially gives these businesses a way to improve services and attract more subscribers in two ways – the ability to offer more services, but also more HD content?
(AO) Ultimately, it’s an opportunity for MVPDs to reimagine the way consumers can engage with their content and clear the way for an evolution to full IP delivery to customers. To your point, yes – with the managed terrestrial distribution we’ve rolled out, a distributor can effectively double the number of channels they offer and triple the amount of MPEG-4 HD content; but the actual hardware investment is drastically reduced as well. For our current customers, as well as other providers looking for a new technology foundation to grow on, we think it’s worth having a conversation about.
Click here to learn more about Managed Terrestrial Distribution.
Continue reading the rest of ICN Issue 10 - September 2023 here.
Making cybersecurity continuous controls monitoring (CCM) a reality with DataBee 1.5
Exciting innovations are a-buzz! DataBee® v1.5 is now generally available, featuring a host of new continuous controls monitoring (CCM) capabilities on top of the DataBee security data fabric that puts your data at the center for dynamic, detail-rich compliance metrics and reports.
As more businesses become digital-first, business leaders are leaning in and placing more importance on cyber-risk programs. In addition to pressure from regulators, internal auditors and KPIs are keeping security and risk management teams up at night. The lack of reporting capabilities that can show real-time compliance trends over time, on a consistent data set, has analysts scrambling to collect data and test controls that will only be evaluated in the latest audit, instead of focusing on sustainable programs and insights.
Putting continuous in continuous controls monitoring
DataBee 1.5 introduces a data-centric approach to continuous controls monitoring (CCM) for the security, risk, and compliance data fabric platform. By focusing upstream on the data pipeline, DataBee weaves together security data sources with asset owner details and organizational hierarchy information, breaking down data silos and adding valuable context to cyber-risk reports and metrics.
From executives to governance, risk, and compliance (GRC) analysts, DataBee delivers a dynamic and reliable single source of truth by connecting and enriching data insights to measure CCM outcomes. Comcast has experienced first-hand the security data fabric journey, and DataBee 1.5 brings to market the innovations from our internal tool – including feeds, dashboards, and visualizations – to your organization so you can scale your continuous controls services program.
In this example, Will Hollis, an Executive VP of ACME Studios, views the security posture of his organization using DataBee’s Executive KPI Dashboard.
Verifiable data trust
The robust platform features 14 pre-built CCM dashboards, aligned to the NIST Cybersecurity Framework, and the ability to self-define KPI values – or use DataBee-recommended values. Risk scores are populated using underlying data sources collected and enriched by DataBee. Users can see in detail where and how the data is used by hovering over a score. The completeness, accuracy, and timeliness of the dashboards build trust in CCM reporting and lead to accountability amongst business leaders. After all, data trust gives you wider adoption of your cyber-risk program throughout the business.
DataBee gives you insights into how the scores are provided
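To make the mechanics concrete, here is one plausible way a control score and a self-defined KPI target could interact. The formula, field names, and numbers are assumptions for illustration, not DataBee's actual scoring logic:

```python
# Hypothetical sketch: computing a control compliance score from
# underlying findings and comparing it to a self-defined KPI target.

def control_score(passed: int, total: int) -> float:
    """Percent of checked assets that pass the control."""
    return 0.0 if total == 0 else round(100 * passed / total, 1)

def kpi_status(score: float, target: float) -> str:
    """Flag whether the score meets the organization's chosen threshold."""
    return "meets KPI" if score >= target else "below KPI"

# e.g., an endpoint-agent coverage control: 182 of 200 servers pass
score = control_score(passed=182, total=200)
status = kpi_status(score, target=95.0)
```

Because the score is derived from the underlying asset records rather than a point-in-time spreadsheet, drilling from the number back to the individual failing assets is what makes the metric auditable.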
Operational efficiency
A benefit of having data at the center of your CCM program is that it streamlines engagement models with control owners. Previously, GRC teams had the tedious task of scanning a variety of data sources – nearly all of which they did not control – and having fragmented conversations with different stakeholders. With DataBee, instead of hearing from your GRC teams infrequently and only during urgent events, there is a continuous feedback loop built on quality data and actionable insights. Another benefit is the ability to measure the effectiveness of cyber-risk investments and programs.
Proactive risk management
The ever-evolving threat landscape and regulatory constraints are a nightmare to deal with for any GRC team at any scale. DataBee’s CCM capabilities deliver deeper insights about risks and inefficiencies, providing recommendations for resolution and hierarchy information to proactively reach out to control and asset owners. In the screenshot below, users can drill down to the granular details from their vulnerability management dashboards and find solution recommendations to resolve issues quickly. Teams throughout the business can focus on closing gaps instead of finding them, enabling the business to remain in compliance with internal and external requirements.
Drill into the vulnerability management details to find out how to resolve issues.
Get started with DataBee
Continuous controls monitoring is a game changer for security transformation and outcomes. DataBee, from Comcast Technology Solutions, is thrilled to deliver data-centric CCM capabilities that scale for businesses of all sizes. Watch our DataBee 1.5 announcement to hear from Nicole Bucala, Yasmine Abdillahi, and Erin Hamm about the product journey. Want to get started right away? Email us at CTS-Cyber@comcast.com and check out our AWS Marketplace Listing.
Elevating Ad Management: 5 Strategies for Success
A successful advertising campaign is more than the sum of its parts. That’s the goal, right? Campaigns need to justify themselves by delivering measurable results. That said, there are a lot of moving parts to a modern campaign, and they don’t all move at the same speed. Even here in the 2020s, there are still manual processes in some workflows that have not been kissed by innovation (yes, spreadsheets still exist). Coordination and control across a complex ad ecosystem can be achieved, but the speed and accuracy needed from today’s advertising operations require a centralized, unified, fully automated approach. Imagine your advertising operations as a terminal, from which you move your content to destinations all over the globe, often in real time. Is there an easier way to achieve a better, faster, smarter ad creative process at scale?
Let’s take a look at some of the key elements that will enable advertisers to uplevel ad management in the second half of this decade (and beyond).
#1: The unified ad platform “really ties the room together”
Advertising is a dynamic exercise where time is almost always of the essence. Efficiency in workflow management makes or breaks the success of a campaign. One of the key strategies for streamlining workflows at scale is the adoption of a unified platform that can act as a centralized hub.
With a centralized architecture, advertisers can oversee the entire campaign lifecycle without switching between different interfaces. From final media buys to traffic instruction creation and asset delivery, previously disjointed tasks become a single cohesive process. Every step benefits from the integration of tools and data within a single ecosystem.
A unified platform also brings teams into harmony. Collaboration among team members is more organic, drawing teams, departments, and stakeholders into greater alignment. When everyone can access the same resources and insights, real-time sharing minimizes miscommunication and reduces errors. Think of it as having your whole team playing on the same field — response to change is more agile, assets can be optimized on the fly, and more resources can be focused on campaign effectiveness instead of just “completion.”
#2: Safeguarding content: Automated usage rights integration
Images, music, and other creative assets bring the magic — but each asset requires careful consideration of usage rights and permissions. Violating these rights is a red flag that can lead to legal complications and reputational damage, not to mention costly fines. A better, faster, and smarter ad creative process at scale must incorporate automated and integrated usage rights data.
Incorporating usage rights information into the ad creative workflow helps to ensure that every piece of content used is compliant with legal and contractual obligations. Automated systems can cross-reference assets against a database of usage rights, preventing the inadvertent use of content that hasn't been properly licensed. This not only mitigates legal risks but also saves time by eliminating manual rights checks. Ultimately, usage rights management is essential; it helps to ensure efficiency and process/policy adherence, elevating the consistency — and results — of the creative process.
#3: Ensuring consistency across destinations at scale
With centralized command-and-control, advertisers can more effectively manage creative distribution across a staggering number of media destinations. One of the biggest benefits to this approach is that more focus can be applied to optimizing results and the refinement of campaign components to maximize impact at the screen level. Advertisers can improve outcomes and adjust objectives while still adhering to brand guidelines across destinations.
Speaking of brand guidelines, enter the automation of industry-standard identification. Automating the use of these unique identifiers streamlines tracking and distribution of creative assets. Standardized identifiers act as “digital passports” for creative elements, facilitating their seamless movement across platforms, regions, and languages. Advertisers can leverage this additional layer of automation to further reinforce data consistency and accuracy throughout the global delivery process. This consistency not only bolsters brand integrity but also expedites the localization of content, as each asset's specifications and requirements are readily accessible.
#4: Drawing a straighter, shorter line between ad creative and ROI
The goal “make the most of every ad dollar across every media experience” is a common one, whether you’re on the advertising side or you’re a media destination looking to attract more viewers and more ad revenue.
A platform that combines real-time creative conditioning and asset management with APIs to major ad decisioning engines offers content owners and distributors a path to addressability and targeting across TV, digital, and social channels. It also reduces the time required to place campaigns from days to minutes, so quality can be reestablished as job one.
For instance, when the in-house advertising team at Lowe’s, the home improvement chain, wanted to expand audience reach and reap the benefits of top-quality content across platforms and ad ecosystems, they knew they would have to find innovative ways to execute campaigns with increased efficiency across both spot creation and distribution.
#5: AdFusion: Solving advertising’s biggest challenges at scale
Achieving a better, faster, and smarter ad creative process at scale demands a paradigm shift. From streamlined workflows and automated usage rights management to the automation of standardized creative data through industry IDs, advertisers need a platform that does it all, empowering them to overcome the logistical and geographical complexities of global mobile media consumption.
This is the challenge that Comcast Technology Solutions' AdFusion™ was built to answer. AdFusion is a unified platform that centralizes data and automates processes upstream and downstream. The platform is the result of Comcast’s dedicated research and investment and ensures consistency and accuracy across the entire ad process, making it easier to find and place the right creative, ensure that usage rights are in compliance, and track campaigns across broadcast, digital, and radio.
Justin Morgan, the head of product for AdFusion, expressed his excitement about the platform's potential: "In an industry that's constantly accelerating, AdFusion provides a technology approach that evolves and scales to meet companies' needs. It saves time and resources while enhancing data accuracy, ultimately helping our clients achieve better results in their campaigns."
Discover how AdFusion can support you in your quest to make the ad lifecycle better, faster, smarter at scale here.
Comcast (DataBee) at Black Hat? Yes!
The DataBee® team can’t help but have a little fun with the fact that Comcast is not exactly one of the first companies you think of when you think “cybersecurity”. Comcast has attended Black Hat in the past, but this is the first time we are debuting a cybersecurity solution for large enterprises – the DataBee security data fabric platform, which is poised to transform the way enterprises currently collect, correlate and enrich security and compliance for the better.
This was the 26th year of Black Hat USA, but despite how many security vendors there are serving the market – a reported 3,500 in the US alone, 300 of which were on the show floor – “security data chaos” (a term we love to use because it’s such an accurate description) remains a very real and difficult problem. Our discussions with booth visitors validated that it’s still very labor- and cost-intensive to bring together the security data teams need to understand the threats that might be imminent or already wreaking havoc. When we told the DataBee story, there was a lot of head nodding.
From booth discussions to participation in a major industry announcement and the Dark Reading News Desk, the DataBee team took advantage of being at Black Hat to raise awareness of the security data problem and how we’re uniquely addressing it. A few highlights include:
The Open Cybersecurity Schema Framework (OCSF) announcement
On Tuesday, August 8, DataBee was included in the announcement, OCSF Celebrates First Anniversary with the Launch of a New Open Data Schema:
The Open Cybersecurity Schema Framework (OCSF), an open-source project established to remove security data silos and standardize event formats across vendors and applications, announced today the general availability of its vendor-agnostic security schema. OCSF delivers an open and extensible framework that organizations can integrate into any environment, application or solution to complement existing security standards and processes. Security solutions that utilize the OCSF schema produce data in the same consistent format, so security teams can save time and effort on normalizing the data and get to analyzing it sooner, accelerating time-to-detection.
OCSF is a schema that DataBee has standardized on to make data inherently more usable to a security analyst. It also enables out-of-the-box relationships and correlations within a customer’s preferred visualization tool, such as Power BI or Tableau. (For more on this, check out the DataBee product sheet.)
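To illustrate why a shared schema helps, here is a normalized login event loosely modeled on OCSF's Authentication class. The attribute names and values below are illustrative approximations; the published OCSF schema is the authoritative reference for the exact field definitions:

```python
# A failed-login event, loosely modeled on OCSF's Authentication class.
# Field names and numeric codes here are illustrative; consult the
# published OCSF schema for the authoritative definitions.

event = {
    "class_uid": 3002,          # Authentication class, in OCSF's numbering
    "activity_id": 1,           # Logon
    "time": 1691510400000,      # epoch milliseconds
    "severity_id": 1,           # Informational
    "status": "Failure",
    "user": {"name": "jdoe"},
    "src_endpoint": {"ip": "203.0.113.7"},
    "metadata": {"product": {"vendor_name": "ExampleIdP"}},
}

def is_failed_login(e: dict) -> bool:
    """Because every source emits the same shape, one check covers all vendors."""
    return e.get("class_uid") == 3002 and e.get("status") == "Failure"
```

The payoff is that a detection or dashboard written once against the common shape works for every upstream product, instead of needing a parser per vendor.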
Matt Tharp, who leads field architecture for DataBee, has contributed to the OCSF framework and was quoted in the announcement alongside leaders from Splunk, AWS and IBM, among others. Coverage of the announcement included this piece in Forbes.
Dark Reading News Desk
At the Dark Reading News Desk, Matt was joined by Noopur Davis, EVP and Chief Information Security & Product Privacy Officer at Comcast, for a great discussion with contributing editor Terry Sweeney on the topic of the big data challenge in security. Noopur and her cybersecurity team developed the security data fabric platform that DataBee is based on, and Matt—as an architect of DataBee—is part of the team bringing the commercial solution to market.
They discussed topics including: the challenge that big data creates for security teams; how Comcast has gone about addressing this issue; what a security data fabric is and how this approach differs from other solutions such as security information and event management (SIEM) systems; where and how a security data fabric and a data lake intersect; and what the customer response to DataBee has been so far.
The video of this discussion is a great way to understand DataBee’s origin story and the very real benefits that Comcast has gotten from building and using a security data fabric platform. Following in the internal solution’s footsteps, DataBee—unlike other security products—is designed to handle environments on the scale of large enterprises like Comcast.
DataBee in a minute and 39 seconds
The concept of a security data fabric platform is new and it’s a little on the complex side. So leading up to the Black Hat show, the DataBee team created an animated “explainer” video that brings to life what DataBee is, how it works and the key benefits it brings to different roles:
GRC teams can validate security controls and address non-compliance
Data teams can accelerate AI initiatives and unlock business insights
Security teams can quickly discover and stop threats
If you were at Black Hat and need a refresher, or if you’re learning about DataBee for the first time, this short video provides a great high-level introduction.
While DataBee has other use cases besides security, security is a market – and a critical capability – in need of a better way to manage all of the data that’s relevant to understanding an organization’s real security, risk and compliance posture.
Will we be back at Black Hat in 2024? You betcha.
In the meantime, learn more or schedule a customized demo of DataBee today.
Additional resources:
Read Noopur’s blog It’s Time to Bring Digital Transformation to Cybersecurity
See how DataBee can be used for continuous controls assurance
Check out the DataBee website
It’s Time to Bring Digital Transformation to Cybersecurity
Recently, I’ve been thinking a lot about cybersecurity in the age of digital transformation.
As enterprises have implemented digital transformation initiatives, one of the great—albeit challenging—outcomes has been data: tons and tons of data, often referred to as “big data”. Storing, managing and making all of that data accessible to many different users can be expensive and non-trivial, but all that big data is gold. I know first-hand how valuable big data is to understanding the health of the business; the insights provided by all this data enable an enterprise to continually adapt as needed to meet customer needs, remain competitive, and innovate.
Security has been left behind
Security has largely been left behind when it comes to digital transformation. While our counterparts in other areas of the business are using data lakes and mining rich data sets for actionable intelligence, security leaders and teams are still having to work way too hard to piece together a comprehensive view of threats across the organization. Data is the currency of the 21st century – it helps you examine the past, react to the present, and predict the future. Yet too many security products are producing too much security data in silos; it’s difficult, at best, to bring all of this data together for a unified view of what’s really happening. Because of the constantly changing threat landscape—exacerbated by everything from the global pandemic to the Russia/Ukraine war to new technology developments such as generative AI (which can also be used for good)—new security tools and capabilities to address the latest threats just keep getting added to the mix, like band-aids on new wounds.
If you do a search for “the average number of security products in the enterprise security stack”, you’ll get answers that range from 45, to 76, to 130 – the number is large and the tools are many. (Tempting as it is, I won’t share how many externally developed security tools we’re using in the Comcast environment.) There are products for data protection, risk and compliance, identity management, application security, security operations, network, endpoint and data center security, cloud security, IoT and more. (While a few years old, the Optiv cybersecurity technology map provides a glimpse at how big and daunting this space is.) A security information and event management (SIEM) solution can help by collecting and analyzing the log and event data generated by many of these security tools. SIEMs are wonderful and provide essential functions, but they are expensive, not ideal for simultaneous, parallel compute, and do not really “set the data free” for long term storage, elastic expansion, and use by multiple personas.
This is the age of digital transformation for cybersecurity
“Digitalisation is not, as is commonly suggested, simply the implementation of more technology systems. A genuine digital transformation project involves fundamentally rethinking business models and processes, rather than tinkering with or enhancing traditional methods.” (ZDNet)
No more security data silos… it’s time for us to apply the tenets of digital transformation to security. We tackled that challenge here at Comcast and the results have been impressive. With a workforce of over 150,000, tens of millions of customers, and critical infrastructure at scale, we have a lot to secure. We also have millions of sensors deployed through our ecosystem, providing potentially rich insights. As digital transformation demands, we took a fresh look at our cybersecurity program and came up with a new approach to consolidating, analyzing and managing our security data that has resulted in millions of dollars in cost savings and, even more importantly, the ability to bring together vast amounts of clean, actionable and easy-to-use security data. And not just security data – we enrich security data with lots of other enterprise data to enable even deeper insights.
Applying the data fabric approach to security
How did we do it? We built on the idea of a data fabric – “an emerging data management design for attaining flexible, reusable and augmented data integration pipelines, services and semantics.” The outcome is a cloud-native security data fabric that has integrated security data from across our security stack and millions of sensors, enriched by enterprise data, enabling us to cost-effectively store over 10 petabytes of data with hot retention for over a year and allowing us to provide a ‘single source of truth’ to all of the functions that need access to this data for security, risk and compliance management.
“By 2024, data fabric deployments will quadruple efficiency in data utilization, while cutting human-driven data management tasks in half.”
Source: Gartner® e-book, Understand the Role of Data Fabric, Guide 4 of 5, 2022
Comcast’s security data fabric solution ingests data from multiple feeds, then aggregates, compresses, standardizes, enriches, correlates, and normalizes that data before transferring a full time-series dataset to our security data lake built on Snowflake. Once that enriched data is available, the use cases are endless: continuous controls assurance, threat hunting, new detections, understanding and baselining behaviors, useful risk models, asset discovery, AI/ML models, and much more. The questions we dare to ask ourselves become more audacious each day.
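As a rough illustration of the kind of transformation such a pipeline performs, here is a minimal, hypothetical sketch in Python. The field names, event shapes, and enrichment lookup are invented for the example; they do not reflect DataBee's actual schema or implementation.

```python
# Minimal, hypothetical sketch of a fabric-style pipeline stage: events
# from different tools are normalized to a common schema and enriched
# with business context before loading into a data lake.
# All field names and the asset lookup below are illustrative only.

ASSET_OWNERS = {"10.0.0.5": "payments-team", "10.0.0.9": "hr-team"}

def normalize(raw: dict) -> dict:
    """Map a tool-specific event onto one common, queryable schema."""
    return {
        "timestamp": raw.get("ts") or raw.get("time"),
        "src_ip": raw.get("source_ip") or raw.get("src"),
        "event_type": (raw.get("type") or "unknown").lower(),
    }

def enrich(event: dict) -> dict:
    """Join the event with enterprise context (here, asset ownership)."""
    event["owner"] = ASSET_OWNERS.get(event["src_ip"], "unassigned")
    return event

raw_events = [
    {"ts": "2023-04-26T08:30:00Z", "source_ip": "10.0.0.5", "type": "LOGIN_FAIL"},
    {"time": "2023-04-26T08:31:00Z", "src": "10.0.0.7", "type": "PORT_SCAN"},
]

dataset = [enrich(normalize(e)) for e in raw_events]
print(dataset[0]["owner"])  # payments-team
```

Once every feed lands in one consistent shape like this, the downstream consumers (SIEM, BI tools, notebooks, ML models) can all query the same dataset instead of each re-parsing raw tool output.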
My dream was to have vast amounts of relevant security data easily actionable within minutes and hours—not days or weeks—and we’ve achieved that using a security data fabric. We’re able to do continual “health checks” on our business and security posture and adapt quickly as threats and business conditions change. Security is a journey, and we are always looking for ways to improve – our security data fabric helps us in that journey. And in doing so, it has made us a part of the digital transformation journey of our entire corporation.
Fast-track the transformation of your security program
The security data fabric that has proven so beneficial to Comcast’s cybersecurity program is being commercialized and offered to large enterprises in a solution we call DataBee®, available through Comcast Technology Solutions. If you’re interested in taking a fresh look at your cybersecurity program with an eye towards digital transformation, check out DataBee and consider using a security data fabric to deliver the big data insights you need to stay ahead of the ever-evolving threat and compliance landscape.
GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.
DataBee is growing fast – join us!
Let’s get right to the point: we’re growing fast and looking for great minds to come with us.
In April, CTS launched DataBee® – Comcast’s innovative cloud-native security, risk and compliance data fabric platform. The original concept of DataBee was designed by Comcast’s CISO organization to gain more security insights using data generated throughout the business and validate the efficacy of our security programs. And we’re excited to share the benefits of DataBee with other organizations. As Nicole Bucala, our VP and GM of Cybersecurity, said in this blog, DataBee “is solving a problem I’d seen vendors try to address, unsuccessfully, for years: the desire to centralize security, compliance, and business data neatly, sorted and combined elegantly in a single place, from which insights could be derived and actions to be taken.”
Since launch, not only has DataBee gotten a lot of buzz (pun intended), but we continue to identify more use cases that can benefit from it. As a result, Comcast is investing heavily into the future of DataBee, and we’re inviting you to explore a place on our expanding team.
Comcast: a career – and company – you can be proud of
Comcast Technology Solutions is a fantastic place for bright minds and forward-thinkers looking to innovate and evolve with a company they can be proud of. As part of Comcast NBCUniversal, you’ll be joining one of the strongest, most forward-thinking companies in the world, with a multitude of ways to collaborate and evolve your own career path. We’re not just resting on our laurels as a Fortune 40 company – Comcast NBCUniversal has a proud heritage of employee satisfaction, environmental action, and community support:
93% of our employees rate us as a Great Place to Work.
2023 is our tenth consecutive year on the Points of Light Civic 50 list of community-minded companies.
DiversityInc. has recognized us as a Top 20 company for inclusion.
LinkedIn ranks us as #13 in their list of top companies to work for.
Open positions: just a start
Creative thinkers and great “people people” are expanding DataBee’s reach across complex industries that need a better way to not only protect themselves, but to do more with their data. Our engineering and sales teams are thriving with professionals who are building the future of data.
Click here to check out our current open positions.
Strategic Account Executive (Virtual)
Cybersecurity Field GRC Architect (Colorado, Virtual)
Cybersecurity Software Developer (Virtual)
Principal DevOps Engineer (Virtual)
Principal DevOps Engineer (Virtual)
UI Full Stack Developer (Virtual)
UI Full Stack Developer (Virtual)
Cybersecurity Platform Engineer (Virtual)
Sr. Manager Development Operations (Virtual)
Data Solutions Engineer (Virtual)
Cybersecurity Principal SaaS Architect (Virtual)
Cybersecurity Data Scientist Manager (Virtual)
Cybersecurity Data Analysis Manager (Virtual)
Cybersecurity Dataflow Software Engineering Manager (Virtual)
Senior Director Product Manager, Data Fabric Platform (Virtual)
Director Product Manager, Data Fabric (Virtual)
Please don’t hesitate to reach out to me personally on LinkedIn – we can talk about DataBee’s plans, your career goals, and where we might find a place to work together. There’s a lot more to come; DataBee is just starting to take flight.
Introducing DataBee: Sweetening the Security, Risk and Compliance Challenges of the Large Enterprise
Today, the Comcast Technology Solutions (CTS) cybersecurity business unit announced DataBee®, a cloud native security, risk and compliance data fabric platform. DataBee marks the first “home grown” product from this business unit and – like the other great products and platforms offered across the CTS suite – brings to market a solution originally developed for Comcast’s own use.
A security data fabric built by security professionals for security professionals
At its essence, DataBee weaves together disparate security data from across your technology stack into a single fabric where it is standardized, sharable, and searchable for analyses, monitoring, and reporting at scale. While the concept of a data fabric has been around for some time, you may not have yet come across a “security data fabric.” That’s because it’s time to bring security into the data fabric equation.
DataBee was inspired by Comcast’s internal security and compliance teams. The proliferation of cybersecurity tools, each generating voluminous amounts of data, made it difficult to combine that data into a unified view, creating silos that were costly to store and analyze. In much the same way a data fabric is used for streamlining access to and sharing data in distributed data environments, the DataBee security data fabric combines data sources, data sets, and controls from various security tools to bring security data into the organization’s global data strategy.
After data is taken in from all the various feeds, DataBee aggregates, compresses, standardizes, enriches, correlates and normalizes it before transferring a full historical, time-series dataset to a data lake where data is stored. Enter Snowflake…
DataBee delivers a security data fabric for customers on the Snowflake Data Cloud
With Comcast Technology Solutions’ launch of DataBee, we’re proud to announce a strategic partnership with Snowflake, the Data Cloud company. DataBee is integrated with Snowflake, enabling customers to quickly and easily connect DataBee to their Snowflake instance where data is stored and processed.
The unique architecture of the Snowflake platform separates “compute” from “storage”, enabling organizations to scale up or down as needed, and store and analyze large volumes of data in a cost-effective way. This not only reduces costs but also provides flexibility, speed, and scalability, making it an ideal choice for storing security data.
After DataBee parses, flattens, and normalizes data for analysis, Snowflake’s platform is able to store substantial volumes of data for an extended period of time—historically a big challenge for cybersecurity solution providers—while driving down costs and maintaining high performance. These robust analytics enable teams across the organization to leverage the same dataset for high-fidelity analysis, decisioning, response, and assurance outcomes without worrying about retention limits.
Normalized and enriched data from Snowflake can be exported into a customer’s business intelligence (BI) tool such as Tableau or PowerBI, generating more actionable reporting and metrics. Threat hunters also experience enhanced capabilities by using the same, clean data with tools of their choice, such as Jupyter Notebooks, enabling them to identify real threats faster as they conduct their investigation across large-scale datasets. Further, the enriched data from DataBee can be joined with additional datasets from the Snowflake Marketplace to derive additional insights.
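To make the notebook-based threat-hunting workflow concrete, here is a small, assumed example of the kind of query a hunter might run in Jupyter against a normalized dataset using pandas. The column names, data, and threshold are invented for illustration, not DataBee's actual data model.

```python
# Illustrative threat hunt over a small, already-normalized dataset,
# as a hunter might run it in a notebook. Columns and the threshold
# of 3 failed logins are assumptions made for this sketch.
import pandas as pd

events = pd.DataFrame({
    "user":  ["alice", "alice", "bob", "alice", "carol"],
    "event": ["login_fail", "login_fail", "login_ok",
              "login_fail", "login_ok"],
})

# Hunt: which users have 3 or more failed logins in this window?
fails = events[events["event"] == "login_fail"]
per_user_fails = fails.groupby("user").size()
suspects = per_user_fails[per_user_fails >= 3].index.tolist()
print(suspects)  # ['alice']
```

Because every hunt runs against the same clean, shared dataset, results from one analyst's notebook line up with what the SOC and compliance teams see in their own tools.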
DataBee provides security, risk and compliance capabilities for customers looking to create a security data lake strategy with their cloud data platform.
Cloud-native security and compliance data fabric at scale
Enabling a unified global data strategy with DataBee
DataBee combines the business context needed by security, risk and compliance teams to protect an organization’s people and assets. These teams include threat hunters, data scientists, security operations center (SOC) analysts, compliance and audit specialists, and incident responders. This unified view of critical security data with business context enables people in these roles to rapidly identify real threats and manage compliance.
Some use cases include:
Compliance: For continuous controls assurance for security controls such as Endpoint Detection and Response (EDR) coverage, asset management, vulnerability management, and more. DataBee provides near real-time visibility into an organization’s compliance and risk posture.
Threat Hunting: Designed for faster time-to-detection by enabling threat hunters to conduct automated and deeper searches with the ability to run multiple hunts at once.
Data Modeling: For supporting and building machine learning models. DataBee provides threat detection teams with time-series analytics to create machine-learning based detection.
SIEM Decoupling: Separate the storage of data that typically goes into a SIEM solution from your analytical layer. Cleansing data upstream reduces SIEM costs and yields highly performant analysis.
Behavior Baselining with Anomaly Detection: With your data in a clean, sharable, and usable format, security teams can easily understand user and device behavior and rapidly detect and act on any anomalies.
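The behavior-baselining idea in the last use case can be sketched very simply: learn what "normal" looks like from historical data, then flag observations that fall far outside it. The toy data, the z-score approach, and the 3-sigma threshold below are illustrative assumptions, not a description of how DataBee implements anomaly detection.

```python
# A minimal behavior-baselining sketch: learn a baseline of daily
# login counts, then flag counts more than 3 standard deviations
# from the mean. Data and threshold are illustrative assumptions.
import statistics

baseline_logins = [4, 5, 3, 6, 4, 5, 4, 5, 3, 4]  # typical days
mean = statistics.mean(baseline_logins)
stdev = statistics.stdev(baseline_logins)

def is_anomalous(count: int, sigmas: float = 3.0) -> bool:
    """Flag a count that sits far outside the learned baseline."""
    return abs(count - mean) > sigmas * stdev

print(is_anomalous(5))   # False: within the normal range
print(is_anomalous(40))  # True: a large, likely anomalous spike
```

Real baselining would be per-user and per-device over many signals, but the principle is the same: clean, consistent historical data is what makes the baseline, and therefore the anomaly detection, trustworthy.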
The real-world benefits of a security data fabric
The security data fabric architecture built and implemented by Comcast’s security team has yielded impressive results for the broader security, risk and compliance teams across the organization. In our own use we saw:
Daily data throughput reductions in our SIEM resulting in a 30% decrease in the cost of our security operations
3x faster threat detection
35% noise reduction in the data sets users work with
Faster compliance answers as a result of streamlined compliance reporting and automated queries
These results validate the very positive impact that bridging the worlds of data and security can have on an organization, and that we want other enterprises to benefit from through DataBee. When organizations have clean, sharable data to leverage that adds business context to security events, security teams can identify and detect real threats quickly and compliance teams can validate and achieve continuous compliance assurance while reducing costs for data storage and SIEM throughput. By bringing data fabric to the enterprise security tool chest, DataBee improves their security, risk and compliance posture.
Indeed, security is now all about the data. Businesses have made significant investments in their security teams and the solutions they use to protect the business. However, if all of these tools are working in silos and independent of the larger business context, they will still be inadequate at detecting and protecting an organization from cyberthreats.
By bringing security under their global data strategy, organizations will have more actionable insights, reduced false positive findings, the ability to conduct threat hunting across large-scale data sets, and achieve near real-time visibility into their compliance and risk posture.
Meet DataBee at RSA
The DataBee team will be hosting exclusive events and meetings during the RSA Conference in San Francisco. Check out our itinerary:
A Recipe for Security Data Lake Success, a breakfast event on April 26, 2023 at 8:30am
Request a meeting with the DataBee executive team
Learn more about DataBee and download our data sheet
The Future of Cybersecurity Brought Me to Comcast, a blog by Nicole Bucala, VP & GM, Cybersecurity Suite
The Future of Cybersecurity Brought Me to Comcast
Comcast Technology Solutions is proud to announce that Nicole Bucala has been brought on board to lead our new Cybersecurity business unit as its Vice President and General Manager. We asked Nicole to share the reasons why, as a cybersecurity leader, she made the choice to join us – and to set the stage as we bring Comcast’s scale and innovation to bear for the security needs of industries around the world.
When I was approached to join Comcast last summer to lead its new SaaS enterprise cybersecurity business unit, I was both skeptical and intrigued. Skeptical for the obvious reasons: there aren’t many stellar success stories about a diversified global conglomerate getting into enterprise cybersecurity, which is a complex and challenging market. Yet at the same time, Comcast was approaching its foray into enterprise cybersecurity in a way I hadn’t seen anyone try before. Comcast was, in fact, choosing to bring its own chief information security officer’s (CISO) inventions to market. This meant that Comcast was commercializing proven, accepted product concepts in use by experts, internally, and at scale.
Why did this scream out to me as powerful? Well, for those of us who’ve sold security to Comcast before, we know their security organization is staunch. The stakes are high when trying to secure critical infrastructure at scale. Staffed with over a thousand security professionals, Comcast builds many of its own security technologies to cater to its size and scale, as well as to address the advanced threats and increasing regulatory scrutiny it must withstand. In addition, Comcast is a Fortune 30, highly profitable company that has a substantial, wisely allocated security investment profile. So, anything built internally has to improve security efficacy, work at Comcast scale AND be cost-effective enough to save the organization money overall – a key value proposition that can immediately differentiate a commercial security offering.
At the time, I was employed at Zscaler, in charge of building innovative partnerships with the largest global conglomerates, which entailed designing and launching Zscaler’s zero trust innovations to the forefront of the next major adjacencies: 5G, operational technology (OT), IoT, and application security. I was working at the fastest growing public company in cyber, leading a phenomenal team, working for incredibly inspiring executives, and I had zero intention of leaving. Yet, to the surprise (and perhaps caution) of many around me, I made the leap, and I don’t regret it.
Here are five reasons why Comcast is poised to win in its new adventure, and I consider myself fortunate to be a leader of this amazing team:
The solutions we are commercializing are imagined, designed, built and used by Comcast itself. I have been surprised to learn that this translates to a solution that both saves an organization money, and can be sold in a differentiated manner. Comcast is a financially driven company with a high bar for the effectiveness of its technologies; so for any internally built solution to survive, it has to be efficient and cost effective at scale. In addition, from a commercialization perspective I’ve found that practitioner-to-practitioner conversations are naturally laden with inherent trust. When we are talking with the CISO of another Fortune 100 company about our solution, we are able to get to a much more ‘real’ level of talk, faster, when compared generally to customer calls I attended as an employee of a pure-play cybersecurity vendor. There’s a simple reason for this inherent trust: When you’re a practitioner looking to help other practitioners, you have a grounded level of understanding in the pros and cons of the solution you’re discussing. You know the essence of the pain points it addresses. You know, with precision, the outcomes you saw. And yes, you have experienced issues with the solution, because nothing is perfect – and you have resolutions to them that worked. What this means, is that other practitioners can trust that your knowledge of the solution is accurate and pertinent.
Comcast’s solution is a novel architectural approach that solves a long-standing pain point for security and compliance teams. The proposed solution is solving a problem I’d seen vendors try to address, unsuccessfully, for years: the desire to centralize security, compliance, and business data neatly, sorted and combined elegantly in a single place, from which insights could be derived and actions to be taken. Back in 2019, RSA Security’s annual industry conference had a theme of Business-Driven Security. That was also the first year that Extended Detection and Response (XDR) became a “real thing.” Vendors had dreams of combining enterprise and security data and controls to create a single source of truth and single pane of glass for investigations and compliance assurance, replete with business-oriented risk analytics for C-suites and boards. It’s now 2022, and XDR still hasn’t caught on the way practitioners hoped. I had seen other companies try to create similar solutions to the problem but they were unable to identify an architecture that didn’t skyrocket the storage costs, or were unable to make the product reliably scale for Fortune 50 accounts, or the system was “closed” such that only certain security personas could use it in prescribed ways. And so, I was delighted to see that Comcast had figured out an elegant new scalable, open architecture that saved money, while also being built on all the latest cloud-based tech.
This cloud-native platform was a new class of tech: a security, risk and compliance data fabric platform. It sits between a customer’s data sources, data lake, and analytical tools, achieving a true decoupling of security & compliance data ingest, from data storage, from the analytical layer. It ingests and transforms data such that a compressed, normalized, enriched, and time-series dataset is stored in a single data lake, in a way that reduces your SIEM costs by at least 30% and yields detections 3x faster. With approximately 100 data feeds from the top security and compliance tools today, protecting an enterprise with a ~150K workforce and millions of endpoints while saving $15M+ in costs, the Comcast solution was proven to have profound impact in several ways. For a company with a huge amount of data to contend with, this single invention has changed how Comcast does security, for the better.
Comcast offers a proven model for commercializing in-house inventions. The new cybersecurity business unit is housed in Comcast Technology Solutions (CTS), the division within Comcast that takes internally developed technology and makes it available to other enterprises. CTS has successfully commercialized several other internal inventions, including those in content and streaming, advertising, communications infrastructure, and other technology. CTS offers its business units much flexibility and autonomy, which means that as the leader of the cybersecurity BU, I’d have the ability to set up new sales channels, billing and payment methods, and bring on new types of roles and talent.
Comcast offers excellent security career development. Believe it or not, I considered the chance to work with a large team of security practitioners to be a rare learning opportunity for someone whose career was largely on the commercial security vendor side. For years, I’d worked at security vendors who’d revered CISOs, trying to understand their motives, desires, anxieties and doubts… but never had I worked directly with one or one’s team closely over an extended period of time. I also had hired a few leaders in the past who’d worked as SOC analysts early in their careers, and saw they had a really grounded orientation around security products that everyone else on the security vendor side seemed to lack. By being embedded with practitioners, you simply get a better innate sense of the real problems, and can discern which security innovations will provide lasting value. It eliminates the element of guesswork that, no matter how many customer conversations you have, is otherwise unavoidably part of the product development process at traditional security vendors, unless you are also a stereotypical customer yourself.
Passion, and the benefit of being naive. The employees at Comcast who are working on commercializing this offering are without a doubt, by and large, some of the most passionate people I have worked with in my career. There is often a stereotype that employees of big companies work 9-5, but I have found the opposite to be true at Comcast: everyone is routinely working at all hours of the day to make this program a success. I can’t remember who said it, but there’s a famous saying out there that passionate people almost always succeed at their mission. They almost always achieve more than someone who lacks passion. Furthermore, the fact that Comcast is newish to selling SaaS security solutions to large enterprises is, in many ways, a blessing. The battle scars of “experience” can sometimes be a negative: they can cloud judgment, cause risk intolerance, and dampen innovation. Grit, determination, and a willingness to learn can almost always make up for a lack of experience.
Over the next year, we have set some really exciting and daunting goals. We are launching a Generally Available first version of DataBee®, along with iterative releases every few months. We are going to build our brand in enterprise security. We are going to sign some fantastic strategic design partners. We will build relationships with strategic technology and go-to-market partners. I’ll be building out my team of product development, marketing, sales, business development and operations professionals, and I would be very excited to bring along some of the top talent in cyber. Feel free to contact me if you want to learn more about how to become part of the amazing team here at Comcast!
Upcoming virtual event: Top 5 Predictions For the Future of Security Data Lakes Webinar
In the past year, security teams have had to endure many new cybersecurity challenges—from managing hybrid work environments to sustaining major ransomware attacks, catastrophic vulnerabilities, and supply chain risks. As an industry, we must take a step back and look at these challenges from a different angle. We should rethink the technologies and data architectures that are in place today to understand why they are no longer serving their purpose.
Register here.
Interested in joining the CTS Cybersecurity team? We are hiring for the following roles.
United States Based Positions
India Based Positions
Sr Manager, Marketing (Brand & Thought Leadership)
QA Manager
BluVector (Federal Government) ATD/ATH Customer Support & Professional Services Lead
Software Development Manager:
Manager, Software Development
Manager, Software Development
Development Engineer 4:
Development Engineer 4
Development Engineer 4
Development Engineer 4
Development Engineer 4
Development Engineer 3:
Development Engineer 3
Development Engineer 3
Development Engineer 3
Development Engineer 3
Development Engineer 3
Development Engineer 3
Development Engineer 3
SDET Engineer 3:
Quality & Automation Engineer 3
Quality & Automation Engineer 3
Quality & Automation Engineer 3
Quality & Automation Engineer 3
Learn more about the DataBee Hive.
A new ground game for broadcasters?
The 2020s have been a decade of big changes, but for broadcasters and MVPDs it’s already been a time of particular transformation. Last year saw the reallocation of C-band spectrum for 5G services and the subsequent migration of providers, either to a new satellite, or to a reimagined approach to delivery – namely moving away from satellite delivery entirely and into a terrestrial-based delivery model that can pave the way for full IP-based delivery. Now that the 5G spectrum transition is, for the most part, in the rear-view mirror, is this “ground game” approach still a viable path forward for MVPDs?
Terrestrial delivery is something that we’ve been talking about actively since well before the C-band transition (you can read more about it here). “The ground-based option is something we developed in response to the FCC mandate that affected so many of our customers,” explains James Toburen, Affiliate Sales executive for Comcast Technology Solutions. “At first, impacted MVPDs were really in a situation where they absolutely had to make a decision on what to do; but now it’s a decision that’s grounded more firmly in the benefits to the business, which are numerous.”
From “doing dishes” to a path towards full IP delivery
We get into the weeds on the technical aspects of terrestrial distribution in this blog, but as a quick recap:
Terrestrial delivery replaces the satellite feeds with broadband IP networking, typically fiber, at the cable plant.
These feeds are still aggregated and assembled into experiences that are delivered to customers.
Although technology to pre-compress and bundle satellite signals has lowered infrastructure costs from the “dish farms” of the past, fixed bandwidth issues still exist that make it challenging for providers to offer new services; plus, satellite is a uni-directional delivery method that doesn’t allow for the interactive experiences that customers expect and enjoy today.
With a move to internet delivery, MVPDs gain more scalability and flexibility, and the ability to offer more dynamic services.
“With Managed Terrestrial Distribution, it’s not a completely new delivery model; programming can still be delivered to customers via QAM,” Toburen continues. “But our terrestrial distribution paves the way for transitioning away from managing that satellite dish-based headend. With IP-delivered video, you’re gaining not just a lot of flexibility and redundancy, but you’re delivering even more content that folks want.”
Five reasons to transition to terrestrial: a deeper dive
MVPDs and operators are adopting Managed Terrestrial Distribution as a foundational step in their digital transformation; it’s a transition that not only lowers risk but also leads to new dynamic content experiences for an evolving, more competitive marketplace.
For more information beyond the blog links above, read our guide: Five Reasons to Transition to Terrestrial Distribution. It outlines the immediate practical benefits to reimagining your delivery model – gaining quality, efficiency, and reliability – while also providing insights into the long-term benefits to services, business plans, and ultimately your bottom line.
Download the guide here.
Global Advertising: the connected, collaborative ecosystem
In this business, thoughts about “agility,” “time-to-market,” and “personalization” are used every day, but the rapid pace of upheaval in 2020 and 2021, whether it be from pandemic or social unrest, gave advertisers real-world opportunities to test their mettle; to see if their processes and technologies could:
shift at the speed of their audiences,
deliver ads on-time to every screen, and
collaborate with providers and emerging media platforms in order to inform data-driven content decisions more effectively.
If all the world’s a stage, it’s incumbent on advertisers and agencies to be positioned for success before the curtain lifts.
“The international advertising ecosystem has really evolved dramatically in a very short time,” says Simon Morris, Head of Enterprise Sales for CTS’ Advertising Suite. “On one hand, from a content standpoint you’ve got to be prepared with media and messaging for different consumers, and be able to execute or pivot at a moment’s notice. On the other hand, the complexity of executing accurately, from country to country, is an enormous undertaking. This is precisely why we’ve developed such solid foundational partnerships with companies like Peach, Innovid, and The Team Companies.”
Peach: Connected data and local support to “de-frag” the ad ecosystem
If you’ve ever de-fragmented your computer’s hard drive, then you’ve cleaned up your data so the entire system can run more smoothly. Peach is a lynchpin in a comprehensive global ad operation, currently enjoying its 25th year serving advertisers, agencies, broadcasters, and publishers by bridging the data divide across a global customer base. Together, we offer a full-service solution to deliver spots to viewers in more than 100 countries. This partnership results in a comprehensive video ad management solution with a global network of local experts to provide reliable, trusted, centralized control of increasingly complex ad operations.
The combination of Peach’s local expertise and global capabilities deliver benefits that extend all the way into reporting that capitalizes on the connected data to bring back insights across the global ecosystem. Their agnostic API integrations across ad ecosystems, and collaboration with companies like Innovid, another one of our partners and one of the world’s largest independent ad platforms, elevate the benefits even further:
Innovid: Innovations for every ad experience
Earlier this year, Peach announced their platform integration with Innovid. This successful integration leverages the Innovid Creative Bridge to make moving content between platforms and customer folders seamless and straightforward. It’s more than just a process improvement: Peach stewards creative assets through rigorous accuracy and quality controls, managing distribution to eliminate the need for duplicative file uploads and downloads. From there, Innovid’s ad serving and hosting, as well as their creative toolsets, allow clients to customize content, get it to the right screen, track it, and then optimize performance – a continuous improvement loop.
The TEAM Companies (TTC) – a strategic alliance for metadata mastery
The complexity of modern advertising doesn’t just apply to the fragmented device / viewer ecosystem. Advertising is a creative endeavor, which means contracts, agreements, and talent / usage rights that vary wildly from asset to asset. Violations in rights or permissions aren’t just costly in financial terms but can also damage valuable relationships that need to be protected. Global advertising efforts take an already-complex problem and give it more to contend with. CTS’ Ad Management Platform is deeply integrated with the TTC platform to solve for all of it, providing end-to-end control of commercial rights management. Advertising companies can now track an asset’s use across media channels to ensure adherence to rights and policies no matter when or where an ad gets played.
Last year, TTC joined forces with Cast & Crew, a powerhouse that has served the entertainment industry for decades with financial services like residuals, payroll, and taxes. The union bolsters TTC’s strength, and its ability to expand services and global expertise.
Ultimately, it’s all about campaign performance
To be an advertiser in today’s global ecosystem is sort of like being an action hero on top of a bullet train racing at top speed. The real-time demands and the need for accurate execution are all built on top of technology that’s moving at a breakneck pace. Fortunately, there are experts in the disciplines, tools, and techniques needed to inform and act accurately on today’s decisions, and illuminate the path forward. As advertisers, agencies and providers learn to build a more efficient, simplified, and intelligent operation, they increase the amount of time and resources they can focus on understanding and optimizing performance across every active campaign.
Read More
What's Next For Media and Entertainment Technology in the New Roaring '20s?
It takes more than just a killer catalog or cutting-edge creative to stand out in today’s media markets. The winners are harnessing the power of emerging technologies to simplify operations, drive more value from data, and bring analytics to the forefront of decision-making. There’s a lot to take in, and to pay attention to.
Realizing goals for 2022 in an agile and cost-effective way will depend in large part on advertisers, MVPDs, operators, and content and streaming providers’ willingness to look beyond the current ways of creating and delivering media to new possibilities provided by emerging media and entertainment technology.
As advertisers and media and entertainment providers of all types seek to create highly personalized viewing experiences for customers, they will move toward embracing technologies that allow them to increase engagement, deepen relationships between audience and brand, and find new ways to monetize.
Smarter workflows with ai / ml
Artificial intelligence (AI) and machine learning (ML) carry a lot of science-fiction baggage and misapplication, but these powerful technologies underpin the next evolution of video workflow automation.
As content and video providers search for new avenues to broaden their reach, AI represents an exciting opportunity for businesses to do more with data. In fact, analysts like ABI Research expect revenue from AI and machine learning in video ad tech to surpass $9.5 billion.
Operators and providers need a better way to understand how to get the highest performance out of their content investments. Emerging solutions will address these opportunities, and VideoAI™ is a new framework of learning technologies designed to do just that. VideoAI scrutinizes video content – video, audio, closed captions -- to create actionable metadata that streamlines operations and improves content ROI. This rich metadata can be applied in creative ways to improve workflows, give advertisers contextual information to target ads more effectively, and to create new experiences for live and on-demand viewers.
As AI technology proves itself in real-world video applications, more companies will move to adopt it to drive revenue growth and create tailored experiences for consumers, particularly as the end of third-party cookies approaches in 2023. One report suggests that AI-powered audience solutions will grow by as much as 20%. Are you ready for this sea change?
From soup to nuts, automation is on the menu
Automation is a buzzword across all tech sectors, and for good reason. AI and ML solutions are the next step in the evolution of automation, and savvy businesses will recognize that taking a comprehensive and holistic approach to automation will make all the difference in their ability to execute transformation. The right automation strategy saves time and money, allowing businesses to reallocate resources from mundane, routine tasks to strategic undertakings.
Automation in advertising will become especially important as third-party cookies become more challenging for campaigns to rely upon. In today’s media landscape, many advertisers operate with a patchwork of solutions across multiple media sources, data providers, and technology partners. Under these conditions, it’s a serious challenge to gain a clear view into the ad creation and delivery process — and advertisers are unable to use new audience targeting solutions to tailor their messages. Even in businesses where automation has a foothold, plenty of people are still poring over manual spreadsheets.
That’s why a more holistic and unified approach to automation will enable faster and more seamless management of creative, media buying, and distribution. Instead of relying on manual processes and vendor-specific platforms, businesses can deploy a centralized, automated tool to support ad targeting and versioning.
A well-conceived and comprehensive automation strategy will ultimately allow advertisers to truly connect the dots across channels and formats, and across internal workflows and external partners. Instead of limping along with piecemeal automation, an across-the-board strategy will bring together distribution, creative customization, and reporting to help reinstate control over a fragmented process, increase ROI, and deliver measurable results.
Redefining the ecosystem with aggregation and terrestrial distribution
In response to an abundance of content destinations, consumers are faced with a new challenge: How do I manage all my choices? Moving from device to device, app to app, service to service – it’s a 21st-century problem in search of a more streamlined solution.
To address this, successful pay TV operators will find a way to transform into “super aggregators.” Not only will this dramatically reengineer the viewing experience to prioritize consumer preferences, it will require operators to find solutions that work across multiple services and platforms and provide customers with the content they want, when and how they want it.
For operators that have traditionally relied on satellite to get content onto screens, fiber-based delivery represents an exciting opportunity to stay ahead of the curve. Not only does terrestrial distribution allow providers to reduce their footprint and lower overhead, it provides a convenient path to IP while maintaining clear, high-quality service and expanding channel offerings.
Ultimately, in the evolving landscape, the winners will be willing to explore new ways of doing business and embrace opportunities such as emerging media and entertainment technologies to expand their vision and enhance the viewing experience for their customers in exciting and innovative ways.
Read More
Implementing Zero Trust Principles at Scale
The recently released OMB memo M-22-09 is a step toward bringing federal agencies in line with what many organizations in the private sector have been working toward for a while now – constructing a Zero Trust Architecture.
We discussed the strategic implications of this federal directive in a previous article and would like to also offer a tactical perspective on meeting these requirements from a decade of experience developing machine learning cybersecurity technology with the U.S. Government.
Beyond adopting a new buzzword, embracing Zero Trust means remastering the tested principle of trust nothing, verify everything in order to secure a network with untold endpoints geographically dispersed across a landscape fraught with increasingly sophisticated cyberattacks.
Encrypt data in motion
This includes internal and external data flows, as well as applications – even email. A vital first step is to encrypt all DNS traffic today using DoH/DoT and to encrypt all HTTP traffic using TLS 1.3. While doing so, you don’t want to compromise security monitoring, so make sure to implement local DNS resolvers whose logs can be used to analyze the clear-text requests.
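As a minimal illustration of the TLS half of that step, Python’s standard `ssl` module can pin a client to TLS 1.3. The context below is a sketch of enforcing a minimum protocol version, not a full deployment recipe:

```python
import ssl

# Build a client-side context that refuses anything older than TLS 1.3.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_3

# Certificate and hostname verification stay on by default, so a socket
# wrapped with this context is both encrypted and authenticated.
assert context.verify_mode == ssl.CERT_REQUIRED
```

Any connection wrapped with this context (for example via `context.wrap_socket`) will fail its handshake against a server that cannot negotiate TLS 1.3, which is exactly the behavior you want while migrating legacy services.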
From a castle to the cloud
As the requirement to support remote users increases, embracing the security features resident in cloud computing is certainly a critical first step to safely enabling Internet access for both employees and partners. The macro-level migration trend among large IT organizations reflects the fact that many cloud providers have already adopted some measure of Zero Trust Architecture, but there is no one-size-fits-all answer for every organization and its cybersecurity risks. Of course, no transition happens overnight, and some services will never migrate at all. For those use cases, we’ll still need to secure and monitor the needed on-premises resources as part of the broader security posture.
Logical segmentation
The federal directive to develop and implement logical micro-segmentation or network-based segmentation is clear in the memo. The challenge then becomes finding ways to limit – and, if necessary, quickly identify – the lateral movement of any adversary who might gain a foothold within your network.
AI enabled hunting
Endpoint security certainly plays a role in moving toward Zero Trust, but as EO 14028 emphasizes, this also involves developing a real-time hunting capability rooted in machine learning. And to effectively hunt government-wide, and most critically address the rise of zero-day attacks, collection of telemetry from systems without EDRs installed is invaluable – since an adversary would simply hide where the defenses are weakest. Achieving Zero Trust isn’t just about prevention, but comprehensive and continuous monitoring.
Hundreds of thousands of new malware variants are being developed daily. Global ransomware attacks rose by 151% last year1 and the average recovery cost per incident more than doubled to $1.8 million.2 This memo codifies what those stats make clear – in order to secure our nation’s vital infrastructure, we need intelligent solutions that go beyond monitoring systems for known threats.
BluVector provides advanced machine learning cybersecurity technology to commercial enterprise and government organizations. Our innovative AI empowers frontline professionals with the real time analytics required to secure the largest systems at scale.
Learn more about our philosophy of Zero Trust and understand how we deploy within existing security stacks. For more BluVector thought leadership on Zero Trust, read Zero Trust: A Holistic Approach by Travis Rosiek.
Read More
Zero Trust: A Holistic Approach
The President’s Office of Management and Budget (OMB) released M-22-09 last month, announcing it will require all federal agencies to move toward a Zero Trust Architecture (ZTA) by FY24.
BluVector has worked closely with large government agencies since our inception, and we’re encouraged to see clear direction toward a common cybersecurity strategy.
After more than a decade of offering our machine learning technology to secure our nation’s most sensitive data and helping to protect one of the largest and most advanced ISPs in the western world, we’d like to offer our perspective on Zero Trust as a holistic philosophy toward cybersecurity.
Trust nothing, verify everything is a time-tested principle that has been crucial to designing modern IT networks. It’s also a tall order and we invite agencies and large-scale enterprise organizations to view Zero Trust as a continuous journey, not an end state. We continue to recommend implementing a layered defense that increases visibility across the entire network and connected endpoints to analyze all data for any malicious code.
Perimeters aren’t what they used to be
Zero Trust doesn’t mean zero perimeter, but like it or not, as remote work becomes the new normal for more federal and civilian employees, we are moving away from the security of trusted networks. New identities and devices join our expanded networks daily, running new applications and pushing more data from unknown places. In this brave new world, it’s more important than ever for the cybersecurity teams protecting critical infrastructure to assume that other users, systems, and networks are already compromised. Cybersecurity professionals have adhered to these practices from time immemorial, and they must not be forgotten as our industry adopts a new buzzword into its lexicon. The task of hardening these infrastructures against growing threats is paramount, and all who undertake it should choose their approach early enough to ensure it aligns with their planned IT investment.
Encrypted doesn’t mean protected
Encrypted data can be deleted, accessed later in memory after it has been decrypted, re-encrypted, or stolen and saved for future decryption with more advanced technology. In a ZTA, it’s imperative to encrypt all network traffic in transit, both DNS and HTTP. But operating on an open network gives the adversary many vantage points to monitor your network traffic as well. And if they can gain access through a backdoor left open on a legacy system with pre-existing accounts, whether your DNS is encrypted or not is the least of your worries. In an IoT environment, securing all the endpoints you can won’t secure your system, but might simply create a false sense of security. Remember, every single point of failure has the potential to become target number one.
Finding the adversary already in the system
Some large technology ecosystems may be currently compromised with threats that can lie dormant for months, maybe years. Putting new locks on a compromised system won’t mitigate this threat, but instead may obscure it, further enabling entrenched bad actors.
The move toward a ZTA must be phased in: networks swept clean and continuously monitored for malicious behavior while new technology and processes are adopted to better protect systems that will be open to everything – any human error or patch delay will result in immediate vulnerability.
Considering the current threat environment addressed in this memo, we recommend building a layered cybersecurity defense in order to effectively institute Zero Trust principles across any large enterprise – especially those defending our nation’s vital institutions. This will require developing a high level of cybersecurity maturity across your entire organization, investing in technology that empowers your security team to hunt effectively while protecting them from alert fatigue, and adopting policies that will allow your broader workforce to operate safely in the Wild West of the open Internet.
BluVector provides advanced machine learning cybersecurity technology to commercial enterprise and government organizations. Our innovative AI empowers frontline professionals with the real time analytics required to secure the largest systems at scale.
Learn more about our philosophy of Zero Trust and understand how BluVector can be deployed to enhance your security stack. For more BluVector guidance on implementing Zero Trust, read Implementing Zero Trust Principles at Scale by Scott Miserendino, Ph.D.
Read More
Converging Media Technologies, Continued
In November we were proud to host the CTS Connects Summit for the second year. We frequently talk about the convergence of broadcast and digital workflow, and this year it was even more evident that this decade will be marked by a convergence of media and advertising technologies, as real-time services, security, and monetization techniques all become more intertwined. This year’s summit had a natural focus on creating unity between industries, fostering collaboration, flexibility, and security, with particular emphasis on experiences that improve the value equation on both sides of the screen.
Read More
See spot run the world
The story of advertising in today’s world is a tale of incomprehensible and increasing complexity. We can look back with nostalgia at the time when ad spots were all the same size, the same length, the same broadcast format; but in today’s world of real-time media placements and creative optimization, brands need a complete end-to-end approach not just to reach the world, but also to understand what’s working and what’s not.
Read More
Digital-First Pay-TV: Four Takeaways
Pay-TV is going through a significant change as the market adapts to new consumption models and competitors. Operators have responded in a variety of ways, but so far, no consensus has emerged as to which models work, when they work, or why. Organizations that pivot to a digital-first approach — specifically, thinking from a converged broadcast and OTT perspective — not only create opportunities for themselves but can also respond more effectively to capitalize on them.
Read More
Customer Connections Newsletter July
Advertising production is an increasingly complex component of delivery, and many layers of specialization are required.
From file prep and media management to quality control and distribution, you need access to a team that provides personalized solutions and evolving technologies and products.
Professional post-production services combined with centralized ad management simplify and unify creative, talent, and distribution workflows across channels. Production Solutions from Comcast Technology Solutions offers advertisers and agencies visibility and control of their campaigns, along with real-time creative intelligence that enables creative optimization, targeting, and ROI.
Read More
MVPDs: Down-To-Earth Distribution For Rural Areas
Multichannel video programming distributors (MVPDs) in smaller, rural areas have more options than ever before to stay competitive – including creative approaches to their media delivery and distribution models.
Read More
What is a Next Generation Network Intrusion Detection System?
Intrusion detection was first introduced to the commercial market two decades ago as SNORT and quickly became a key cybersecurity control.
Deployed behind a firewall at strategic points within the network, a Network Intrusion Detection System (NIDS) monitors traffic to and from all devices on the network for the purposes of identifying attacks (intrusions) that passed through the network firewall.
In its first incarnation, NIDS used misuse-based (rules and signatures) or anomaly-based (patterns) detection engines to analyze passing traffic and match the traffic to the library of known attacks. Once the attack was identified, an alert was sent to the security operations team.
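That first-generation, misuse-based approach can be sketched in a few lines of Python. The byte patterns and attack names below are hypothetical stand-ins for a real signature library:

```python
# Hypothetical signature library: byte patterns mapped to attack names.
SIGNATURES = {
    b"/etc/passwd": "path-traversal probe",
    b"' OR '1'='1": "SQL injection attempt",
    b"\x90\x90\x90\x90": "NOP sled (possible shellcode)",
}

def match_signatures(payload: bytes) -> list[str]:
    """Return the names of every known signature found in a packet payload."""
    return [name for pattern, name in SIGNATURES.items() if pattern in payload]

alerts = match_signatures(b"GET /../../etc/passwd HTTP/1.1")
# Each match would be forwarded to the security operations team as an alert.
```

The weakness the article goes on to describe is visible right in the sketch: any payload that doesn’t literally contain a known pattern sails through undetected.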
While the technology continues to play a key role in the majority of enterprises, the Network Intrusion Detection System has fallen out of favor for two key reasons:
The rules-based engines used for detection were subsumed into the Next Generation Firewall (NG-FW), making it more cost effective for some organizations to deploy a unified capability;
Threat actors are adept at executing attacks that evade the signatures/rules/patterns used by both the traditional Network Intrusion Detection System as well as the unified NG-FW.
Machine Learning and the Rise of Next Generation Network Intrusion Detection Systems
Like a traditional NIDS, the function of a Next Generation IDS/IPS is to detect a wide variety of network-based attacks perpetrated by threat actors and contain these attacks, where feasible, using appropriate controls. Unlike a traditional NIDS, this technology leverages machine learning powered analytics engines that are capable of identifying attacks that evade traditional misuse-based and anomaly-based engines.
Network attack types that should be addressed by an NG-NIDS include:
Malware Attacks:
Malware is malicious software created to “infect” or harm a target system for any number of reasons. These reasons span simple credential access and data theft to data or system disruption or destruction. Today, an estimated 30% of malware in the wild is capable of evading traditional signature-based technologies. Most organizations addressed this through the deployment of endpoint detection and response (EDR) technologies. In relatively homogenous environments with firm control over the endpoint, endpoint controls may be adequate. For organizations with an array of client and server technologies, limitations over patch and update frequencies, IoT devices, or endusers over whom there is limited control, a strategically deployed NG-NIDS acts as a primary defense against “unknown” malware.
Below are common use cases wherein a Next Generation Network Intrusion Detection System can play a protective role:
Malicious websites – These attacks generally start at legitimate websites that have been breached and infected with malware. When visitors access the sites via web browser, the infected site delivers malware to the endpoint. Alternatively, doppelganger sites can be used to disguise malware as legitimate downloads.
Phishing/Spearphishing emails – Threat actors trick endusers into downloading attachments that turn out to be malware. Alternatively, threat actors trick users to click on a seemingly legitimate link to visit a website, from which malware is delivered.
Malvertising – In this case, threat actors use advertising networks to distribute malware. When clicked, ads redirect users to a malware-hosting website.
In each of these cases, the NG-NIDS sits between an internal user and external site and is capable of detecting malware and issuing a block request to a firewall or endpoint manager to contain the threat. In situations where the enterprise uses a split tunnel architecture or allows mobile workers to access the Internet without restriction, the NG-NIDS will see suspicious activity emanating from an infected device once it reconnects to the corporate network. NOTE: While a NG-NIDS can be highly effective and easier to deploy than endpoint technologies, it is highly recommended that organizations use both; the combination is certainly the most effective means of protecting an organization with a mobile workforce.
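As one illustration, the doppelganger-site case above can be approximated with a simple string-similarity check against an allow-list. The domains and the 0.85 threshold here are hypothetical, and real products use far more robust techniques:

```python
from difflib import SequenceMatcher

# Hypothetical allow-list of legitimate download domains.
KNOWN_GOOD = ["example.com", "downloads.example.com"]

def looks_like_doppelganger(domain: str, threshold: float = 0.85) -> bool:
    """Flag domains that closely resemble, but do not match, a known-good domain."""
    for good in KNOWN_GOOD:
        if domain == good:
            return False  # exact match: legitimate
        if SequenceMatcher(None, domain, good).ratio() >= threshold:
            return True   # near-miss: likely typosquat
    return False

suspicious = looks_like_doppelganger("examp1e.com")  # digit 1 in place of 'l'
```

A flagged domain would trigger the same downstream action as any other detection: a block request to the firewall or endpoint manager.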
Attacks that “Live off the Land”:
One of the more frightening and rapidly emerging categories of attack is known as a “living off the land”, fileless malware, or “in-memory” attack. These attacks are specifically created to start or complete an action that is untraceable by today’s security tools. Rather than downloading a file to a host’s computing device, the attack occurs in the host’s memory (RAM), leaving no artifact on disk. Powering down or rebooting an infected system removes all artifacts of the attack; only logs of legitimate processes running remain, thereby defeating forensic analysis.
How is this attack accomplished? The attacker injects malware code directly into a host’s memory by exploiting vulnerabilities in unpatched operating systems, browsers and associated programs (like Java, JavaScript, Flash and PDF readers). Often triggered by a phishing attack, the victim clicks on an attachment or a link to a malware-infected website or a compromised advertisement on a reputable site.
Once the malware is in memory, attackers can steal administrative credentials, attack network assets, or establish backdoor connections to remote command and control (C2) servers. Fileless attacks can also turn into more traditional file-based attacks by downloading and installing malicious programs directly to computer memory or to hidden directories on the host machine. The threat actor can also employ a variety of tactics to remain in control of the system after a shutdown or reboot.
The role of the NG-NIDS is to intercept suspicious machine code (usually in the form of obfuscated JavaScript, PowerShell, or VBScript) and emulate how the malware will behave when executed. Operating at line speed, the NG-NIDS determines what an input can do if executed and to what extent these behaviors might initiate a security breach. By covering all potential execution chains and focusing on malicious capacity rather than malicious behavior, the NG-NIDS vastly reduces the number of execution environments and the quantity of analytic results, thereby reducing the number of alerts that must be investigated.
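A toy version of that capability-focused idea might look like the following. The indicator patterns are illustrative only, not a real NG-NIDS ruleset, and true capability analysis emulates execution rather than pattern-matching text:

```python
import re

# Hypothetical indicators of malicious capability in scripts seen on the wire.
CAPABILITY_INDICATORS = {
    r"\beval\s*\(": "dynamic code execution",
    r"String\.fromCharCode": "character-code obfuscation",
    r"\bunescape\s*\(": "legacy decoding (common in packed payloads)",
    r"powershell(\.exe)?\b": "shell invocation",
}

def capability_score(script: str) -> list[str]:
    """List capability indicators present, regardless of whether the script ever runs."""
    return [label for pattern, label in CAPABILITY_INDICATORS.items()
            if re.search(pattern, script, re.IGNORECASE)]

sample = "var p = String.fromCharCode(112,111); eval(p);"
findings = capability_score(sample)
```

The point of scoring capacity rather than observed behavior is that an in-memory attack never has to execute in front of the sensor to be flagged.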
Worms:
Worms are a form of self-propagating malware that does not require user interaction. WannaCry, for example, targeted a widespread Windows vulnerability to infect a machine. Once infected, the malware moved laterally, infecting other vulnerable hosts. Once the target is infected, any number of actions can be taken, such as holding the device for ransom, wiping user files or the OS, stealing credentials, or scanning the network for vulnerabilities.
A strategically deployed NG-NIDS, sitting in an internal network, is capable of detecting lateral spread of the worm and issuing a block request to a firewall or endpoint manager to contain the threat.
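The lateral-spread pattern described above can be sketched as a fan-out check over connection records. The threshold of 20 distinct peers is an arbitrary illustration, not a recommended tuning value:

```python
from collections import defaultdict

def detect_lateral_spread(connections, fanout_threshold=20):
    """Flag internal hosts contacting an unusually large number of distinct
    internal peers on the same port -- a classic worm propagation pattern."""
    peers = defaultdict(set)
    for src, dst, port in connections:
        peers[(src, port)].add(dst)
    return [(src, port, len(dsts))
            for (src, port), dsts in peers.items()
            if len(dsts) >= fanout_threshold]

# One host sweeping SMB (445/tcp) across the /24 looks like WannaCry-style spread.
events = [("10.0.0.5", f"10.0.0.{i}", 445) for i in range(10, 40)]
spreaders = detect_lateral_spread(events)
```

A hit here is exactly the situation where an internally placed sensor earns its keep: the perimeter firewall never sees this traffic at all.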
Web attacks:
In a web attack, public facing services – like web servers and database servers – are directly targeted for a variety of reasons: to deface the web server, to steal or otherwise manipulate data, or to create a launching pad for additional attacks. The most common means of attack in this category include:
Cross-Site Scripting (XSS) – An attacker injects malicious code into the web server which, in turn, is executed on an enduser’s browser as the page is loaded.
SQL Injection (SQLi) – An attacker enters SQL statements to trick the application into revealing, manipulating, or deleting its data.
Path Traversal – Here, threat actors custom craft HTTP requests that can circumvent existing access controls, thus allowing them to navigate to other files and directories.
An NG-NIDS, sitting behind the firewall and in front of a Web or database server is capable of detecting these attacks and issuing block requests to an application firewall.
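A minimal sketch of pattern-based detection for these three attack classes is shown below. Real sensors use far richer rules and normalization; the regexes here are illustrative only:

```python
import re

# Hypothetical detection rules for the three attack classes above.
WEB_ATTACK_PATTERNS = {
    "xss": re.compile(r"<script\b|javascript:", re.IGNORECASE),
    "sqli": re.compile(r"('|%27)\s*(or|and)\s+('|%27)?\d|union\s+select", re.IGNORECASE),
    "path-traversal": re.compile(r"\.\./|\.\.%2f", re.IGNORECASE),
}

def classify_request(raw_request: str) -> list[str]:
    """Return the attack classes whose pattern appears in an HTTP request."""
    return [name for name, rx in WEB_ATTACK_PATTERNS.items() if rx.search(raw_request)]

hits = classify_request("GET /download?file=../../etc/shadow HTTP/1.1")
```

Each hit would be translated into a block request aimed at the application firewall in front of the targeted server.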
Scan Attacks:
Scans are generally used as a means to gather reconnaissance. In this case, threat actors use a variety of tools to probe systems to better understand targets available and exploitable vulnerabilities.
An NG-NIDS, sitting behind the network firewall, is capable of detecting these probes and issuing block requests to the network firewall.
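The probing behavior described above can be sketched by counting distinct destination ports per source. The 100-port threshold is a hypothetical tuning value:

```python
from collections import defaultdict

def detect_scanners(flows, port_threshold=100):
    """Flag sources probing an unusually wide range of distinct ports on one host."""
    probed = defaultdict(set)
    for src, dst, dst_port in flows:
        probed[(src, dst)].add(dst_port)
    return [src for (src, dst), ports in probed.items()
            if len(ports) >= port_threshold]

# A single external source sweeping ports 1-200 on one server is a likely scan.
flows = [("203.0.113.9", "10.0.0.80", p) for p in range(1, 201)]
scanners = detect_scanners(flows)
```

Blocking a scanner at the network firewall cuts off reconnaissance before the threat actor learns which services are exploitable.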
Brute force attacks:
The threat actor attempts to uncover the password for a system or service through trial and error. Because this form of attack takes time to execute, threat actors often use software to automate the password cracking attempts. These passwords can be used for any number of purposes, including modification of system settings, data theft, financial crime, etc.
An NG-NIDS, sitting behind the network firewall and/or at strategic points within the network, is capable of detecting brute force attacks and issuing block requests to the network firewall.
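One common detection heuristic, sketched below, counts failed logins per source inside a sliding time window. The 60-second window and 10-failure threshold are illustrative values, not recommendations:

```python
def detect_brute_force(failed_logins, window_seconds=60, max_failures=10):
    """Flag sources whose failed-login attempts exceed a threshold in a sliding window.

    failed_logins: list of (timestamp, source_ip), assumed sorted by timestamp.
    """
    flagged = set()
    by_source = {}
    for ts, src in failed_logins:
        attempts = by_source.setdefault(src, [])
        attempts.append(ts)
        # Drop attempts that have fallen out of the window.
        while attempts and ts - attempts[0] > window_seconds:
            attempts.pop(0)
        if len(attempts) > max_failures:
            flagged.add(src)
    return flagged

events = [(float(i), "198.51.100.7") for i in range(30)]  # 30 failures in 30 s
offenders = detect_brute_force(events)
```

The window matters because automated cracking tools can throttle themselves; tuning it is a trade-off between catching slow attacks and tolerating forgetful users.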
Denial-of-service attacks:
Denial-of-service attacks, often executed as distributed denial-of-service (DDoS) attacks, try to overwhelm their target – typically a website or DNS servers – with a flood of traffic. In either case, the goal is to slow or crash the system.
An NG-NIDS, sitting behind the network firewall, is capable of detecting DDoS attacks and issuing block requests to the network firewall.
Read More
Why Network Detection and Response (NDR) Solutions are Critical to the Future of Federal Agencies’ Cybersecurity
Executive Order 14028, “Improving the Nation’s Cybersecurity,” signed by President Biden, shows the US federal government’s growing focus on cybersecurity.
The 2021 Executive Order requires U.S. federal government departments and agencies to focus on their cybersecurity framework and address the risks of malicious cyber campaigns.
This increased focus comes as internal audits of federal agencies’ security have indicated that a number of federal agencies’ cybersecurity programs are not adequately or fully protecting them against modern cyber threats.
Like businesses and nongovernmental organizations, federal agencies face a rapidly evolving cyber threat landscape that requires an equally fast-evolving cyber security framework. In the current environment, signature-based detection is limited, especially when it comes to detecting zero-day threats and malicious activity that has never been seen before. Known hackers can even take actions to disguise themselves, like changing their techniques or processes, to avoid detection from an NCPS. (hsgas.senate.gov report, page 11)
To protect themselves against these evolving threats, government agencies can benefit greatly from security solutions that can detect intrusions in time for government security teams to respond and manage the harm that they could do to the organization. Since most cyberattacks occur over the network, network visibility is essential to government agencies’ ability to protect against evolving threats. This makes an advanced network detection and response (NDR) solution an essential component of government agencies’ cybersecurity defenses. Individual agencies should implement the NDR while a broader replacement for NCPS is considered.
Federal Agencies Face Increasing Cyber Threats
US government agencies are one of the biggest targets of advanced persistent threats (APTs). These threat actors have significant resources and expertise dedicated to the development and use of various tactics to infiltrate and cause damage to target organizations.
Federal agencies face various cyber threats from these advanced threat actors. Some of the most sophisticated and dangerous attacks these organizations face include ransomware infections, fileless and in-memory malware attacks, and the exploitation of unpatched zero-day vulnerabilities.
Ransomware and RaaS
Ransomware has emerged as one of the greatest threats faced by most organizations, including federal government agencies. In fact, DHS’s Cybersecurity and Infrastructure Security Agency (CISA) has launched the Stop Ransomware project specifically to provide coordinated prevention and response to ransomware attacks across the entire US government. In February 2022, CISA reported that 14 of 16 critical infrastructure sectors have been targeted by sophisticated ransomware attacks.
With the emergence of the Ransomware as a Service (RaaS) business model, the ransomware threat has grown significantly. With RaaS, ransomware developers put their malware in the hands of other cyber threat actors for a split of the profits.
The ability to identify the malware before it gains access to an organization’s systems or to detect attempted data exfiltration is essential to minimizing the cost incurred by a ransomware attack.
Fileless Malware and In-Memory Attacks
Many traditional antimalware systems are file focused. The typical antivirus scans each file on the computer, looking for a match to its library of signatures. If it finds a file matching the signature, it quarantines and potentially deletes it from the disk.
Fileless and in-memory malware are designed to evade detection by these solutions by avoiding writing malicious code to a file on disk. Instead, the malware operates entirely within running applications.
With fileless malware attacks growing significantly year-over-year in 2021, the percentage of attacks that traditional antivirus solutions can detect is shrinking quickly. Threat detection techniques based on identifying anomalous behavior or network traffic are essential to detecting these types of attacks.
Exploits
New zero-day vulnerabilities and exploits pose a growing threat to federal agencies and their security. These exploits can bypass security solutions that depend on signatures to detect known attacks and are blind to novel threats until new indicators of compromise (IoCs) are released.
New vulnerabilities such as Log4j have caused a scramble to patch vulnerable systems, and cyber threat actors often move quickly to exploit vulnerable organizations after a new vulnerability is discovered and disclosed. In fact, half of new vulnerabilities are under active exploitation within a week.
In recent years, the rate at which zero-day vulnerabilities are discovered and exploited has increased significantly – with more than twice as many zero days exploited in 2021 as in 2020. Without the ability to identify and block attempted exploitation of these novel threats, federal agencies are continuously playing catchup after the latest vulnerability is discovered.
While these novel attacks cannot be detected using signature-based techniques, other approaches can identify these threats. For example, network traffic analysis that identifies command and control traffic between the malware and attacker-controlled servers can tip off a security team to an undetected malware infection.
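The command-and-control “beaconing” pattern mentioned above can be sketched as an interval-regularity test over a host’s outbound connection timestamps. The minimum event count and jitter tolerance are hypothetical tuning values:

```python
from statistics import mean, pstdev

def looks_like_beacon(timestamps, min_events=6, max_jitter=0.1):
    """Flag outbound connections repeating at near-constant intervals,
    a common signature of malware polling its command-and-control server."""
    if len(timestamps) < min_events:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(gaps)
    # Low relative deviation => suspiciously regular "heartbeat" traffic.
    return avg > 0 and pstdev(gaps) / avg <= max_jitter

checkins = [0, 60, 120.4, 179.8, 240.1, 300, 360.2]  # check-ins ~every 60 s
beaconing = looks_like_beacon(checkins)
```

The appeal of this class of analytic is that it needs no signature at all: even a zero-day implant has to phone home, and that rhythm is observable on the network.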
How NDRs Can Help Solve Federal Security Challenges
As cyber threat actors expand and refine their tactics and techniques, detecting and mitigating attacks becomes much more difficult. Additionally, as government IT systems expand and evolve, protecting them against cyber threats becomes more challenging for security teams, especially as government agencies struggle to attract and retain scarce cybersecurity talent.
Managing the threat of ransomware, in-memory malware, zero-day exploits, and other cyber threats requires federal agencies to have tools that enable them to detect and respond to potential threats rapidly. Network detection and response (NDR) solutions are an essential component of a federal agency’s cybersecurity architecture and can be used with EDR to get a more complete solution, often with NDR solutions taking the lead in initial attack detection.
“While EDR can provide a more granular view into the processes running on the endpoint and in some cases more finely tuned response options, NDR is critical for maintaining consistent visibility across the entire network.” – ESG’s “NDR’s role in Supporting the Executive Order on Cybersecurity”
NDR solutions provide multiple features that can help an agency’s security operations center (SOC) to identify and respond to potential threats, including:
Network Visibility: Visibility into federal agencies’ networks is essential to detecting potential threats early in the cyber kill chain. NDR solutions collect, aggregate, and analyze network traffic data, helping SOC analysts identify and respond to potential threats.
AI/ML Threat Detection: Security analysts commonly face an overwhelming number of security alerts, making it difficult to differentiate between true threats to the organization and false-positive detections. Artificial intelligence (AI) and machine learning (ML) solutions can help analyze and triage alerts, reducing false-positive rates and enabling security personnel to focus their time and attention on true threats to the organization.
Creating a holistic approach with Zero Trust: All federal agencies are required to move toward a Zero Trust architecture by 2024. Successful ZTA implementation requires robust and flexible tools that work quickly and can scale, like the BluVector platform. It’s important to remember that ZTA does not address threats already in a network, and that weaponized data can elude ZTA security. Having complete visibility – including network, device, user, file, and data visibility – is critical to ZTA.
Recommended Responses: During a cybersecurity incident, a prompt and complete response is essential to minimizing the cost and impact of the attack but can be difficult to achieve under tense circumstances. BluVector’s NDR solution can provide recommendations about how to handle certain types of incidents, enabling security analysts to quarantine or remediate the threat more efficiently.
Support for Automation: Security teams commonly struggle to meet expanding workloads as organizational infrastructure becomes more sprawling and complex and cyber threats grow more sophisticated. With the ability to automate repetitive tasks and responses to common threats, NDR solutions can help to ease the burden on overstressed personnel and enable them to focus their time and attention where it is most needed.
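As a toy illustration of the AI/ML triage idea above, even a simple scoring function can rank the alert queue so analysts see the likeliest true threats first. The feature names and weights here are invented for illustration, not drawn from any real NDR product; a production system would learn them from labeled alert data:

```python
def triage_score(alert):
    """Toy risk score combining a few alert features (values in [0, 1])."""
    # Illustrative weights: severity matters most, then the value of the
    # targeted asset, then how unusual the behavior is for this environment.
    weights = {"severity": 0.5, "asset_criticality": 0.3, "novelty": 0.2}
    return sum(w * alert.get(k, 0.0) for k, w in weights.items())

def prioritize(alerts):
    """Order the queue so the highest-risk alerts surface first."""
    return sorted(alerts, key=triage_score, reverse=True)
```

The same ranking hook is where an ML classifier's false-positive probability would plug in, replacing the hand-set weights.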
The advanced threat detection and response capabilities of an NDR solution can help security teams to identify these subtle threats and respond quickly and effectively to minimize the risk to sensitive government data and critical systems.
BluVector Leading the Way in a High Threat Era
US federal agencies continue to face significant cyber threats from skilled and well-resourced threat actors. These actors use various techniques to gain access to and damage government systems, including ransomware, in-memory malware, and the exploitation of novel vulnerabilities. Often, these attacks are specially designed to fly under the radar and avoid detection by widely used security solutions.
While these attacks may be designed to be subtle and difficult to detect on the endpoint, they still need to gain access to agency systems and perform command and control communications over the network. BluVector’s Advanced Threat Detection and Automated Threat Hunting solutions can monitor an agency’s network traffic and sort out genuine threats from the noise, while efficiently supporting SOC analysts’ ability to detect and respond to advanced cyber threats.
BluVector has worked closely with large government agencies since our inception and is here to support the journey toward a common federal cybersecurity strategy. For over a decade, our solutions have enabled government agencies to achieve essential visibility into their solutions while rapidly and accurately identifying threats using AI and ML. Learn more about BluVector’s government solutions and our over 99% threat detection rate.
Fast-Food Ads Get Faster Service
“Getting the order right” – it’s job one for every quick-service restaurant (QSR), no matter if the order is coming from the front counter, an app or service, or the drive-thru window – and no order is the same. The end results might be different on every order, but some things have got to be there every time to keep customers coming back. Every order has to be served accurately, on time, and in the highest quality. Hmm… where’s the advertising analogy here?
Customer Connections Newsletter May
Comcast Technology Solutions and PremiumMedia360, the leading company in TV advertising data automation, announced a new strategic relationship. The integration with Comcast Technology Solutions’ Ad Management Platform enables TV advertising buyers and sellers to automatically synchronize and solve reconciliation discrepancies throughout the ad buy lifecycle. These new capabilities stand to prevent revenue loss associated with dropped spots due to order execution errors and invoicing delays. The integration can help save work hours typically spent solving data discrepancies between buyer and seller systems.
Customer Connections Newsletter April
TV isn’t going anywhere — it’s going everywhere, transitioning from a broadcasting-only industry to a multiplatform video market, comprising linear and digital, streams and on-demand offerings, available across a growing range of platforms and devices. TV and video advertising are uniquely valuable, and as the market continues to shift, the ad industry has a big opportunity to lead the pace of change.
MVPDs: Moving From Orbit Into The Cloud?
The 2020s are already proving to be a decade of sweeping change for MVPDs and cable operators around the U.S. The repurposing of C-band spectrum for 5G services is a plan that was accelerated last year after an agreement between the FCC and satellite operators. Although it’s been on the industry radar for a while, it feels like the whole thing is picking up pace as companies work to position themselves — and their technology — for a successful future. That future isn’t necessarily going to need a satellite.
“The two main considerations for MVPDs right now are: a) What does our distribution look like immediately post-C-band reallocation, and b) Where can we go from there?” explains Jill May, Manager of Product Development for Comcast Technology Solutions. “The first step is to deal with the immediate: Two-thirds of the C-band spectrum has been reallocated to 5G carriers. This means that existing services need to transition to a new satellite. For example, our services that have operated from the SES-11 satellite will migrate to the AMC-11 satellite this year. It’s not just a matter of keeping businesses whole, as there are some technology upgrades and efficiencies that come along with the move; but from there, the questions we started asking here were ‘Is a satellite feed really required for every business?’ or ‘What would it look like for a satellite delivery model to ultimately include and/or move to an IP-based model?’ That’s a big focus for us as MVPDs think about what their new big-picture plans could look like.”
MANAGED DISTRIBUTION GAINS NEW FLEXIBILITY
Satellite video distribution has clearly been around a long time. It’s extremely reliable, and it also used to be the only option for a lot of companies. Today, network advancements and build-outs, especially over the last few years, result in some seriously attractive options for media distributors looking to remain competitive in both price and features. “A move into a terrestrial distribution model has a lot of upside,” explains May. “Speaking from a Comcast network perspective, MVPDs can leverage a network that benefits from deep annual investment and evolution. Not only is there the security and encryption that a company needs to protect its investments and revenue, but also the speed, scale, and reliability that its customers can count on.”
Another big benefit that MVPDs can get from terrestrial distribution is a reduction in headend equipment and operating expenses across the operation. Headend equipment can be reduced by as much as half, along with a corresponding reduction in rack space, power, and cooling. On-premises equipment would need to be upgraded to a new platform, but a customer migration doesn’t need to happen in one gargantuan make-it-or-break-it effort. “What we’re planning is more of a hybrid model during the transition,” says May, “so business continuity can be maintained through a customer migration strategy that works for the partner’s needs.”
IP-BASED MVPDs — A NATURAL EVOLUTION
Once an MVPD or operator has moved from satellite to an earth-based distribution and content delivery network, the foundation is there to keep the customer experience in a state of continuous evolution. “We see an IP platform really as the natural next step,” says May. “There are technology approaches that we’re approaching as a white-label way for a terrestrial operator to offer more services and high-tech features. It’s a compelling way to improve an operator’s role as a content and streaming hub for their customers’ multiple streaming subscriptions.”
Listen to Jill May’s presentation on Managed Distribution during the CTS Connects Summit to learn more.
New Ways To Monetize SCTE 224
The SCTE 224 standard is fantastic for video operations, enabling blackout management, web embargoes, time-shifts, regionalized playout, and a host of other content rights-based, program-level video switching capabilities. The applications don’t end there; some new applications make the standard really attractive for bolstering revenue generation opportunities for both programmers and operators.
Customer Connections Newsletter March
Do you have the meaningful insights you need to optimize creative and drive business results?
Creative Intelligence enables you to look at your omnichannel campaigns based on intelligent metrics from across platforms aimed at engaging your target audience. Armed with information on how your campaigns and messages are performing in all channels, you can maximize your creative investment, optimize creative messaging, and deliver the most relevant ad to the right customer.
An unpredictable year brings renewed focus to (and on) media businesses
For an industry as diverse as media and entertainment technology, there were some strong, common themes during the CTS Connects Summit last year that are only more relevant as 2021 starts to kick into high gear.
Navigating Content Rights for Operators
There is no denying that Operators — MVPDs and virtual MVPDs (vMVPDs) — are in a complex place in the video value chain. The requirement to manage rights across all content providers can be overwhelming, time-consuming, and expensive. The need to streamline this function is critical so Operators can focus on delivering quality content at the right time to the right place for their customers.
No matter their size or location, and whether virtual or not, every Operator has to determine how to handle content restrictions, on-the-fly changes, and regionalization requirements from their programmer partners. In the past, Operators received content pre-switched, delivered directly to each region via satellite to the IRD, a relatively hands-off but costly approach. Today, with the transition from satellite to terrestrial distribution and the increasing capabilities of higher-bandwidth video (4K/UHD, 8K, etc.), responsibility for that control and switching is transitioning to Operators. This creates tremendous opportunities for cost savings and efficiency through technology that automates the process via machine-to-machine communication.
KEY BENEFITS FOR OPERATORS
For Operators, this transition enables some key benefits with the right technology and partner choice. First, by enabling this switching environment internally, they centralize the solution and then use their own infrastructure to distribute it to their regions. This applies to both vMVPDs and traditional MVPDs, since both need a central system to manage content switching. For MVPDs, this offers the ability to shut down legacy facilities, remove the reliance on hundreds of IRDs, and scale faster with their IP modernization. In addition to these massive cost savings, Operators can scale their infrastructure faster, enable improved auditing capabilities, improve uptime compared with their old infrastructure, and reduce operational costs by no longer manually managing their content switching system. Lastly, the right solution lets Operators easily take advantage of some unique linear addressable ad insertion use cases.
RISK AND OPPORTUNITY
With all these benefits, there are still some risks to take into account. What are the risks, you ask? Let’s start with the most significant one – delivering the wrong program to the wrong audience at the wrong time. Not only can it damage a brand through negative PR, but it also creates tension between the Operator and content providers, and can ultimately lead to significant legal action and carriage disputes. There are numerous examples of this happening today, and blackout management is becoming more prevalent in a complicated video distribution world that includes different device types and the potential for global distribution. Because content rights are becoming more valuable and exclusive, especially for live programming, regionalization/personalization more critical, and Operator/Programmer relationships more of a differentiator, an automated system to manage these rights scenarios is crucial. Essentially, the system operates as an insurance policy against the potentially devastating risk of showing the wrong programming to the wrong audience. Additionally, the manpower required to manually manage the increasing complexity grows exponentially if an automated system is not in place.
The other key risk to consider is technology choice. Once an Operator decides to implement a centralized switching solution, it’s critical that they go down a path that’s scalable at every level – channel count, decisioning, ease of use, etc. If not, they will find themselves wasting resources across the board. The SCTE 224 standard has become the best standardized language for the automated machine-to-machine communication that controls rights, web embargoes, and program substitution, and it is growing in acceptance for managing ad insertion. It’s critical for Operators to use a standardized approach to manage metadata and rights communication with programmer partners. SCTE 224 is ideal in this environment. In addition to providing a clearly defined messaging and communications protocol, it enables Operators to layer on linear rights management to handle all of the nuances of implementing the defined actions carried by the SCTE 224 data. This is critical to making decisioning work at scale.
To successfully implement an automated rights management solution, it’s important that the solution lets the user build a metadata ingest platform that scales to handle complex SCTE 224 ingest scenarios. It should support a variety of source types and formats without requiring a unique ingest adapter/parser to be created or rebuilt for each programming partner. While SCTE 224 offers a great standard way to communicate, there can be significant variations in the way it’s implemented. One content provider could use the 2015 version of the standard, another the 2020 version, and yet another could use the same year’s standard but communicate different metadata fields.
It is important to avoid compatibility issues, or Operators will always be playing a costly game of catch-up with their development teams. For example, if a system accepts a variety of sources and data structures (easy in) and manages everything with strict rules and structure internally, it will behave reliably and consistently when working with a range of data sources from various content providers. However, if a system hardcodes each ingest point to a rigid, partner-specific solution, Operators greatly limit their flexibility as content partners make changes over time.
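The "easy in, strict inside" pattern can be sketched as a small adapter registry: each partner feed variant gets a tiny parser, and everything downstream sees only one canonical record type. The partner names, feed shapes, and field names below are hypothetical, not real SCTE 224 payloads:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PolicyRecord:
    """Strict canonical shape used everywhere inside the system."""
    media_id: str
    audience: str
    action: str  # e.g. "BLACKOUT" or "SWITCH"

ADAPTERS = {}

def adapter(source_name):
    """Register a per-partner parser; only these touch raw feed variants."""
    def register(fn):
        ADAPTERS[source_name] = fn
        return fn
    return register

@adapter("partner_a_2015")
def _from_partner_a(raw):
    # Hypothetical flat shape from a 2015-era feed.
    return PolicyRecord(raw["MediaId"], raw["Audience"], raw["Action"].upper())

@adapter("partner_b_2020")
def _from_partner_b(raw):
    # Hypothetical nested shape from a newer feed.
    return PolicyRecord(raw["media"]["id"], raw["viewership"], raw["matchAction"])

def normalize(source_name, raw):
    """Easy in: accept any registered variant; strict inside: one record type."""
    return ADAPTERS[source_name](raw)
```

When a partner changes their feed, only that partner's adapter changes; the rules engine and decisioning logic downstream are untouched.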
SCALE TO SUPPORT MILLIONS OF DECISIONS
It is common to have hundreds to thousands of different policies, audiences, and rules coming in from various content providers, each requiring automated decisions about whether content should be played and where. Therefore, for the system to work correctly, a decisioning system must be implemented that can scale to support millions of decisions daily. This is essentially the brains of the operation – executing video switching decisions based on the metadata rules, policies, and audiences received from the content programmers. Implementing this functionality requires seamless integration with the adapter/parser as the metadata comes into the system, and then integration with your video infrastructure at the encoder, vIRD, packager, or manifest manipulator level to facilitate video changes based on the audience and policy rules in the metadata. The decision manager notifies the Operator’s video supply chain whether a program should be played, as well as where and who can see it, automagically based on the rules defined in the SCTE 224 data stream. If necessary, the decision manager can also “talk” to the Operator’s Ad Decision System to help facilitate unique programmer-by-programmer ad rules and insertion instructions.
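A drastically simplified sketch of such a decision manager might match viewer attributes against policy audiences in priority order. The record shapes and attribute names (region, device) are illustrative assumptions, not the actual SCTE 224 schema:

```python
def decide(policies, viewer):
    """Return the switching action for the first policy whose audience matches.

    Each policy pairs an audience (attribute -> allowed values) with an action;
    a real system would evaluate millions of these lookups per day.
    """
    for policy in policies:
        audience = policy["audience"]
        if all(viewer.get(key) in allowed for key, allowed in audience.items()):
            return policy["action"]
    return "PLAY"  # no restriction matched: carry the main feed

# Hypothetical policies as they might look after ingest normalization.
policies = [
    {"audience": {"region": {"NYC", "NJ"}}, "action": "BLACKOUT"},
    {"audience": {"device": {"mobile"}}, "action": "SWITCH_TO_ALT"},
]
```

In production, the returned action would drive the encoder, vIRD, packager, or manifest manipulator rather than being handed back to the caller.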
Comcast Technology Solutions (CTS) is uniquely positioned to enable Operator content switching requirements by leveraging our Linear Rights Management (LRM) solution. It is provided as SaaS (Software as a Service), running in the public cloud, minimizing the need for physical hardware and making it quick to start implementing and using. We normalize all sources that drive our customers’ systems to a standard SCTE 224 format and then leverage a backend rules engine and decision manager to enable the switching of video content from a centralized control system. It is designed with an easy-to-use web-based UI that enables operations personnel to track, audit, and make changes as necessary. We integrate seamlessly with your video stitching infrastructure or can provide that functionality as a service ourselves. As your needs change over time, it is simple to make changes and roll them out immediately across your whole network.
Customer Connections
Dear Valued Customers,
When I started last January, I had hoped to meet most of you personally, but obviously this wasn’t possible. In place of in-person meetings, your sales and account teams have been connecting virtually to help you with your business needs. And, we have hosted several educational webinars, lunch and learns, and our CTS Connects Summit in November to share the latest in ad tech along with our Comcast Advertising partners FreeWheel and Effectv.
2020 was a year unlike any other. We are seeing buy- and sell-sides working together more closely than ever before, with the creative taking center stage. Comcast Technology Solutions is here to help you succeed in this new environment with creative management, media activation, and automation technologies that will power the next generation of advertising technology.
In 2021, we are dedicated to continuing to connect with you to ensure we’re exceeding expectations as your trusted partner and keeping you informed of new products and services. To this end, we are pleased to share our new Customer Connections newsletter. Each month, we will bring you the latest news, data, and developments from Comcast Technology Solutions and the industry. We hope you’ll find it an informative tool to help drive results and get the most out of our solutions.
Thank you for choosing us as your advertising technology partner. We look forward to working with you this year and to continuing to provide the technology and solutions your business needs to adapt and grow.
Wishing you a happy and healthy 2021!
Cheers,
Simon Morris,
Head of Sales, Advertiser Solutions
NEW TECHNOLOGY MAKES AD DELIVERY EVEN EASIER TO USE
Comcast Technology Solutions is proud to introduce a new technology enhancement that further simplifies our Ad Delivery and Creative Management tools.
The WalkMe application provides end-to-end support for a quick and easy product experience.
WalkMe is designed to enhance your experience and streamline all the activities and tasks your team performs every day, across the application. Through on-screen step-by-step instructions you can immediately find answers to your questions and get your orders out quickly.
Try it out for yourself or request a demo.
INDUSTRY LEADING POST-PRODUCTION & VERSIONING
As you know, post-production and asset customization are often the last steps in the production process, but can end up impacting the success of your campaign. If you don’t have the right message, the right version, encoding or accessibility baked into your spot, you could end up missing out on vital opportunities.
Comcast Technology Solutions makes post-production easy with our Production Services. Voice-overs, reslates, versioning, closed captioning and more are done in our in-house studios. All assets can then be sent to thousands of linear and digital destinations automatically.
And, when combined with our centralized ad management platform, you can simplify and unify creative, talent, and distribution workflows across channels. Plus, get creative intelligence into how your ad performed, allowing you to further personalize, improve targeting, and ultimately drive greater ROI.
Let us help you with your post-production needs. Learn more about Production Services.
CTS Connects: Allison Olien Talks Business Continuity
In this video, Allison Olien, VP & GM of Communication and Technology Provider Solutions, discusses the importance of business continuity for her customers and how Comcast Technology Solutions is supporting customers.
Video Trends Vlog Episode 6: On Virtual Channels
Content providers and programmers are always on the lookout for new ways to monetize their content. Non-linear, on-demand streaming experiences may seem to dominate the conversation, but it would be completely wrong to interpret the rise of on-demand as a consumer rejection of the linear, “always on” broadcast experience, which is more popular than ever. More accurately, it’s a story about the increasing power of consumers to choose what they want to watch. What if you could put them together, and launch a channel that’s always full of stuff that a specific viewer wants to watch? It’s a compelling offer that’s now more attainable than ever.
Comcast Technology Solutions gives businesses a way to implement virtual channels that complement non-linear experiences with cool new niche channels that can attract and maintain like-minded audiences. Keep watching as Peter Gibson, Executive Director of Product Management and Joe Mancini, Director of Product Development talk with Matt Smith, Executive Director of Business Development and Strategy about how, with the right technology approach, businesses can quickly and cost-effectively augment their consumer offerings with tailored linear experiences.
Video Trends Vlog Episode 5: On Content Delivery — Getting the content to the user
How you deliver content to the user can have big implications on quality of experience. How fast, and reliably, content is delivered is often a major factor in signal interruptions, latency issues, and buffering.
In episode 5 of our series, Peter Gibson, Executive Director of Product Management, and Joe Mancini, Director of Product Development, talk with Matt Smith, Executive Director of Business Development and Strategy, about getting the content to the user, finding the best performance for your content, and the value of the Comcast experience and infrastructure for delivery.
Video Trends Vlog Episode 4: Content Monetization Strategies and Technologies
New features and greater interactivity add up to a richer experience for viewers – and stronger returns from your investments in content and technology. To maintain an operation that evolves with audiences, it’s important to stay centered on what experiences you want to provide to consumers – and how to make those experiences more relevant at the individual level. It’s not just about enabling content, but how to blend monetization approaches and merchandising to optimize the revenue potential of every program.
Join Matt Smith, Executive Director of Business Development and Strategy, Joe Mancini, Director of Product Development and Peter Gibson, Executive Director of Product Management as they talk about content management strategies and technologies that drive stronger profitability for multi-platform video commerce.
Video Trends Vlog Episode 3: Go-To-Market Strategy
2019 is proving to be another huge year for the industry, as some of the biggest entertainment brands are poised to launch new streaming and on-demand destinations of their own. Where business models used to be split up more clearly between subscriber-based approaches (SVOD), ad-supported models (AVOD), or shopping-cart transactions (TVOD), businesses are pivoting to a hybrid approach with the flexibility to incorporate all three – with the power to shift based on viewer insights and consumer trends.
In this video, Peter Gibson, Executive Director of Product Management and Joe Mancini, Director of Product Development both share some insights on how to go to market with an advanced, cost-effective technology plan that helps companies reach revenue targets.
Video Trends Vlog Episode 2: Getting Nerdy With SCTE 224
SCTE 224 – known affectionately as “Scuddy 224” – refers to an expanded Event Scheduling and Notification Interface (ESNI) standard that was launched in 2015 by the Society of Cable Telecommunications Engineers (SCTE). Simply put, SCTE 224 was born of the increased need for metadata to provide a deeper set of instructions on how and when a piece of content can be used.
SCTE 224 expands on previous standards, enabling content distribution based on attributes that are now critical to our mobile world, such as device type and geographic location. Audience-facing electronic programming guides (EPGs) are also created with this communication in order to set accurate expectations with viewers as to what’s going to be available and when.
Listen to Joe Mancini, Director of Product Development for Comcast Technology Solutions, explain why it’s so crucial to implement a way to manage the increasing complexity of linear rights across a complex sea of device types, delivery platforms, geographies, and legal requirements.
Video Trends Vlog Episode 1: Live and VOD Trends
A consistently fresh blend of live and on-demand content puts businesses on a strong path to attract viewers and keep them engaged. We’re seeing a shift in the industry where more content is being served via terrestrial delivery, while still working to preserve a broadcast-quality experience – an experience that’s not just high-quality, but a consistently reliable one regardless of the viewer’s device type or location.
Join Matt Smith, Executive Director of Business Development and Strategy, as he sits down with Peter Gibson, Executive Director of Product Management and Joe Mancini, Director of Product Development in the first of a five-part series of short videos where they cover some of the biggest VOD trends and critical business challenges for the video industry.
Channel Origination Overview
Channel origination has come a LONG way in just the last two decades. Allison Olien, Executive Director at Comcast Technology Solutions, provides a peek behind the curtains to see what video delivery used to be like, how it looks today, and where it's headed.
Metadata Makes Linear Video Exciting Again
Linear streaming services and alternative video distribution paths continue to increase in both diversity and reach. So, where is the next-level growth catalyst for linear video going to come from over the next 10–15 years? What’s going to keep it attractive in the eyes of viewers? It’s all in the (meta)data. The metadata associated with video can power complex use cases, enable more intelligence and automation, and extend users’ video experience beyond their first screen. Let’s dive deeper into some of these use case examples.
Regionalized/Time-shifted Playout, Affiliate Distribution, RSNs, and Player Entitlements
Managing a broadcast network at scale is a difficult task even for the most knowledgeable broadcaster. One of your biggest challenges is efficiently and economically creating your live channel without having to invest in unnecessary infrastructure. This is especially true if your channel requires significant time shifting or regionalization. Add in requirements for Regional Sports Networks (RSNs), Local Affiliates, time shifted channel playout, the transition from satellite distribution to IP, TV Everywhere (TVE)/Over The Top (OTT) and you’ve got a handful.
Taking Control of Regionalization with SCTE 224
Media and entertainment companies around the world deliver content to all screens that spans multiple time zones and regions. To create a more compelling viewing experience and keep viewers watching for longer, they may deliver time-delayed content across time zones and allow for local programs to be inserted in the regions they serve. There is a growing trend to add greater regionalization capabilities and make video consumption even more relevant to consumers in different geographies. The objective is to increase revenue from advertising and improve audience retention over time.
As the level of sophistication around controlling how a channel should be presented to the viewer increases, so too does the level of complexity. This covers a range of scenarios for how and when we want regions to behave differently from each other.
Examples include:
Event blackouts for content that viewers in certain regions are not entitled to see;
Content replacement using streams from other feeds;
Content insertion made specifically for a region that includes local native languages;
Dynamic ad insertion; and,
Handling cases where viewers travel outside of their local area but still want to watch the home channel on a mobile app.
Enabling Technology
As an industry, we can enable greater regionalization and have control of the process by using SCTE 224, an Event Scheduling Notification Interface (ESNI) standard that lets us describe what the audience for a channel can see over time. In other words, it creates the rules we need to control content access and therefore lets us address the complexities of managing regional content.
For a system to be practical, particularly at scale, it should be controlled centrally. This way, rules we want regional participants to follow can be pushed out to each region, so they know in advance what to do for each channel. Since SCTE 224 gives us a common format, the provider can automate its responses to these rules.
What is delivered by using SCTE 224 are trigger events that are sent in advance of the live stream for each channel. Each region is provided with its own specific set of trigger events it will use to implement the prescribed operations. These trigger events may be shared with other regions or may be unique to a region. SCTE 224 gives us the power to make these determinations. It also lets us make changes as needed, and post updates to affected regions immediately.
For the content originators, the result is that all content access rights are controlled centrally and made available to participants in a common form. This makes execution less prone to error since a common format is used by all participant regions.
While SCTE 224 defines how we want to manage the regions, we also need to associate this with video in the channels themselves. This is achieved using SCTE 35 in-band triggers — trigger data that is embedded in the stream carrying the channel. Triggers define exactly when an event needs to take place. When an SCTE 35 trigger arrives, it can be matched with the rule provided by SCTE 224 to ensure the system takes the correct action at that time. The benefit of this method is it accurately aligns the rules we want to enforce with the video and creates a clean viewing experience for consumers.
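To make the pairing of out-of-band rules and in-band triggers concrete, here is a minimal sketch in Python. The class names, fields, and the `resolve_action` helper are all hypothetical simplifications: real SCTE 224 documents are XML and real SCTE 35 triggers are binary payloads in the transport stream, but the matching logic follows the same shape — rules are pushed to each region in advance, and when a trigger arrives in the stream it is looked up against those rules to decide what that region should do.

```python
from dataclasses import dataclass

# Hypothetical, simplified stand-ins for SCTE 224 rules and SCTE 35
# triggers; real payloads are XML (224) and binary in-stream data (35).

@dataclass
class Scte224Rule:
    event_id: str   # segmentation event this rule applies to
    region: str     # audience/region the rule targets
    action: str     # e.g. "blackout", "switch_source", "insert_ad"

@dataclass
class Scte35Trigger:
    event_id: str   # segmentation event ID embedded in the stream

def resolve_action(trigger, rules, region, default="pass_through"):
    """Match an in-band SCTE 35 trigger against the SCTE 224 rules
    pushed to this region ahead of time; unmatched triggers fall
    back to normal playout."""
    for rule in rules:
        if rule.event_id == trigger.event_id and rule.region == region:
            return rule.action
    return default

# Rules distributed centrally, in advance, in a common format:
rules = [
    Scte224Rule("evt-123", "denver", "blackout"),
    Scte224Rule("evt-123", "boston", "pass_through"),
]
```

When the `evt-123` trigger arrives in the stream, the Denver region resolves it to a blackout while Boston plays through — the same trigger, different regional behavior, all controlled from the centrally distributed rules.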
Enabling Workflows with Comcast Technology Solutions
Comcast Technology Solutions is uniquely positioned to enable practical and powerful solutions for regionalization in support of hundreds of channels and all the regions in your network. This includes the ecosystem used to enable the system at each regional participant so that IRDs switch to the correct sources, blackouts are applied, DAI is triggered, and users watch the right variant of a channel for their current location.
Our solution is provided as a SaaS (Software as a Service), running in the public cloud, minimizing the need for physical hardware, and making it quick to start using and implementing. By normalizing all sources that drive your system to a standard SCTE 224 format, we make it easy to set up and fine tune each region. This includes defining regions by zip code, setting the rules for channel access and interaction with dynamic ad insertion. As your needs change over time, it’s simple to make changes and roll them out immediately across your whole network.
With Comcast Technology Solutions, you have a powerful and flexible way to implement regionalization that can be managed from anywhere with an internet connection.
Video Quality: Table Stakes for Audience Loyalty
“When consumers are paying for something or receiving video from a big-brand media company, they expect a premium high-quality experience. It has to be as good as TV, on every platform and device – and that’s our main priority in the years ahead.”
The above quote comes from one of the executives who participated in our newest research project with MTM (which is available at the bottom of this page). It’s not exactly a revelatory statement, but I’d argue that in my years with this industry I haven’t seen any evidence that consumer expectations for premium quality end with premium services. It’s quite literally the cornerstone of every video-centric service: if the video quality isn’t up to par, the best marketing in the world won’t matter, and frankly neither will the best content.
In Hindsight, Just How Clear Will 2020 Be?
This decade has started off with events that have upended a lot of trend lines – or sent them skyrocketing, as the case may be. Deloitte Insights recently published the fourteenth edition of their Media Insights survey, and it provides some fascinating data around how the COVID-19 pandemic has impacted streaming services. We already know that usage in every category shot through the roof – that’s been one of the biggest media stories of the year, as giant bandwidth consumers like Netflix and YouTube even agreed to temporarily reduce their usage by double-digit percentages by throttling back on 4K and hi-definition programming.
From a quality standpoint, it’s all a stark reminder to media companies that they’ve got to be ready for anything. For many consumers who purchase subscriptions and on-demand content, or who are supposed to be influenced by advertising, the pandemic has had an impact on both spending habits and budgets. Set against the backdrop of increased usage and cancel-at-any-time subscriptions, it’s clear that media destinations have to demonstrate more value on a daily basis in order to maintain screen time with consumers who are looking to cut costs.
Strong initial offers, incentives, or tentpole “big-event” content can do wonders for short-term gains, but the actual viewing experience has to be consistently on point in order to cultivate short-term watchers into a growing community and to build audience loyalty. It really is the sword that media companies need to keep sharp on both edges:
On one side is content – simply put, providing the things that people want to keep watching; not just to attract viewers, but potentially advertisers as well.
On the other side is the fact that on a global scale, you’ve got to serve that content to an ever-changing planet full of screens, most of which are mobile.
A content and distribution plan that’s built for quality is the first step for being competitive in the coming decade. From there, successful audience building needs a continued dedication to relationship building and performance optimization in order to get the most value out of every creative and operational investment. We reached out to a cross-section of our peers to see what that looks like.
New Paper: The TV 2025 Initiative
Comcast Technology Solutions has partnered with international research and strategy consulting firm MTM to present the TV 2025 Initiative; a collection of industry perspectives on the continued transformation of the video industry, and thoughts about what the future may hold for streaming services as we embark on a new decade.
Senior executives from over two dozen large media and entertainment companies participated in the research, sharing their current experiences and future expectations over a wide range of topics, tackling questions such as:
Now that streaming has finally come into its own, what are others in the industry expecting, from a quality and scale standpoint, for streaming’s near-term (and long-term) future?
What best practice trends are developing? How are media companies building a mix of internal teams and partner-driven expertise? How will your growth impact your technology’s ability to deliver broadcast-quality experiences?
At the rate of change in just the first part of this year, 2025 already feels like it’s practically around the corner; what do today’s executives think will be driving industry transformation? What can be learned in order to chart a strong course from here to there?
Get the paper right now, right here.
What’s Your Five Year Plan?
It’s been a long quarter this week.
At least around our virtual office, that’s been a general consensus pretty much all year long. It’s a feeling that gets stronger as we look back to the beginning of the year. Remember, way back in January? Before the concept of “social distancing” was in our daily lexicon? Or “virtual office” became a default setting? Remember when 2020 just looked like the start of another busy decade? Today, it feels different. It is different. I don’t think it matters where your skills or services sit in the overall scheme of media services and technologies. Even a pure entertainment destination feels more like an essential service than ever before.
But even as every week has a make-or-break feel to it, from content performance to network load to playback quality, all it takes is a little five-year perspective to recognize an inescapable truth: 2015 was nothing like 2020. And 2025 is likely going to be a whole new ballgame in and of itself. We’re all working hard to pivot with our audiences to stay relevant; but those long-term roadmaps are still as important as ever.
More Devices, More Power, More Viewing
At the end of 2019, Deloitte published its first Connectivity and Mobile Trends survey, which provides some powerful insights for advertisers, agencies, and media companies as they define their technology roadmaps and partnerships for the coming decade. It’s no secret that the upward trajectory of video advertising is placing more pressure on every mechanism and process that brings an ad from production to multi-platform consumption.
Deloitte’s data shows that within an average U.S. household, there are currently around eleven connected devices, at least seven of which have screens for video. This is a huge leap from just a few years ago: in 2014, a similar survey by Ericsson showed an average of around five connected devices. The focus of the survey wasn’t just about what’s going on today, but also to see how connectivity impacts consumption, and how behaviors might change as data gets faster.
Think about the basic paradigm shifts we’re dealing with in our workplaces and homes, and how these shifts impact content consumption. In our own research (more on that below), industry professionals we contacted made particular note of the fact that a lot of the changes in consumption habits, spurred on by quarantines and social distancing, aren’t going to go away. Coupled with the faster, more capable networks of the not-too-distant future, more viewers have first-hand experience in what streaming has to offer. But what happens when “connected” becomes a foregone conclusion for even more devices? Certainly, the media market is going to get even more diverse, driving a real need for more automation, better data, and more emphasis on personalized experiences that win screens.
Streaming’s Growing Competitive Landscape
In just the past year, premium streaming content has blown up, with major content owners like Disney, HBO, and NBC Universal (also a Comcast company) bringing their wares to bear with an in-house offering. Streaming as a whole has really come into its own, with the experience itself commanding more respect (and investment) than the “bolted-on” feel of older iterations that just barely satisfied the goal of providing “something in the OTT space.”
With so many big media names launching new offerings of their own, it’s important not to overlook the increasing potential for smaller, niche channels to thrive. In fact, as we all spend more time focusing on customer journeys and personalization of content, there’s never been a better time for new audience-specific destinations to partner up with the platforms and technology providers needed to stand up a brand that’s new, differentiated, and advertising-supported.
New Paper: The TV 2025 Initiative
What’s your five year plan? Comcast Technology Solutions has again partnered with international research and strategy consulting firm MTM to present the TV 2025 Initiative; a collection of industry perspectives on the transformation of TV and thoughts about what the future may hold for streaming services as we embark on a new decade.
Senior executives from over two dozen companies participated in the research, sharing their current experiences and future expectations over a wide range of topics, tackling questions such as:
Now that streaming has finally come into its own, how are you managing your services? Are there differences between approaches based on complexity, or based on a service’s relationship with a broadcast service?
What best practice trends are developing? How are media companies building a mix of internal teams and partner-driven expertise?
At the rate of change in just the first part of this year, 2025 already feels like it’s practically around the corner; what do today’s executives think will be driving industry transformation? What can be learned in order to chart a strong course from here to there?
Get the paper right now, right here.
Get your head in the clouds
Make “me-time” more accessible to your audience, regardless of the situation.
No matter how your business sees the future, keeping pace with evolving viewing habits is critical to growing your audience. Consumers now expect the ability to curate their own viewing experiences, including where, when, and how they watch.
According to Deloitte, on average more than 60% of Millennials and Generation Z consume streaming video daily. By comparison, less than 30% of Baby Boomers stream video daily.
Therefore, it’s imperative to make the user experience seamless across all screens to reduce user confusion and friction in using your service. This means linking to on-demand content and functionality from the broadcast experience with services like instant restart, and making all devices behave the same. Now is the time to streamline the viewing experience by converging broadcast and digital to reduce costs and overheads.
Innovative business models can help make your offering accessible to ever-more niche groups of users. For example, if the data says your millennial viewers are only interested in your reality TV shows, make a reality TV show bundle. If Baby Boomers want classic movies or news, do likewise.
While people adapt their viewing habits, and adjust to new services, at different paces, technology keeps evolving. Historically, technology has required that linear and digital businesses be run as two separate units. But by converging broadcast and OTT systems, broadcasters and operators can significantly reduce the doubling-up of efforts to drive savings and increase operational efficiencies.
With a Cloud TV platform, you can put an end to consumer confusion, and create engaging user and brand experiences, while providing the tools needed for business model innovation.
Maximize reach and revenue with a complete cloud-based solution that works on any device, with any business model, and launch unified user experiences across all devices.
Automobile Advertising: Keeping Dealers In Alignment
Automobile manufacturers advertise the experience of driving and using a car. Automobile dealers do too, but more importantly they advertise the experience of buying and owning that car from their dealership. Together, both manufacturer and dealer are working to court new car buyers in a concerted effort to win loyalty on multiple levels – and if everything stays on track, over multiple car purchases. In fact, according to a 2019 white paper by AutoPoint*, the post-purchase experience means 52% more to brand loyalty than in any other industry. So, when it comes to advertising, dealers need all the love they can get from the carmakers they represent.
As advertising gets more complicated, and ad destinations get more diverse, the need for manufacturers and dealers to be in alignment has never been greater. At the top, brand continuity and marketing quality benefit immeasurably by helping the ad operations at the dealer level to operate smoothly. And, it must be stated that the sophistication of those individual dealers varies from shop to shop. An individual outlet needs the same media as a high-powered dealer network with multiple brands and locations, and needs to get their ads completed and in-market just as quickly, but likely with a smaller budget. The easier it is for each constituent in the advertising workflow to find, share, and manipulate ad creative, the better.
An “Internal Collaboration Engine” for Advertisers
For one of our clients, a major global automotive brand, improving the speed and accuracy of media management with dealers is a two-way benefit: for the brand, it’s easier to keep campaigns on course, and for the individual dealers it’s improving their ability to get their work done and delivered on time.
Big-budget creative is filmed and created by the manufacturer for a wide variety of vehicles, from sports cars to sport-utility vehicles. Dealers need this footage as the foundation for their own spots by adding their own messaging: promotions, incentives, sales floor footage, service center specials, etc. Like every advertising effort, every commercial must be transcoded into a multitude of formats. Multiplied across products, and with multiple campaigns active simultaneously at any given time, it gets really complicated for dealers to find and access the media files they need.
“As we explored how to improve this internal relationship, we recognized this fantastic synergy between a problem our partner needed solving, and a similar problem we had already solved for internally in a completely different area,” explains Simon Morris, Senior Director of Product Sales, Advertiser Solutions. “We had built a tool that made it easy for our broadcast partners to find our ad creative easily through a secure cloud-based tool, and then grab the right version with the correct aspect ratio and coding. So, to create something for automotive dealers, the foundation was already there, and that meant it was easy to organize and display content in a way that worked for our automotive partners. We continue to focus on building tools that solve wider industry needs, and in particular ones that are focused on driving operational efficiencies, speed and transparency.”
The portal isn’t just a static “box in the sky,” but also a vehicle to improve communication within the internal ad ecosystem. No one wants to use creative that might be outdated, and all content needs to be in accordance with talent and other contractual rights associated with it. Integrated with the Ad Management Platform, the new portal accomplishes this, giving dealers a straightforward way to get automatic updates and notifications of newly available creative.
New Article: The Great Car Chase
The manufacturer – dealer relationship is not unique to the automobile industry. Many industries have similar advertising relationships with franchisees that stretch globally: consumer-packaged goods companies, restaurant chains, or other complex product creators whose ongoing consumer relationships will be nurtured and maintained at a local level.
In our new paper, The Great Car Chase, we use the automotive industry as our vehicle to examine advertising complexity at the highest level, and how to gain control of it. Global companies can – and must – shift to a centralized architecture that improves both transparency and control across all of their global ad traffic. In doing so, they can also position themselves to incorporate new technologies that will deliver tailored content in real time; elevating the performance of every campaign. Read it to learn more about how to:
Maximize ROI by optimizing through a unified solution that promotes collaboration and informs better decisions across the entire workflow to elicit stronger, positive buyer responses
Optimize on the fly to meet tight deadlines, handle versioning needs automatically, and respond quickly to changing opportunities
Simplify access and management of creative assets for local dealerships.
Deliver campaigns worldwide using one centralized platform.
Take back control and become the conductor of your brand across any market, any screen.
*Customer Loyalty Drivers – Current and Future, AutoPoint, April 2020
The High-Performance Content Catalog: INSIGHTS TO GET THERE
Media destinations are largely defined by what’s in their catalog of content, but what really matters is what audiences are actually watching. It’s an obvious statement that premium content commands top dollar because it drives more screen time. But what’s going on with the entirety of your offering? Title-specific analytics are crucial for valuating a program, but programmers and providers alike can really benefit from a more holistic view of how larger parts of their catalog are working for them (or not working, as the case may be).
Smarter decisions around content aren’t just about knowing what to buy and what to create. Understanding what audiences aren’t watching is as valuable as knowing what’s doing well. We’ve all experienced going to a service we use regularly and scrolling through suggested programs similar to ones we’ve previously viewed (or whatever content needs an ROI boost). After that it can seem like an exercise in diminishing returns, but most of us have dug deep into a programming guide only to find something fantastic that we didn’t even know was offered. With some better-informed merchandising or marketing moves, the entire catalog’s ability to create stickier audiences improves.
To provide these macro views across a wide spectrum of content engagement, Comcast Technology Solutions is introducing Insights – a new service to give companies a holistic view into the overall health of their content catalog.
INSIGHTS: SOME NEEDED CONTEXT FOR YOUR CONTENT
“Insights is a new service that is focused specifically on business impact,” explains Peter Gibson, Executive Director of Product Management for Comcast Technology Solutions. “So, it’s less about the number of times a viewer streamed a piece of content, or how quality is impacting playback, but more about how your content is tracking with audiences as a whole.” Insights provides answers to questions like:
Are audiences watching what you think they’re watching?
What’s getting binge-watched? And for how long per session?
Have you become a viewing habit for your audience?
By combining both historical and daily views of your audience’s actual content consumption, you can make practical decisions based more on real-world context, with less conjecture. For example, perhaps you’ve been purchasing a lot of dramatic programming, but your audience is watching a lot of comedy. Insights can help to provide clarity where parts of your catalog might be underutilized due to lack of discovery, which might spur some additional advertising or some new merchandising decisions. Or, it might show that content creation and acquisition dollars could be reassessed based on the holistic impact they’re having on audience engagement and retention.
HOW INSIGHTS WORKS
Insights is a new feature within Comcast Technology Solutions’ Cloud Video Platform. Information is collected on the server side, which translates into minimal client-side implementation work to get Insights up and running. Once it’s up and running, Insights combines historical data with current performance measurements to provide context over time. The Insights dashboards include a wealth of data not normally found within analytics solutions. Among the multitude of benefits:
Binge Tracking
Understand how much audiences are watching in one session, and for what period of time. Clearly we ALL want to encourage binge-watching. Insights provides some needed clarity around not just which shows are finding their audience, but also how those audiences are watching with tools like Device and Content Insights.
Measuring Engagement
How much do audiences really like your service? Are audiences coming back actively and consistently (a sticky, engaged viewer), or is it just something that they’re getting billed for (a high churn risk)? Engagement measurement can lead to opportunities to re-jigger marketing strategy or direct outreach / offers to viewers.
Attention Indexing
Attention Indexing looks at your entire catalog to see how audiences are really watching. For instance, a new show might get a lot of play, but is it getting played in its entirety? Or are folks tuning in to a longer movie and only playing part of it? Catalog insights might determine that folks are clicking on content because it’s merchandised, but they don’t stick with it to completion. Lots of starts with low completion rates… that’s a problem that most out-of-the-box analytics tools aren’t designed to illuminate.
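The starts-versus-completions signal behind this kind of attention metric can be sketched in a few lines of Python. This is a hypothetical simplification, not the Insights implementation: the session records and the 90% completion threshold are illustrative assumptions.

```python
# Hypothetical session records: (title, seconds_watched, title_length_seconds)
sessions = [
    ("new_show_ep1", 2700, 2700),   # watched to the end
    ("new_show_ep1", 300, 2700),    # clicked, then bailed early
    ("long_movie", 1200, 7200),     # sampled a piece of a long film
    ("long_movie", 7200, 7200),     # watched it all
]

def completion_rates(sessions, threshold=0.9):
    """Per title: the share of sessions that played at least `threshold`
    of the runtime. Many starts paired with a low rate flags content
    that attracts clicks (e.g. via merchandising) but doesn't hold
    attention to completion."""
    starts, completes = {}, {}
    for title, watched, length in sessions:
        starts[title] = starts.get(title, 0) + 1
        if watched >= threshold * length:
            completes[title] = completes.get(title, 0) + 1
    return {t: completes.get(t, 0) / n for t, n in starts.items()}
```

With the sample data above, both titles come out at a 50% completion rate, but the interpretation differs: for the new show it may signal a merchandising-driven click that didn’t stick, exactly the pattern the paragraph describes.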
MACRO-LEVEL UNDERSTANDING FROM DAY ONE
“Insights can immediately help you make content catalog merchandising, marketing, and policy decisions with data based on your historical video activity,” explains Gibson, “Plus, you can easily add it to your existing analytics solutions if needed.” In the space between how you want your content to be used and the reality of your audience’s behaviors, Insights gives you an added layer of business intelligence that brings you to a deeper understanding.
Learn more about Insights, our CTSuite for Content and Streaming Providers, and our CTSuite for MVPDs and Operators.
Ad Management for Political Campaigns
As the calendar slips into the start of a new decade, the 2020s are poised to roar for advertisers ahead of one of the most anticipated U.S. election seasons in memory.
Need proof? Check out what the industry experts are predicting. Kantar Media forecasts political campaigns for U.S. federal office will spend $6 billion on paid media placements this year. Broadcast is projected to take in $3.2 billion and cable $1.2 billion, according to the Kantar study. The study did not include ads sponsored by Political Action Committees (PACs).
Of the $6 billion in political campaign spending this cycle, Kantar expects 20%, or $1.2 billion, to go to digital.
According to Cross Screen Media, there could be eight million broadcast airings of political ads in 2020, an increase from 5.5 million in 2018. A lot of the political ads will air on local newscasts. According to a report from Kantar CMAG, political advertising took up 43% of available slots on local news in battleground markets, compared to just 10% at the start of the season.
eMarketer estimates US advertisers will spend $42.58 billion on digital video placements overall, amounting to 28.1% of digital ad spending. Most of those placements will be bought programmatically, with 82% of US digital video ad spending forecast to be transacted in automated channels next year.
Why are experts predicting such large advertising spends this season? In addition to a banner ad year, experts are also expecting a tremendous voter turnout.
Turnout in 2018 was the highest it had been for a midterm election since 1914, according to University of Florida political science professor Michael McDonald, a nationally recognized elections expert who maintains the voter data site United States Election Project. Next year, he predicts, 65%-66% of eligible voters will turn out, the highest since 1908, when turnout was 65.7%.
For political advertisers to capitalize on their advertising opportunities and reach voters at the right time and right place, they need a campaign built on speed, mobility, quality, reliability and scale. To stay ahead of the polls and make an impact fast, you need to get your message to voters faster, at higher quality, without worries and hassles.
“Video advertising is essential to modern political campaigns, helping to build awareness of both candidates and issues. Fast, efficient placement of broadcast-quality advertisements is essential for a high-functioning campaign,” said Richard Nunn, Vice President and General Manager of the Ad Platform at Comcast Technology Solutions.
The CTSuite offers advertisers a new solution for political clients designed to get their message to potential voters quickly, and across more touchpoints than ever before.
Manage campaigns, ad spots, payments, closed-captioning, versioning and distribution all from a mobile platform “on-the-go.” Fully transparent order entry and management provide views into the lifecycle of spots from upload to final delivery.
Cast your vote for efficiency...
and learn more about ad delivery with the mobility and scope to keep pace with your passion.
Why mobile should drive your CDN strategy
A strong relationship between content creator and consumer is built on reliable, consistent, playback experiences. Don’t leave your customers stranded without the connection they need to the content they love. A global world demands consumers have universal access where, and when, they want it.
According to eMarketer, 75% of all video plays are now on mobile devices. Mobile video traffic accounts for two-thirds (63%) of all mobile data traffic, and will grow to account for nearly four-fifths (79%) of all mobile data traffic by 2022 – representing a 9X increase since 2017, according to a 2019 report by Cisco. By 2022, Cisco projects that mobile will represent 20% of total IP traffic, the number of mobile-connected devices per capita will reach 1.5, the average global smartphone connection speed will surpass 40 Mbps, and smartphones will account for more than 90% of mobile data traffic.
In fact, consumers in the United States spend nearly as much time watching videos as they do working, nearly 40 hours per week, according to Deloitte.
As viewing experiences become increasingly mobile, the importance of delivering content seamlessly across vast geographic landscapes is paramount. Content creators must ask if their CDN is smart enough to always chart the shortest, cleanest, most efficient, and highest-quality pathway to every screen?
Meet your audience where they are with a multi-CDN strategy
Depending on the content delivery strategy utilized by the creator, consumers may, or may not, be able to enjoy the highest-quality viewing experiences.
Today, more video content is being watched than ever before, with 85% of all internet users in the U.S. watching online video content monthly on their device of choice (Statista, 2018). The global video streaming market is estimated to be worth $124.57 billion by 2025, according to Market Watch.
Speed alone will not ensure your content gets to where it needs to be as quickly as possible. The edge-tier of every CDN provides points of presence (PoPs), where your audience and your content interact. By employing more than one CDN, you get more PoPs for your programming. Additional CDNs can widen your delivery footprint and allow you to serve customers from many more locations and over different network routes – this is especially true when attempting to deliver content to a global audience.
The variability of performance is another reason to utilize a multi-CDN strategy. Individual CDNs vary in their performance across different locations, networks, and times of day. No single CDN can perform well in every area of the world. Quality of experience will differ depending on viewers’ proximity to points of presence, as well as capacity and utilization.
A multi-CDN media delivery strategy ensures downloads are seamless and downtime is not a concern, no matter your location or device.
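The routing decision at the heart of a multi-CDN setup can be illustrated with a short Python sketch. This is a hypothetical simplification under stated assumptions, not any vendor’s routing logic: the CDN names, the per-region latency/error measurements, and the error-rate cutoff are all illustrative, but the shape is the point — measure each CDN per region, exclude unhealthy ones, and route to the best remaining option, so a second CDN always gives the router somewhere to fail over to.

```python
# Hypothetical per-region measurements for each CDN: (latency_ms, error_rate)
measurements = {
    "us-east": {"cdn_a": (25, 0.001), "cdn_b": (40, 0.0005)},
    "eu-west": {"cdn_a": (90, 0.02),  "cdn_b": (35, 0.001)},
}

def pick_cdn(region, measurements, max_error_rate=0.01):
    """Route requests in a region to the lowest-latency CDN whose
    recent error rate is acceptable; with a single CDN there is no
    fallback when it degrades."""
    candidates = [
        (latency, name)
        for name, (latency, err) in measurements[region].items()
        if err <= max_error_rate
    ]
    if not candidates:
        raise RuntimeError(f"no healthy CDN available for {region}")
    return min(candidates)[1]
```

With the sample measurements, US-East traffic routes to the faster `cdn_a`, while EU-West traffic avoids `cdn_a` (its error rate exceeds the cutoff) and lands on `cdn_b` — the same audience-wide decision the paragraph describes, made per region from live data.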
Preparing for tomorrow today
Today, content is created for global audiences. According to YouTube, 80% of users come from outside the U.S., and YouTube serves 88 countries in 76 languages, reaching 95% of all internet users.
Ericsson’s November 2018 Mobility Report predicts video traffic to grow 35% annually through 2024—increasing from 27 exabytes per month in 2018 to 136 EB in 2024. This means video’s share of global mobile traffic will rise to 74% from 60% today.
As content creators look toward the future, how and where consumers view their creative will occupy increasing levels of mindshare when devising delivery network strategies.
Avoid the additional resources required to manage multiple partners independently, and leverage Comcast’s multi-CDN fabric to deliver your broadcast quality content to any device. Take advantage of the Comcast CDN and other pure-play CDNs using DLVR multi-CDN routing technology with direct interconnects to more than 250 global networks.
Learn more about our CDN Suite of services
Harness the full capabilities of the marketplace with advertising automation
Today, brands debate whether to prioritize broadcast or digital video channels in their advertising campaigns. But why choose? A blended, multichannel strategy can increase ROI by as much as 35%, according to the Advertising Research Foundation.
However, in a survey by 4C and Advertiser Perceptions, half of the marketers interviewed said the inability to plan media seamlessly across platforms is a top challenge, citing a lack of full-funnel visibility and control.
A typical Fortune 500 advertiser currently cobbles together solutions from dozens, or even hundreds, of media sources, technology partners and data providers. As a result, it’s nearly impossible to get a full picture of the media supply chain from ad creation to delivery.
While ads may still reach the end user, inefficiencies built into these workflows create costly redundancies and time-consuming manual processes, hindering an advertiser’s ability to take full advantage of the sophisticated targeting capabilities now available.
“To the outside world, yes, ads still get to their end user,” said Richard Nunn, VP and General Manager of Advertising at Comcast Technology Solutions. “It isn’t broken in its entirety. It works, and it has worked for 70 years. But the reality is that it doesn’t work well enough. The process should be faster, more seamless, more cost effective and free of errors.”
Create at the speed of automation
In the past, it could take weeks and millions of dollars to manually create hundreds of video ad versions, matching creative with audience-level targeting. Today, advertisers can create hundreds of ad versions in under 60 seconds. Why is this necessary?
Technology is evolving to provide advertisers with ways to better understand viewers contextually and respond not just by serving up ads for more relevant products, but also to tailor the ads themselves to elicit more positive buyer behavior and support a better viewer experience.
“Being able to match targeting to creative execution on the fly is really the holy grail of marketing,” Nunn said. “It gives consumers the personalized experiences they’ve shown they want to engage with, and it reduces the complexity of execution by removing many of the manual interventions that were slowing advertisers down.”
Technologies like dynamic creative optimization (“DCO”) hold the promise of a new generation of more personalized ad service – a win-win for audiences and advertisers alike. But to fully realize this dream, advertisers need more than a DCO solution.
They also need to adopt a more unified approach to enable faster, more seamless management of creative, media buy and distribution. Many advertisers and their agency partners still rely on emails, spreadsheets and individual vendor platforms to communicate media and creative availability, delivery and reporting. A centralized, automated ad-management tool where media buys are identified, filled and tracked is an essential component to sophisticated ad targeting and versioning, allowing personalized spots to be delivered in minutes, not days.
“This isn’t the silver bullet, but it solves one of the big underlying problems – automation and trying to connect the dots in this world of converged channels and formats,” Nunn said.
Bringing together many of these “dots” – including distribution, creative customization and reporting – in one management tool helps restore advertiser control over an increasingly fragmented process. It empowers advertisers and brands to deliver near-real time customization to satisfy consumer expectations and drives measurable results.
Comcast Technology Solutions is reimagining the ad management and ad versioning processes by introducing automation – helping advertisers generate customized ad creative capable of driving a measurable increase in ROI.
Click Here to Learn More
Unique new ways to maximize classic VOD content
What’s old could be new again.
From music and fashion to television and movies, culture and content have a way of coming back around and finding new audiences — especially in an age where your new favorite (and long ago canceled) show is only a click away.
“Your 13-year-old may discover a TV series from the 80s like Knight Rider, or Miami Vice, and fall in love with it,” Comcast Technology Solutions’ Matt Smith said. “There are countless shows sitting out there that could provide value.”
Across the globe, there are huge back catalogs of videos and shows without an opportunity to shine. With the right technology, your favorite old show could find a new audience and help drive new revenue.
Today, 78% of people watch online videos every week, and 55% view online videos every day, HubSpot reports. Video is expected to make up 82% of internet traffic by 2021, according to Cisco, and content providers are searching for the next classic show to cross over to a new generation.
In the most eye-catching example, WarnerMedia paid a reported $425 million over five years to get Warner Bros. Television’s “Friends” away from Netflix, and now plans to offer it on its streaming service HBO Max, set to launch in spring 2020.
However, using library content to build a new platform isn’t a recent phenomenon. When cable channels first began to take off in large numbers during the 1980s and ’90s, retired broadcast network series served as their lifeblood. “The Andy Griffith Show,” the biggest hit sitcom of the 1960s, was among the most popular shows on TBS, and reruns of NBC’s procedural crime drama “Law & Order” turned A&E into a viewer destination, according to the Los Angeles Times.
Grow and curate your audience
By 2020 there will be close to 1 million minutes of video crossing the internet per second, and by 2022, online videos will make up more than 82% of all consumer internet traffic — 15 times higher than it was in 2017, according to Cisco.
With the Virtual Channels platform from Comcast Technology Solutions, you can grow an existing fan base — or create one from scratch — by pulling complementary content together into seamless experiences that like-minded viewers will keep watching. Launch a brand that is truly fan-centric. Or, aim for a particular demographic with crowd-pleasing programming.
Discover a new way to turn existing on-demand and live programming into linear, “always-on,” over-the-top digital experiences. Stitch together curated programming, then schedule and serve it to audiences seamlessly.
So, break out your pastel T-shirts and linen suits. Get your black Ferrari convertibles revved up. And, put on your favorite pair of Ray-Bans. There’s enough neon, and potential syndication gold, out there to bring in additional ROI like it’s 1984.
Learn more about how Comcast Technology Solutions makes it easier, and more affordable than ever, to add choice, value, and revenue to your video offerings.
CDN: Media Delivery for Gaming and Virtual Reality
Developers spend years crafting the newest games. When the next big release date comes, the last thing creators and consumers want are slow speeds and long download times.
Worldwide online gaming traffic reached 915 petabytes per month in 2016 and is expected to increase by 79% in 2019, according to Cisco Systems. As the scale of online traffic grows, so does the need for delivery networks to carry the content. The Cisco Visual Networking Index (VNI) forecasts a 36% compound annual growth rate in Content Delivery Network (CDN) traffic per month from 2017 to 2022. As file sizes grow exponentially, the need for CDNs to provide continuous, reliable and fast access becomes ever more critical. The number of end users downloading content is also expanding quickly, making localized delivery that much more important as geographical demographics expand.
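For a sense of what a 36% compound annual growth rate actually means, the quick arithmetic below compounds that rate over the five years from 2017 to 2022. The figures are relative: traffic in 2022 as a multiple of the 2017 baseline.

```python
# Back-of-the-envelope compound growth check for the CDN traffic forecast.

def compound_growth(rate: float, years: int) -> float:
    """Multiple of the baseline after `years` years of compounding at `rate`."""
    return (1 + rate) ** years

multiple = compound_growth(0.36, 5)
print(round(multiple, 2))  # ~4.65x the 2017 monthly CDN traffic by 2022
```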
In addition, the appetite for gaming is expanding to eSports, virtual reality, and other new forms of media and entertainment, changing the way many think about these offerings. According to a 2018 survey by the Entertainment Software Association, 79% of gamers report video games provide mental stimulation, as well as relaxation and stress relief (78%). The role of video games in the American family is also changing, with 74% of parents believing they can be educational for children, and 57% indicating they enjoy playing with their child weekly.
The economics of eSports and VR
Esports is already big business. Thanks to advertising, sponsorship and media rights, eSports revenue is expected to reach $1.49 billion by 2020, according to Newzoo, a gaming industry analytics firm. North America will generate $409 million in 2019, the most of any region, the report found. China will generate 19% and South Korea 6%, with the rest of the world comprising the remaining 38%.
In July 2019, 16-year-old Kyle 'Bugha' Giersdorf, from Pennsylvania, was crowned king of Fortnite and took home the $3 million grand prize. The tournament, held in New York's Arthur Ashe tennis stadium, had a total payout across all levels and ranks of $30 million.
Earlier this year Comcast announced it would be building a $50 million eSports stadium in Philadelphia, and would be partnering with Korean phone company SK Telecom to create a new eSports organization called T1 Entertainment & Sports, which will field teams in a number of different video games, including League of Legends, Fortnite, and Super Smash Brothers.
The global VR gaming market is expected to be worth $22.9 billion by the end of 2020, according to Statista. VR is expected to generate $150 billion in revenue by 2020, according to Fortune, and 500 million VR headsets are expected to be sold by 2025, according to Piper Jaffrey.
When the true VR tipping point will come is the subject of much speculation, but one way the technology could penetrate the consumer market more deeply is by starting with public venues that allow users to feel comfortable with it. A recent article in Variety discussed how Las Vegas is transforming into a kind of virtual reality hub, with VR arcades and location-based VR entertainment centers opening up in casinos and shopping centers. One of the first casinos to embrace VR was the MGM Grand, which opened a free-roam VR arena in cooperation with VR startup Zero Latency in September of 2017.
A 4K video can consume up to 10 GB of data per hour. A 4K VR experience will require several times that bandwidth. While transporting vast amounts of data, VR content providers, like game developers, need to solve the challenges of latency, quality and speed.
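Those figures translate directly into sustained bandwidth requirements. The quick conversion below (using decimal units, 1 GB = 10^9 bytes) shows that 10 GB per hour works out to roughly 22 Mbps, so a VR stream needing several times that quickly lands in the 60–100+ Mbps range; the 4x multiplier is just an illustrative assumption.

```python
# Convert an hourly data budget into a sustained bitrate.
# Decimal units assumed: 1 GB = 10**9 bytes, 1 Mbps = 10**6 bits/second.

def gb_per_hour_to_mbps(gb_per_hour: float) -> float:
    bits_per_hour = gb_per_hour * 10**9 * 8
    return bits_per_hour / 3600 / 10**6

four_k = gb_per_hour_to_mbps(10)
print(round(four_k, 1))      # ~22.2 Mbps sustained for 4K video
print(round(four_k * 4, 1))  # ~88.9 Mbps if VR needs 4x that (assumption)
```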
CDN, interactivity, and media delivery
The streaming of interactive content like games and VR differs from video streaming in one major way. While video streaming is a one-way delivery from servers to your device, games and VR are a two-way street. Every time a button is pressed, the signal must travel to the server where the experience is being run. Then, the signal from the server has to go back to represent the result of pressing the button.
A multi-CDN media delivery strategy ensures this process is seamless and downtime is not a concern. If one server goes down, the next is ready to compensate, delivering content cached nearest to the user. This improves performance and reduces errors by offloading bandwidth from the origin.
The more servers, the better the odds a consumer is going to get the highest-quality speeds without lag.
Thanks to this two-way street, viewers and gamers can provide feedback in real time, join interactive events that coincide with live media, and unlock rewards within content.
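As a toy illustration of that round trip, the snippet below runs a tiny echo server on localhost and times how long a simulated "button press" takes to travel to the server and back. It exists purely to show the request/response round trip that bounds interactivity; real game servers, protocols, and network paths are far more complex.

```python
# Toy round-trip demo: unlike one-way video delivery, every interactive
# input must reach the server and the result must come back before the
# player sees it, so round-trip time (RTT) bounds responsiveness.
import socket
import threading
import time

def echo_server(listener: socket.socket) -> None:
    conn, _ = listener.accept()
    with conn:
        while data := conn.recv(64):
            conn.sendall(data)  # echo the "button press" result back

server = socket.socket()
server.bind(("127.0.0.1", 0))  # OS picks a free port
server.listen(1)
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

client = socket.create_connection(server.getsockname())
start = time.perf_counter()
client.sendall(b"button_press")  # input travels to the server...
reply = client.recv(64)          # ...and the result travels back
rtt_ms = (time.perf_counter() - start) * 1000
client.close()
print(reply == b"button_press", rtt_ms >= 0)
```

On a real network, every extra hop between player and server adds to that RTT, which is exactly why edge PoPs close to the audience matter so much for interactive content.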
While interactivity might have started in gaming, the tactic will likely carry over into nearly every kind of media audience. The interactivity of advanced advertising, where viewers can feed back in real time, or possibly join an interactive event, becomes increasingly attractive.
A 2017 report from Magna found interactive video ads drive 47% more time viewing a message compared to a non-interactive ad. Even if viewers don’t click, simply having the option to interact makes the ad 32% more memorable than non-interactive ads, the report found.
Learn more about how Comcast Technology Solutions global media delivery innovations can help reimagine your strategy, and prepare your business for what comes next.
Q&A with Allison Olien: How today’s opportunities can be tomorrow’s foundation for MVPDs
Allison Olien, Vice President and General Manager for Comcast Technology Solutions, collaborates with MVPDs and Content Providers to introduce the newest, most innovative offerings. She took some time to chat with us about how she sees the evolution of the industry for small- and medium-sized MVPDs, and new opportunities for growth.
Self-Service Advertising: Reimagining advertising in the U.S.
Self-service advertising models are bringing transparency, control and efficiency to the forefront of the international market. Here in the United States, by contrast, we typically opt for managed solutions – a distinction that is changing fast due to the rise of self-serve digital platforms.
Running campaigns through fully-managed platforms puts the responsibility and control in the hands of a third party. Someone else uploads creative, makes changes or optimizations, and sends back insights. For those with limited time or resources, this can be the right option.
However, for those with the appetite, a self-service advertising platform capable of navigating an entire campaign lifecycle, from planning to execution and reporting, presents major benefits.
We’ve seen the use of self-service platforms on both the demand and supply side increase as knowledge and expertise have grown. Agencies and brands now use these platforms directly to deploy and manage advertising spend and activity.
Why Self-Service?
The self-service model empowers advertisers to create and change assets quickly, deliver ads in near-real time, and take control of campaign workflows from start to finish. By limiting the number of hands touching any given campaign, self-service offers increased brand safety, as well as deeper insight into each part of the workflow.
For agencies, manually executing campaigns and working with each advertiser individually can be costly. Automating the process cuts down on time and resources. In addition, self-service can help build links between media agencies, post houses, creative agencies and clients as they work together in a shared platform.
In 2017, IAB commented on the rising trend of self-service models in their annual Internet Advertising Revenue Report saying, “of the roughly 9 million small and medium-sized businesses in the U.S., 75% or more have spent money on advertising. Of these, 80% have used self-service platforms, and 15% have leveraged programmatic advertising. These findings point to significant ad spend going to publishers with these capabilities, and presents a growth opportunity for those that can add self-service and programmatic geared to these businesses into the mix.”
Barriers to Change
I hosted a February 2019 IAB panel discussing how self-service can help solve the complexity of disparate workflows. Peach, an advertising partner of Comcast Technology Solutions, delivers millions of ads every year to more than 100 countries internationally, where the adoption rates for self-service are the opposite of what we see in the U.S.
“We’re up at 98% self-service. So, pretty much everything is self-service internationally,” Peach Head of Enterprise Sales Alex Abrams said during the panel discussion. “In the States, it’s probably 98% managed-service. It creates inefficiencies and barriers to move into 24/7 automation.”
Peach transitioned from managed-service to self-service about four years ago, when automated workflows and technology made it possible at scale. The U.S. market has been more reluctant.
Owning the workflow
Justin Morgan, Senior Director of Marketing Automation at Comcast, described the impetus for a self-service approach, expressing the market’s desire to create efficiencies by having control over each portion of the workflow.
“I think there is a wave of companies wanting to own their data. Own some production elements. Own really anything they can,” Morgan said during the panel discussion. “Put simply, I can either give direction to a company to put something in implicitly, or I can just put it into the system myself. Obviously, the latter is more efficient and faster, and probably cheaper.”
“We should have all of that at our fingertips. So, from that perspective, everything we are doing now is really to move toward a self-service model,” Morgan said.
The evolution toward self-service may be daunting to some, but there has never been a better time to rethink your strategy.
According to the most recent IAB Internet Advertising Revenue Report, digital video advertising revenue reached $7 billion in the first half of 2018, up 35% from the previous year. Cisco estimates by 2021, 80% of all the world’s internet traffic will be video.
In addition, research from Digital Content NewFronts’ 2019 Video Ad Spend Report showed brands expected, on average, to increase spend by 25 percent to $18 million on digital video in 2019—with nearly two-thirds (59%) of ad buyers saying they planned to increase their advanced TV spend.
The U.S. market is poised to reap the benefits of a model designed to reduce complexity and cost while optimizing the revenue potential of each piece of content.
Global trends toward reducing inefficiencies, enabling automation, and taking control of each part of the workflow are indicators of the growing domestic opportunity for these services and for self-service advertising solutions.
Learn more about how Comcast Technology Solutions self-service innovations can reinvent your advertising workflow.
Three industry terms to refresh and evolve in 2019
We’re well into the new year. The months ahead will be filled with messages of innovation, disruption, and . . . well, monetization. It was actually the word “monetization” here in our office that sparked talk about some of the most overused terms in our industry. Monetization is certainly not a term that’s in danger of being phased out. In fact, it’s going to be even more prominent as business models and audiences evolve. That said, it did start a fun conversation about some of the terms and phrases getting pretty long in the tooth around here.
With Q1 almost in the rear-view mirror, we’re reflecting on all of the new innovations and opportunities surfacing across the video ecosystem. While we operate today with a host of relevant buzzterms like hybridity (more on that below), or converged workflows (linear, non-linear, digital, broadcast), here are some of our picks for terms in serious need of an upgrade:
THE OBVIOUSLY “FRAGMENTED AUDIENCE”
2008 called – it wants its market insights back. We all get it: today’s audiences are connecting with content in multiple ways, over countless device types, platforms, and services (oh my!). We’re far enough past the early days of streaming adoption that there are plenty of industry professionals out there who have not only spent their entire adult lives as multi-screen, multi-platform content consumers, but whose entire careers have taken place within a multi-platform delivery ecosystem. To have a conversation about modern video delivery is to accept as a given that every audience can be broken out by how they watch, where they watch, and when they watch. The real conversation isn’t about a new phenomenon of audience fragmentation to be mindful of, but more about serving media accurately and more effectively under any circumstance. It’s about keeping the audience engaged no matter the device through a great experience – whether the experience hinges on quality, discoverability, share-ability, or, let’s be honest, price.
ARE WE OVER “OVER THE TOP” YET?
Can we let this one go yet? Of course not. It’s now the de facto way to describe digital delivery as opposed to broadcast. This doesn’t change the fact that today’s market is no longer as cut-and-dried as subscriber services vs. over-the-top of subscriber services. It made more sense when serving content “over the top” of other subscriber services sounded revolutionary—because it was. Now that delivering media over the internet is standard operating procedure, it begs the question “over the top of what?” Digital delivery isn’t going over, under, or around anything – and in reality, digital delivery is often going with something else. The obstacles now between you and your audience (other than catching attention to begin with) are technical challenges like latency, metadata management, file size, formatting, hi-def quality demands, etc. So, if we’re stuck with OTT as a term that now means more than it used to, let’s at least agree we’re well into OTT 2.0: an age of mature digital delivery that enhances discoverability, personalizes commerce strategies, and merges broadcast and digital workflows to intelligently evolve along with consumer devices and habits.
ABOUT THAT “MONETIZATION” WORD AGAIN. . .
It’s the word that started the conversation, so it seems apt to wrap with it as well. Looking back at my list, I’m seeing some commonalities: terms we can’t escape, that we repeat a thousand times a day, and whose true meanings continue to evolve. According to Merriam-Webster, monetization as a term was “coined” (I know, bad pun) circa 1879, at which point it literally meant “to establish as legal tender.” It’s of course expanded to mean “extract revenue from,” but the deeper meaning in our industry is much more than the static act of putting a price tag on a program. Hybridity of your monetization strategy is more than just blending subscriber or ad-based approaches. To extract the most value out of content requires context – not just within the relationship that exists with each viewer, but also the relationship of each piece of content with what’s going on in the world, and the ability to pivot accordingly and intelligently.
I should probably add a disclaimer: none of these terms are going away any time soon. We’re sure to be applying each for the foreseeable future. For those of us who eat, sleep, talk, write, and work with these words on a daily basis, we’re always looking for the next great term to repeat into ubiquity. Got any suggestions?
The State of Digital Transformation in 2019
In 2017, we started off the year talking about the convergence of digital and broadcast workflows. It was inevitable: by combining the consistent quality and process maturity of broadcast with the agility, reach, and monetization capabilities of digital, companies are creating, as we said at the time, a new ecosystem of win-win scenarios. In truth, it’s really a win-win-win, as consumer choice – and power – have only increased the benefits at the individual viewer level.
2018 saw the acceleration of change in pretty much every measurable way. The capability of viewing devices, from folding mobile screens to gargantuan “wallpaper” in-home displays, continues to drive consumer quality expectations. At the same time, advances in video management and delivery continue to find creative ways to exceed those expectations by wrapping broadcast-quality reliability into first-rate experiences and personalized, hybrid business models that remain relevant to consumers daily.
Our initial partnership with research and strategy company MTM resulted in our first TV Futures Initiative – an international effort to identify the trends that are shaping the relationships between content, consumer, creators, and providers. For 2019, we’ve again partnered with MTM, along with FreeWheel (also a Comcast company) to produce the new research paper, 2019 TV Futures Initiative: Digital Transformation – A Framework for Success.
A FRAMEWORK FOR MEASURING AND TRACKING INDUSTRY EVOLUTION
One of our primary goals in creating this paper was to help media companies with a structured way to strategize and self-analyze in a rapidly changing market. The digital transformation framework we identify consists of six key areas to consider and prioritize:
Content: It’s not exactly a secret that job one of every media destination is to offer content that’s relevant and valuable to consumers. The proliferation of standalone and subscriber services creating original top-quality content has blown the competitive landscape wide open, but it’s more than just the popularity of a program – it’s also about the quality and variety of presentation. Broadcasters and providers are adding value through new streaming services and content partnerships that build audience engagement. UHD programming continues its rise, and while emerging technologies like 360 video and VR/AR might not earn a place on your roadmap today, they are increasingly worth exploring.
Products: How are you bringing your content to market? It’s important to take a hard look at how content is bundled and packaged, and how those decisions play out across platforms, devices, and regions. “Know thy audience” must be a mantra for providers to be heard above the noise of a crowded market, so any learnings about consumer behavior and response need to be turned into actionable and agile planning.
Delivery: The consumer-facing demands of right now, any-device excellence put tremendous pressure to perform on infrastructure and processes, requiring new ways to eliminate outdated and disparate workflows in favor of a more unified delivery model that can keep up. Content, especially high-demand content, needs to be delivered seamlessly to multiple linear and non-linear destinations, with a close eye on security. Protection of content is a serious concern for live programming (like premiere sporting events) as well as on-demand services, and in today’s environment it’s incumbent on each provider to ensure that the value of their assets isn’t compromised.
Customer Management: The importance of cultivating a healthy transactional relationship with consumers cannot be overstated. Direct-to-consumer business models provide unprecedented control, but as the saying goes, “with great power comes great responsibility.” Consumers expect easy, intuitive support as part of their relationship, and companies are building audience relationships that are longer and stronger by meeting folks on their terms – with tailored payment solutions and incentives for customers who bring more people to the brand. Technically, even an ad-supported service is transactional in nature, because consumers are being asked to watch messaging that paid for some eye-share on a program or stream. Informed focus on the customer experience is the only way to find and follow the north star for each brand.
Sales and Monetization: The word that rules our world here is Hybridity. The market continues to shift and grow. Monetization takes on an even deeper sense of urgency when content is costly to produce. Incorporating an advertising component into a subscriber service, offering specific content as a standalone purchase, tailoring offers based on individual consumer habits – they’re all dynamic ways to maximize the revenue potential of content so long as the underlying technology and management processes can support it.
Technology and Operations: The common thread that runs across every point in the transformation framework is how companies are leveraging new technologies and techniques to close the gap between content and consumer. Weighing whether new approaches should supplant or augment existing components of your workflow, prioritizing technology changes, and analyzing how these advancements will impact the skillsets your team needs are all considerations (side note: we published a white paper that introduced a different framework for this point alone, called Media Technology and Lifecycle Management).
GET THE RESEARCH
The 2019 TV Futures Initiative research paper illustrates the above points in greater detail, and provides a handful of case studies from companies that are applying these tenets to transform their operations and succeed with consumers globally. You are invited to download the paper for yourself as a tool to:
Gain insights into key digital trends from each of the above points
Better understand the challenges and opportunities ahead
Learn from firsthand experience of other companies
Apply this format to your own internal assessments
The “Retro Look” For All The Wrong Reasons
The sheer quantity of quality video out there makes it a fantastic time to be an advertiser or agency. So many new devices, new audiences… the possibilities are seemingly endless. Here’s the thing though: be it live streaming, linear, on-demand, broadcast, you name it – consumers are increasingly savvy about how premium video should look and sound. Audiences are being trained every day to expect “the good stuff” from every device, and ad quality doesn’t get a free pass. In fact, it’s even more important: seen amidst a crisp, clear, high-definition program, a poor-quality ad looks even worse. When an ad break shifts from an HD or UHD program over to a commercial with quality that feels like a ’70s sitcom, it can directly impact consumer perception and, in turn, campaign performance.
YOUR ADS ARE BEING PLAYED ON BETTER SCREENS
Let’s focus for a moment on in-home screens and living-room entertainment to see how fast consumers are jumping on the high-def train: Do you remember way back when (oh, a couple of years or so ago), when the price for a 4K screen averaged thousands of dollars and comprised just a small piece of the market?
According to Digital Tech Consulting, 4K TVs cost around $900 today on average, down from an average of $4,000 just five years ago.
The Consumer Technology Association predicts that 4K Ultra-High Def (UHD) screens will be in 41 percent of U.S. homes by the end of 2019. That’s up from 16 percent just two years ago.
Throw in the ubiquity of powerful mobile devices with fantastic display technology, and you’ve got a global audience that notices more than ever when an ad is pixelated, when the sound level is out of whack with the rest of the program, or when any other technical issue comes between their screen time and your intended brand message.
AD QUALITY IS FOUNDATIONAL TO BRAND SAFETY
We sat down with Richard Nunn, Comcast Technology Solutions’ new Vice President and General Manager, Ad Suite, for a short Q&A session about where the advertising industry was headed. The quality question was certainly part of the discussion.
“The quality issue for both broadcasters and advertisers is a problem, and it’s a challenge across the industry,” Nunn said. “If you break it down, I think the first point is the fragmentation of channels that are out there, different (file) sizes and formats is a major problem for the buy side. They have to create multiple assets of varying sizes that they’ve never had to do before, ten times what they’ve ever done before.”
The advertising industry has always understood that the end-user experience is crucial to the ad-supported programming model. We spend a lot of time and energy working to improve brand safety for the advertiser, especially in a more automated ad buying and selling environment. It’s a major concern – ensuring that ads are delivered only to brand-safe environments is certainly a foundation of a successful ad strategy. But even where an ad suite has built “destination confidence” with advertisers, protecting the user experience – in the form of ad quality that’s commensurate with programming quality – is as important as ever.
HOW TO WIN IN 2019 AND BEYOND
Live streaming is an increasingly important and attractive market for a campaign to tap into, but users are more quality-sensitive than ever. As reported by Broadcasting and Cable, a new report from Conviva provides advertisers and agencies with some neon signposts about where live streaming is headed, and what success looks like:
Not only were global viewing hours of streaming video up by 89% last year, but in the fourth quarter alone the increase was a whopping 165% over 2017.
Likewise, video plays were up 74% for the year, and 143% for the fourth quarter.
At the same time, consumers are leaving poor experiences in greater numbers:
There was an increase in the abandon rate in 2018: Conviva reports that 15.6% of potential consumers quit a video due to problems with the experience. In the previous year, the abandon rate was more like 8-9%.
For live-streamed content, 16.7% of viewers quit poor experiences, which is up from 11.2%.
Other video destinations, such as vMVPDs or connected TVs, also experienced increases in abandon rates, albeit smaller ones (up about 2%).
The takeaways? Dodgy spot quality just won’t cut it anymore. It’s important to note that the consumer’s perception of quality is formed from an aggregation of everything that happens prior to the actual viewing experience. With that in mind, brand safety is really a shared goal between an advertisement and the program that hosts it. Poor quality from either side can – and does – have a detrimental effect on the other.
Video consumption habits continue to increase, evolve, and change. Consumers have more ways than ever to enjoy video, they’re smarter about what their screens can deliver, and they’re increasingly less inclined to sit through a sub-par experience. It’s incumbent on every partner in the ad delivery workflow to focus on the end result so that they complement the programming being offered to customers.
Video Monetization Models: To Maximize the Value of Content, Hybridity is the New Normal
From the research that comprises our recent paper Survive and Thrive: The Changing Environment for Content Services and Video Services, one of the surprising takeaways was that “being discovered” surpassed operational costs as the number one business challenge facing survey respondents. Along those same lines, “maximizing customer lifetime values” was in a virtual tie with operational costs for second place. The clear takeaway here is that success in video means to drive engagement up, while driving costs down. Today’s video monetization models are geared towards accomplishing these fundamental goals while building a fantastic content-consumer relationship – one screen at a time.
BUILDING ENGAGEMENT, FROM AD SPOTS TO PREMIUM CONTENT
It’s that “relationship” thing that’s led us to our current industry conversations around maximizing customer lifecycle values, hybridity and flexibility in commerce and video monetization models, and so on. Advertisers are in the same boat, quickly becoming savvy to the unique audiences at different destinations. It’s a constant learning experience for every video-based business, which is not just a statement about consumer behaviors. It’s also about “unlearning” outdated processes and manual procedures in favor of radical new ways to consistently: a) get to a screen quickly, b) merchandise effectively, and c) serve a consistently crisp, clear audio/visual experience.
The advertising industry, for example, understands that there are different audience cultures at different video destinations, and that campaigns are more successful if they’re tailored to be more aligned with the content that’s carrying them. The trick is to be able to participate in whatever folks are watching. The increased shift to on-demand viewing means that advertisers need a much faster response time from their ad delivery workflow, while gaining cost efficiencies that free up needed resources for targeted ads with shorter shelf-lives (we talk more about that in this blog). Industry adoption of a centralized cloud-based delivery model is an important first step towards modernizing the process of buying, selling, and delivering video products.
Within the video industry, relationship building is a complex balancing act between how consumer behaviors change at the macro audience level, and how an individual’s preferences change over time. There’s a direct tie back to advertisers as well: brands that need a deeper relationship with consumers – automotive brands, for example – are going to gravitate towards destinations and delivery methods that can bring them into a closer orbit with actual buyers.
ORGANICALLY GROWN AUDIENCES ARE HEALTHIER
There’s never been a better time to build an audience than today. That said, it’s also an increasingly global audience, which means that commercial strategies need the technology horsepower and know-how to pivot based on regional changes in addition to larger lifecycle management concerns. Some merchandising requirements might seem like no-brainers, like the ability to describe video assets in multiple languages as well as in images; but nurturing relationships at that scale means solving for the entire end-to-end – from app/storefront to multi-currency transactions, in different ways, with different customers, in different regions.
From a technology standpoint it’s imperative to have the ability to change up the way your content is merchandised, and the way promotions and offers are displayed, so that the user experience can be tailored to support all these factors easily. There are a ton of factors to consider:
Content might be bundled for specific audiences by category: such as genres, series, sequels, and trilogies. Or, dynamically bundle content targeted for specific devices and access rights (e.g. one movie for the iPad, two movies for all devices, or a collection of media that dynamically changes).
Pre-set access rules and pricing templates might be applied for different viewing rules like pay-per-view events, season passes, single transactions, movie bundles, or subscription access.
Product tags need to be easily applied at the video and bundle levels to make it easier for folks to search for and select content.
For a subscriber-based model, pricing and special offers can get complicated quickly. Maybe a code-based promotion is the way to go (i.e. “enter FREEWEEK to start your trial”), or maybe a discount is applied based on a percentage, or at a fixed rate, or based on a minimum time commitment, or . . . you get the idea. Ultimately, a true hybrid video monetization model is something that needs a lot of technology behind it, to enable companies to turn their consumer data into actionable intelligence – and then into stronger, longer relationships.
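The bundling, access, and promotion rules above can be sketched as a small data model. This is an illustrative sketch only, not a real commerce API; the class names, fields, and the `FREEWEEK` example code come from the scenarios described in this post, and every identifier here is hypothetical.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Promotion:
    code: Optional[str] = None     # code-based promo, e.g. "FREEWEEK"; None = applies to everyone
    percent_off: float = 0.0       # percentage-based discount
    fixed_off: float = 0.0         # fixed-rate discount

@dataclass
class Bundle:
    title: str
    items: list                    # e.g. a trilogy's three films
    devices: list                  # access rights per device type (e.g. iPad only, or all devices)
    access: str                    # "pay-per-view", "season-pass", "subscription", ...
    base_price: float
    promos: list = field(default_factory=list)

    def price(self, promo_code: Optional[str] = None) -> float:
        """Apply the first matching promotion to the base price."""
        for p in self.promos:
            if p.code is None or p.code == promo_code:
                return max(0.0, self.base_price * (1 - p.percent_off) - p.fixed_off)
        return self.base_price

# A dynamically bundled trilogy with a 25%-off promo code
trilogy = Bundle(
    title="Sci-Fi Trilogy",
    items=["part-1", "part-2", "part-3"],
    devices=["ipad", "tv", "phone"],
    access="pay-per-view",
    base_price=12.00,
    promos=[Promotion(code="FREEWEEK", percent_off=0.25)],
)
```

A real hybrid monetization stack layers many more rules (minimum commitments, regional pricing, currency) on top of a model like this; the point is that bundles, access types, and promotions are data, so they can change without re-engineering the storefront.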
THE BUSINESS OF “PLAY”
Demographics and buyer personas exist to categorize the constituents of a market, so that businesses can understand them and do a better job catering to them. Video brands looking for a deeper one-on-one with customers need the flexibility to shift and pivot on a more personal level. For more information we invite you to read our new article The Business of Play: Monetization in a Video Economy. A truly savvy business model begins with creative out-of-the-box thinking; but implementing it is a technology challenge that’s evolving right along with delivery quality and screen resolution.
VOD MONETIZATION: NEW TERRITORY FOR LOCAL ADVERTISERS
Are you watching more video-on-demand (VOD) programming? Of course you are. We’re living in a binge-watching economy where entire seasons of a program are delivered all at once and VOD consumption is a growing component of the average viewer’s video diet. The market opportunity for advertisers is nothing less than explosive: according to a 2018 report by the Video Advertising Bureau (VAB), VOD ad impressions grew dramatically, from 6.3 billion in 2014 to 23.3 billion last year. So how do you capitalize on this nearly fourfold growth?
DISPARATE TIMEFRAMES VS. TIME-SENSITIVE ADS
For businesses and agencies at the local and regional levels, VOD advertising can be a tough nut to crack, primarily because of the convoluted process of placing and managing VOD ad spots. It can take anywhere from 24 to 72 hours to make spots available for play through multichannel video programming distributors (MVPDs). A 24-hour turnaround would already be too long for VOD to be viable for short-term promotions or late-breaking opportunities. If it takes even longer, it means that the viability of a campaign with a very short shelf life would be inconsistent across destinations. There are a lot of reasons why VOD processes (and timeframes) are all over the map:
Dynamic Ad Insertion (DAI) for set-top-box (STB) VOD requires additional metadata in a specific format (CableLabs 1.1) in order to work properly with the intended program. The metadata requirement lies on the MVPD side, but content providers need the software and people to write it. Many commercial spots are delivered either without it, or with metadata that’s written in an incompatible format.
DAI for digital destinations requires a variety of media formats. Content providers need the correct software to solve for these destinations, as well as video experts who can ensure that all transcoding / transformation is performed properly. The post-transcoding playback experience must be at its best, prior to being subject to any of the other factors (such as geography and network traffic) that may impact the consumer’s playback.
Every MVPD also has to account for its own internal delivery mechanism – or multiple mechanisms. They may outsource their work, maintain their own technology stack, or both. Larger distributors may have additional traffic coming from internal clients, such as advertising for news, sports, or other programs that the MVPD offers.
Adding to the overall complexity of VOD ad delivery is volume: to say there are a lot of ads to process would be a gross understatement. For national spots alone, just one major provider is moving hundreds of thousands of spots annually; and when it comes to VOD delivery, spots are still being relegated to a manual queue, mainly due to the challenges above.
Everything adds up to non-linear inventory that’s a real challenge to monetize effectively, especially during the critical Nielsen C3/D4 periods that impact the value of advertising. Local ads or short-term, time-bound promotions aren’t a good match for VOD playback unless they can be swapped out in near real-time once they’re no longer viable.
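That near-real-time swap-out boils down to filtering spots by their flight window on every decisioning pass. A minimal sketch, assuming each spot carries hypothetical `flight_start`/`flight_end` fields (no real MVPD or ad-server API is being modeled here):

```python
from datetime import datetime, timezone

def playable_spots(spots, now=None):
    """Keep only the spots whose flight window covers `now`, so expired
    short-shelf-life promotions are never served stale."""
    now = now or datetime.now(timezone.utc)
    return [s for s in spots if s["flight_start"] <= now < s["flight_end"]]

spots = [
    {"id": "weekend-sale",                      # time-bound local promotion
     "flight_start": datetime(2019, 3, 1, tzinfo=timezone.utc),
     "flight_end": datetime(2019, 3, 4, tzinfo=timezone.utc)},
    {"id": "evergreen-brand",                   # long-running brand campaign
     "flight_start": datetime(2019, 1, 1, tzinfo=timezone.utc),
     "flight_end": datetime(2020, 1, 1, tzinfo=timezone.utc)},
]

# After the sale ends, only the evergreen spot remains eligible
live = playable_spots(spots, now=datetime(2019, 3, 10, tzinfo=timezone.utc))
```

The hard part in practice isn’t this filter; it’s getting the replacement creative through the 24-to-72-hour delivery pipeline fast enough for the filter to have something fresh to serve.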
EXPANDING VOD MONETIZATION WITH THE CLOUD
With a centrally-managed ad library and management architecture that resides in the cloud, the disparate VOD workflows across delivery endpoints are brought together into one consistent, simplified, and accelerated process. This is huge news for both sides of VOD monetization: not only does it open up new revenue opportunities for ad purveyors, but it also improves business for content providers on the sell side of the equation looking for more ways to sell ad space. The centralized location within the overall ad delivery pathway is key, providing both advertiser/agency and content provider/broadcaster with visibility and ease-of-use. Where the previous workflow takes up to 3 days to complete, the new process can be completed in hours.
Ads and promos are now aggregated into one common location.
Digital/ad operations teams can locate assets (both external ads and internal promotional spots) via web-based UI.
Assets can be viewed, rules applied, and distribution initiated to preset destinations.
Information needed for ad decisioning services (ADS) is immediately generated.
Ops teams can copy and paste the information into ADS, and associate the creative assets with the correct campaign immediately.
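The steps above can be sketched as a single shared library with an ingest and a distribute operation. This is a hedged illustration of the workflow shape, not the actual product; the function names, fields, and the ADS record format are all hypothetical stand-ins for the web UI and ad decisioning service described here.

```python
library = {}  # one common location for external ads and internal promos

def ingest(asset_id, kind, destinations):
    """Aggregate an asset into the shared library with its preset destinations."""
    library[asset_id] = {"kind": kind, "destinations": destinations, "distributed": False}

def distribute(asset_id):
    """Apply rules, initiate distribution, and emit the record an ad
    decisioning service (ADS) needs to associate the creative with a campaign."""
    asset = library[asset_id]
    asset["distributed"] = True
    return {"asset": asset_id, "targets": asset["destinations"]}

ingest("spot-123", "external-ad", ["stb-vod", "web", "mobile"])
ads_info = distribute("spot-123")
```

The operational win is that the ADS information is generated at distribution time rather than reconstructed per destination, which is what collapses a multi-day manual queue into hours.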
The across-the-board benefits translate into more than just better performance. They imbue the entire operation with the needed agility to respond to changes in short order, and make last-minute revenue opportunities possible.
Comcast Technology Solutions Certified Partners Program
Video standards and customer expectations are constantly evolving and new technology solutions are required to stay competitive. There are more than a dozen platforms on which consumers watch video, each with support for varying video streaming, caption, and digital rights management (DRM) formats, not to mention application development APIs and platform capabilities. Artificial Intelligence and Machine Learning are profoundly changing our understanding of not only user behavior and your business’s health, but the content itself. Monetizing your video content requires ever greater sophistication, not only by providing different business models (AVOD, TVOD, SVOD, XVOD), but also optimizing customer acquisition and retention (CRM, paid and organic search, social). Distribution must be scaled, and measured to ensure timely delivery, no buffering, and fast time to first frame.
Video is hard to do right, and the consistent quality of every video experience is foundational to a healthy consumer lifetime value. Choosing the right mix of vendors and partners to launch a video solution is critical to commercial success. Comcast Technology Solutions is fortunate to partner with leading technology providers across the industry, and around the globe, to deliver best-in-class experiences to our customers, and our customers’ customers. To that end, we’re excited to announce the launch of the Comcast Technology Solutions Certified Partner Program to address these complexities.
CERTIFIED PARTNER BENEFITS
We work closely with our Certified Partners across a broad ecosystem of specialties to ensure the most consistently reliable, trusted, and cost-effective integrations with Comcast Technology Solutions’ portfolio of products and services. Together with our Certified Partners, we continually assess, improve, and evolve our joint offerings to ensure our customers benefit from rapid time-to-market, and our combined, emerging roadmap updates.
When our customers select a Certified Partner, they are assured that the partner’s product integration into Comcast Technology Solutions’ technology stack is well-scoped, follows our best practices, and has passed rigorous technical validation.
Certified partners receive training and support on our technologies, as well as guidance on each project from initial product integration all the way through to customer delivery.
Partner integrations are evaluated and certified for technical compliance, as well as for transparency and alignment across our sales, marketing, product, and delivery teams.
All Certified Partners build their integrations against the same set of Comcast Technology Solutions specifications, which ensures a level playing field where partner product solutions can predictably interoperate, differentiate and shine.
INITIAL LAUNCH
Our initial launch of Certified Partners includes industry-leading partners in the Audience Insights & Analytics and UX & Application spaces.
Jump TVS, Streamhub, and Wicket Labs each offer unique solutions to enable audience measurement and rich insights, as well as the overall health of the business.
Accedo and Diagnal are experts in video application development, offering a variety of products and solutions that will grow with every business.
We are proud to welcome them to the program. Comcast Technology Solutions and our Certified Partners across the ecosystem enable customers to put more focus on their business, and less on their technology stack. Because Certified Partners’ solutions are well scoped and vetted, our customers know what they are getting, and when they are going to get it. By deploying Certified Partners’ solutions, Comcast Technology Solutions customers realize faster time-to-market and thus time-to-value.
If you’re a current customer looking for a well-integrated partner solution, take a look at our current roster of Certified Partners here.
If you’re a partner or vendor who is interested in bringing your solution closer together with Comcast Technology Solutions’ products and services through our Certification program, contact us!
Comcast Technology Solutions
2018-11-23
Blog
Broadcast Quality? It's Still a Thing.
What has a stronger impact on an audience’s perception – a great experience or a singularly awful one? In a world where every viewer is also a social media publisher in their own right, we could certainly point out the consequences of poor video performance (like we did in our March eBook). As consumers, we really don’t need to look much further than our own behavior. If something’s not loading quickly, or playing without interruption, it’s not getting played.
The hunt for consistent broadcast quality across device types is the foundation of any long-term viewer relationship. Today, content can not only be viewed across devices, but also across devices via different platforms. For example, your living room smart TV can play the same piece of content through its own built-in platform, through your cable provider, through a game console, or some other connected device. Where folks choose to gorge themselves on video is an experiential thing, and more often than not it’s an experience that’s inexorably tied to other experiences. No matter where these experiences occur, broadcast quality is increasingly the standard.
Technology leaders weigh in . . .
We asked over 200 technology managers in our industry to share their perspectives on where we are in the quest for broadcast quality and what keeps them up at night. We’ve published the results in a whitepaper called Survive and Thrive: The Changing Environment for Content Services and Video Services. On the quality question, the paper offers an interesting online/broadcast comparison: over-the-air broadcast and pay TV linear channels are an important part of respondents’ distribution strategy (cited by 44 percent and 34 percent of respondents, respectively).
When asked the question “when will online video meet / exceed TV quality and reliability,” the results showed a pretty significant split between respondents who think “we’re already there” and those who view parity as a next-decade thing.
Regardless of where respondents felt we were in the process, consumers are certainly happy with the across-the-board improvements to resolution, color, and overall playback quality: Ericsson ConsumerLab reports an 84% satisfaction rate with online video in its own study[1].
Exceeding (and shaping) tomorrow’s consumer expectations
With the accelerating commercial adoption of 4K / UHD / HDR screens, and the eventual arrival of consumer devices that can take advantage of the ATSC 3.0 broadcast standard, we are of course seeing a massive increase in available 4K programming across the entire provider ecosystem. While clearly a significant improvement in picture quality, the sheer size of ultra-high-definition video files brings new challenges of its own. As consumers acquire more of a taste for (and an ability to play) “the good stuff,” every touchpoint in the delivery workflow is more heavily taxed by having to store, transcode, and deliver files that are up to four times bigger than standard. It’s a lot of change that needs to be managed all at once, especially when the premium prices for premium content create premium expectations. It’s no wonder that 92% of our respondents said that meeting or exceeding broadcast quality and reliability is still important.
To learn more about how our respondents feel about a wide range of topics, including the use of cloud technology and artificial intelligence, download the whitepaper here.
The Changing Video Distribution Services Environment for Content Providers
We recently partnered with Broadcasting & Cable and TV Technology to sponsor a new research paper called “Survive and Thrive: The Changing Environment for Content and Video Providers.” For this paper, the survey targeted technology managers who are responsible for the implementation of video distribution services. Participants were asked to rank the importance of ten of their most pressing business challenges. Discoverability topped the survey as the number one challenge, with 71 percent of participants rating it higher than maximizing lifetime value or controlling operational costs. With over 200 premium video services in the market, it’s becoming increasingly difficult for brands to get noticed; but consumers have to find the front door before any other performance metric matters.
The attached infographic illuminates some more findings from the research.
So, where are content and video providers placing their focus?
Personalized Experiences
The subscriber-based service model clearly works, but what’s the secret ingredient that keeps members paying – and actively using – a service? The answer is probably unique to each provider’s audience, but the end result is the same: a media experience that learns how to remain relevant to paying customers by showcasing the right content, using member insights to guide decisions, and delivering trusted, high-quality viewing.
Cost-Effective Technology
Multi-platform video delivery is complicated and expensive. It behooves companies to quickly capitalize on partnerships and efficiencies that can free up resources for content creation and curation. The results of this survey might indicate that the promise of the cloud is still viewed with caution, but with rising content costs and more direct-to-customer offerings on the horizon, time to market is a critical success factor that cloud solutions can greatly improve.
“Hybridized” Monetization
It’s increasingly common to see an ad when watching something that’s part of a subscription that you’ve already paid for. Providers are getting smarter about how to organize and monetize their experiences, not just to maximize the value of each asset, but also to make purchase decisions easier. Tiered memberships, targeted incentives, and promotions that incentivize new viewers are all part of a mature monetization strategy.
Want to know more? Download the research paper, “Survive and Thrive: The Changing Environment for Content and Video Providers,” here.
Your Player: How Fast to First Frame?
Did you “like” or “dislike” a video this week?
Watching video is an emotional experience. Sure, most programming is supposed to be – your favorite program, a riveting news piece, or even a lousy movie will all elicit a specific feeling, but even the most straightforward and not-very-interesting piece of content is still elevated when delivered by video as opposed to straight text. Why? Simply put, video is essentially an amalgam of every other communication medium. With your brain engaged by text, pictures, and sound, stories become more memorable and knowledge is transmitted and retained more completely. With a plurality of our senses working in concert, we’re not only more inclined to make purchase decisions, we’re also more likely to abandon a video that isn’t getting the job done:
Video is such an effective sales tool that according to recent research from Hubspot, 81% of polled businesses are using it – that’s up from 63% just last year. [1]
Over 85% of online viewers who stop watching a video left because the loading time was too long. [2]
67% of viewers say that quality is the most important factor when watching live-streamed programming. [3]
The long and short of it is this: whether someone is clicking “Play” or the video is set to start with a page load, every second between the intended start and the actual start brings you closer to viewer abandonment. No matter if your goals are to inform, inspire, elicit, or entertain, playback quality really, really matters. To add more complexity to the mix, our multi-screen consumption habits require that your video performs well no matter if it’s being viewed in a living room, at a desktop, on a phone, or anything else in your life that has a screen on it. Meeting consumers on their terms only works if their expectations are being met or exceeded.
PDK6: OPTIMIZED PLAYBACK PERFORMANCE
Both speed and quality drove the innovations contained in the newly-released sixth version of our Player Development Kit (PDK6). Built from the ground up for the most up-to-date browser standards for playback, UI rendering, and autoplay requirements, PDK6 is optimized to be the fastest to first frame, with playback at lightning-fast speeds in most cases. PDK6 brings a host of improvements to the consumer experience, as well as to operational efficiency:
Better advertising: PDK6 loads faster, renders ads natively, supports both client- and server-side ad insertion out of the box, and integrates with ad-serving platforms.
Unmatched flexibility: Streamlined, mobile-friendly UI building makes PDK6 easy to implement and customize, while also providing a consistent player experience across browser platforms and device types.
Broad industry support: PDK6 is pre-integrated with third-party analytics vendors, and provides support for analytics solutions from providers such as comScore, Nielsen, Google, NPAW, and much more.
To learn more about the capabilities of PDK6, download the technical one-sheet here.
[1] “The State of Video Marketing in 2018,” Hubspot, January 24, 2018.
[2] “2017 Video Streaming Perceptions Report.” Mux, April 13, 2017.
[3] “What Audiences Expect from Live Video,” New York Magazine and Livestream, infographic, 2017
Digital Rights Management - a Multi-Platform Checklist
Digital Rights Management (DRM) is, at its core, a way to protect the long-term value of intellectual property in a content-driven economy (for those interested in the history of DRM, Vice.com’s Motherboard published a fantastic article last year on the subject). Content access has to be managed across a dizzying combination of device types and streaming formats, not to mention mobile devices that can travel in and out of different programming areas. No matter where someone is, what device he/she is using, or what platform is running, your technology stack must be capable of understanding and granting consumer requests instantaneously.
A primary driver of DRM complexity is simply that there’s no single “right answer” to enable DRM – every destination uses a rights management approach tailored to its own needs. A complete service should include several components so you can:
Establish licensing
Negotiate handshakes between the consumer’s player and the right DRM provider for the viewer’s interaction
Streamline the entitlements process to make it a seamless and unobtrusive part of a consumer’s experience
JITP/E for Content Prep and Encryption
Storage is an issue that gets more challenging as consumers demand higher and higher quality video. Better quality means bigger files, and each one of those files needs an iteration that’s formatted for consumption by all the browsers, mobile apps, set-top boxes, and other TV-connected devices you’re working to reach. Just-In-Time Packaging and Encryption (JITP/E) creates format-specific video on request, reducing redundant file storage. JITP/E also applies the multitude of adaptive bitrate (ABR) packaging format and encryption schemes on the fly, so the process goes unnoticed by the consumer. Once a video is encrypted, it also needs to be decrypted for playback based on the end-user’s device / platform requirements, such as:
HLS / FairPlay for iOS and macOS devices
DASH / Widevine for Android and Chrome devices
DASH / PlayReady for Microsoft devices and set-top boxes (STBs)
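The platform-to-scheme pairings above can be sketched as a simple lookup. This is an illustrative Python sketch, not a real packager API; the function name and platform keys are assumptions:

```python
# Illustrative sketch (not a real packager API): selecting the packaging
# format and DRM scheme for a playback request, per the pairings listed
# above. The platform keys and function name are hypothetical.

def select_format_and_drm(platform: str) -> tuple[str, str]:
    """Return a (packaging_format, drm_scheme) pair for a platform."""
    pairings = {
        "ios": ("HLS", "FairPlay"),
        "macos": ("HLS", "FairPlay"),
        "android": ("DASH", "Widevine"),
        "chrome": ("DASH", "Widevine"),
        "windows": ("DASH", "PlayReady"),
        "stb": ("DASH", "PlayReady"),
    }
    try:
        return pairings[platform.lower()]
    except KeyError:
        raise ValueError(f"No DRM pairing configured for platform: {platform}")

print(select_format_and_drm("android"))  # ('DASH', 'Widevine')
```

In a real JITP/E workflow this decision is driven by the player's request headers and negotiated capabilities rather than a static table, but the branching logic is the same idea.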
Key Management Service
Key management processes are unique for each DRM method and not something where you can just “flip a switch.” Whether it’s configuration and testing, encryption, user authentication, or playback, you’ll need to maintain relationships with every key management company required to reach a device-diverse audience. Here at Comcast Technology Solutions we let clients dictate just how much of those functions they want to control, or we can manage the entire DRM relationship / process ecosystem.
License Management
License management, token authentication, and decryption key transmissions are all parts of your DRM process that need to provide a solid pathway from user request to content delivery. Once a video is packaged and DRM-protected for the platform and device of a specific playback request, it’s ready for the user to receive it from a server within your content delivery network (CDN) – provided that the user is authenticated and a license to unlock the content for playback is granted and provided.
Entitlement Setup and Modification
Entitlements establish the conditions that a user needs to meet in order to be entitled to watch a program. This conditional access is really complicated in a multi-platform environment and must be assessed for each request. Just some of the factors that your entitlements process must solve for:
Users might have entitlements on multiple devices
There might be specific limitations, such as a cap on monthly usage, tiered access to specific files or higher-definition video, or limitations created by the device itself
Off-line viewing might have unique requirements of its own
It can sometimes be easy to confuse license delivery with entitlements, yet each brings a unique set of capabilities to the overall DRM solution.
Licensing: DRM and its associated license control the physical access to the video file such as how many times it can be played on a specific set of hardware and for how long before an entitlement check needs to be performed again.
Entitlements: The entitlement system controls the consumer’s access to the asset based on the business or purchase model employed (e.g., is the user currently a paying subscriber, either as part of the native D2C experience or via a TV-V model, or did the user just purchase rental access that only allows the asset to be viewed a certain number of times in the rental period?).
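The two checks can be sketched working in concert. Everything below (the field names, the play-count rule, the rental model) is a hypothetical simplification for illustration, not any vendor's actual license or entitlement API:

```python
# Hypothetical sketch of a license check and an entitlement check working
# in concert. All names and rules are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Entitlement:
    active_subscriber: bool
    rental_plays_remaining: int = 0  # for transactional / rental models

@dataclass
class License:
    plays_used_on_device: int
    max_plays_before_recheck: int

def entitled(e: Entitlement) -> bool:
    # Business-model check: an active subscription OR remaining rental plays.
    return e.active_subscriber or e.rental_plays_remaining > 0

def license_valid(lic: License) -> bool:
    # Physical-access check: has this device exhausted its play allowance
    # since the last entitlement re-check?
    return lic.plays_used_on_device < lic.max_plays_before_recheck

def may_play(e: Entitlement, lic: License) -> bool:
    # Both systems must agree before playback is unlocked.
    return entitled(e) and license_valid(lic)
```

The separation matters: revoking an entitlement (a lapsed subscription, say) takes effect at the next license re-check, without the license layer needing to know anything about the business model.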
Using these two access control systems in concert allows content providers to offer a wide array of flexible monetization and protection combinations. The ability to maintain dynamic control over user entitlements is more than just a crucial management function. It’s also a vital part of your monetization plan when consumers can easily add more variety into their experience. Upselling to a higher tier of access, or simply making it easy to buy individual content titles, are functions that have an impact on entitlements. Your DRM solution should give you a clear snapshot of all the entitlements granted to viewers, and the ability to easily modify those rights.
Ultimately, your end-to-end DRM solution needs to be tailored to your needs, your technology stack, and your business model. We developed our DRM approach for flexibility so that once a client’s DRM solution is customized and deployed, additions or changes to the methods used by different devices can be easily integrated to keep the entire workflow updated.
You can learn more about Comcast Technology Solutions’ Digital Rights Management capabilities here.
Attaining Long-Term Subscriber Love
The attraction of a subscription VOD business model is a reliable revenue stream, powered by a personalized experience that evolves to remain relevant and valuable to viewers. Monetization choices are a big part of that, and have an indelible impact on the way subscribers value their services.
Planting Seeds for Organic Growth
It’s easy to see how effectively a popular show can bring a competitive destination to the attention of a larger audience. Binge-watching is a relatively new phenomenon, and the increasing number of industry awards for over-the-top (OTT) movies, series, and other programming is testimony to the drive for great content that really resonates with audiences. A great show can lead to explosive growth in new subscriber sign-ups, especially when paired with the launch of new add-on options that build more value, or creative introductory offers that entice and invite deeper participation at a more attractive price. Once the allure of introductory offers and big-ticket content segues into daily use, however, media brands are learning to engage and even incentivize their members to help grow the brand organically. But what does organic growth really mean?
Simply put, it’s an emphasis on growing from the inside out. Not only through ads, search optimization, or other marketing, but also by improving the value of each relationship and cultivating a base of happy customers who will share your content with others. Just like a garden you’d tend in your back yard, organic growth starts with consistently delivering the things that keep your customers healthy – which for video includes things like a great playback experience, rich and unique content offerings, and an enduring sense of value. From there, you can provide tools for your fans to use, and incentives like gift vouchers or refer-a-friend programs that make participation rewarding and easy.
New white paper: Advanced Commerce: SVOD
Comcast Technology Solutions has just published a new paper that speaks specifically to the advanced commerce tools and techniques that SVOD destinations can employ to lengthen and strengthen subscriber relationships. Sections include:
The “Perfect Subscription VOD Model” — Your technology choices support your ability to meet each viewer on his / her terms, no matter what that means today – or what that might mean tomorrow. Subscription terms may shift based on business objectives, and offerings may evolve into a hybrid of subscription, ad-based, or transactional components to appeal to a wider range of consumers.
Making it Easy to Locate and Buy Content — Your user experience (UX) and user interface (UI) design can build experiences that elevate the lifetime value of subscribers through seamless monetization opportunities and simple, effective quality of life enhancements.
Enabling Tailored Promotions and Custom Offers — Just like any other relationship, the successful SVOD business is constantly evolving along with the habits and preferences of consumers.
The Quickening Pace of Video Delivery Workflow Convergence
Last year we kicked off 2017 by referring to it as “The Year of Converged Workflows.” And in many ways, it certainly was, with both broadcast and over-the-top (OTT) video companies learning from – and working more closely with – each other than ever before. At this year’s National Association of Broadcasters (NAB) convention in Las Vegas last month, it was clear that no matter what the video delivery vehicle is, conversations and innovations focused on merging the quality and reliable performance of broadcast with the efficiencies and multi-platform power of IP delivery. With the mid-point of the year on the horizon, here are just a couple of ways in which the world of converged workflows will continue to accelerate:
Improving communication between workflows
The landscape for 2018 and beyond is uncharted territory. In the short term, 4K programming and premium on-demand content will continue to increase the size of the average file, and the quality expectations of the average viewer. In the longer term, new forms of consumer video (VR and AR, for example), will need even more complex supporting information than a “standard” video playback. This will require a robust, intelligent video delivery superhighway that can quickly process and communicate a mountain of data. Today, the industry is laying a foundation of standards and protocols that will provide more flexibility and better machine-to-machine communication. Two highlights:
SMPTE 2110: The Society of Motion Picture and Television Engineers announced this suite of transport protocols last year, and as adoption increases it promises to be a big step in the right direction for a converged broadcast / IP workflow. SMPTE 2110 time-stamps the video, audio, and ancillary data of a video signal and then sends them in separate IP streams, which enables the receiving end to ensure better playback -- or to receive only specific parts (audio only, for example) without having to accept the entire program.
SCTE 224: Each video relies on a massive amount of accompanying metadata in order to be delivered accurately, to honor all policies, restrictions, and rights, and to enable robust search discoverability. The SCTE 224 standard, which we’ve written about extensively (here’s our March blog on the subject), is not ubiquitous yet; however it’s gaining traction as an effective way to bring QAM and IP delivery into alignment. On the vMVPD front, FuboTV announced their adoption of the standard in March.
Closer collaboration between OTT and pay TV
Audiences don’t see broadcast vs. digital as an either-or proposition. Over half of US homes have both pay TV and OTT services, according to a new study by Parks and Associates. Furthermore, the same study shows the value of pay TV’s presence in the home as an important platform by which digital media brands can connect with families through a premium living room viewing experience. OTT users report watching their services from their television screens at least 50% more than other platforms such as PCs, smartphones, or tablets. Consumers are empowered by choice, but the sheer amount of choice can make it challenging for consumers to build a personalized “library of content access” that makes it easy to manage and discover content. Every destination thrives based on its ability to deliver content that resonates with audiences, but it’s not just “the next big show” that builds lasting relationships with viewers. Local programming, live sports and events, niche programming, and monetization strategies that simplify the management of subscriptions and transactions are all part of a long-lasting consumer-content relationship.
It's worth keeping in mind that collaboration is not a matter of traditional broadcast workflows being supplanted by IP delivery in its entirety. Instead, think of it as an ongoing evolution of the hybrid models that are in operation today. After all, the points of complexity within the video delivery workflow have definitely shifted. As technology innovations simplify the production and delivery of quality content, they have also created a diverse landscape of possibilities as to how video can be used and sold. The “best of both worlds” approach solves for today’s opportunities while maintaining the adaptability needed for tomorrow’s video market.
You’re Only Live Once
Any major live event, like this year’s college basketball playoffs (aside from being responsible for countless hours of distracted coworkers), is a great time for our industry to witness the improvements to live streaming in action on a massive scale. Digital destinations are providing a better core experience to consumers, with improved framerates and cool new personalization features. When it comes to straight-up video quality and reliability, however, satellite broadcast is a great way to spearhead a big-ticket, multi-platform live event, sports or otherwise. By combining both broadcast and digital into a well-planned live event, consumers get the best of both worlds.
“Best Effort” does not an SLA make
IP-based delivery is light-years ahead of where it was just a few years ago, but as more and more consumers develop an appetite for “higher- and higher-quality” video, the widening data stream continues to push the performance limits of every platform. Advertisers and investors understand the value of reaching a gargantuan audience, but they also – rightly – seek assurances that viewers are getting an experience that supports their business objectives. A best-effort IP-based delivery means that there’s no guarantee that the end result will meet any quality metrics, or that the stream will even be successful.
Dan Rayburn, streaming video analyst at Frost & Sullivan, pointed out a common problem with live streaming in a recent interview with Fast Company: there are so many potential failure points, solving for them all is an expensive proposition that most services don’t account for. “When a backup doesn’t kick in, or you don’t have it, or you haven’t planned for it, or you don’t want to spend the money for it, well, your live event’s going to go down,” said Rayburn. Accountability for streaming quality is still a challenge. It’s one thing to talk about how many people tuned in to a streaming event, and another thing entirely to talk about lag, buffering, or streams that were lost entirely.
Broadcast, on the other hand, provides the availability and reliability that consumers have relied upon since the first days of live television. It’s that straightforward, low-latency, consistent quality that digital delivery is benchmarked against. When millions of dollars (or more) are on the line, it can be an exceptionally costly corner to cut.
The satellite-forward, service-first model
The best delivery architecture is one that leverages the strengths of broadcast and digital into a seamless multi-platform workflow – piloted by expert event engineers who plan every transmission and quarterback the communication from start to finish. Look at it this way:
IP-only: lower cost, complex failsafes required, “best effort” performance
Broadcast: more expensive, rock-solid best practices, high availability and reliability
Strategic use of both: trusted, cost-effective performance that supports both linear television and cloud experiences
It’s impossible to overstate the value that an experienced team brings to a top-tier live event; not just in overall quality, but in cost savings as well. A complex live-streaming event, such as a high-profile awards show, is a great example. There are cameras everywhere: the red carpet and interview areas out front, around the audience, the main stage, backstage, etc.
A well-planned and executed event will map out all redundancy paths, but also will prioritize the use of broadcast signal. For example, the main stage cameras would be captured via broadcast to ensure quality, but ancillary feeds could be digital-only, saving costs.
Satellite and digital captures are then converged into a master control so that the complete program can be compiled, encoded, and distributed across platforms.
“Simulated” live events open up more possibilities
The hybridized broadcast / digital model is really a best-of-both worlds architecture that not only bakes in the video quality and process redundancy expected of a major live event, it also opens up opportunities to appeal to viewers in new ways. A program can be created that captures the immediacy and advertising allure of a live event, while also incorporating existing video assets into one seamless program. A live studio audience portion, for instance, could be woven into a playlist with sections of non-live material, encoded and then distributed just like a linear television feed. It’s a cost-effective way of producing a special “one night only” event, or a way for a large organization – a coast-to-coast church broadcast, for instance – to maintain a live “feel” with the dependable quality that keeps viewers engaged.
Live programming is going through the roof, with media brands of all stripes working to realize – and monetize – experiences that bring viewers closer to a true “front row experience” than ever before. That said, there are no do-overs in live video; so before the world tunes in, it’s worth the extra planning to ensure that viewers on any platform get the kind of quality that keeps them in their seats.
Multi-CDN Optimization: The Importance of Measuring Each and Every Stream
One way a CDN competes for your video delivery business is by showing you data that proves that it is faster than other CDNs. Every CDN has a handful of industry reports, third-party tests, and/or head-to-head customer bake-offs that prove that at least sometimes, under some conditions, they are fastest.
But none of that data answers the questions that really count: how well does each CDN perform for you, delivering video to your end users? And how can you combine two or three CDNs to get the best video experience for every user?
Any CDN’s overall performance is an average of how that CDN performs on millions of deliveries; across many regions; to dozens of different devices; connected to hundreds of access networks; at different times of the day and days of the week; and averaged over hundreds or thousands of customers – all under constantly changing conditions. Reducing all of those variables to a single, average statistic ignores all the complexity and volatility inherent in the real-world conditions under which end users watch video.
For example, each customer has its own line-up of CDNs, and is mapped to a particular subset of each CDN’s resources; each customer’s users are distributed differently – geographically, across access networks, and across devices; and the demand profile against each customer’s content catalog varies and is unique.
The most effective way to manage an environment this complex – and ensure the best possible results – is to measure the performance of each CDN as it delivers each and every video stream, and then use those specific, detailed measurements of your audience’s actual video experiences to assign CDNs stream-by-stream, in real time.
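A minimal sketch of that idea, assuming a rolling rebuffering-ratio metric per CDN; the class, metric, and decision rule below are simplified illustrations, not DLVR's actual algorithm:

```python
# Illustrative sketch of stream-by-stream CDN selection driven by measured
# performance. The metric (a rolling rebuffering ratio per CDN) and the
# decision rule are simplified assumptions, not any vendor's real algorithm.

from collections import deque

class CdnSelector:
    def __init__(self, cdns, window=1000):
        # Keep a sliding window of recent per-stream measurements per CDN.
        self.samples = {c: deque(maxlen=window) for c in cdns}

    def record(self, cdn: str, rebuffer_ratio: float) -> None:
        """Report one finished stream's rebuffering ratio (0.0 = perfect)."""
        self.samples[cdn].append(rebuffer_ratio)

    def choose(self) -> str:
        """Pick the CDN with the best (lowest) recent average."""
        def score(cdn):
            s = self.samples[cdn]
            # Unmeasured CDNs score 0.0 so they still get traffic initially.
            return sum(s) / len(s) if s else 0.0
        return min(self.samples, key=score)
```

A production system would also weigh business rules (cost tiers, contractual commitments, last-resort CDNs) alongside the raw performance score, as the Publisher 1 / Publisher 2 comparison below illustrates.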
And the actual numbers prove it.
We looked at two customers – let’s call them Publisher 1 and Publisher 2 – each of which uses three CDNs (two of which are the same). Over a 30-day period DLVR, a Comcast Technology Solutions partner for the Comcast CDN, made over 500 million CDN decisions for these two publishers. Both publishers rely on DLVR to make CDN performance the priority, but each has very different business rules for CDN decisioning.
A comparison of the decisions DLVR made for these two customers reveals a few key data points:
CDN performance varies significantly from one publisher to the next. These two customers showed very different levels of hazard conditions, even among the same CDNs.
It’s important to measure every stream to optimize video delivery for performance in a multi-CDN workflow. CDN performance varies from moment to moment, delivery to delivery, and from customer to customer.
Predictive, performance-based multi-CDN switching is the only path to getting the most impact out of a multi-CDN workflow.
DLVR data demonstrates the difference in CDN decisioning for each publisher. Publisher 2’s business rules are set up to switch to CDN 1 as a last resort to maintain performance, whereas Publisher 1 relies on CDN 1 more often.
There’s clear value in having multiple CDNs. But a metric that averages publisher results into a blended measurement tells you very little about your specific CDN performance that is useful or actionable. Measuring every stream as it plays, and then acting on those measurements stream-by-stream to select the best CDN for each user, delivers the best possible experiences across your audience.
Check out more information on how Comcast CDN and DLVR work together to provide robust multi-CDN strategies.
The New Ad Management? Meet me in the cloud
The relationship between time-shifted viewing and advertising is shaped by the increasing power of consumer choice, and the need for processes that allow advertisers to meet customers on the customers’ timetable and device preference. For advertisers who want to reach their intended customers, the sheer variety of devices and destinations has introduced unprecedented levels of complexity. Once time-shifted viewing is added to the mix, advertisers must deliver assets in near-real time in order to fully capitalize on the VOD market. It’s a hurdle that has kept a lot of ads out of the on-demand space.
The advantage of a centralized, cloud-based advertising management model is that it removes complexity from both the buy and sell side of the delivery spectrum, creating a mid-point where every player in the video advertising ecosystem can quickly identify assets and deliver them fast enough to be viable as part of a VOD playback. Along the way, process waste and unneeded manual intervention are eliminated in favor of a streamlined machine-to-machine workflow. I’ll be presenting on this topic at this year’s NAB convention in April and wanted to share a little about how the cloud presents a transformative approach for all delivery, time-shifted or otherwise.
3 days: Not best practice anymore
The process for delivering linear video, aka “publishing for consumption according to a schedule,” might be generally understood, but every content provider has its own method for doing it. VOD workflows, mostly implemented as bolt-on processes to an existing linear workflow, add another layer of complexity.
It takes somewhere between 24 and 72 hours to tailor one spot into every iteration it needs in order to be available for play across MVPDs and other digital distributors. For the large population of time-sensitive ads out there, the wide discrepancies in processing time from iteration to iteration make the traditional VOD workflow a tough place for advertisers to fully capitalize on.
Once you compound the challenge across the hundreds of thousands of ads that a major content provider will handle each year, it’s easy to see why the traditional VOD workflow isn’t ideal for advertising assets with a shorter shelf life, like local ads or time-bound promotions. If there was just a way to deliver them faster, and swap them out as soon as they’ve run their course . . .
The centralized “collaborative cloud” model
Cloud advertising management is revolutionizing the entire ad delivery workflow by creating a collaborative environment that can be accessed by players across the delivery path. Benefits include:
Quality-assured mezzanine assets, ready for distribution at the press of a button
Centralized library and asset management terminal
Standards and Practices approval; seamless pass/fail functionality
End-point aware processing and distribution
End-to-end support and visibility, instant asset tracking
Ad operations teams can attain instant access to advertising spots; from there, they can automatically prepare and distribute non-linear ad content to MVPDs and other digital properties. Content providers can source content and tailor versions for all screens. Collecting ad spots together into one common cloud-accessible clearing house results in streamlined, quality assured, and timely campaign execution across any platform.
For content providers, advertisers, or agencies, the speed of execution makes all the difference between a successful campaign and one that under-performs. Ingest, processing, and distribution times are reduced from days to hours, bringing inventory to ready-to-air status, serving impressions faster than ever, and opening up time-shifted viewing to more forms of advertising, such as local ads that generally age too quickly to take advantage of the on-demand market.
Destination-specific rules can be managed more effectively as well. For example, specific rules around acceptable ads for a family-friendly video destination (rules such as no ads with guns, no smoking, etc.) can be managed or applied specifically in a centralized cloud environment. The content provider would need only to access a user interface (UI), locate the asset, review it, and apply rules to guide its delivery.
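A toy sketch of that kind of rule application; the tag vocabulary and data shapes below are invented for illustration, not an actual ad-management schema:

```python
# Hypothetical sketch of applying destination-specific acceptability rules
# to ad assets in a centralized library. Tags and data shapes are invented.

def acceptable_ads(ads, banned_tags):
    """Filter out ads carrying any content tag the destination bans."""
    return [ad for ad in ads if not (set(ad["tags"]) & banned_tags)]

# Example: a family-friendly destination bans ads tagged with guns or smoking.
family_friendly_rules = {"guns", "smoking"}
catalog = [
    {"id": "spot-001", "tags": ["cars"]},
    {"id": "spot-002", "tags": ["smoking", "cars"]},
]
print([ad["id"] for ad in acceptable_ads(catalog, family_friendly_rules)])
# ['spot-001']
```

Because the rule set lives with the destination rather than with each asset, adding a new rule takes effect across the whole library at once instead of requiring per-spot re-processing.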
Less duplication, more monetization? Yes, please
The benefits of the advertising management cloud will be fully realized as more and more businesses move their processes to it, simplifying workflows and adding more options to content monetization strategies. As a “one-stop shop” for all delivery needs, the cloud makes a compelling argument for itself, but future benefits also extend into improved tracking and reduced file duplication, both of which are long-standing, interconnected issues across the workflow.
Regardless of where an ad originates and where it needs to be played, the centralized cloud model is an elegant way to improve and streamline the advertising delivery ecosystem. The inherent process efficiencies bring increased responsiveness and visibility, and ultimately increase the value and revenue potential of each piece of content. This “meet me in the middle” architecture breaks down the wall between ad buyers and sellers and opens up collaboration in unprecedented ways.
Dead Air is Not an Option
If you’re in the business of keeping eyes glued to screens, you are responsible for delivering to a lot more devices than just the living room television. That said, not every piece of content, live or otherwise, can be delivered everywhere. The rights to distribute and display a piece of content are very specific instructions on where, when, and how a program can be legally used; and those rights don’t always line up perfectly with a provider’s entire base of consumers. When one program needs to be swapped out for another in order to maintain compliance, how does a provider provide an alternative, sometimes in short order, and still maintain engagement? At this year’s National Association of Broadcasters (NAB) conference in Las Vegas, I’ll be talking about metadata management, the SCTE 224 metadata standard, and how we manage the complex traffic challenges of today’s delivery ecosystem.
Alternate content feeds – not just for sports anymore.
In the past, time changes were generally associated with the linear content delivery of live sports, because sports programming needed the most consistent strategy to address issues such as extra innings or rain delays. Now, technology has made live programming a diverse, daily event that goes well beyond sports. Live-streamed niche industry events are more prevalent every year, and rarely finish “right on time.” News-as-entertainment channels thrive on real-time, unscripted content. And concerts? No one is going to tell the hottest band that they can’t do a third encore because the live feed ends at the top of the hour.
Live or not, a piece of content may only have rights to be delivered within certain regions or destinations – it’s what broadcasters traditionally referred to as a “blackout,” but in a multi-platform video delivery model, it doesn’t quite mean what it used to. The FCC had a different set of sports blackout rules from 1975 to 2014 that required cable and satellite operators to obey the same blackouts followed by local broadcast stations. Those rules no longer apply; today, alternate content switching stems from contractual agreements between content owners and distributors. The net result is the same: MVPDs are contractually obligated to follow the rules associated with each piece of content.
When events are delivered over IP to consumers who are on the move, the rules surrounding alternate content take on a whole new level of specificity. For instance: should content be switched to an alternate feed based on a viewer’s billing address, or his/her physical location at that moment? What strategies provide the most engaging content experience when an alternate content event occurs?
SCTE 224: Modern metadata management
The rules around how a piece of content can be utilized are found as a subset of its metadata. In other words, metadata not only describes a program’s history and enables search discoverability, but it also describes where it can go. Providers can access this information through a web-based Event Scheduling and Notification Interface (ESNI) which provides a way for them to communicate across distribution end-points about content delivery schedules and policies. In 2015 the Society of Cable Telecommunications Engineers (SCTE) approved today’s ESNI standard with SCTE 224, which expands on previous standards and enables content distribution based on attributes that are now critical to our mobile world, such as device type and geographic location. Audience-facing electronic programming guides (EPGs) are also created with this communication in order to set accurate expectations with viewers as to what’s going to be available and when.
SCTE 224 provides a robust framework of extensible markup language (XML) messages. It details audience characteristics and the viewing policies associated with each audience, and it allows channels (Media) and individual events (MediaPoints) to describe start and end times (MatchTime) or in-band signaling (MatchSignal) information. Once you have the event trigger and audiences for events, you can provide the appropriate metadata for the applicable situation. Although the framework is robust, it’s a meticulous and complicated set of processes to bring all your content into one standard format, maintain visibility across a diverse delivery ecosystem, ensure that content is only being delivered where it should be, and communicate clearly to every distribution point.
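As a rough illustration of the kind of processing involved, here is a toy parse of an SCTE 224-style fragment. The XML below is loosely modeled on the Media / MediaPoint / matchTime concepts named above and is deliberately simplified, not schema-exact SCTE 224:

```python
# A simplified, illustrative parse of SCTE 224-style metadata. The XML is
# a toy fragment loosely modeled on the Media / MediaPoint / matchTime
# concepts -- it is NOT schema-exact SCTE 224 (element names, namespaces,
# and the "Apply"/"policy" shape here are invented for illustration).

import xml.etree.ElementTree as ET

toy_esni = """
<Media id="channel-1">
  <MediaPoint id="event-1" matchTime="2018-05-01T20:00:00Z">
    <Apply policy="regional-blackout-west"/>
  </MediaPoint>
</Media>
"""

root = ET.fromstring(toy_esni)
for mp in root.findall("MediaPoint"):
    print(mp.get("id"), mp.get("matchTime"),
          [a.get("policy") for a in mp.findall("Apply")])
# event-1 2018-05-01T20:00:00Z ['regional-blackout-west']
```

Even this toy version shows the core pattern: a distribution endpoint walks the MediaPoints for a channel and applies whichever policies match the current time, signal, and audience.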
Multi-Platform Content Distribution? No Problem - How Content Providers Use Tech to Deliver Extraordinary Experiences
Binge-watching, as an audience behavior, speaks volumes about the value of content amidst unprecedented depth of viewer choice. What does this mean for multi-platform content distribution? We can choose from a huge variety of devices capable of displaying video of phenomenal quality and select from a vibrant and growing ecosystem of content destinations and business models. Consider the number of devices through which you access your own video content today:
Do you have a favorite show that’s available through multiple providers, such as a broadcaster, mobile destination, or PC?
If the playback was interrupted by buffering or lag, what are the odds that you’d watch the whole thing? Would you try to play the same program through a different service?
Do you ever purchase content from one service over another, purely due to image and playback quality?
QoE and ROI
Quality of Experience (QoE) is one of the defining performance metrics of our day. Delivering on the promise consistently across devices takes concerted effort and execution across your delivery ecosystem. Important considerations include:
Is the image quality optimal for the viewer’s connection and device?
How often does buffering intrude on a program?
How easily are consumers finding your content?
How are they responding to pricing or advertising?
Multi-platform monetization models rely on seamless workflow operation and playback in order to be measured accurately. Performance-related churn, aside from creating an experience that’s not worth repeating, also makes it really hard to know if your core business model is on the right track.
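Two of the measurements behind those questions can be sketched directly. The formulas below are standard simplifications (a rebuffering ratio and a time-weighted average bitrate), with assumed input shapes:

```python
# Illustrative sketch of two common QoE measurements: rebuffering ratio
# and time-weighted average bitrate for one playback session. The input
# shapes are assumptions, not a specific analytics product's schema.

def rebuffer_ratio(stall_seconds: float, watch_seconds: float) -> float:
    """Fraction of the session the viewer spent waiting instead of watching."""
    total = stall_seconds + watch_seconds
    return stall_seconds / total if total else 0.0

def average_bitrate_kbps(segments):
    """Time-weighted average over (duration_s, bitrate_kbps) segments."""
    total_time = sum(d for d, _ in segments)
    if not total_time:
        return 0.0
    return sum(d * b for d, b in segments) / total_time

# A 10-minute session with 3 seconds of stalling, split across two renditions.
print(round(rebuffer_ratio(3.0, 597.0), 3))            # 0.005
print(average_bitrate_kbps([(60, 3000), (60, 6000)]))  # 4500.0
```

Aggregating these per-session numbers by device, region, and access network is what turns the QoE questions above from anecdotes into something a delivery team can act on.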
When it comes to playback quality, is it more likely that a destination will be known for its great playback, or that it’s known to crash, stutter, or lag? As much as we love to get fan mail about how great our playback was, the truth is that it’s the negative experiences that get all the press.
New eBook: Multi-Platform? No Problem. How Content Providers Use Tech to Deliver Extraordinary Experiences
Comcast Technology Solutions has published a new ebook, “Multi-Platform? No Problem — How Content Providers Use Tech to Deliver Extraordinary Experiences,” that looks into ways in which the relationship between content providers and audiences continues to drive new business models and technology innovations. Some of the insights explored:
Operating from the end: Successful multi-platform content distribution brings the viewer’s journey to the center of operating plans, envisioning the optimal end state and building an experience to match.
The challenge of modern metadata: In a complex ecosystem of mobile high-definition screens, and amid countless membership and multi-platform monetization models, content rights management is more complicated than ever. New technologies are helping companies keep the right content on the right screens.
A pragmatic, repeatable framework: To address the ever-changing landscape of new media technology, delivery approaches, and multi-platform monetization models, we present a new lifecycle management framework that keeps operations, technologies, and processes harmonized with an evolving audience experience.
Case study — keeping up with a massive fanbase: When a content destination racks up a series of hits, it takes a lot of agility – and network horsepower – to satisfy rapidly growing demand. We take a look at one brand that has transformed the way it manages and delivers its content so that it can keep the focus on creating the stories that keep a global audience tuned in.
CDN Performance: The Content Delivery "Network Of Networks"
Content delivery networks (CDNs) are well-established as a necessary component of today’s digital video delivery strategy. Closing the physical distance between the server and the end-user is a great way to protect the quality of end-user playback and security of the video files themselves. With lots of CDNs out there, all with different footprints and points of presence (“POPs”), it’s becoming increasingly common for content creators to adopt a multi-CDN approach, which allows them to deploy a best-case “network of networks” to improve performance across the board.
The benefits of a multi-CDN approach go beyond simple math:
The “simple math” part: more POPs are storing content closer to more viewers, resulting in the uniformly high-quality playback experience that audiences expect.
The “even better” part: with the right tools, traffic can be optimized so that CDN performance can be strategically leveraged based on the needs of each viewing experience.
Of course, the trick is to identify the best CDN for each playback experience.
SWITCHING FOR COST AS WELL AS QUALITY
There are several reasons why a business would switch traffic between multiple CDNs.
The fundamental advantage is redundancy: if there’s an issue with one network, a “detour” that employs a different CDN provides a redundant path to get the content delivered on time. Some might be driven by specific business rules, such as diverting traffic to a different CDN once the traffic hits a certain level, or based on other criteria such as contractual usage limits or other requirements. Some CDN routing rules can be completely based on cost-effectiveness, choosing a path from origin to device primarily on price.
Whether the reasons for a multi-CDN approach are traffic-based or rules-based, the best solutions are built to respond to real-time quality-of-experience data, emphasizing quality measures over any other metric. Optimizing your delivery with a service that provides transparency through proactive, automated, and real-time control over your delivery decision-making protects your audience’s quality of experience and extracts the most value out of your entire delivery path.
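To make that switching logic concrete, here’s a minimal sketch of a quality-first CDN selector in Python. The `CdnStats` fields, the rebuffering ceiling, and the tie-breaking order are illustrative assumptions, not any vendor’s actual algorithm: candidates are filtered by health and a quality threshold first, and cost only enters as a tiebreaker.

```python
from dataclasses import dataclass

@dataclass
class CdnStats:
    name: str
    rebuffer_ratio: float    # fraction of playback time spent buffering
    avg_bitrate_kbps: float  # average delivered bitrate
    cost_per_gb: float       # contractual delivery cost
    healthy: bool            # passes real-time health checks

def pick_cdn(candidates, max_rebuffer=0.01):
    """Quality first, cost as the tiebreaker."""
    viable = [c for c in candidates if c.healthy and c.rebuffer_ratio <= max_rebuffer]
    if not viable:
        # redundancy path: fall back to any healthy CDN rather than fail playback
        viable = [c for c in candidates if c.healthy]
    # rank by lowest rebuffering, then highest bitrate, then lowest cost
    return min(viable, key=lambda c: (c.rebuffer_ratio, -c.avg_bitrate_kbps, c.cost_per_gb))
```

Real traffic managers weigh many more signals (geography, device, contractual usage limits), but the ordering here mirrors the priority described above: quality measures outrank price until quality is equal.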
HOW WELL IS THAT CDN REALLY PERFORMING?
Ultimately, the single best litmus test of your service’s CDN performance is the experience at the individual user level. It presents a serious measurement challenge: how does a provider really measure playback quality? Not just in theory, but in practical, “that-customer-at-that-time” terms, across devices, platforms, and geographies?
Initially, measurement approaches utilized small static test-object downloads rather than the actual video stream itself. The test-object measurements were aggregated to paint a generalized, high-level picture of how well a CDN was performing. In today’s age of server tuning by application, however, most video is played back from dedicated video servers, not the servers used for test-object downloads. Consequently, the resulting CDN performance score may not directly relate to your video streaming performance. Fortunately, with advances in technology, there are more evolved approaches today that result in measurable improvements for audiences.
IMPROVEMENTS THAT VIEWERS ACTUALLY EXPERIENCE
Drilling down to individual viewer measurements is a core strength of DLVR, our new CDN optimization partner (you can read more about them here). To get there, DLVR continually measures actual video streams being delivered to end users. As a CDN moves video content through the network to intended destinations, the measurements from different streams can be aggregated into a more accurate performance snapshot. On top of that, customer experience metrics such as buffering and bit rate can now be rolled into a deeper analysis that can have a much stronger impact on quality improvement efforts. It results in a tailored switching protocol that’s informed and driven by actual viewing experiences. Because the measurements are stream-based and not tag-based, the only way a CDN can demonstrate better performance is…by demonstrating better performance. It’s better for the audience, which of course is better for content creators and providers as well.
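As a rough illustration of the stream-based approach, the sketch below aggregates per-stream measurements into a per-CDN performance snapshot. The sample fields (`rebuffer_s`, `watch_s`, `bitrate_kbps`) are assumptions chosen for illustration; DLVR’s actual telemetry and scoring are not public in this detail.

```python
from collections import defaultdict

def score_cdns(stream_samples):
    """Roll individual stream measurements up into a per-CDN snapshot.

    stream_samples: iterable of dicts like
      {"cdn": "cdn-a", "rebuffer_s": 1.2, "watch_s": 600.0, "bitrate_kbps": 3800}
    """
    totals = defaultdict(lambda: {"rebuffer_s": 0.0, "watch_s": 0.0,
                                  "bitrate_sum": 0.0, "n": 0})
    for sample in stream_samples:
        t = totals[sample["cdn"]]
        t["rebuffer_s"] += sample["rebuffer_s"]
        t["watch_s"] += sample["watch_s"]
        t["bitrate_sum"] += sample["bitrate_kbps"]
        t["n"] += 1
    return {
        cdn: {
            # share of watched time lost to buffering across all streams
            "rebuffer_ratio": t["rebuffer_s"] / t["watch_s"],
            "avg_bitrate_kbps": t["bitrate_sum"] / t["n"],
        }
        for cdn, t in totals.items()
    }
```

Because the inputs are actual playback sessions rather than synthetic test objects, a CDN’s score here can only improve when real viewer experiences improve.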
Learn how to use Comcast CDN and DLVR together to easily enable a multi-CDN strategy and drive down costs based on business rules.
For more about multi-CDN delivery, please download our free ebook, Video To The CDNth Degree.
Winning the Multi-Platform Delivery Balancing Act
“Consumer mobility” isn’t just about smartphones.
One of the best things about being a consumer in today’s video market is that there’s such an abundance of great stuff to watch. As companies get smarter about improving experiences and content discovery, it’s easy to be torn as to where to spend entertainment dollars. To succeed, today’s business models hinge on maintaining a reliable share of our minds and wallets over a longer stretch of time, which favors a multi-platform delivery strategy that includes both digital and broadcast destinations.
Audiences are on the move, but ready to watch
No matter what the road looks like between you and your viewers, their journey can change with just one click. As Parks Associates puts it, “high churn has followed fast consumer adoption.”[1] Some key findings from the research the firm published this summer:
Most OTT services are experiencing churn rates that exceed 50% of their subscriber base.
Motivations for cancelling services are largely based on perceived value.
High churn is indicative of “growing pains” as services work to find audiences and resonate with them.
Outside of Netflix, subscribers are averaging twenty-one months of membership to an OTT destination.
One of the most telling data points from the same research: the same folks who are cancelling OTT services are also spending more overall on their video entertainment than subscribers who stay put – not just on services, but on physical media as well (DVD purchases and rentals). That raises the question: how do I keep the biggest share of wallet if the wallet isn’t shrinking and, in fact, might be expanding?
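For a back-of-the-envelope sense of what those tenure numbers imply, a simple geometric churn model relates monthly churn to expected membership length. The rates and fees below are illustrative assumptions, not figures from the Parks Associates report.

```python
def expected_tenure_months(monthly_churn: float) -> float:
    """Geometric model: if a fraction `monthly_churn` of subscribers
    cancels each month, expected tenure is 1 / churn months."""
    return 1.0 / monthly_churn

def lifetime_revenue(monthly_fee: float, monthly_churn: float) -> float:
    """Expected revenue per subscriber over their whole tenure."""
    return monthly_fee * expected_tenure_months(monthly_churn)

# e.g. a 21-month average tenure implies roughly 1/21, or about 4.8%, monthly churn
```

Under this model, trimming monthly churn from 5% to 4% stretches expected tenure from 20 to 25 months – a 25% lift in lifetime revenue at the same price point, which is why retention tactics like the ones below matter so much.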
The best audience relationship wins
How are content providers and global operators of all stripes building deeper engagement with their audiences? For the most part, it’s by capitalizing on the opportunity to understand the audience lifecycle at the individual level. By knowing the primary reasons for churn, brands can actively respond to learnings with a more tailored experience, and retention tactics that follow suit:
Content-led churn: “there’s nothing on”
Serve content that builds your audience. Niche-focused or otherwise, creative budgets are expanding to meet equally expanding consumer demands for high-quality, compelling programming.
Make content discovery as simple as possible, improving natural and voice search as well as how choices are presented onscreen.
Build awareness with viewers through data-driven suggestions and communication campaigns.
Technology-led churn: “I’ll return to a great experience”
Know your technology, inside and out, identifying areas of opportunity to elevate multi-platform delivery quality. Earlier this year, we introduced Media Technology Lifecycle Management as a framework to get there. You can learn more here.
Align with technology partners that can streamline workflows and deliver value back into the business – as well as cost savings that can be applied to content creation / curation.
Improve the ability for viewers to get help fast. Live support, rapid issue resolution, and proactive communication that demonstrates care for each playback experience.
Intermittent subscribers / viewers: “I’ve already watched what I want”
Stay in solid (but not relentless) contact with lapsed viewers, possibly engaging them through fact-finding exercises that invite them to share feedback.
Showcase other facets of your programming based on what you know about the historical relationship and viewing habits of viewers.
Utilize social media and other avenues to maintain mindshare and keep your offerings relevant, inviting audiences to return.
It’s not just about audiences . . .
Advertisers buy OTT and broadcast ads for different missions. Traditional TV is tops when it comes to reach and ROI, which is why, ironically, some of the biggest TV ad buyers are online and technology companies. As Jeri Smith, CEO of advertising research firm Communicus, puts it: “You can only make so much money on people who already know your brand. To reach everyone else, you need TV.” This is why TV ad spend is actually increasing. OTT excels at the tailored, targeted techniques that keep prospective buyers moving towards a purchase, while broadcast TV delivers more ROI than any other medium by building awareness across demographics. Quality reigns supreme on either side of the aisle, though, with audience engagement – that secret sauce of compelling content and superior experiences – ultimately dictating where advertising budgets are going to be spent.
Content providers and global operators are getting really creative at finding ways to keep viewers engaged, while solving for shifting multi-platform delivery challenges at the same time. Whether it’s creating brand new content-driven experiences for fans, or aligning with technology providers that can liberate and augment a brand’s creative resources, it all comes down to a differentiated, repeatable quality of experience that consumers recognize as valuable.
[1] Parks Associates, “360 View Update: Churn and Retention of OTT Video Services,” 2017
The “anywhere” video delivery platform
Anyone who invests in the stock market knows that the term “diversified portfolio” is the foundation that all of our investment strategies are built upon. By spreading risk around to multiple investments, the chance of failure is massively reduced, and success can be optimized by identifying and making the most out of high-performing areas. It makes a lot of sense to think about your video delivery platform in the same way that you look at your investment portfolio – even if it’s not for the exact same reasons.
A winning, platform-agnostic QoE
The video delivery platform conversation often centers around “broadcast vs. digital,” but to truly capitalize on today’s diverse video market, brands are setting themselves up to succeed on whatever viewing platform that customers prefer. It’s more than just the convergence of workflows and delivery mechanisms – it’s about recognizing the value that each destination brings to the table, and the ability to capitalize rapidly on any shifts in audience preference.
Personalization, discoverability, and ease-of-use are all just as crucial to a high quality of experience (QoE) as playback quality. For example, in November 2017, Adobe reported to StreamingMedia.com that TV Everywhere usage is increasing by 46% year over year, with almost half of all starts being supported by Adobe’s single sign-on (SSO) functionality. Prior to SSO, users had to constantly re-authenticate themselves to use their service on another device, resulting in a failure rate of nearly half of all log-on attempts. It seems simple, but it’s representative of the elevated value that media companies can demonstrate when experiences can transition as seamlessly between video delivery platforms as consumers do.
Of course, all the quality-of-life improvements in the world won’t entice users back to an inferior playback experience. It’s not an either-or proposition, but more of a “quality trifecta” that brings great content, user-friendliness, and superior image quality into one reliable experience. Speaking of image quality . . .
About those 4K and HDR demands . . .
It’s the holiday season, which again means that lots of folks are shopping for new TVs. When I’m talking to friends and relatives about 4K and HDR, it helps to keep it simple: 4K refers to resolution – the number of pixels on a screen – and HDR (high dynamic range) refers to brightness, color, and contrast. It also helps to explain that a high-performing screen is still only as good as whatever it’s connected to – and that the associated content is still a precious commodity. It doesn’t take any prognostication to state that in 2018, we can expect an exponential increase in users looking to capitalize on the performance of their new flat screen with premium content and connected-device upgrades. It’s an interesting story: on one hand, according to SNL Kagan, 4K-capable TVs will be in over 26 million U.S. households by the end of this year, with that number expected to double in three years. At the same time, 40% of users report that they don’t see a noticeable difference between 4K and standard-definition video – and 17% say that it’s just not worth the money. HDR, on the other hand, delivers the game-changing visuals that command top dollar; but most HDR content available today is in the form of expensive single-transaction VOD purchases.
The takeaway for content creators and providers? Every business plan needs to operate with the expectation that sooner rather than later, 4K/HDR offerings will be as ubiquitous as HD video is today. Top content creators are already offering their original programming in 4K, and 4K streaming is the linchpin of most high-tier subscriber options. The volume equation – much larger individual files, and much more of them – should be factored into the lifecycle management plan of every company’s technology stack.
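To put that volume equation in numbers: 4K quadruples the pixel count of 1080p HD, and higher delivery bitrates follow. The bitrates in the sketch below are assumed ballpark streaming figures for illustration, not a standard.

```python
# pixel counts per frame
HD_1080P = 1920 * 1080  # 2,073,600 pixels
UHD_4K = 3840 * 2160    # 8,294,400 pixels - exactly 4x 1080p

def gb_per_hour(bitrate_mbps: float) -> float:
    """Approximate storage/delivery volume for one hour of video:
    megabits per second -> gigabytes per hour."""
    return bitrate_mbps * 3600 / 8 / 1000

# assumed ballpark bitrates: ~5 Mbps for 1080p vs ~16 Mbps for 4K HDR
hd_hour = gb_per_hour(5)    # ~2.25 GB per hour
uhd_hour = gb_per_hour(16)  # ~7.2 GB per hour
```

Even with these rough figures, an hour of 4K HDR moves roughly three times the data of 1080p – which is exactly why file volume belongs in every technology-stack lifecycle plan.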
Your delivery portfolio = a playbook for the long game
Back in September we talked about building a lasting, elastic media brand by imbuing both technology management and business plans with the agility to pivot quickly – and by freeing up resources that can be used to create and curate competitive content. Your video delivery platform portfolio is the ultimate manifestation of this kind of operational agility, and it’s more complicated than a simple term like “platform agnostic” suggests. Is your experience optimized for every device/geography/platform equation? As viewing preferences shift to higher-quality video, will the increase in data demands impact your ability to scale along with your audience? Can you shift focus to the destinations that resonate most strongly with viewers? It’s this sort of continuous analysis and course correction that leads to long-term profitability.
We’ve seen some great things this year, from the first 4K HDR mobile devices to the continued evolution of the home TV as a premiere hub for dynamic content experiences. Next year is shaping up to be more of the same as companies work to provide the content, user experiences, and overall value that lengthen and strengthen the individual consumer relationship.
How We Watch: Digital Video Trends
A quick search for “OTT adoption” brings up thousands of news articles in 2017 alone. But while the news might seem to skew OTT, how are audiences really watching? Below, we take a look at the broadcast and digital video trends from the year — a good reference point as we look toward the year ahead.
[Embedded infographic: “VOD Infographic 2017” – comcast.postclickmarketing.com/VOD-infographic-2017]
Sources:
eMarketer, Worldwide Digital Video Viewers 2016-2020 Estimates, Jan 2017
eMarketer, Digital Video Viewers by Country (Data Forecast Through 2021), Aug 2017
Magid Media, Futures of the Digital Video Landscape, Oct 2017
Parks and Associates, The OTT Playbook Part II, 2017
eMarketer, OTT Video Viewers, July 2017
SNL Kagan, Nov 2017
eMarketer (Hub Research), Where Do Americans Watch TV? Hint: No, Not on Mobile, May 2017
Ad Distribution: The Performance Model
Advertising delivery is, at its core, content delivery, and it shares many of the same overarching delivery pain points – such as the increased costs and complexities of serving a multi-platform consumer landscape and the mounting expenses of producing top-quality video spots. Just like the “found money” from choosing locations with regional production incentives, choosing the right distribution partner can bring substantial long-term dividends. The ideal choice equips brands with a delivery discipline that’s constantly being optimized for time and cost savings that can be used to expand and reinforce campaigns.
AUTOMATED DELIVERY WORKFLOWS
The right partner understands the complexity of content delivery: encoding and transcoding across all available formats, the various methods for distributing content, and the multiple platforms where content is used. Automate your ad-spot delivery workflows – from uploads and conversions to traffic logistics, status updates, and delivery confirmations.
Streamline the management of ad spots with tools that allow for instant review and tracking, easy order placement, and the ability to easily manage and insert metadata into each asset.
PERSONALIZED SERVICE, SUPPORT, AND TERMS
Best-case scenario: your complete delivery service functions as an organic extension of your own organization. This cannot happen without dedicated support that operates at a high level of transparency and 24x7 availability.
“Cost creep” is notorious, especially in the form of “extra” fees and surcharges. Talk to prospective partners upfront about the flexibility of their pricing models, whether or not they impose late fees or reslate charges, or if there is a window for submitting re-pitches at zero cost.
Prepared for the road ahead
Innovation in the advertising arts can often feel like a double-edged sword. On one hand, new delivery technologies and workflows can drastically improve the performance of your workflow and reach of your media assets. At the same time, new avenues of interaction are right around the corner, creating advertising opportunities that introduce new complexities into the mix. It’s an exciting time, to be sure – but it’s never been more important to maintain a technology approach that empowers brands to compete on the strength of their message, no matter where it needs to be seen.
Comcast Technology Solutions is helping advertisers to do exactly that, through a partnership based on decades of practical experience as both an advertiser and content provider. Are you interested in discovering new ways to improve your campaign’s performance? Contact us.
Centralize Production Services for Better Results
Every media asset has a specifically tailored role and goal. Even within the context of one brand, each campaign can have vastly different post-production needs or versioning requirements. For example, the needs of a national campaign with a variety of ad spots would differ greatly from a regional outreach that might use one common asset, modified weekly to showcase specific offers or on-sale items.
The added value of a “one-stop shop”
Advertisers can more effectively streamline ad production and versioning, which results in cost efficiencies. It’s all about eliminating steps that don’t add benefit to your workflow. The intricacies of content distribution across platforms, from file management to video encoding/transcoding across all formats, all add up to a complicated path to consumers, but a centralized model can shorten that path and speed your content along to its many destinations.
The best-case scenario is to engage a partner with the capability to handle all of your finishing services, while also preserving your freedom to work with other post-production facilities. This way, your clients’ campaigns will have all bases covered. Centralized production services optimize costs by bringing services like these under one roof:
Closed captioning and subtitling
Video encoding for ad-measurement watermarking (BVS (Veil), Nielsen (SpotTrac), Teletrax)
Down-converting (HD to SD)
Versioning / tagging
Logo insertions
Transcoding
Professional voice-over services
Digital file creation
Reslates
DVD authoring
Removing complexity and cost from an ongoing campaign
One of our agency customers has a great example of how they brought a clear win back to their client by converging delivery and production services under one roof. This agency is one of the most storied in North America. They’ve delivered award-winning success across a wide spectrum of clients in such industries as luxury cars, alcohol, consumer packaged goods, and financial services. One of their clients, a grocery store chain with a substantial regional footprint, needed a better way to handle their massive weekly needs for ad customization.
The neighborhood store changes sales and product offers constantly, all of which require fresh, up-to-date ads to get the message out. Our production services team was brought in to centralize operations, saving time and money across the board. Here’s how it works: the agency fully produces about half of each spot. From there, we put a process in place to receive video, audio, and static photo files of each week’s new offers, which our team then compiles into a new weekly spot – and delivers it. This process can now be reproduced across a variety of clients, which helps the agency deliver value to each of them.
Returning focus – and funds – to creative teams
Creating a world-class video ad campaign is an increasingly complex and costly endeavor. At the same time, the proliferation of devices, platforms, and mobile video has turned delivery into a make-or-break component of your plan, and can easily leach attention and resources away from your creative efforts. Companies are taking a fresh look at their production and delivery workflows to create efficiencies that improve cost and quality simultaneously. A trusted partnership that brings everything into one holistic approach can accomplish that.
Comcast Technology Solutions is that trusted partner to some of the world’s most recognizable media brands. Contact us to see how we can help.
Save Here, Spend There: Capitalizing on Ad Distribution Efficiencies
Do you wish you had more funds for a more robust media campaign? Do you want to explore ad tech, expand into digital, add more reach and frequency, or create additional creative? Comcast Technology Solutions recently partnered with Advertising Age to present a webinar on creating ad distribution efficiencies showcasing ways an advertiser can create “found money” through choosing the right ad delivery partner.
Ad delivery is an oft-overlooked way for advertisers to free up financial resources to optimize every step of a campaign lifecycle and improve the quality and reach of their ads in the process (you can watch the webinar presentation on demand here). Ad distribution is a necessary stage of a campaign – a solution an advertiser is already paying for – and a fresh look at your delivery model can uncover huge opportunities to improve your bottom line. Even better, the benefits of a smarter delivery ecosystem can be felt across every campaign.
Quality is an expectation, not a differentiator
Once your well-planned ad strategy has turned into a gorgeous, compelling video ad, that ad needs to get to every desired destination, no matter the location or device. It needs to arrive on time, of course; but it also needs to arrive securely, in the correct format, and ready-to-air. Ad distribution refers to that “final mile” of a video campaign’s roadmap:
Ad delivery puts the finishing touches on a video through versioning, watermarking, closed captioning, and other specifics based on the needs of both the messaging and of the final destination.
Files are transcoded into a multitude of formats for all networks, stations, and media types, quality checked and then…
Ads are delivered for display with reliable quality, underscoring the value of a message instead of undercutting it with poor visual appearance.
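The final-mile steps above can be sketched as a simple fan-out: one transcode/QC/delivery job per destination profile. The profile names and formats below are hypothetical, purely to show the shape of the workflow.

```python
# hypothetical destination profiles - names and formats are illustrative
DESTINATION_PROFILES = {
    "national-broadcast": {"format": "mxf", "captions": True},
    "local-cable": {"format": "mpeg2", "captions": True},
    "digital-preroll": {"format": "mp4", "captions": False},
}

def build_delivery_jobs(spot_id: str, destinations: list) -> list:
    """Fan one finished spot out into per-destination jobs:
    transcode to the destination's format, QC, caption if required, deliver."""
    jobs = []
    for dest in destinations:
        profile = DESTINATION_PROFILES[dest]
        steps = ["transcode:" + profile["format"], "qc"]
        steps.append("caption" if profile["captions"] else "skip-caption")
        steps.append("deliver")
        jobs.append({"spot": spot_id, "dest": dest, "steps": steps})
    return jobs
```

A real distribution platform runs each step with format-specific tooling and confirmation tracking, but the structure – one spot, many destination-shaped jobs – is the same.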
Consumers are extremely quality-sensitive when it comes to video. Buffering, delays, or other disruptions have a negative impact on an ad’s perception. One consequence is that an agency’s client may feel the campaign wasn’t well received, when in fact a delivery issue lowered the impact of the ad. The first order of your ad distribution workflow is to protect and optimize the perception of your good work.
Example: When “If it ain’t broke” just isn’t good enough
Your ad delivery processes – and your partners – should utilize best-of-breed technologies and workflows, optimizing quality and cost at the same time. Video advertising can take up a huge portion of an advertiser’s budget. According to IAB’s 2017 Video Ad Spend Study[1], ad spend on original digital programming has nearly doubled since 2015. There are lots of ways in which a modern delivery partnership can bring back client-winning results, while also saving big dollars that can be funneled back into an expanding media budget.
For example, we handle the distribution and production services for all markets of a luxury auto manufacturer. Because Comcast is itself the fourth-largest advertiser, this client was intrigued by our host of time- and cost-saving advantages:
Automated ad spot delivery workflows and a centralized storage portal
Ability to instantly review spots, place orders / traffic instructions, and track deliveries
Optimized and consolidated distribution to reduce the cost of sending spots
Free delivery to 1450+ Comcast Spotlight syscodes, no late fees, no charge for reslates
During the onboarding process, we took a deep dive with this partner into ways they could improve processes and reduce overall spend on ad distribution. Along with delivery, their spots needed a huge volume of versioning to reach a wide and disparate audience. The previous approach was to manually tag both High Definition (HD) and Standard Definition (SD) spots; however, the SD spots were less prevalent and only sent on an as-needed basis. Upon receiving the destination list for their first order, we saw that only about ten percent of the tagged SD spots were actually being used, representing a lot of unnecessary work and expense. We recommended that only HD spots be tagged, and that our automated SD down-convert workflow be employed when needed.
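The arithmetic behind that recommendation is straightforward. With SD spots used only about ten percent of the time, tagging HD-only and down-converting on demand cuts the tagging workload nearly in half; the function below is an illustrative model, not our actual tooling.

```python
def tagging_jobs(n_spots: int, tag_sd_upfront: bool, sd_usage_rate: float = 0.10) -> int:
    """Tagging workload: every spot gets an HD tag; SD tags are either
    created upfront for all spots or only for the fraction actually used."""
    if tag_sd_upfront:
        return n_spots * 2  # HD + SD tagged for every spot
    return n_spots + round(n_spots * sd_usage_rate)  # HD now, SD on demand

# 100 weekly spots: 200 tagging jobs upfront vs. 110 with on-demand down-convert
```

The assumed 10% SD usage rate matches the figure observed in this client’s first order; at higher SD usage the gap narrows, which is why it pays to check destination lists before setting the workflow.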
Long story short: once all the benefits stacked up, our recommendations to this client resulted in an overall savings of 50% in ad distribution costs, along with the ability to reach over 18,000 media destinations, including national and local broadcast and cable, radio, online, and out-of-home (OOH).
If you’re looking for more ways to extract the most value from your video campaigns, take a look at our recent ebook, Top 5 Ways to Optimize Your Client’s Ad Budget. You can download it here. Or, simply drop us a note – our team loves to find better ways to bring ads and customers together.
Utilizing a Multi-CDN architecture for peak performance
A multi-CDN architecture delivers much more value than “one good CDN.” Simply put, it’s a way to multiply the benefits of CDN delivery so that the playback experience is engaging and worth repeating, regardless of how or where your programming gets watched.
Building a Lasting, Elastic Media Brand
Back in March, I wrote a blog with some advice for video brands looking to shore up their technology relationships in preparation for the road ahead. Along those lines, we introduced a holistic framework called Media Technology Lifecycle Management (“MTLM”) and presented it this spring at the National Association of Broadcasters convention (you can read more about it here). One of the core tenets of MTLM is the idea of organizational and technological “elasticity.” It’s worth expanding upon.
Improving response time in any direction
A key success trait for today’s brands is flexibility—the ability to incorporate new ideas and components quickly. As content creators, we want to provide each viewer with a top-tier viewing experience on any screen. When consumer behavior shifts, we want to meet viewers on their terms with experiences and monetization strategies that make people say “this is valuable to me.” Elasticity takes this idea of flexibility and – pun intended – stretches it.
An elastic mentality is one of omni-directional agility, designing a framework that recognizes scalability not just as a two-way street, but also as a consideration for each operational component. For example, you might want to scale up your technology stack today in order to reach a short-term increase in viewers, such as for a big one-time event. That’s a straightforward answer to an immediate business issue, and maybe even a necessary one; but the media industry doesn’t always innovate in a linear fashion. Being able to test in an environment that allows you to stop and start quickly allows for more educated purchasing decisions when it comes to capital hardware down the line.
As breakthroughs enable your content to get to more screens with less infrastructure, some questions you might want to consider include: can your operations be pared back in some areas, and expanded in others, like content creation? Can today’s tech investments insulate your ability to manage increasing file volumes, or incorporate new delivery channels into the mix?
“Spending for elasticity” can guide the financial conversation into areas that better support the organization’s agility. By working with mature technology partners to minimize risk and accelerate time-to-market, business units can often more readily adjust for market trends, employing the best tools to get there.
Elasticity in action
At Comcast Technology Solutions, we’ve had the opportunity to witness firsthand – and participate in – some interesting projects that only came about because of a brand’s willingness to embrace an elastic mindset.
One example involved one of our partners, a nationally-known sports organization. They wanted to capitalize on an upcoming event by ‘test-driving’ a new channel. Just a short time ago, such an experiment would have been impossible. The logistics and technology expense of creating and launching a new channel would need a solid business plan as a foundation. For this undertaking, together we devised a cloud-based plan to spin up a new channel for just one weekend. The company gleaned some great insights and closed out the channel.
“BAU 2.0”
The adoption of elasticity as a core tenet of your organizational philosophy is really about cultivating a new “business as usual.” 2018 is right around the corner, and media companies of all types are charting their course for the new year. As your team develops its own plans and roadmaps, consider applying some of the MTLM tenets into a holistic assessment of your assets, processes, and capabilities. How are you limbering up for the challenges ahead?
Building a Lasting, Elastic Media Brand
Comcast Technology Solutions
2017-9-27
Blog
Back in March, I wrote a blog with some advice for video brands looking to shore up their technology relationships in preparation for the road ahead. Along those lines, we introduced a holistic framework called Media Technology Lifecycle Management (“MTLM”) and presented it this spring at the National Association of Broadcasters convention (you can read more about it here). One of the core tenets of MTLM is the idea of organizational and technological “elasticity.” It’s worth expanding upon.
Improving response time in any direction
A key success trait for today’s brands is flexibility—the ability to incorporate new ideas and components quickly. As content creators, we want to provide each viewer with a top-tier viewing experience on any screen. When consumer behavior shifts, we want to meet viewers on their terms with experiences and monetization strategies that make people say “this is valuable to me.” Elasticity takes this idea of flexibility and – pun intended – stretches it.
What is Media Technology Lifecycle Management?
I recently presented a paper introducing Media Technology Lifecycle Management (MTLM), but what does it really mean? Theoretically speaking, MTLM is a simple and effective approach that helps organizations keep operations on a path of continuous improvement and growth. It’s our way of staying on top of industry trends, born from our rich history as a content provider and our experience as a trusted delivery partner to content providers, advertisers, global operators, and technology companies. It’s a way to apply critical thought to technology and workflow planning that’s available to anyone. We believe it will help any company working within our dynamic and always-evolving industry.
Media Technology Lifecycle Management charts a cohesive technology course that can be used to continuously assess, plan, orchestrate, and deliver business value within the complex world of linear and digital media. Its purpose is to help align technical requirements and planning through a more incremental, integrated and collaborative strategy. MTLM is a guide, empowering teams and organizations to meet shifting customer behaviors and demands with dynamic new technologies, delivery approaches, and monetization models.
Media Technology Lifecycle Management is designed to help organizations embrace and adapt to shifting industry dynamics, and to increase the value that all stakeholders (internal and external) deliver to the organization. It is a pragmatic approach to better managing the media technology lifecycle, one that frees up time to focus on core business initiatives – like serving customers and creating new content – while simultaneously lowering operating expenses. MTLM contributes, in an open and freely available way, to the evolution into a new era of media technology, delivery, and consumption.
At its core, Media Technology Lifecycle Management promotes visibility and efficiency throughout a unified process and a shift in mindset to be practiced throughout the organization. We are excited to present this approach to you – and hope you gain value from it as you build your plans for the future.
Download your copy of the Media Technology Lifecycle Management guide paper.
VOD Syndication: an updated definition
A modern VOD syndication workflow solves two complex content delivery questions: 1) who should receive it? 2) how are you monetizing it? Based on the destination for each VOD asset and the content provider’s business model, the VOD workflow must adjust to best meet the needs of the business.
From "Out-of-the-Box" to "Off-the-Shelf"
The competitive landscape of the video industry is barely recognizable from what it was ten years ago, and the partnership landscape has undergone its own radical transformation. “Partner” implies a deeper B2B connection than a mere “vendor,” a label that now feels purely transactional and less invested. Today’s partnerships are more than just a matter of two companies looking to use each other for mutual gain. The best ones aspire to something more: the creation of an organic extension of an organization, where creativity is nurtured and time to market is accelerated.
So, why this transformation? Did media companies just wake up one morning and decide to fall in love with each other? In a word, no. This new partnership paradigm is all about self-awareness and healthy, mutual growth. Companies can concentrate their talents, resources, and attention on what they do best by aligning with the complementary skills of like-minded organizations. Both can then achieve what neither can do alone, building a “collaborative comfort zone” where out-of-the-box ideas can be nurtured into innovation.
Here are some ways in which we have seen this deeper definition of “partner” result in advancements that solve for the complexities of modern video delivery:
From casual conversation to a more profitable C3 window
One of our clients, a premier name-brand for award-winning, binge-worthy programming, has evolved their video delivery to delight a global fanbase through a complex footprint of broadcast and OTT destinations. Our partnership has expanded as well, as we’ve collaborated to find new ways to maximize revenue and audience engagement. From one of these explorations arose a better way to maximize the availability of new video assets during the crucial C3 viewing period.
More viewers watch a show within the first three days after it airs (“C3”) than during the actual premiere. For this client, which has a multitude of highly successful serial programs, every hour of on-demand availability improves Nielsen C3 ratings – and, in turn, advertising revenue. Between a show’s premiere and its availability on on-demand destinations lies a huge production effort, as platform-specific versions are created for playback across devices and providers. “How can we open up the C3 window more quickly?” was the question that started an informal brainstorming session. That question has since snowballed into an initiative to find meaningful process improvements that open up this crucial revenue window more quickly.
Advertising agency meets drama-free delivery
This agency is one of the last great independent ad agencies. For them, a persistent challenge is that every brand they represent comes with unique requirements and contractual needs, making it difficult to “partner up” with one ad delivery partner. Decoupled workflows are commonplace in advertising, so the agency may not own the entire process for its client: for example, one company might be handling production and versioning services, while another is managing the ad delivery process. This partner knows how to craft messaging that elevates the brand for each client, and did not want to think about the complexities of keeping messages current across a multitude of living rooms and mobile devices. The depth of our partnership helps us to take a more creative tack to each campaign, resulting in a trusted delivery model that overcomes process disparity and makes quality ad delivery a virtual non-issue.
One example of this kind of collaboration began with a casual conversation about a large regional grocery chain under this agency’s care. Substantial ongoing work is needed to deliver fresh, up-to-date ads that showcase constantly changing discounts and special weekly offers. The answer was for the agency to work its magic, producing half of a 30-second day spot; from there, we put a process in place to receive video, audio, and static photo files of each week’s new offers, which our team would then compile into a new weekly spot. The new program streamlines the entire production and allows the client’s own creativity to shine, unencumbered by any quality or timing issues.
Same-year success for a new OTT brand
For practically a century, this partner has had an indelible impact on global dialogue through a stable of brands that include some of the most recognizable, lasting names in media. Their goal was not just to build a new video app, but to create a long-form, “lean back” experience that would greet viewers with an always-on smorgasbord of revolving entertainment content. The challenge: take the project from ideation to execution in three quarters.
This is where self-awareness as an organization elevates a partnership to the next level: this media giant is a king of content creation, but it didn’t want to get into the media delivery business – its focus had to remain on creating and investing in great content. Building, maintaining, and constantly upgrading a complex multi-platform delivery architecture is a monumental undertaking in both time and capital. The success of our partnership rests in the trust that both parties are truly playing to their strengths. We launched this groundbreaking, ad-supported service on time, providing the client with a streamlined way to manage content assets and deliver the long-form TV experience it was aiming for. It’s been a huge hit right out of the gate, proving so popular with advertisers and viewers that a planned migration into a subscription service was abandoned. The new app was downloaded more than a million times in mere months, and the average viewing time across platforms is more than 30 minutes. With the concept proven, together we’ve designed a rapid-deployment, turnkey OTT approach that this partner can apply to other brands in its catalog.
What are some examples of your great collaborative partnerships? Tweet us @ComcastTechSoln
The TV Future Trends Report
Business Model Innovation In A Multi-Platform World
It’s been ten years since Google acquired YouTube, Netflix launched a new streaming video service, and Apple launched its first iPhone. Fueled by the proliferation of high-quality connected devices and constant improvements to video delivery, the pace of change hasn’t slowed down one bit. As traditional broadcast and over-the-top (OTT) digital video workflows converge, successful businesses are adopting new tactics that support the demand for quality video on any screen while also prioritizing the creation of content that keeps viewers coming back.
Comcast Technology Solutions presents this new report on future TV trends, created by the international research and strategy consulting firm MTM, which explores video developments in the U.S. in the coming years and presents a set of industry perspectives on what the future of video delivery and consumption may look like.
Evolution of TV networks in a multi-platform world
Surprisingly, the majority of OTT services – 80% – are authenticated TV Everywhere (TVE) apps relying on pay-TV providers to manage the customer relationship. Broadcast networks have been quick to embrace and respond to the challenges and opportunities presented by the growth of digital video and multi-platform TV. They have invested significantly in new technology and products, and are well into the journey of creating a sustainable multi-platform business. Generally, new developments fall under three key categories:
OTT products and services: adapting to consumer behaviors and market dynamics.
Advanced video advertising solutions: enabling new formats and content performance measurements.
Technology and operations: driving innovation and elasticity in a fast-changing market.
Original content, created at scale, is a key driver of customer acquisition, and businesses are spending more time and resources than ever before to create it. At the same time, the costs and complexity of maintaining a competitive multi-platform offering are going up as well. Technology cycles are shortening, and it’s becoming increasingly difficult for businesses to see long-term value in building out a proprietary hardware / workflow system that perpetually drains capital.
Market evolution – challenges and opportunities
In the coming years, industry participants expect that the market will be characterized by diverse consumer viewing habits (and devices), converged broadcast and digital workflows, and increased competition as more businesses recognize the value of a video brand. These factors will result in a more complex distribution and monetization landscape, across owned and third-party platforms and channels.
Industry executives across the U.S. TV and video markets anticipate that the next few years will be characterized by diversification, convergence, and competition. Audiences will be faced with a growing range of content offerings across a range of platforms, services and devices – and will look for strong, distinctive brands, reliable cross-platform services, and access to the right content at the right time and at the right price.
TV network business model innovation
Television industry business models will continue to evolve, of course. Market disruptions will continue to be caused by advances in device, delivery, and advertising technologies, and by the ways in which those advances impact the choices and viewing habits of consumers. What steps do global operators need to take to strengthen business models and capitalize upon the new opportunities that will materialize? The report identifies seven building blocks – the strategic foundations of the next generation of elastic business models. Three of those seven building blocks are:
Establish a content innovation capability that can experiment and learn. Content providers need to be able to innovate with new content forms and formats across a range of platforms, and rapidly test and learn how audiences react.
Establish flexible windowing and distribution strategies: Digital distribution has created new windows, content purchasers, and distribution strategies – providers need to innovate, test, and learn to understand how to maximize reach and revenue across every pay window.
Develop flexible content management and multi-platform delivery solutions: As the TV and digital video market become increasingly aligned, providers should look to establish a flexible, cost-effective approach to innovation that can also adapt to an increasingly rapid pace of change.
For more insights and detail on future TV trends, we invite you to download the report.
VOD Advertising: Looking Past the Upfronts
With VOD programming continuing to rise as a percentage of overall viewing, it’s a good time to take a look at the ways in which providers are offering multiple ways to monetize it – and the challenge in trying to truly understand how the value of a VOD asset changes over time.
The Evolution of Tune-In Campaigns
It was a lot easier to get viewers excited about “tuning in” to a new program when the living room was the only place to watch it. Back then, the lack of choice made it a lot easier for consumers, too; but we don’t just decide “what” to watch anymore. We also decide how and when we’re going to watch it. Any new program has to fight an uphill battle to reach new viewers across a sea of devices and competition. Exceptional content can certainly do its part to sell itself, but optimizing its value in the marketplace takes a scientific approach.
The Upfronts season recently concluded for broadcasters – a time when they showcase programming plans for the year to generate interest and entice media buyers to purchase advertising space “up front,” in addition to the buying that occurs throughout the year (referred to as “scatter” ad spend). In 2011, the Interactive Advertising Bureau started its own similar event, the NewFronts, to provide a similar “futures market” arena for digital programming. The NewFronts isn’t representative of the entire digital content universe – some providers are combining broadcast and digital upfronts in their own presentations. Still, it does drive home the fact that regardless of how diverse our TV habits have become, advertising remains a growing economic engine for the industry.
A great show needs lots of mindshare in order for a media buyer to consider it a hot property for other advertising. But how can a simple “watch my show” commercial rise above the clamor from hundreds of channels? To execute a complete and compelling tune-in campaign, advertisers keep their focus on the audience in order to find the best way to connect a piece of content to the right viewer. This means:
1. Finding new ways to engage the right audience:
When viewers keep the online conversation alive, a new season or episode can transcend the individual tune-in experience and becomes more of a community event. By using social media ads to increase the engagement and participation of an existing fanbase, advertisers can put fans to work for the cause – generating more buzz and online dialogue. Creating ways to turn viewers into ad distributors (i.e. posting a trailer to Facebook) is a winning formula to bring in new eyes and coax lapsed viewers to tune back in.
2. Meeting each viewer on his/her own terms – and device:
These days, a good show can really get around. It might be shown on broadcast television, through a content aggregator (like Netflix or Hulu), and through a direct-to-consumer app. Each destination needs an ad version that’s tailored for the environment it’s being delivered in, whether it’s mobile, social, or any over-the-top (OTT) device connected to the television. And, when a program moves from live linear programming to a video-on-demand asset, the details of an ad’s message need to change too.
The Ever-Changing Call to Action
It’s that last bit – monetizing a media asset throughout its lifecycle – that makes managing a tune-in ad campaign feel a lot like herding cats. A top-tier program might start its ad campaign months before its premiere. A countdown approach would require a change to every ad spot on every platform as the message changes from “May 2018” to “In Three Weeks” to “Tomorrow” to “On Demand.” The sheer volume of asset versioning and tagging for one program can be astronomical. And, now that every piece of content exists in perpetuity, there’s a whole new layer of tune-in advertising to bring viewers back to old content that might be getting a new lease on life. If Beverly Hills, 90210 gets a third remake, you can bet that the original series is going to get a monetization make-over.
The TV lovers among us know that as content consumers, we’ve got more control over the future of our programming than we ever have before. The advertisers among us know that if they want to win the hearts and minds of a video-driven world, they’re going to have to do it one screen at a time. Fortunately for them, new delivery mechanisms are allowing them to concentrate on what matters most – creating a compelling message that can be heard above all the noise.
Interested in learning more about tune-in or other high-intensity campaign delivery? Contact us
Planning for the Future of Media Delivery
Can a content provider or programmer really "future-proof" its delivery model? Asking the question internally isn’t so much about polishing your crystal ball, but about defining your priorities as an organization.
Seeing the (Media Technology) Forest Through the Trees
It’s challenging to see the forest through the trees when new trees pop up to disrupt the view. New media technologies, delivery mechanisms, and business models emerge at such a frequent pace that organizations can find themselves taking focus away from their core mission to figure out how to incorporate each new iteration.
One of the most impactful ways companies can maintain their vision is through a holistic view of the media technology lifecycle. These three fundamental tenets can help them develop a comprehensive view of technology platforms:
1) Develop cross-functional KPIs
Certainly, it’s not news that specific, measurable and attainable KPIs, or Key Performance Indicators, are absolutely essential for business. The trick is to develop and implement performance metrics that work to align different groups and organizations. KPIs from different groups are often developed separately; but interconnecting them is a powerful means of supporting the higher-level strategic objectives of the entire organization. The insights gained from cross-functional KPIs help organizations to better understand how groups work together and interact, and develop a more comprehensive view of the entire media technology lifecycle.
This approach is not limited to internal divisions. As companies develop deep partnerships with vendors, they can also benefit from a cohesive view of their content’s performance from creation to consumption. I took a deeper dive into this topic in this blog.
2) Refine Workflows
Innovation and testing are key to future success. One way to ensure a safe space for such testing is to review and harden mission-critical processes and systems. The first step in hardening systems is identifying the systems that must operate at high availability, and then building the infrastructure around those systems to mitigate as much risk as possible. This could mean investing in power and cooling, or creating dual and diverse paths for networks. Whatever system or process is critical to the business, aim to have redundancy and backups in place. When key workflows are bolstered and refined to be relatively bullet-proof, organizations can turn a more critical eye to the business from a high level, confident that the most fundamental parts of the business are sound.
3) Know thyself
As companies strive to achieve a holistic grasp of the media technology lifecycle, they should start by weighing everything against the current state of their business. How are development teams working cross-functionally? What efforts are being duplicated? Where can automation be implemented? What tools does a team use exclusively, and which do they share? These questions are just the starting point for a business that wants to take a holistic, comprehensive view of their media technology lifecycle and improve upon it. But they can only be answered with a real and honest evaluation of the overall health of the organization.
Applying these three tenets can help position companies to take a more holistic view of their media technology lifecycle and better equip them to be more agile and adaptable.
Advertising: Seamless Transitions
The best way to manage big change is to eliminate concerns before they become problems. Even the most positive, well-researched changes can trigger fears, regardless of how “change-savvy” your business is. The human animal is hard-wired to meet any major upheaval with a series of internal alerts. Have you ever started a new job? Joined a new team? Gone on a date? These are all positive events, but your trepidation can usually be mapped back to two things:
How important is this to me?
What is it that I don’t know?
As the fourth largest advertiser in the U.S., Comcast Technology Solutions has developed best practices for new clients that replace fear of the unknown with the confidence of transparent and proactive engagement. Our onboarding process for new clients succeeds by nurturing superior relationships throughout the media delivery ecosystem.
The personalized support provided by our dedicated Account Management team is critical to customers like Tailored Brands, Inc. (you can read the full story here). Tailored Brands effectively cut its video distribution costs in half, and now has a much clearer view of day-to-day order processing, ad traffic, and delivery confirmations. Our track record of stellar relationships is one reason why Comcast Technology Solutions is proud to maintain a 99% client retention rate.
Frictionless onboarding, by the numbers
Comcast Technology Solutions can set up a new client within hours – from ingest to distribution. Here’s how we take the fear out of change, and bring the Ad Suite into the fold with no downtime or data loss:
1. Get to know your new Account Manager
The Ad Suite’s success is built upon a foundation of great relationships between our Account Managers and the clients they serve. Our goal of acting as an organic extension of your own organization begins with a go-to contact who is attuned to your unique needs.
2. Set up your account — upload preferences, production services needs, and destinations
The Account Manager works with each client to upload preferences and destinations, and build a plan to address any production services (such as closed captioning or watermarking) that the campaign needs.
3. Notify your destinations
Comcast Technology Solutions will send each destination a change notice that provides effective change dates, an email address for IT teams to whitelist, and a resource to assist with any questions.
4. Asset upload testing and QC
Our testing procedures and quality best practices have been honed by decades as a top-tier advertiser. Your campaigns utilize the same technologies and workflows that we use for our own content.
5. Customized training and takeaway guide
Full-service support is part of our commitment to every client. Training and support documentation is included, and designed to meet the needs of your unique program.
Welcome to Comcast Technology Solutions!
Four Megatrends That Will Shape Your Media Technology Strategy
It almost goes without saying that staying ahead of the current trends in media technology is imperative to success. As a 25+ year media and entertainment veteran, I’ve seen a lot of industry megatrends shape our business, but at nowhere near the speed that they are evolving today. These megatrends will impact how content providers, global operators, and media technology companies look toward the future and determine unique strategies to win in the market.
The Changing Nature of Content Distribution
Ask yourself this question: Five years ago, how many choices did consumers have for media consumption? You probably don’t know the answer off the top of your head, but you would likely (and accurately) guess “a lot fewer than today.” In fact, as of today, there are over 1,000 OTT options available worldwide.
According to a 2016 Parks & Associates report, 64 percent of U.S. broadband households subscribed to an OTT video service, and 31 percent subscribed to multiple services. This exponential growth is impacting the media industry in two significant ways: consumers have become curators of their own content, and content providers have a massive variety of choices of where to distribute their content.
Though they have millions of hours of programming available to them, the average consumer watches just over five hours of content each day. Content choices are determined by accessibility, relevance, and value. So how do you differentiate yourself from the pack? Features such as search, voice, recommendations, and interface design will likely continue to drive success for content aggregators and OTT services.
Multi-platform Distribution is Here to Stay
Today, 71 percent of U.S. broadband households own four or more video-capable devices. This is a stark reminder of the rapid change in content consumption and consumer behavior from even a decade ago.
Some of the clear indicators of this evolution are that mobile consumption is rapidly rising, on-demand services are increasingly popular, social media is increasingly used as a media platform, and younger generations are embracing viewing channels across the entire spectrum of options. A recent study revealed that 90 percent of 18-24 year-old U.S. internet users watch video on a smartphone or tablet, while the 25-34 year-old demographic is pacing at around 86 percent.
These trends – and how content providers react to them – will define the market moving forward. New devices and ways to view content add technical complexities to delivering content such as new formats in which to deliver, changes to metadata, and varying network bandwidths. Agility and speed will be key components for success within this trend.
The Dizzying Pace of Technology
You’ve probably heard the quip that a new computer is obsolete before you get home from the store and have a chance to unpack it. In the media market, entertainment and technology are evolving at such a quick pace that it is increasingly important for businesses to evolve and adapt at a faster rate.
However, there is also opportunity within this environment. New models of content distribution rely not only on breadth of content, but also on speed and channel diversity. While these changes create new challenges, evolving technologies make it possible to reach these ideals while also maintaining long-term value and satisfying business goals from a cost perspective.
Business Elasticity is Essential
Content consumption behaviors are changing rapidly. Organizations who are trying to meet those needs would be well served to consistently seek new ways to use the available tools and technologies to stay agile and adaptable.
One such tool for business agility is leveraging cloud technologies. Early adopters are finding that the benefits of the cloud go far beyond offsetting CapEx costs — it can be used to support more elastic business models that scale up and down quickly, and it provides the flexibility to test new structural ideas or process efficiencies. For example, an entire workflow can be built in the cloud almost instantly in order to test a single step of that workflow or the entire chain.
2017: The Year of Agility
Perhaps the single theme that arches across these megatrends is that they are all shaped by the rapid and fluid nature of the media environment. From the changes in consumer behavior to the advent of multiple distribution channels, media technology organizations are faced with new challenges almost daily. I firmly believe that companies who take these trends into account throughout their business strategies, whether investing in new technology, working with partners or vendors, or looking at multiple distribution strategies, will be positioned for success.
Read More
March “Ad-ness”
The craziest annual college basketball tournament has commenced! College basketball’s premier tournament is exciting because of its unpredictability; with 68 teams competing in a single-elimination battle for glory, anything can happen. Behind the scenes, everything needs to run like clockwork in order to keep audiences on the edge of their seats – especially when it comes to delivering the ads that keep the action coming.
For advertisers, the unique nature of this annual sports phenomenon represents a huge opportunity to capitalize on a diverse, expansive, and short-term audience. Average game viewership is in the 10+ millions, and the championship typically receives 20+ million viewers, putting ad buys at a premium.
And those ad buys can really pay off: companies who can take advantage of the opportunity see some great results from multi-platform TV advertising campaigns like those delivered during the tourney. A recent report from Accenture shows that multi-platform TV advertising can have a significant halo effect on other components of an integrated campaign such as search, display, and short-form video advertising.
Understanding and maximizing the advertising value of major sporting events is a sweet spot for Comcast Technology Solutions. Our Ad Suite offers advertisers the same processes and technologies that we use to manage and deliver our own ad inventory. Rest assured that every one of your spots will be handled with great care and expediency, whether it's airing during a major televised event or a recurring show.
So when the game is on the line, make sure you have a partner who will have your back in ad distribution.
Read More
Relationship Advice for Strong Media Technology Partnerships
Great content delivered through an extraordinary viewing experience is the key to lasting audience love, and the primary mission of every content provider. To maintain viewer loyalty amidst constantly evolving definitions of “great content” and "extraordinary experience,” brands engage with technology partners who can help them keep the focus on what matters most. How do you tell if a prospective alliance is worthy of your commitment?
There are plenty of advice columns out there when it comes to choosing technology partners. For the purpose of this article I’m going to assume that there are some obvious basics that you establish when being courted by a new technology suitor, such as:
Trust: You can see a demonstrated track record of responsible business practices and ethics.
Transparency: There are no operational walls that could obscure the work this partner will do for you. Processes, best practices, operations, and methods of communication foster a seamless exchange of knowledge.
Track record: Simply put, there are reviews you can read before buying. A great partner has a history of success and strong client retention rates, and stands behind the work it performs.
Beyond these partnership “no-brainers,” what are some qualities to look for in a long-term media technology partner?
A great technology therapist
Ever wonder why a professional therapist is so valuable? Beyond the basics of trust and communication, the personal truths uncovered by an experienced, objective guide lay a foundation for lasting transformation. A great media technology partner enables “self-awareness at scale” in very much the same way.
Consider partners who can help you to better understand yourself. There are huge advantages in an objective end-to-end assessment of your processes and infrastructure, performed by a seasoned technology partner. Your business benefits immeasurably from having an unbiased view of where your software and hardware infrastructure truly stands from a competitive standpoint. Opportunities to optimize or automate processes can be more effectively uncovered.
It’s common for technology decisions to be made purely on financial grounds, or based on preferences or comfort zones of existing knowledge. A compelling outside opinion offers an alternative approach, provided from a much broader context.
A strength trainer for systems and workflows
Hardened core systems are the foundation of a technology infrastructure that requires athletic performance and always-on availability. Sports professionals build their workout regimens around core strength for a good reason: your strength as a functional whole stems from a powerful and protected center. Your systems, and your partner’s systems, are no different. Risk mitigation is not the place to cut corners. The positive downstream impacts that a strong partner brings to your operational stability are worth it, and they allow your business to focus on what it does best.
Reliability is the name of the game, and the right partner can demonstrate redundancies, contingencies and recovery plans for every touchpoint in the workflow. The increasing speed of innovation underscores the need for a technology plan that leaves no stone unturned. It’s a matter of simple performance physics: as you accelerate, response times get compressed. A “real keeper” of a technology partner protects you, both by reacting swiftly and by helping you see further down the road than you could otherwise. The more your partner can future-proof your operation, the better.
A yoga guru for elastic business models
“Flexibility” is one of those important-sounding-yet-vague business buzzwords that only really defines itself when your business is stretched to the limit. One of the most fascinating aspects of modern content delivery is the explosive way that one new innovation or approach can fundamentally alter your customer relationships. Just how flexible does your business model need to be? If your partner can help you to scale up your resources or adopt a new tactic to respond to changes in your delivery landscape, can it do the same thing in the opposite direction? A successful delivery strategy has to look past the time-to-market desires of this quarter’s roadmap, or even this year’s.
Elasticity in your business approach goes hand-in-hand with the strength of your total technology infrastructure. A great partner won’t lock you into a model, methodology, or proprietary lynchpin that keeps you beholden to their service. Instead they will support your ability to maintain agility without degrading quality of service. You’ll also have the freedom to experiment and test new customer experiences and monetization strategies. The best media technology partners result in the healthiest of relationships – a mutual investment in each other that enables everyone to have skin in the game and play to their strengths.
Read More
Measuring the Immeasurable
Media technology is evolving so quickly that it’s increasingly difficult to take a clear picture of your content’s performance. To complicate things further, advances that bring content to market faster often lead to innovations that change the demands of the end user. Between these two realities are companies that only thrive when technology and operations align. Across the industry there is a real need to develop and maintain a high-performance operation model that can embrace constant change, capitalize on new technologies, and deliver the measurements that lead to continuous improvement. To get there, a more holistic approach to performance metrics is needed.
What’s the key to KPIs?
Quality of Experience (QoE) is one of the defining performance metrics of our day. Delivering a consistent quality of experience across devices takes concerted effort and execution; not just from one provider, but possibly from multiple vendors who all contribute specialized processes and best practices to a larger delivery effort. When it comes to a viewer’s experience, the key performance indicators (KPIs) are a constant challenge to improve; but at least they’re fairly straightforward to define. Is the image quality optimal for the viewer’s connection and device? How often does buffering intrude on a program? How easily are consumers finding your content? How are they responding to monetization models?
When it comes to measuring performance within a vendor-rich workflow, KPIs get a lot harder to define and assign. How do you measure quality accurately if the file you delivered goes through a multi-vendor transcoding process before it gets to the end user? How does a service level agreement build accountability if it doesn’t consider the upstream or downstream impacts of each touch point?
The challenge is not just about end-user quality – it’s also about the value of content. Monetization models rely on seamless workflow operation almost as heavily as they do on increasing consumer lifetime value. For example, consider the relationship between linear programming and time-shifted video on demand (VOD) viewing. To maximize Nielsen C3 ratings, broadcast VOD assets must be available as close to the original air-time as possible. A program that needs to be available through several different OTT services at once needs to be delivered on time, every time, for value to be assessed accurately.
It takes a smarter village
One way to simplify this complexity is to create a KPI measurement structure that has every player in your workflow ecosystem reading from the same playbook. Again – the pace of technology is not going to relent, so a media distribution strategy needs to consider every piece of hardware and software, physical or virtual, as you create a measurement plan that permeates the entire workflow.
Performance measurement is a perpetual practice. For media companies however, continuous improvement encompasses not just operations and user experiences, but also a close scrutiny of the measurements themselves. The best strategies of 2017 will acknowledge the interdependencies created by every inhabitant of a workflow landscape, and measure success accordingly.
Read More
OTT Monetization: Charting Your Course
New over-the-top (OTT) video destinations are launching with such regularity that it must be an easy voyage to the promised land, right? The answer is no – it’s hard work. When it comes to monetization strategy, course correction and adaptation are the norms and not the exceptions. Success is out there for the companies able to deliver a truly differentiated experience, but once you’ve got an audience’s attention, is there a golden compass that keeps you on a path to profitability?
What do YOU pay for and why?
Think about your own viewing habits for a minute. Let’s say you’re exploring an inviting new service and you’re finding plenty of stuff you’d like to watch:
What makes you want to actually spend money for it?
What makes you want to keep paying for it?
When is advertising okay for you, and when is it bothersome?
How often do you just find something else to watch?
For the purpose of this exercise let's just pretend that performance isn't an issue (although of course it is). Great content and an extraordinary playback experience over any device are critical metrics, but they don't automatically translate into viewers eager to throw money at the screen. The value proposition for an OTT brand is as personal as the answers to the above questions. And, if the trend towards personalization and more localized content continues, it's going to have a profound impact on how your service is valued.
Flexibility is key to long-term health
Providers and advertisers will continue to come up with new ways to monetize; but the only real constant in the equation is the need to move your service and your audience into a more meaningful dialogue with each other. OTT providers have a delicate balance to maintain if they want to build a profitable brand without making viewers feel like guinea pigs. Everything hinges on staying relevant and engaging to consumers. The constant push to compress time-to-market and maximize ROI means that monetization strategies need to demonstrate a keen awareness of where your service sits on each viewer’s priority list – and respond accordingly.
There are currently three primary monetization models, each with their own challenges. Keeping engagement high and churn low are obvious and constant assumptions across the board.
Subscription-based (SVOD):
As we know from our domestic market, the subscriber-based model rules the roost – think Netflix, Hulu, Amazon Prime, or any number of others. With so many new video services, the days of being valued solely on library depth are gone. Original content that keeps subscribers glued to the screen is expensive to create, and the big players are spending billions to either make it themselves or coax popular programs into jumping ship.
A standalone SVOD destination – especially a niche-focused one – might have an entirely different mission than one that exists to augment a larger brand. Does your long-term strategy hinge on gaining a rapid ROI? Are you opening up an SVOD destination as a way to expand viewership, or to establish a new delivery platform? Does your operating capital provide the flexibility for heavy experimentation with introductory pricing? A large company that’s diversifying might be able to afford a negative margin while it works the kinks out, but a shiny new brand with an unknown fan base needs to get it right the first time. Unless it has the leeway to ignore profit as a success measurement, an SVOD offering needs to enter the fray with eyes wide open and an agile, efficient direct-to-customer (D2C) operation.
Advertising-based (AVOD)
Ad-based services are not just here to stay – they’re getting better on every front, and consumers accept them as long as they don’t over-encroach. Technology and process improvements are improving the industry’s ability to determine the true value of a piece of content across platforms, channels, and time-shifted consumption. At the same time, targeted and localized campaigns are further personalizing the user experience. Still, AVOD services need to be attractive to advertisers who have a lot of options for their media spend – and those options increase with every new OTT service.
Transaction-based (TVOD)
Simply put, a TVOD service puts content out there for consumers to purchase or rent by the slice. On one hand, it’s a less complicated way to set up an OTT shop, but there are a lot of places out there to buy content. Aside from the costs of using an established platform and managing the day-to-day transactions, the big challenge (outside of playback quality) is pretty traditional: you’ve got to sell enough to stay in business.
A “blended family” of xVOD monetization models
A mature OTT monetization strategy will likely emphasize one of the above “big three” models, but a creative hybrid approach can really pay off. Strong content is an awesome revenue generator that can hold up to some interesting new combinations. Subscribers might be fine with the judicious use of advertising in some instances. TVOD has demonstrated value as an effective first-tier relationship, leading into a more reliable subscription payment. Maybe your audience will respond to a hybrid “metered” approach where the lines between transaction and subscription are blurred, with access purchased in time-specific chunks.
A successful monetization strategy is anything but static. It has to be built on a foundation that allows for scalable quality and highly informed trial-and-error. More than just “price,” the decisions made on how to organize and monetize content have an immediate impact on how your audience relationship develops. The word “holistic” gets thrown around a lot in business, but in this space it’s crucial. OTT destinations need to understand their audiences, and build their revenue strategy with a concerted effort of innovation, best practices and rapid response.
Read More
Why Advertisers Should Love Server-Side Ad Insertion
Online advertising in all its forms, from banners to pop-ups to videos, generates a significant amount of revenue for both advertising agencies and media companies. Thanks to innovative technologies that are able to overcome anti-advertising tactics and better target consumers, advertising revenue continues its upward trajectory.[1] One of the most talked about tactics recently is server-side ad insertion (SSAI); I’ll illustrate why:
I was talking with a friend of mine recently, and she mentioned that she uses an ad blocker on her laptop. When I asked her why, she told me that she hates overly advertised pages and the slow load times that come with all that. I asked her about what shows she watches online, and she mentioned binging on The Walking Dead. Her ad blocker strips ads out of her stream and she didn’t even realize it. What’s really concerning to the media company providing the content is that their attempts at monetizing her zombie-watching experience have hit a brick wall.
So how do we re-enable video ads on her content without driving her away from her favorite shows because of slow video playback? Server-Side Ad Insertion, also known as SSAI or “ad stitching”.
To fully understand the advertising powers of SSAI, you also need to understand how its client-side counterpart works:
Client-Side — When a consumer plays an ad-supported video, the player on their device pauses playback at an ad break and makes a separate call to an ad server, which then directs the player to the ad content. This process causes buffering, and it also allows ad blockers to determine that the new content is coming from a known ad provider. The ad blocker then prevents the ad content from being played.
Server-Side — All advertising decisioning and integration is done at the very beginning of the content playback, allowing one consistent stream with all content and ads included, and protected from ad blockers.
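To make the contrast concrete, here is a minimal Python sketch of the server-side approach (the segment names are hypothetical; real SSAI systems operate on streaming manifests such as HLS playlists). The ad segments are stitched into the content playlist before the player ever sees it, so the client receives one continuous stream with nothing for an ad blocker to intercept:

```python
# Minimal SSAI "ad stitching" sketch with hypothetical segment names.
# The server decides on ads up front and splices them into the playlist,
# so the player fetches a single unified sequence of segments.

CONTENT_SEGMENTS = ["show_part1.ts", "show_part2.ts", "show_part3.ts"]
AD_BREAKS = {1: ["ad_a.ts", "ad_b.ts"]}  # ads inserted after segment index 1

def stitch_playlist(content, ad_breaks):
    """Return one unified playlist: content plus ads, indistinguishable
    to the client from a single continuous stream."""
    playlist = []
    for i, segment in enumerate(content):
        playlist.append(segment)
        playlist.extend(ad_breaks.get(i, []))
    return playlist

print(stitch_playlist(CONTENT_SEGMENTS, AD_BREAKS))
# ['show_part1.ts', 'show_part2.ts', 'ad_a.ts', 'ad_b.ts', 'show_part3.ts']
```

Because the ads arrive as ordinary video segments from the same origin as the content, there is no separate ad-server request for a blocker to match, and no mid-stream pause for the player to buffer through.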
Although client-side advertising continues to be a very effective way for media companies to monetize their content via dynamically inserted ads, the rise of ad blockers and the drive toward a smoother, more "broadcast-level-quality" approach to online video have made this method less favorable in recent years.
Ad blockers are essentially browser plugins that have a configured list of regular expressions that match known ad server URLs. When a browser makes an ad request, and the request matches one of the ad server URLs, the ad blocker will apply a change to the page. The page will then simulate an error from the ad server URL and/or attempt to remove an element from the page, thereby effectively blocking the ad.
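As an illustration of that matching step, a blocker's URL filter can be sketched in a few lines of Python (the patterns below are hypothetical examples, not taken from any real filter list):

```python
import re

# Hypothetical filter-list patterns matching known ad-server URLs.
FILTER_PATTERNS = [
    re.compile(r"https?://ads\.example\.com/"),
    re.compile(r"https?://[^/]*\.adserver\.net/"),
]

def is_blocked(url):
    """Return True if the request URL matches any known ad-server pattern,
    in which case the blocker would cancel the request or hide the element."""
    return any(pattern.search(url) for pattern in FILTER_PATTERNS)

is_blocked("https://ads.example.com/banner.js")   # True  -> request dropped
is_blocked("https://cdn.example.com/show.m3u8")   # False -> passes through
```

The client-side ad call is easy prey for this kind of filter precisely because it is a separate, identifiable request to a known ad domain.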
According to a recent report from Kantar, 18 percent of internet users worldwide use ad blockers.[2] In the U.S., an estimated 32 percent of internet users will use ad blockers in 2017.[3] In 2016, an estimated $20.3 billion in ad spend was blocked.[4] That's significant lost revenue.
Had the media company providing my friend’s zombie-a-thon used SSAI, she would have a smooth, pleasant, terrifying experience across the entire binge, complete with correctly targeted, relevant and cleanly delivered advertising. Plus, the ads that are no longer blocked can now generate revenue as originally intended.
From the advertiser-side of entertainment, what’s not to love about SSAI? To learn more about server-side ad insertion at Comcast Technology Solutions, check out the solution brief.
[1] Digital Advertising Report, Adobe Digital Index 2015
[2] Kantar TNS, “Connected Life,” September 28, 2016
[3] eMarketer, June 2016
[4] Forrester, "Ad Blockers Rock the Media Ecosystem," May 10, 2016
Read More
Buzzwords, an End-to-End Companion
There has to be some sort of mathematical correlation between the speed of innovation within an industry, and the speed in which buzzwords lose their relevance, become distorted, or die altogether. We love buzzwords – especially with regard to complex ideas, because they imply a lot of specific meaning without taking up a lot of brain space, but here’s the thing: taking a buzzword at face value can create a pretty misleading picture about what something actually accomplishes – which can cause a lot of trouble when budgets and deadlines are on the line.
The fuzzy nature of some buzzwords is not just a matter of lazy interpretation. It’s also commonplace for a marketing effort to spin the meaning of a popular buzzword to suit a specific agenda. That may make for an inspired campaign, but it doesn’t do much for consistency of meaning. For example, “reengineered” began life as a buzzword back in 1990, in a Harvard Business Review article written by Michael Hammer, a former MIT professor. His specific definition of “reengineering” focused on streamlining business workflows – eliminating process waste instead of simply automating steps that provide no value. Since then, the term has been co-opted for so many purposes that it’s barely a buzzword. Today, it just means “new and improved.” At this point, fast-food companies have even “reengineered” their menu.
Buzzwords are useful attention-getters, but without specific context, how a word is interpreted can be all over the map. It is easy for the definition of some trendy or commonly accepted terms within our own industry to be distorted in order to shoehorn them to marketing objectives or product goals. Perhaps we need to reengineer our approach to how we interpret some terms, and maybe even throw out a few.
“End-to-End” – to what end?
About a year ago we wrote this blog about what the term “end-to-end” really means, and how it’s actually peddled in the marketplace. Ultimately, whenever the term is used, it should be viewed as an invitation for scrutiny. It cannot be assumed that it actually means “every step is included.” What are the end-points in question? Within the context of a video workflow, does the solution truly encompass the entire soup-to-nuts process, from ingest to delivery to playback? Or, is it merely solving for one step in the larger workflow?
To make the term even more suspect, in many respects the “end-to-end” solution is actually just an amalgam of vendors, which doesn’t deliver the process efficiencies or cost savings implied by the term. Whenever you see “end-to-end,” hold it accountable to your standards to ensure that it’s aligned with what your business needs it to mean. Not all end-to-end solutions are created equal.
Is “OTT” still “over” anything?
“Over-The-Top” – where did the term come from? Well, in the First World War, “over the top” was a strategy by British soldiers to charge up and over trenches dug into the battlefield. For us, it refers to a business model of delivering content online, “over the top” of traditional cable subscriptions, without the use of a multiple-system operator (MSO) for controlling or distributing the content. When viewed within the context of the initial stare-downs between new internet content startups and traditional broadcasters, it’s easy to see how such an aggressive term became a cool way to differentiate a delivery model.
But is it really the right term anymore? OTT offerings are working more closely with major broadcast companies than ever before, to deliver video on demand (VOD) in a universe of multi-platform “on any device” expectations. For many, success is now being realized by reaching customers “through” platforms and techniques developed by the broadcast industry and not “over” them. Sure, you could argue that because the service has a separate charge that it’s still “OTT,” but that’s a really loose interpretation of the original standalone intent of OTT as an industry buzz-term. The market is getting noisier with every new OTT launch, making it harder for a new brand to get noticed. If an OTT business wants to maintain an audience, then quality of experience and speed to market are as critical as a strong monetization strategy. These are factors that drive up the cost of entry, and a strong OTT-broadcast alliance can mitigate that. The lines are not just blurring between broadcast and OTT – they are being erased. We’re going to need a new term here pretty soon.
“Multi-platform” is now an ante
Okay: “multi-platform” means that your service works on different devices in some form, so for the literalist there might not be much to argue about. But in an experience-driven industry like video content delivery, does it even mean much anymore? The breadth and scope of an offering’s device ecosystem is just one part of the picture. What does playback look like? Do I have to do something different with each device? Will discrepancies in quality make your multi-platform claims a moot point? If I access your content from more than one device, will my other devices stay abreast of any changes I make? Some of those concerns can be solved by switching to a word like “cross-platform,” but not all.
The obvious comparison to “multi-platform” is “multi-channel,” which is so similar it can almost be an exercise in semantics – but there is a relevant point: In service industries, for example, there is plenty of conversation about the evolution from disparate multi-channel support into a more holistic “omni-channel experience.” In it, no longer does it matter how a customer chooses to interact (phone, chat, text, social media, etc.), a consistently high quality of service is delivered that meets or exceeds a customer’s expectation. Likewise, when we see “works on any device” in terms of video delivery, we might be thinking in terms of multiple devices; but what customers really expect is “omni-platform” quality – a consistent playback and user experience that’s familiar, trusted, and extraordinary – no matter how/when/where it’s consumed. “Multi-platform” is a word that needs to be held to an increasingly high standard.
When the buzz wears off, “Caveat Emptor” still applies
“Buyer Beware” is definitely a great-granddad of all business terms, and within a complex technology field like ours it applies more than ever. The true meaning of many buzzwords we rely upon is most certainly personal, but it’s really our expectations – and the expectations of our customers – that supply a lot of the context. With a clear understanding of what we want the end-user experience to look like, we can ask the right questions, move in the right directions, and align with providers that are really going to bring our vision into focus.
Read More
Broadcast and OTT: The Year of Converged Workflows
Best practices in delivery workflows and monetization processes can be leveraged by both broadcasters and OTT providers to create efficiencies delivering high-quality content to a multitude of destinations. This blog details this latest OTT trend.
Read More
Leading the Collaboration of the Ad Industry
The creative ecosystem of the advertising industry is more multichannel and global than ever. Content for both traditional and emerging platforms is an increasingly complex and challenging dynamic for advertisers and agencies to navigate. We all know managing multifaceted, global ad campaigns across stakeholders is both time-consuming and prone to errors, elevating overall spend as well as brand risk.
To compensate for this complex landscape and the variable daily demands, specialized companies exist to service the intricate and customized segments of this tangled ad ecosystem. Comcast Technology Solutions has a best in breed ad distribution platform, and recently announced an alliance with other industry leaders to complete the ad asset management workflow.
As you may have read, we just announced our participation in the newly formed Ad Consortium. Within the Consortium, we represent domestic delivery, aligned with our co-founders Adstream, Burns Entertainment & Sports Marketing, Entertainment Communications Network (ECN), and The TEAM Companies. Each founding company plays a key role in the ad process. The Ad Consortium will combine best practices across traditional and emerging media, creating a seamless integrated process for advertising content, benefiting advertisers and agencies alike.
With the thousands of spots delivered each day around the U.S., it’s imperative to not only use current asset management and delivery best practices, but to continue to refine and enhance those processes. Participation in this consortium will allow us to deliver tremendous value to advertisers and agencies, as they can now leverage an alliance of these leading ad related industry solutions.
The Ad Consortium’s objectives include:
Identifying the best protocols in key areas like file-sharing, analytics, talent and rights usage, and more, at each step of the creative process.
Adopting common nomenclature to provide a smooth and efficient process that improves accuracy and eliminates operational waste.
Using tools like Ad-ID as the common identifier and metadata source for ad distribution, traffic, talent payments, and rights and asset management across all platforms.
Steering marketers through the maze of new and emerging media, (VOD, OTT, HTML5, social, etc.) by enabling efficient ad delivery, tracking and management.
I look forward to sharing our work with you here, and at the Ad Consortium website: http://www.adconsortium.org/.
For more information on how Comcast Technology Solutions can keep your ad delivery at the forefront of technology and best practice advancements, check out the Ad Suite or contact us.
Read More
What is Live-Plus-35?
The entertainment and media industry has been aflutter about Nielsen’s plans to launch “Live-plus-35” this fall. This would entail extending the ratings window to 35 days beyond the initial live broadcast of a TV program. Considering that today’s TV experience involves multiple devices and platforms that support delayed viewing via DVR recording, VOD, etc., it shouldn’t be a surprise that consumers are becoming more and more likely to watch TV content after it has aired. According to Glen Enoch, Nielsen’s Senior VP of Audience Insights: "We have found that audiences continue to grow beyond seven days in every instance, some by 58 percent among adults 18-to-49."[1]
For networks, going from 3/7 to 35 days should ease their frustrations over not getting full viewing credit for their programs. For example, there is a growing trend of consumers preferring to watch drama series and animated comedies over time periods much longer than three days.[2] In fact, some shows are mostly watched via delayed viewing well outside of the Nielsen C3/C7 window and consequently have low ratings because they have small in-window audiences. In other words, these shows become undervalued despite having high consumer engagement.
Until Live-plus-35 arrives, roughly 75 percent of broadcast and cable networks are now insisting on using Nielsen's C7 as the primary way to measure success when setting upfront deals.[3] Although C7 has been around for a while, Nielsen only allowed C7 in Canada until this year. Now that C7 is allowed in the U.S., the majority of broadcasters and networks are starting to drop C3. At just four days more than C3, this small extension of the ratings window is expected to increase revenue by 1-3 percent.[4] Now consider the potential impact of adding 28 more days to C7…
Imagine the kind of impact Live-plus-35 will have!
[1] “Introducing Live-Plus … 35-Day Ratings(!) as TV's New Normal Struggles” by Michael O’Connell, The Hollywood Reporter, June 2016
[2] “”
[3] “75 Percent of Networks Will Use C7 Advertising Metric for Upfront Deals” by Jason Lynch, ADWEEK, June 2016
[4] “More C7-Driven Media Buys? Next, Look For Data-Driven Deals” by Wayne Friedman, TVWATCH, June 2016
Read More
Open Minds, Open Source, Open CDNs
A Content Delivery Network (CDN) is essential for any company operating in the entertainment and media space, many of which use multiple CDNs. The standard definition of a CDN is relatively simple – a collection of geographically-dispersed servers that optimize delivery of all kinds of content based on the end-user’s physical location. Perhaps stating the obvious, reducing the distance between the server and the end-user will lower the potential for degradation, and improve the content consumption experience. However, what aren’t as obvious are the unique benefits that can be gained from using an open source CDN. Read on to find out why some organizations go with the ‘open’ option:
Collaboration – An open source CDN relies on a community of developers, website administrators, and network owners to build, maintain, and improve the CDN software. These open source communities often have a mix of skill sets and industry backgrounds, but work together to reduce development costs and contribute to highly performant CDNs.
Lower Cost – Because so many companies and individual users are contributing to the community and feature set, this lowers the overall cost for all community users.
Speed to Market – Open source projects often deliver many critical features over short timelines because contributions come in from multiple high-profile and well-funded companies in parallel.
Independence – Using open source software ensures that a user’s web project won’t be dependent on a single company’s funding, road map or technology, and therefore is able to move forward despite individual company setbacks and changes in the marketplace. Further, if a user decides they need to change their technology, they are able to do so without wasting a lot of resources because they didn’t have a large amount of capital invested in one system in the first place.
Support Flexibility – Open source software allows users to decide how much they want to do on their own, and also gives them multiple options for support should they choose to get help. With open source software, it is also much easier to hire people with experience in a particular technology since multiple companies both develop and use open source software.
Configuration – Although a community builds and maintains the CDN software, a user decides how they will use this software… such as which optional features to implement, how to configure services to deliver content, and which hardware to use.
Diversity – Open source CDNs allow all types of individuals and companies to submit features and functionality to a project, and therefore have extremely large and diverse communities that collaborate on a wide variety of features and solutions to challenging issues.
Innovation – Due to the diversity and collaboration, open source CDNs are constantly innovating across multiple industry challenges. This leaves them well positioned to accommodate next gen media requirements, such as new protocols, interactive features, e-commerce systems, etc.
Standardization – One of the biggest benefits of open source CDNs is that they are open. Contributions to the CDN eventually reach a critical mass, which drives the community to innovate and simplify standards. While proprietary CDN platforms move further and further from industry standardization in an effort to patent and protect their intellectual property, open source constantly moves towards unified standards. This ensures simplicity and user-friendliness over the long haul. Comparing the Windows Media Server and Flash Media Server to HTTP-based streaming is a great example of how open standardization really does drive better long term outcomes.
Security – Security is a key concern when implementing any software into a technology stack. CDN software is no exception, as it plays a major role in distributing critical content. This is where open source shines. Compared to proprietary software, open source development ensures more “eyes” on the product. Having more individuals working with an open source project increases the likelihood that potential vulnerabilities and design flaws are uncovered much faster than with proprietary software development. This is evident with Linux and Apache - some of the most widely deployed and secure open source projects in the world. In fact, studies like the Open Source Hardening Project, established in 2006 by the US Department of Homeland Security, have found fewer security flaws in open source software than in proprietary software.[1]
All in all, the ongoing rise in content consumption via mobile devices, applications, and websites necessitates highly performant CDNs that can support their complex delivery requirements now and in the future. Open source CDNs are collectively built by some of the brightest minds the world over, and often lead the way in content delivery innovations. They also allow users to maintain complete control over their projects, and to easily customize and improve how their content is delivered. Open source is about sharing knowledge for the greater good of all CDN users. Whether initiated by businesses, nonprofits, or individual users, open source projects enable users to augment and accelerate their software development efforts.
The Comcast CDN is an active part of the open source community. Our commercial CDN uses the open source ‘Apache Traffic Server’ caching engine and we regularly contribute to this open source project. We even have our own ATS committers on staff. Comcast also designed and open-sourced a caching control layer called ‘Traffic Control,’ that allows users to tie together many ATS caches to form a large-scale CDN. The Traffic Control software is available in Github and includes a Traffic Router, Traffic Ops, Traffic Vault, Traffic Monitor, Traffic Stats, and Traffic Portal. And, we are proud to say that other companies are actively using Traffic Control. A recent example is Cisco, which created a commercial CDN offering for service providers on top of Traffic Control.
Beyond the open source community, customers of the Comcast CDN also benefit from our open-sourced technology. They are using one of the most performant open source caching technologies in the world, which is also used by top-tier companies like Apple, Yahoo, and LinkedIn. This ensures that they are getting the latest technology and highest performance for media delivery from the Comcast CDN. Click here to learn more.
[1] http://searchsecurity.techtarget.com/definition/Open-Source-Hardening-Pr...
Read More
A(nother) Guide to Ad Transparency
Transparency in advertising has been a very hot topic for the last few years, encompassing both financial and operational transparency. However, with many overlapping principles it can be difficult to understand the differences between the two, and more importantly, how they could impact the ad transaction process in the months ahead.
In an effort to demystify the meaning of ad transparency, The American Association of Advertising Agencies (4As) and the Association of National Advertisers (ANA) formed the “Media Transparency Taskforce” with the goal of creating a set of recommended ad transaction practices for agencies and advertisers. While the two groups have not settled on a final set of guidelines, the 4As has issued its own set of recommendations. The following highlights what could be the most impactful of the 4As’ recommendations:
Sections 1, 2 & 3 – Address client/agency retained relationships for U.S. media planning and buying services:
The default principle in all client/agency relationships where the agency is agent and the client is principal is full disclosure and full transparency in media planning and buying, unless there is an exception that the client has agreed to in advance and is covered by a separate agreement. Further, the client/agency agreement should specify that the client is the principal and the agency is the agent.
The agency and the client can agree to a fixed price or other pricing arrangements for the agency's other products and services through open and arms-length negotiations. These might include proprietary media (including pre-owned inventory), trading desks, barter, programmatic buying, and other future models.
The agency business is governed by contracts with clients. All terms of business between the agency and the client should be documented in a formal written agreement(s), and both parties should comply with the obligations to the other party that are noted in the respective agreement.
Sections 4 & 5 – Address separate commercial relationships between agencies and media vendors and other suppliers:
The agency (agency group and holding company) may enter into commercial relationships with media vendors and other suppliers on its own account, which are separate and unrelated to the purchase of media as agent for its clients.
The agency should disclose and seek the client's acknowledgement of any agency's ownership interest in any entity and details of any of its employees who are directors of any entity that the agency recommends or uses as a provider of products or services to the client.
To capture the opportunities of ad transparency, a thorough understanding of how ad spots are trafficked to media destinations is essential – especially for advertisers. Advertisers should be aware of the choices their agencies are making in ad distribution and the costs associated. Sometimes trafficking of spots gets bundled with other services. However, many advertisers save money when agency costs are itemized and service partner choice is transparent.
To help facilitate the transparency of their ad distribution, advertisers should consider using an ad delivery service that has a deep understanding of the new transparency rules and can provide a second set of eyes on an ad spot’s journey. Having this extra guidance can help advertisers save resources and ensure that their ads arrive on time and generate revenue. If you could use some extra help in understanding the new ad transparency guidelines and how they may affect your ad distribution, please call our Ad Suite team at 646-230-8556.
Read More
Terrestrial, Satellite or Both?
Satellite cable delivery has been the standard for many years. However, recently content delivery via terrestrial “fiber” has started to shake up the industry. Depending on your location and goals, one or both may be the best option for your delivery. Read on to find out the difference between the two delivery methods, and get a better idea of which one is right for your pay TV business.
Terrestrial delivery has many advantages over satellite and is becoming the standard for many providers. Most large metropolitan areas in the U.S. already have fiber optic networks built, which allow them to take advantage of many of the following benefits:
Supports a wide range of networks, from traditional networks like NBC to international offerings like France 24
Permits localization, such as regional sports networks like Pac-12 Washington
Easily accommodates a host of formats, including formats that are not widely deployed
If the network is redundant, fiber provides an essential backup in case of an outage
Cost efficient to set up if there’s an existing fiber network
Services can be cost effectively delivered in mezzanine quality in order to transcode in many formats to support multiplatform delivery
Unfortunately, terrestrial delivery can be cost prohibitive – especially for many smaller cities and towns where laying down new fiber can be incredibly expensive. In these areas, satellite delivery may make more sense. Despite some of the limitations in the range of networks and formats available for satellite, the following are some of the ongoing benefits of satellite delivery:
Supports all the most popular networks
Can reach viewers anywhere in the U.S.
Is often the most cost effective method to receive content for smaller, geographically diverse markets
Is a very reliable reception method that is not subject to terrestrial problems like fiber cuts or other network issues
Enables affordable mobile backhaul and coverage in areas unreachable by terrestrial connections
So there you have it: both terrestrial and satellite delivery offer unique benefits that are best suited to different regions of the U.S. That said, some of the world’s largest cable operators are harnessing the unique benefits of both delivery methods. Fiber permits them to reliably deliver targeted content, and satellite permits them to offer video options and create new revenue streams.
Whether you are setting up a pay TV business in a large city or a small town, have deep pockets or a tight budget, Comcast Technology Solutions can deliver your premium content via the method that suits you best – terrestrial, satellite or both. Please call 800-824-1776 to learn more.
Read More
Origination Evolution: Integrating Human and Hardware Resources
There is little doubt about the recent dramatic shift in how content is distributed and viewers consume it. Yet, despite increased demand for streaming and OTT, pay-TV and other subscription-based services remain strong, with over 100 million pay subscribers[i] in the United States, versus a combined 50 million subscribers across the top online providers[ii] in 2015.
Channel origination has adapted and transformed to satisfy this hybrid distribution demand. Layering in the needs for security, monitoring, and flexibility further complicates the already complex task of origination. These complexities have encouraged an evolution in the human and technical processes necessary for stable and efficient origination.
The result is a unified origination environment where technical and hardware needs, as well as human support services, are merged. It is an environment in which technology is streamlined, and where operators and engineers coexist to ensure stable origination.
Joining Forces for Stability and Reliability
Channel origination is and always has been an orchestrated effort from technology, operators, and engineers. Operators are key to ensuring the quality and consistency of channel playout, and are the first line of defense against outages and other issues. Engineers provide for the overall system design, documentation, deployment, and ongoing sustained support of the operational systems. Operators notify engineers of outages and issues, and both teams then work together to resolve the matter and restore playout.
The key difference in the unified environment is that operators and engineers are in the same room. Therefore, response time to outages is drastically reduced as issues are seen at the same time, in real time. Operators and engineers can begin troubleshooting and testing immediately, and since both parties have witnessed the outage, precious time is spared that would otherwise be consumed describing the observed issue.
Redesigned Technology Offers Simplicity and Flexibility
Integrated technology in the single environment improves monitoring and recovery time by simplifying diagnostics and research. Operators and engineers have less hardware to test and inspect to isolate the issue. The combination of the streamlined technology and colocation of Operations and Engineering not only greatly reduces recovery time, but also enhances origination stability, efficiency, and flexibility.
In addition, the updated equipment and technology in the unified origination environment are designed to halt problems before they start. They include diagnostic rules that automatically spot issues before they affect the feed. This “self-healing” feature can ultimately reduce the number of issues that operators and engineers see.
The unified origination environment transports video via IP routing, versus point-to-point via coax. IP routing moves all video signals through network infrastructure, without a fixed number of cross points to contend with. Utilizing a 10G network infrastructure enables simultaneous transport of multiple video signals on a single network connection—from compressed video (MPEG-2, H.264) to uncompressed HD-SDI, and even 4K with light compression. Furthermore, the network is redundant and can withstand the failure of a single router with no perceptible downtime.
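To make the 10G claim concrete, here is a rough back-of-the-envelope sketch of how many video streams of each type fit on a single 10G connection. The bit rates below are typical published figures (e.g., SMPTE 292M for uncompressed HD-SDI), not Comcast’s actual provisioning data, and the 20% headroom is an assumed engineering margin.

```python
# Back-of-the-envelope: how many video streams fit on one 10G link?
# Bit rates are typical published figures, not actual provisioning data.

LINK_GBPS = 10.0

STREAM_GBPS = {
    "MPEG-2 SD (compressed)": 0.004,   # ~4 Mbps
    "H.264 HD (compressed)": 0.008,    # ~8 Mbps
    "Uncompressed HD-SDI": 1.485,      # SMPTE 292M rate
    "4K, lightly compressed": 2.0,     # assumed ballpark figure
}

def streams_per_link(rate_gbps, link_gbps=LINK_GBPS, headroom=0.8):
    """Streams that fit on one link, keeping 20% headroom for overhead
    and failover capacity (assumed margin, not a published spec)."""
    return int(link_gbps * headroom / rate_gbps)

for name, rate in STREAM_GBPS.items():
    print(f"{name}: ~{streams_per_link(rate)} streams per 10G link")
```

Even uncompressed HD-SDI, at roughly 1.5 Gbps per signal, fits several times over on one 10G connection, which is what makes the single-network-connection transport described above practical.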
Beyond the stability and monitoring benefits of the unified origination environment is its flexibility. Rapid scalability is made possible by the consolidation of hardware and its near “plug-n-play” attribute. Breakout pods make adding signals or channels (such as live event streaming) simple and seamless. The nimble nature of the IP infrastructure facilitates flexible expansion.
This new generation of channel origination leverages the best technology has to offer with experienced operators and engineers within a unified space to provide an efficient and reliable platform. The network infrastructure enables rapid scalability, security, and redundancy. It also facilitates seamless distribution to satellite or terrestrial, such as OTT transmission services, meeting the demands of today’s consumers and allowing for easy expansion for tomorrow’s evolutions. Click here to learn more about Comcast Wholesale’s new unified origination environment.
[i] http://www.cartesian.com/over-the-top-versus-pay-tv/
[ii] http://247wallst.com/media/2016/01/15/can-hulu-compete-with-netflix-by-the-numbers/
Read More
How Comcast’s CDN Handles Traffic Spikes
Sporting events, political rallies, retail sales, and movie premieres can all cause highway traffic jams. These same events can also lead to significant spikes in Internet traffic. And, when spikes occur, the quality of an end-user’s streaming or download experience typically suffers. Live sports and breaking news shows experience low bit rates and pixelate, online gaming downloads stall, and streamed movies buffer. Content Delivery Networks (CDNs) help mitigate the effects of these spikes and deliver a consistent, high quality experience to the end-user.
However, some CDNs are better equipped than others to handle these spikes. The Comcast CDN was purpose-built for large media delivery at the highest scale for live events. With more than 100 nodes and one of the largest, most advanced networks in North America, the Comcast CDN easily manages demand spikes while maintaining stream and download quality.
How does the Comcast CDN do it? Because Comcast is a video content provider and distributor, the Comcast CDN is optimized to handle this type of large file delivery and traffic surges without the degradation of quality. In addition to robust network infrastructure, Comcast employs the following strategies to mitigate traffic spikes and deliver superior performance:
Continuous Monitoring – Using a variety of monitoring tools, Comcast tracks CDN performance and watches for end-user peak usage patterns. Comcast also builds robust predictive models that take into account the industry’s broader historic and forecasted internet traffic growth statistics, as well as Comcast’s own data.
Proactive Planning – After assessing end-user patterns and predictive modeling, Comcast preemptively and liberally increases CDN capacity for recurring and anticipated peaks in traffic. Comcast also implements a ‘soft moratorium’ to minimize impactful infrastructure service changes if a CDN customer has a critical event on the horizon.
Strategic Absorption – For unanticipated traffic spikes, Comcast absorbs the excess traffic by automatically spreading traffic to multiple CDN caches in the same metro. Comcast has over 100 separate caching locations in the U.S., and spreads its CDN traffic to multiple proximate physical locations in each major metro without a tradeoff in performance. These CDN locations are strategically deployed across the U.S., which (1) enables delivery in large metro areas and (2) more evenly spreads demand to prevent congestion that commonly impacts high throughput HD video and large file downloads. In addition, the Comcast network is well interconnected with many other networks. This enables high-quality content distribution, and allows content to be localized for distribution just as effectively.
Reliable Service – Many CDNs don’t provide enough overhead capacity or standard origin protection tiers to absorb large traffic spikes. These practices further compound the congestion problems experienced during large events. Excess traffic is either bottlenecked in the CDN or passed back to the customer’s origin, which makes end-user problems worse. This does not happen with Comcast’s CDN, which has a QoS variance that is 20-30% better than other CDNs based on objective third-party data. Whether it’s serving high seasonal traffic peaks or one-time massive scale video events, content is delivered consistently to end-users and a customer’s origin load remains at an absolute minimum.
If you have media content that is prone to spikes such as movies, TV shows or online games, you will want a CDN designed to protect your content and customers. The Comcast CDN, available from Comcast Technology Solutions, offers the most complete solution that can be used as a stand-alone CDN or as part of your multi-CDN strategy. Please contact us to learn more.
Read More
Simplify the Management & Monetization of Live Events
For media companies that frequently broadcast live events, such as sports games, the workflows for managing and monetizing multiple live events just got easier with the full suite of mpx services. Using mpx Live Events, you can now automate the recording process by setting your encoders to stop and start at specific times, and use the pre-screening feature in the mpx Console to make sure that everything is on track. Since mpx Live Events is fully integrated with mpx, your live events can also utilize mpx Replay (which provides start-over functionality and converts video assets for catch-up and video-on-demand). And, as a part of mpx Commerce, you can monetize your live events by packaging them as pay-per-views (PPV), electronic sell-throughs (EST), or subscriptions.
For example, let’s say your media company has the exclusive rights to live broadcast the World Freestyle Skiing Championship. This event hosts three competitions over three days: Moguls on Tuesday at 9:00am-12:00pm PST, Aerials on Wednesday at 9:00am-12:00pm PST, and Dual Moguls on Thursday at 9:00am-12:00pm PST.
mpx Live Events allows you to set your encoders to start and stop at these prescribed event times. Then, you can use mpx Commerce to provide free access for the first two events, and PPV for the third. Or, bundle all three events into an extreme skiing sports subscription.
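As a concrete sketch, the three event windows and access rules above can be modeled in a few lines. This is purely illustrative Python, not mpx’s actual API, and the calendar dates are assumed for the example.

```python
# Illustrative model of the three event windows and access policy.
# Not mpx's actual API; dates are assumed for the example.
from datetime import datetime

# All times US Pacific (PST), per the example above.
EVENTS = [
    {"name": "Moguls",      "start": datetime(2016, 2, 2, 9),  "end": datetime(2016, 2, 2, 12), "access": "free"},
    {"name": "Aerials",     "start": datetime(2016, 2, 3, 9),  "end": datetime(2016, 2, 3, 12), "access": "free"},
    {"name": "Dual Moguls", "start": datetime(2016, 2, 4, 9),  "end": datetime(2016, 2, 4, 12), "access": "ppv"},
]

def encoder_schedule(events):
    """Return (name, start, end) tuples an encoder controller could use
    to start and stop recording automatically."""
    return [(e["name"], e["start"], e["end"]) for e in events]

def requires_purchase(event_name, events=EVENTS):
    """True when an event is gated behind pay-per-view."""
    return next(e["access"] for e in events if e["name"] == event_name) == "ppv"

for name, start, end in encoder_schedule(EVENTS):
    gate = "PPV" if requires_purchase(name) else "free"
    print(f"{name}: record {start} -> {end} ({gate})")
```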
Please read our mpx Replay Solution Brief and mpx Commerce White Paper to learn more. As always, please feel free to contact us with any questions about mpx.
Read More
Considerations for the Move to Digital and HD
The old adage “if it isn’t broken, don’t fix it” is sage advice in many respects. However, applying this advice to technology upgrades can limit operations, capabilities, and ultimately profit. Though the process of upgrading technology often seems costly and complicated, it frequently opens doors to new profit and growth opportunities.
As an example, a significant number of cable operators in the U.S. have not upgraded to digital or HD operations and continue to maintain analog and SD-only headends. The reasons for this are wide and varied, but often justified. However, given the growth and demand for robust quality of experience, this approach must be reconsidered.
While moving to digital and HD requires upgrades in the plant, primarily for new receivers, these receivers can be deployed over existing MPEG-2 equipment, avoiding the cost of a complete overhaul. This has a compounded beneficial effect on operations, as the life of much of the equipment is extended while at the same time gaining access to services that are in demand from subscribers. This combination positions operators for growth and unrealized profits and keeps them competitive in a tough market for subscriber share.
The Efficiency of Digital and HD
Transmitting and receiving digital HD signals is fundamentally more efficient than analog SD. For instance, the spectrum occupied by one analog channel can carry 10-14 SD digital channels. Channel lineups can be expanded by a factor of at least ten without soaking up more bandwidth. This incredibly flexible means of upgrading allows for more channel and network offerings, or more bandwidth for premium services, such as VOD or improved broadband.
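The 10-14 figure follows from simple arithmetic. A 6 MHz slot carrying 256-QAM yields roughly 38.8 Mbps of usable payload, and a statistically multiplexed MPEG-2 SD stream typically runs in the neighborhood of 2.75-3.75 Mbps. Those are commonly cited ballpark figures, not numbers from this article:

```python
# Rough arithmetic behind "10-14 SD digital channels per analog slot".
# Figures are commonly cited ballpark values: one 6 MHz 256-QAM carrier
# yields ~38.8 Mbps of payload; a stat-muxed MPEG-2 SD stream runs
# roughly 2.75-3.75 Mbps. Assumed values, not data from the article.

QAM256_MBPS = 38.8                 # usable payload of one 6 MHz slot
SD_MPEG2_MBPS = (2.75, 3.75)       # assumed stat-muxed SD bit-rate range

low = int(QAM256_MBPS / SD_MPEG2_MBPS[1])   # heavier streams -> fewer fit
high = int(QAM256_MBPS / SD_MPEG2_MBPS[0])  # lighter streams -> more fit

print(f"One analog 6 MHz slot carries roughly {low}-{high} SD digital channels")
```

An analog NTSC channel consumes that entire 6 MHz slot by itself, which is why the digital conversion frees up so much capacity for VOD and broadband.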
One question remains about digital HD upgrades: how is it deployed to the field? Here again, expensive STB deployment is not required. In most cases, existing and low-cost digital terminal adapters (DTAs) are capable of delivering digital and HD cable signals into subscribers’ homes. Used as a cable box, DTAs can utilize the ROVI guide, which offers the guide and navigation experience common in digital cable. Additionally, operators can roll out DVRs as needed when subscribers request them.
Upgrading or not upgrading is a choice for the time being. But the time is coming when either it will be more expensive to deliver analog or operators will run out of bandwidth. In light of the investment required to offer digital and HD, upgrade costs will be recouped and the transition will be more cost-effective in the long run with increased efficiency, more channel options, and the potential to offer expanded, higher-margin services. The upgrade not only positions operators for the here and now—as well as the future—of cable, but moving to digital also drastically reduces signal piracy. It is never too soon to consider the digital and HD upgrade and how it can protect your operations and open new doors of opportunity for your business.
Read More
Unleash Digital Learning with MediaAMP
Much in the same way that premium television has moved beyond the living room in recent years, education has also undergone a similar transformation. Although teaching and learning still primarily take place in the classroom, these two essentials of education can now take place ‘just about anywhere’ with today’s online video technology. thePlatform’s strategic partner, MediaAMP, is paving the way for digitized education.
Developed at a leading research university and powered by thePlatform’s mpx, MediaAMP’s end-to-end “Media Management Platform” addresses the unique challenges of operating a digital media-intensive learning environment. With a deep understanding of the complex requirements that educational organizations face, MediaAMP enables you to significantly reduce the cost and complexity of storing, managing, and delivering content. The following are additional benefits of MediaAMP’s solution:
Teaching and Learning – MediaAMP integrates with leading lecture capture and learning management systems to provide a central management hub, allowing instructors and departments to manage their content and power rich multimedia learning experiences.
Libraries and Special Collections – Feed-based, secure delivery ensures compliance with copyright and fair use access restrictions, while still enabling collaboration and the reuse of digital content. Online media collections are searchable via robust customizable metadata.
Marketing – MediaAMP unlocks silos of content across the entire institution, securely distributing digital media to external audiences through virtually any delivery channel. Showcase learning assets to support Public Relations, Admissions, and Community Outreach initiatives.
Athletics – Manage and deliver valuable assets, like game film, from a centralized repository, enabling secure collaboration across teams and departments. Maximize value by leveraging this content for broadcast and fundraising activities.
Read More
What Does "End-To-End" Really Mean?
There is a barrage of marketing messages filled with claims of “end-to-end solutions” designed to help make your business successful. In a complex content ecosystem, everyone claims to be “end-to-end,” but what does this really mean to you and your business?
Dictionary.com defines “end-to-end” as “a term that suggests that the supplier of an application program or system will provide all the hardware and/or software components and resources to meet the customer's requirement and no other supplier need be involved.” One look will tell you that this does not tell the entire story. “End-to-End” solutions come in many flavors.
In one variation, there are solutions that take you from beginning to end of a portion of the workflow. For example, some on demand providers can take your assets, add metadata, transcode them and send them out to distributors. This is “end-to-end” for the video on demand (VOD) portion of the content lifecycle, but does not address preparing or delivering these same assets for online video. In this scenario, it is up to you to find the next vendor who can convert these assets into multiple bit rates and package for multiple devices. And, it is often up to you to negotiate with and manage the second vendor as well as collect and provide the same files from the previous step in the process to the new vendor. Not really “end-to-end” in a multiplatform world, is it?
Other providers offer “end-to-end” solutions that are really a patchwork of multiple partners and hand-offs. One specializes in a portion of the content workflow, such as linear acquisition, enhancement or transcoding. Once completed, they hand the files over to another vendor for transformation into various bit rates, and another for delivery. This happens “under the covers,” meaning you deal only with the primary vendor, but in reality your content changes hands many times. The problem with this model is that while you are only dealing with a single vendor, you are paying premiums for each vendor in the chain and losing time to market with each handoff.
By definition, an “end-to-end” solution encompasses the entire content workflow, from acquisition and capture to delivery and playout, using a single integrated platform and vendor. This holistic approach to content acquisition, media management, and distribution offers advantages over other variations of the end-to-end concept.
First, you get to work with a single vendor to take your content from linear broadcast to VOD and online streaming or replay. With the acquisition of a single asset, the vendor can transcode, encode, and package the asset, and deliver it to multiple platforms. This single source approach simplifies workflows and reduces failure potential by keeping the asset in an integrated solution from acquisition to distribution.
Second, you can decrease operational labor and costs. When you deal with multiple vendors (or even a single vendor who is passing your assets to partners), the management of the process becomes more complicated. You may need several people to deliver and QC content, along with people to manage the project and handle the bill. Working with a true “end-to-end” provider gives you a single point of contact, thereby reducing the amount of staff needed to ensure content makes it through the entire lifecycle in top quality. Clearly, this approach results in quicker turns and faster time to market.
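The single-pipeline advantage described above can be sketched in miniature: one asset flows acquisition → transcode → package → deliver without ever leaving the system. This is an illustrative toy, not any vendor’s actual API; the bitrates and platform names are assumed for the example.

```python
# Toy sketch of an integrated end-to-end pipeline: one asset, one
# system, zero inter-vendor handoffs. Not any vendor's actual API;
# bitrates and platform names are assumed values.

def acquire(source):
    return {"asset": source, "steps": ["acquired"]}

def transcode(asset, bitrates=(400, 1200, 3500)):  # kbps renditions (assumed)
    asset["renditions"] = list(bitrates)
    asset["steps"].append("transcoded")
    return asset

def package(asset, platforms=("VOD", "web", "mobile")):
    asset["packages"] = [f"{p}@{b}kbps" for p in platforms for b in asset["renditions"]]
    asset["steps"].append("packaged")
    return asset

def deliver(asset):
    asset["steps"].append("delivered")
    return asset

# One call chain carries the asset through the entire lifecycle, leaving
# a single audit trail instead of per-vendor handoff records.
final = deliver(package(transcode(acquire("linear_capture.ts"))))
print(final["steps"])
```

Contrast this with the patchwork model, where each arrow in the chain is a contract, a file transfer, and a QC step with a different company.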
So next time you see “end-to-end,” don’t just skim right over it and say, “yeah, everyone is end-to-end.” Think about what this really means and whether it matters to your business. If it does, you may want to dig a little further into how the company interprets this buzzword. Are the vendors promising this approach helping you carry your content through the entire process or putting a marketing bow on a complex problem?
What Does "End-To-End" Really Mean?
Comcast Technology Solutions
2017-10-06
Blog
VOD - Monetizing Content From The C3 Window To D4
In a time not too long ago, monetizing content was easier. Linear content was broadcast, ratings were taken, commercials were aired, and money was made. In just a few years, monetizing content has become increasingly complex. Time-shifted viewing, video on demand (VOD), DVRs, and online streaming have changed the way content is consumed. In fact, 53% of all viewers - and 61% of millennials - watch time-shifted programming.[1] The last stat is of particular interest considering that millennials are a generation more likely to cut the cord or to never have had one in the first place. This makes VOD even more important to capturing, and keeping, this demographic.
On demand advertising models have undergone some of the most drastic changes as part of this shift. Instead of simple ratings on linear broadcasts, we have many windows and potentials for monetization. Let’s break down and define the windows and the opportunities that exist for each:
1. C3 – For the first three days after content is aired live, it is pitched to VOD servers with its original ad load and Nielsen watermarks. This allows on demand views in the first three days to still be counted in the original impressions. Considering 40% of VOD viewing occurs in the C3 window, this added viewership has resulted in a significant lift to originally aired programs in recent months.[2] For example, Fox’s premiere of Scream Queens, targeted at millennials, received a 53% lift in the first three days on demand.[3]
2. C7 – Similar to C3, C7 is a combination of average commercial linear broadcast and the first seven days of time-shifted viewing. According to Ad Age, C7 measurement is gaining ground with 11 new shows earning up to a one-half point rating increase or more at the beginning of the 2015-2016 season. Fox’s “Empire” even touted a 1.5 point increase in the C7 window.[4]
3. D4/Dynamic Ad Insertion (DAI) – After the C3 or C7 window, assets are prepared for D4 and DAI. Sixty percent of VOD viewing occurs in this window. Therefore, a lot of opportunity exists during this time to increase ad impressions. Until recently, this critical window was not optimized by advertisers, as C3 ad loads were stripped out and a static ad placed in the ad breaks across the VOD asset. This ad usually remained for the life of the VOD asset, clearly limiting advertisers’ ability to capture potential viewers or target advertising to specific audiences.
Recent advances in technology and support from providers have opened doors to ad insertion after D4. Now, ad breaks can be rapidly inserted and ads changed out in VOD assets. Research has shown that this approach is extremely effective. Eighty percent of impressions are now viewed in mid-roll and the DAI ads contributed to a 22% increase in relevance and 37% lift in intent to recommend. These statistics are not lost on content providers who are accelerating usage of this new advertising venue. By October 2015 more than 7.7 billion DAI ads had been delivered, compared to 6.3 billion ads throughout all of 2014. Given that total DAI ads were only 1.1 billion in 2013, this trend only seems to be growing exponentially. [5]
4. C3 DAI – This is one of the newest methods of on demand monetization that enables ad breaks to be marked in the C3 window. This feature gives viewers the option to fast forward through content – but not ads – so they can come back and finish viewing a program later. It also enables local spots and promos to be switched out in the C3 window. In addition, C3 DAI facilitates the insertion of a second Nielsen watermark, which allows Nielsen differentiation of DVR versus VOD spots.
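As a rough illustration of how these windows partition in time, here is a minimal Python sketch. The function name and the `c7_measured` flag are illustrative, not part of any vendor API; the day boundaries follow the window definitions above:

```python
from datetime import date

def ad_window(air_date: date, view_date: date, c7_measured: bool = False) -> str:
    """Classify a VOD view into a monetization window by days since air.

    C3 covers the first three days after airing; C7 (where measured)
    extends that to seven days; afterwards the asset moves to D4/DAI.
    """
    days = (view_date - air_date).days
    if days < 0:
        raise ValueError("view precedes air date")
    if days <= 3:
        return "C3"
    if c7_measured and days <= 7:
        return "C7"
    return "D4/DAI"
```

For example, a view two days after air falls in C3, while a view six days after air counts toward C7 only where C7 measurement applies.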
As you can see, more options and technologies exist to facilitate advertising on demand than ever before. With the growth of monetization opportunities along the entire lifecycle of a piece of content, managing the process and technology can be complicated. However, if executed with experienced partners, these technologies offer a fertile ground for advertisers, choices for viewers and more ways for content providers to extend the value of their programming.
[1] Source: Time Shifting: February 2015, Hub Entertainment Research
[2] Sources: Rentrak Custom Report, June 2015
[3] http://adage.com/article/media/scream-queens-p/300636/
[4] http://adage.com/article/media/fox-abc-dramas-boast-biggest-lifts-c7/301137/
[5] http://www.fiercecable.com/story/cable-vod-ad-impressions-spiked-another-40-q3-canoe-says/2015-10-16
Supporting a Multi-DRM Content Strategy
Using Digital Rights Management (DRM) to secure and protect content means you have some choices to make. These choices, the ones you make in packaging and encrypting your content, have a direct impact on the devices that ultimately play your content. To reach as many viewers as possible, you will need to reach as many devices as possible. And to do this, you will need a multi-DRM strategy that takes into account the following:
Viewers of your content are unknowingly making DRM choices based on the browser or mobile platform they are using to consume content.
mpx Remote Media Processor
With the upcoming update of the mpx Remote Media Processor (RMP), mpx will package and secure content in the following combinations.
This multi-DRM approach in combination with support for the leading adaptive bitrate streaming standards enables operators and content providers to deliver protected content to all mobile and browser based media players, including game consoles and smart TVs.
Captions & Subtitles
In addition, this latest update to the mpx RMP enables support for WebVTT captions in the Apple HLS streaming format (WebVTT provides captions or subtitles for video content). Delivered over HLS, mpx will provide Apple-compliant closed captioning and subtitling using the native M3U8 manifest.
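For illustration, a generic Apple-style HLS master playlist that declares a WebVTT subtitle rendition might look like the following. The URIs, group ID, codecs, and bitrate here are hypothetical examples of the M3U8 format, not mpx output:

```
#EXTM3U
#EXT-X-MEDIA:TYPE=SUBTITLES,GROUP-ID="subs",NAME="English",LANGUAGE="en",DEFAULT=YES,AUTOSELECT=YES,URI="subtitles/en/prog_index.m3u8"
#EXT-X-STREAM-INF:BANDWIDTH=2000000,CODECS="avc1.4d401f,mp4a.40.2",SUBTITLES="subs"
video/2000k/prog_index.m3u8
```

The `SUBTITLES` attribute on the variant stream points players at the WebVTT rendition group, which is how HLS associates captions with video.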
Easier Multi-DRM
In the coming weeks, we will provide updates on how mpx makes it easier for you to protect content and stay ahead of the continuously evolving DRM landscape.
Learn More
Keep an eye on thePlatform’s blog to stay current on mpx’s DRM solution, and other key developments in online video. For more information on how mpx protects content, please read our Monetization in Every Window solution brief.
Attracting “Cord Nevers” into the Pay TV Ecosystem
Pay TV providers have experienced transitional shifts, predominantly over the last five years, as consumers change how they watch their content. Internet video has redefined how consumers get content and has spurred the cord cutter and cord never segments. These groups of consumers have either abandoned subscription-based Pay TV or have never paid for TV.
Consumers Define How They Get Content
The consumers that have never had a TV subscription—the cord nevers – are of particular interest to Pay TV providers. Knowing the demographics and motivators for this segment is critical to understanding the group and how to bring it into the fold.
Cord nevers are generally in the coveted 18-34 demographic—77% of 18-34 year olds are more likely to be cord nevers.[i] They are college students and graduates generally with less disposable income. Mobility defines their habits, which indicate a preference to consume content on their laptops, tablets, or smart phones. In short, they are savvy with technology and extremely budget-conscious.
Cord nevers are motivated by two influences: access to broadband and choice. Specifically, this group sees Internet access as a necessity and Pay TV as a luxury. In 2014, 14% of broadband households did not have a traditional Pay TV package, up from 9% in 2011.[ii] And in terms of choice, they prefer access to specific content as opposed to a wide assortment of channels. Cord nevers also want to watch content on their device of choice, whether that be on a mobile device, laptop, or Internet-capable device (Roku, AppleTV, etc.), or television.
Meeting Cord Nevers Where They Are
The cord nevers segment of consumers is still growing and will continue to shape the Pay TV industry trajectory with over-the-top (OTT) content. In recent months, HBO, CBS, and other content providers have offered their content directly to consumers via mobile applications and monthly subscriptions. Similarly, many providers are offering apps through which consumers can stream a number of networks live. In so doing, they are making content affordable for cord nevers and satisfying their desire for choice.
Given their influence and potential purchasing power as they age and accumulate more disposable income, it is important to develop business strategies that meet the cord nevers where they are spending their dollars. No matter how a Pay TV provider chooses to address this growing segment, offering this segment of consumers what they want at prices they can sustain is key to turning “cord nevers” into “cord lovers.”
[i] http://www.telecompetitor.com/reasons-for-video-cord-cutting-may-be-different-than-for-cord-nevers/
[ii] http://tdgresearch.com/tdg-14-of-broadband-households-now-without-traditional-pay-tv/
Use mpx to Efficiently Enter New Markets
mpx makes it easy to deliver your video offerings to market. From linear streaming to OTT solutions, mpx provides all the tools you need to manage, productize, promote, and monetize your content. Spinning up a new service may seem daunting at first, but it won’t be long before you’re ready to grow your brand, expand your reach, and capitalize on new market opportunities. Read on to learn how mpx can assist you in entering new markets and taking your video solution global.
Expanding your offering
With an established video streaming solution in place, it is not uncommon to monetize your offering in new markets via a direct to consumer TVOD or SVOD strategy. However, entering a new market does not come without risks.
Accepting payments can be complex in a new, regional market. Finding a payment partner and dealing with the regional and cultural differences in accepting payments can be cost prohibitive when testing market viability.
mpx makes it easy to enter new markets and minimize risk
With mpx Video Commerce, you can easily define iTunes® and Google Play™ compatible products and subscriptions, and make them available via your application for in-app purchasing. This approach will help you offset the risks mentioned earlier, such as managing payments and marketing to a new audience. Once your products and subscriptions are modeled, Video Commerce provides all the tools needed to offer in-app purchases, including authentication, purchase processing, and receipt management.
The tight integration of mpx and Video Commerce provides you with the ability to create product offerings that are intertwined with your workflow. Content availability windows and regional content restrictions defined in mpx are automatically reflected in the products you create in Video Commerce. Purchases made via iTunes and Google Play are backed by the mpx Entitlements service, which manages viewers’ access to content and supports playback on other platforms. For instance, a viewer can purchase and start streaming a video on an iPhone while in transit, then finish viewing it on a laptop upon arrival. mpx makes the viewing experience seamless.
mpx Payment Gateways
Leveraging iTunes and Google Play is a great way to enter a new market with minimal effort, establish your brand, and drive traffic to your own site. mpx has the tools you need to build out your own storefront, maximize your brand, and promote your products on your own site. To help you manage fulfillment, mpx provides support for internationally recognized payment gateways such as Braintree and PayPal, as well as regional providers such as Latin America’s Mercado Pago. Video Commerce, with its tight integrations with other mpx solutions, makes payment provider setup easy and enables you to bring your products to market faster.
Learn More
Download our Advanced Video Commerce whitepaper to learn more about how mpx can help you enter new markets.
The Benefits of a Multi-CDN Strategy
Increasing consumer demand for streaming video, online gaming, and downloads—along with growing expectations of quality of experience (QofE)—is causing content owners and distributors to rethink their content delivery strategies. The leaders in online delivery have discovered that blending multiple CDNs will provide their users with a higher quality viewing experience coupled with the opportunity to optimize delivery costs. Additionally, multi-vendor strategies offer more reliability and predictability when problems hit the global Internet. These competitive advantages have established multi-CDN architectures as a best practice in the industry.
Considerations for a Multi-CDN Deployment
Deploying IP video or other media on multiple CDNs goes beyond adding more of the same to the mix. And, it’s also not exclusively about reducing costs. The process involves evaluating each CDN based on coverage, speed, cost, scalability, competencies and flexibility.
CDNs vary dramatically in capability and performance. Not every CDN is functional for all purposes, and variations exist in geographic delivery footprints, feature sets and openness to participation in a multi-CDN fabric. Due to the differences between CDNs and their commercial objectives, careful evaluation of your needs and the vendor’s capabilities is crucial to identifying the optimal partners to add to your CDN mix. Here are some suggestions to get the most out of your multi-CDN environment.
Look for distinct network advantages
Leveraging the advantages of multiple CDNs requires analysis of each CDN’s geographic coverage and the architecture of its caching infrastructure, which can have a dramatic impact on overall performance and reliability. Take into consideration the location, quantity, and overall placement of caches. Where are caches located? Are the caches connected through a network that is owned and operated by the CDN provider? How is redundancy managed across caches to handle failovers? An additional CDN that can optimize the positioning of content can improve consumer engagement. In addition, having multiple options in busy markets offers load balancing during peak traffic periods and reduces the risk of service downtime and outages.
Evaluate CDN strengths based on traffic type
CDNs offer a wide range of solutions. Some are tailored to support video services, whereas others are focused on e-commerce site acceleration, security, or small object delivery. As online media consumption has exploded, some CDNs have not scaled to accommodate the file sizes required for certain content like large games or high bitrate video. They may also lack a robust set of features tailored to support a true broadcast-quality video experience.
When considering a multi-CDN strategy, make sure the individual CDNs can support, or perhaps even specialize, in the specific applications that are most important to you. For example, a CDN that is purpose-built to transport video and large objects is critical for any streaming video service and can be a valuable addition to your network architecture. Additionally, understand the goals of the vendor; do they align with your goals and initiatives of delivering the best service to customers and subscribers?
Determine how to blend CDNs
A good multi-CDN implementation makes the best routing decisions for each user based on a number of weighted variables. Routing can be predetermined based on a specific geo/network, real-time performance statistics or cost. You will need to decide if you want to engineer this capability directly into your application or use an external vendor. Simple DNS or HTTP routing based on geo/ASN is fairly straightforward to develop but does require development cycles and ongoing maintenance. Additionally, vendors, such as Cedexis and Conviva, use real-time statistics (e.g. response time, cold-start times, etc.) on CDNs to automatically route traffic to the best performing CDN.
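The weighted-routing idea can be sketched in a few lines of Python. The metric names and scoring formula below are assumptions for illustration, not any vendor's actual algorithm; the key point is that routing weights are derived from real-time performance and cost data:

```python
import random

def pick_cdn(cdns: dict) -> str:
    """Pick a CDN by scoring real-time stats, then weighted-random routing.

    Each entry carries hypothetical metrics: median response time (ms),
    error rate, and cost per GB. Lower is better for all three.
    """
    scores = {}
    for name, m in cdns.items():
        # Invert so that lower latency, errors, and cost yield a higher score.
        scores[name] = 1.0 / (m["rtt_ms"] * (1 + 10 * m["err_rate"]) * m["cost_per_gb"])
    total = sum(scores.values())
    weights = {n: s / total for n, s in scores.items()}
    # Route most traffic to the best performer while keeping the
    # alternates warm, a common pattern in multi-CDN load balancing.
    return random.choices(list(weights), weights=list(weights.values()))[0]
```

A probabilistic split like this keeps secondary CDN caches populated, so a failover does not start from a cold cache.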
Gauge each CDN’s flexibility and avoid proprietary “traps”
When considering a potential vendor to add to your existing fabric of CDNs, it is important to determine if they truly embrace a multi-CDN approach, if they can seamlessly integrate into your multi-CDN strategy, and if they will provide long-term value. Are their features based on open standards, which will operate seamlessly across multiple CDNs, or are they using proprietary methods and technologies as a means to lock customers exclusively into their CDN solution? How flexible are the terms and structure of the contract? Do commit levels or other obligations hinder (as opposed to facilitate) utilizing other CDNs? Also, if utilizing your CDN provider for origin services, will all CDNs have equal access to the originated content or only the CDN providing the origin? Look for multi-CDN partners with an open, standards-based architecture and simple agreements that facilitate, versus hinder, the use of a multi-CDN environment.
Strategic implementation of multiple CDNs is the best path to ensure you are competitive, now and in the future. Multiple CDNs enable you to establish a hierarchical distribution of content across multiple tiers of caches and different networks, helping to optimize performance, reduce delivery costs and increase reliability. Most importantly, multiple CDNs enable better user engagement, which leads to higher subscriber retention, faster growth, and additional advertising revenue.
Given the rise and projections in online media consumption, now is the time to consider deploying multiple CDNs. And, if you currently operate a multi-CDN environment, now is the time to evaluate your mix of providers to ensure the initiatives most important to you are fully supported.
Four Network Capabilities Essential to Ad Distribution
The most critical aspect of any ad delivery vendor—besides full-service customer relations—is the capability of the network over which the ad assets flow.
There is very little a distribution partner can do for your advertising needs without a robust and reliable network to get your spots from point A to point B. It is the backbone of the vendor’s business, and it needs to be a world-class performing network capable of instilling confidence in the handling of your assets.
As you evaluate a new or existing delivery vendor, keep these four key network capabilities in mind.
Network Quality
The quality and design of the network has the greatest effect on the speed at which assets are delivered. Is the network the vendor uses to deliver ads built specifically to handle large online media objects? A purpose-built network supports greater stability and reliability, facilitates faster delivery times, significantly reduces latency, and supports the next key network capability.
Guaranteed Delivery Times
Timing is everything with ad delivery. When your brand’s campaign is on the line, you want to know that your spots are delivered on time and accurately. Can your vendor guarantee delivery times?
Network Reach
You have an ad plan that includes penetration into specific markets. What reach does your vendor have into these markets or third party networks? Does it have direct touch points in your areas of focus? Not all ad distribution networks have equal reach. It is imperative that your choice of vendor has a broad and thorough footprint directly to your desired destinations.
Security
You have spent significant time and budget to create content that resonates with your audience and reflects your brand. The network that delivers your assets ought to ensure security of your brand and proprietary content with robust physical and virtual measures.
Where are your assets stored? Is the facility that houses the servers an MPAA-rated secure facility? Does it have on-site security monitoring, redundant servers, and redundant power supply? Your delivery vendor should demonstrate its commitment to protecting your brand assets with thorough and intelligent security standards.
While the process of delivering ad assets seems straightforward on the surface, it is beneficial to dig into the makeup of the network utilized for distribution. Knowing how your assets are transported and stored, as well as the details of the network, is simply good business practice where your brand assets are concerned.
Protect Your Pay TV Content and Monetize Every Window
As the cloud continues to expand the boundaries of premium television services, it is also creating numerous monetization opportunities for pay TV operators. The second part of our four-part series, Pay TV 2.015, addresses the importance of content protection when it comes to monetizing every window. Read our solution brief called “Pay TV 2.015 – Policies and Entitlements” to learn how mpx:
Authenticates and authorizes play requests for linear channels, subscriptions, and transactions;
Provides a variety of security options – user authentication, validating tokens, content licensing agreements, geo-fencing, to name a few – to enforce all content and access rules; and
Facilitates advertising with ad policies, tags, server side ad insertion, and more.
Read Pay TV 2.015 – Policies and Entitlements
Trainer’s Corner – mpx Media Availability Windows
mpx makes it easy to manage media availability. You might recall our recent post called “Automatic Availability Windows” about automating media availability management. This post also highlighted the power of mpx’s validation rules as they apply to single or multiple availability windows. In this month’s Trainer’s Corner, we will take a closer look at setting up availability windows in the mpx Console:
mpx Availability Window Basics
Defining media availability in mpx is straightforward. The timeframe in which your media is available is based on the media’s airdate (or publication date), which is defined in the media’s metadata.
Setting the availability window is performed in the Media Policies tab and is as simple as choosing the date and time that you would like to have your media become available and expire. You can easily get more granular with your media availability by applying a geo-restriction, or choosing which countries your media is permitted or restricted from being viewed in.
Applying Availability Restrictions
It is common for access requirements to change with the state of a media asset’s availability. For example, new media may require authorization for the first 15 days after its publication date and then have that restriction relaxed.
mpx Restrictions is a powerful tool for automating media availability. Restrictions are defined in mpx Policies and are applied to the media via the aforementioned Media Policies tab shown in Figure 2. Once defined, a single restriction can be applied to any number of media policies.
Restrictions can also block or enable media access based on the following criteria:
Countries, regions, metro areas, cities, area codes, or postal codes
IP address
Domain addresses
Browser or device type
Entitlements Service authentication (TV Everywhere scenarios)
mpx Restrictions provides yet another layer of control in managing media availability.
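A simplified sketch of how such a restriction might be evaluated, written in Python. The field names and the `mode` flag are hypothetical and do not reflect the actual mpx Restrictions schema; they only illustrate the block/allow evaluation described above:

```python
def is_blocked(request: dict, restriction: dict) -> bool:
    """Evaluate a hypothetical geo/device restriction against a request.

    Mirrors the criteria listed above: a restriction can match on
    country, IP prefix, domain, or device type, and either block on a
    match or (inverted) allow only matches.
    """
    checks = {
        "countries": request.get("country") in restriction.get("countries", ()),
        "ip_prefixes": any(request.get("ip", "").startswith(p)
                           for p in restriction.get("ip_prefixes", ())),
        "domains": request.get("domain") in restriction.get("domains", ()),
        "devices": request.get("device") in restriction.get("devices", ()),
    }
    matched = any(checks.values())
    # Mode "block" denies on a match; mode "allow" denies on a miss.
    return matched if restriction.get("mode", "block") == "block" else not matched
```

The same evaluation runs whether a rule is phrased as a blocklist or an allowlist, which is why one restriction can be reused across many media policies.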
Multiple Availability Windows
Thus far we have focused on media with a single window of availability, meaning the media can become available and expire only once. Because it is increasingly common for media to have different availability requirements based on geography, distribution, exclusivity periods, etc., mpx now provides the ability to define multiple windows of availability for a given media asset.
For example, you may have media that has the following availability based on geographic restrictions:
USA has an availability window of 8/1/15 – 8/20/15,
Canada is 9/1/15 – 9/20/15, and
Mexico is 10/1/15 – 10/20/15
From a single account and in one policy, mpx is able to manage the availability of content based on viewing rights or other criteria without the need for further user intervention. Prior to this functionality, administrators had to use multiple accounts to perform these tasks or manually approve/disapprove media for a given outlet.
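The per-country windows in this example can be modeled as a simple lookup. This is a minimal sketch assuming hypothetical field names, not the mpx data model:

```python
from datetime import date

# Hypothetical per-country windows mirroring the example above.
WINDOWS = [
    {"country": "US", "start": date(2015, 8, 1),  "end": date(2015, 8, 20)},
    {"country": "CA", "start": date(2015, 9, 1),  "end": date(2015, 9, 20)},
    {"country": "MX", "start": date(2015, 10, 1), "end": date(2015, 10, 20)},
]

def is_available(country: str, on: date, windows=WINDOWS) -> bool:
    """True if any window for the viewer's country covers the given date."""
    return any(w["country"] == country and w["start"] <= on <= w["end"]
               for w in windows)
```

With per-country windows expressed as data, the same asset can flip between available and unavailable states automatically as each date boundary passes.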
The availability windows and availability labels functionality of mpx expands the media usage scenarios including:
The ability to make media available and unavailable more than once.
Customizable availability labels to associate individual windows with related media metadata, such as ad policies or restrictions.
Better regulation of media availability in feeds. For example, availability labels allow you to make media available to one media feed during the first window, and then make it available in more than one feed during the second window.
Automatic updates of a media’s available date, expiration date, and availability state based on the current availability window settings.
Automatic updates of a media’s availability labels. As media flows from one availability window to the next, the availability labels on the media are updated based on the current availability window.
Learn More
For more information on media availability and the use of mpx Validation Rules, please check out our blog post called “Automatic Availability Windows.” If you would like a demo of mpx Multiple Availability Windows, please call 1 (877) 436-7940.
Exploring the Benefits of DOCSIS 3.1
A few years ago, DOCSIS 3.1 was announced at a special meeting of the Society of Cable Telecommunications Engineers (SCTE). CableLabs officially released the specification in October 2013, putting us a little over two years since its inception. Now well into 2015, you may be wondering: what is the holdup?
As a reminder, DOCSIS 3.1 provides speeds up to 50% faster than existing DOCSIS 3.0, resulting in a theoretical speed of up to 10 Gbps downstream and a practical application of 1 Gbps to the home. Compared to current average network speeds, this is a huge increase without a forklift upgrade to the headend or a wholesale overhaul of an operator’s customer premises equipment (CPE). In fact, many routers readily available from retailers can handle this throughput with a minimal investment by consumers. And, because cable operators already have HFC to the home, they are in the best position to quickly push these upgrades to homes across their whole footprint, whereas other providers have to dig lines to select areas, at great expense, to match this speed.
As an added bonus, DOCSIS 3.1 is more energy efficient than existing DOCSIS 3.0 standards, allowing cable modems to go into “sleep” mode during certain times of day, much like a digital thermostat. And, it allows for more security options over previous DOCSIS versions.
This is all good news for consumers and cable operators, but DOCSIS 3.1 also benefits content owners. The quantum leap in speed can greatly increase the performance and stability of content delivered to the consumer. Consider these findings from online video optimization firm Conviva[i]: viewing time for live action TV drops from over 40 minutes in HD to just 1 minute if the viewer encounters buffering. “By reducing buffering, a live content provider could improve revenue (from more viewed advertising) by as much as 8.5%. The revenue uplift from improving video quality is even greater; upward of 11.4%. Combined, the potential total uplift reaches 20%,” its analysis determined. DOCSIS 3.1 can help to reduce buffering and result in a better overall experience for viewers. This is especially important for new technologies that require higher resolutions, such as 4K streaming, downloads, or gaming.
With so many advantages, it is easy to see why the industry is keeping a close eye on developments. Fortunately, the new spec is gaining momentum. Recently, Broadcom unveiled the first D3.1 chip, compatible with both DOCSIS 3.1 and DOCSIS 3.0.[ii] As technologies like this, tailored to DOCSIS 3.1, proliferate in the marketplace, they will help accelerate adoption across HFC networks and ultimately into the home. Several cable operators are planning tests this year, with wide-scale availability in the next few years. While challenges certainly exist to making DOCSIS 3.1 – and 1 Gbps to the home – a reality, cable operators are in the best position to create scalable, manageable solutions that can benefit the entire ecosystem.
[i] http://www.conviva.com/conviva-viewer-experience-report/vxr-2013/
[ii] http://www.multichannel.com/news/technology/ces-broadcom-chips-docsis-31/386655?nopaging=1
Advertising in the Off Season
Everything has its season—football, weddings, awards, and if you are fortunate enough to live in the Southwest, Hatch chiles. Television series are no exception to seasonality, being lumped primarily into fall and spring. Television advertising has closely followed the pattern of the seasons, with exceptional spikes leading up to and during the holiday and election seasons.
But slowly and steadily, another TV season has emerged. Summer.
Broadcast and cable networks have focused substantially on producing highly anticipated series during what, not more than a decade ago, was seen as a lull, or “off season.” The effort is evident in the number of summer TV series and the viewers they attract. More than half of the top 10 shows for the week ending June 28, 2015 were summer-only series, according to Nielsen.[i] For example, during the week of June 22, “America’s Got Talent” topped the list with over 11 million viewers tuning in—an impressive audience for any season.[ii] By comparison, “The Voice,” a similarly styled, judged talent competition show, averaged 13 million viewers per week during the spring 2015 season.[iii]
Overall viewing numbers in the fall and spring seasons exceed those during summer. However, with the ever-increasing number and variety of shows, the summer season can hardly be considered the off season any longer.
Perhaps advertisers should look at the summer season as one full of opportunity and potential. Because summer has traditionally been recognized as the off season, rates trend lower than in other seasons, while competition is down due to the lull in summer advertising. For the same reason, slot availability trends higher, making advertising during prime time more affordable and effective.
Lower rates, less competition and increased slot availability, when combined with increased summer-only programming and stable-but-rising viewer numbers, make a compelling case for advertisers to reallocate some of their annual advertising budget to the summer season. The mixture is rich with value opportunities and expanded reach, perhaps to audiences not captured during the fall and spring. While advertisers need not reallocate and spend significant budget to initiate an effective summer ad campaign, their increased focus and attention to advertising during this time has potential to pay off in dividends.
Summer may not be the ideal time to execute major campaigns for every brand or every product. It may just be an ideal time to sustain brand awareness, customer retention, and even new customer acquisition. It is the time, like any other advertising lull, to take the long view on brand awareness to sustain brand loyalty.
When analyzing the benefits of advertising during the “off season”, consider this advertising success story from the Great Depression. During this time, companies and businesses were doing what was necessary to survive—slashing budgets. What is always the first item to get cut during budget crises? Advertising. However, there was one company that took a different tack by increasing its advertising. Unlike many companies from the period, this one survived, grew, and remains among us today. Johnson & Johnson.
Automatic Availability Windows
Managing media availability has become increasingly complicated and dynamic. Content exclusivity for negotiated time periods is now commonplace: content frequently moves in and out of syndication outlets, and there may also be regional content restrictions. Creating this exclusivity has traditionally required heavy manual processing involving approvals, defining individual windows, and configuring new windows as required.
mpx simplifies the creation and management of availability windows. With mpx, you can automate your media availability schedule, dynamically change ad policies, manage content restrictions, modify media metadata, and more:
Window Automation & Tags
From a single account, mpx enables you to automate the management of availability windows and control where your media is consumed. It also lets you define when media is available and when it expires, in combination with availability tags. Availability tags allow you to mark an availability window for a specific region or device (e.g., iPad). mpx evaluates these windows and tags to automatically select the appropriate content for each feed and device. For example, if you are syndicating content via an Apple TV, you may offer rights-restricted content for the first 30 days of availability and unrestricted access afterwards (the second window of availability). In mpx, you can label the different availability windows as “restricted” and “unrestricted.” In the Apple TV scenario above, when “restricted” content is published in the first availability window, it requires user authentication for a period of time and then expires. Once it expires, the second availability window begins and the “unrestricted” content is published and freely viewable.
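As a rough illustration of the window-and-tag evaluation described above, the selection logic might look like the sketch below. The class and function names are hypothetical stand-ins, not mpx's actual API:

```python
from datetime import datetime, timedelta

# Hypothetical model of an availability window carrying a set of tags
# (e.g., a device such as "appletv" plus "restricted"/"unrestricted").
class Window:
    def __init__(self, start, end, tags):
        self.start, self.end, self.tags = start, end, set(tags)

def active_window(windows, now, required_tags):
    """Return the first window covering `now` whose tags include
    all the required tags (for example, a region or device)."""
    for w in windows:
        if w.start <= now < w.end and w.tags >= set(required_tags):
            return w
    return None

# The Apple TV example: 30 days of "restricted" content, then "unrestricted".
premiere = datetime(2015, 7, 1)
windows = [
    Window(premiere, premiere + timedelta(days=30), {"appletv", "restricted"}),
    Window(premiere + timedelta(days=30), datetime.max, {"appletv", "unrestricted"}),
]
```

Evaluating `active_window(windows, some_date, {"appletv"})` then yields the restricted window for the first 30 days and the unrestricted one afterwards, mirroring the scenario described above.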
Managing Complexity
When managing multiple availability windows, validation rules in mpx give you the ability to link policies and products to availability windows, modify media metadata, and link an action to a custom field. As illustrated by the following chart, media can be packaged in different ways (e.g., subscription packages, product bundles, discount periods, etc.) based on different availability periods before expiration.
Common scenarios for setting validation rules against window availability may include:
Packaging content for electronic sell through
Modifying metadata
Associating different ad policies with media availability windows (e.g., Ad Policy 1 is associated with the first availability window, Ad Policy 2 with the second availability window, etc.)
Enforcing content restrictions for different availability windows (exclusive content or TV Everywhere)
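The scenarios above can be sketched as a small validation pass over a set of windows. The field names (`start`, `end`, `ad_policy`) are illustrative, not mpx's actual schema:

```python
def validate_windows(windows):
    """Check that each availability window has an ad policy attached and
    that windows don't overlap. Returns a list of problems found."""
    problems = []
    ordered = sorted(windows, key=lambda w: w["start"])
    for i, w in enumerate(ordered):
        if not w.get("ad_policy"):
            problems.append(w["name"] + ": no ad policy")
        if i and ordered[i - 1]["end"] > w["start"]:
            problems.append(w["name"] + ": overlaps previous window")
    return problems

# Two sequential windows (days since premiere), each with its own ad policy.
windows = [
    {"name": "first window",  "start": 0,  "end": 30, "ad_policy": "Ad Policy 1"},
    {"name": "second window", "start": 30, "end": 90, "ad_policy": "Ad Policy 2"},
]
```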
mpx’s multiple availability windows combined with validation rules provide a powerful and flexible interface for working with media. As inventory grows, policies and packaging change, and new monetization paths emerge. mpx lets you maintain control over how your media is best consumed, modified, and maintained.
The Outlook for Programmatic Advertising
Programmatic advertising on television is rapidly moving beyond the “proof of concept” stage to a viable reality. In the first quarter of 2015, programmatic ad buys were placed during several prime time slots, including ESPN’s SportsCenter,[i] the Super Bowl[ii], and the Academy Awards[iii]. And at the recent 2015 Upfronts, where TV networks pitch their fall programming to the ad community, Turner and NBC were among the networks discussing plans to make portions of their prime-time inventory available via data-enabled and other programmatic means.[iv]
How it Works
In programmatic advertising, spots are sold based on the intended audience, size of that audience, and supply and demand of a certain program. The key is that this typically happens much closer to air time than traditional upfront purchases and allows buyers to pay a more market-driven price for their avail.
Another way to think about how programmatic influences the price of an ad is to look at how airlines sell tickets. Airlines use sophisticated yield management practices and detailed data to help determine how many seats to sell on a given flight and at what price.[v] While the price is still ultimately set by the airline, leveraging big data makes it easier to determine the best price based upon predictable consumption patterns, such as the demand for traveling to a particular destination from a certain city at a particular time of day.
One of the greatest concerns has been the idea that selling the highly valuable TV ad avails would mirror the type of commoditization that occurs when buying banner ads and other forms of digital media, particularly in the digital display ad market where supply greatly exceeds demand. However, sellers are never removed from the process of approving how much they receive for the inventory they sell via various kinds of programmatic methods.
In addition to taking market factors into account, programmatic uses “big data” modeling to predict an individual viewer’s engagement with an ad. The influence of data-enabled advertising on the TV ad buying process requires increased automation in transactions and enhanced research capabilities, which increase efficiency and transparency in advertising.
Linear Television Media Buys
Despite the growth of digital advertising, linear TV has distinct advantages and provides tremendous value to advertisers. Linear television provides a significant scale of viewers and viewing time combined with unparalleled quality of content, distribution, and now of data for research and targeting purposes.
Linear television networks are quickly adopting programmatic advertising capabilities, such as data, automation and dynamic ad insertion (DAI), that will allow them to provide the same type of predictability as digital buys. This is creating an opportunity for advertisers to integrate the higher levels of reach and audience engagement offered by linear TV into their digital media buys on multiple platforms. For example, as many as 90 television networks are now delivering their programming on an average of five media devices, according to CTAM.[vi]
Canoe, an advertising technology company focused on delivering dynamically inserted advertising to video on demand programming, reported it served up 2.56 billion ads in the first quarter of 2015. At this rate, Canoe is on pace for approximately 10 billion viewed ad impressions in 2015—a dramatic increase over the 6.3 billion viewed ad impressions in 2014.[vii] These advances build upon earlier moves, such as Nielsen’s “C3” cross-platform audience ratings, to provide advertisers with the opportunity to insert ads and measure the engagement levels through archived video content as part of their integrated media buys.
The Role of Ad Delivery
As linear television advertising continues to evolve, ad management platforms will play a critical role in supporting the rise of programmatic ad buying. The increasingly automated processes used in programmatic ad buying are creating opportunities for media providers to hold portions of their inventory for media buys that occur closer to real time. Similarly, media buyers can make decisions about where they place an ad and which creative they will use.[viii]
Taking advantage of these opportunities will require ad delivery capabilities that can support faster turn-around times for creative approvals and centralize ready-to-air spots for fiber-fast delivery to designated TV, radio, online and VOD media destinations. With all of these pieces in place, programmatic TV is no longer a question of why. It is a matter of when.
[i] Programmatic TV Following Its Own Path, MediaPost, May 19, 2015
[ii] Oreo and Ritz Mark First Super Bowl Ads to Be Purchased Programmatically, Adweek, January 30, 2015
[iii] Exclusive: First Oscar Spots Bought Via Programmatic, Broadcasting & Cable, February 20, 2015
[iv] Guess what Big Media pitched the ad guys, CNBC, May 15, 2015
[v] Yield Management In The Airline Industry, By Frederic Voneche, ResearchGate, February 28, 2005
[vi] TV Everywhere Industry Resource Center, CTAM, 2015
[vii] Canoe Cruises to 2.56B VOD Ad Impressions in Q1, Multichannel News, April 23, 2015
[viii] Programmatic Ad Buying: Coming to a TV Near You, Adweek, January 22, 2015
Monetize C3/C7 and Maximize the Value of Your VOD Content
Viewers are consuming Video-on-Demand (VOD) content at an increasingly rapid pace, which has created monetization opportunities for pay TV operators. However, making the most of these opportunities can present challenges. Read on to find out what C3/C7 is and how mpx can help you maximize the value of your VOD content library:
What is C3/C7?
Several years ago, Nielsen Ratings (the primary measurement of a TV program’s success in the U.S. and Canada) expanded its measurement window to include the first three and seven days, respectively, beyond a program’s initial live broadcast. Called “C3/C7,” this longer window greatly increased the opportunity for pay TV operators to monetize via advertising. However, the workflow for delivering effective advertising for views during and after C3/C7 has introduced a number of challenges.
Often, programmers want to make TV shows immediately available after their live broadcast as a VOD asset. To support the C3/C7 window, the full ad load is retained and uploaded into their Video Management System (VMS). After the C3/C7 window has expired, the ads are no longer measured and are often not relevant to viewers. To replace those stale ads with more targeted ones, programmers have to archive the original asset, upload a new TV show asset without ads, and employ ad insertion to insert fresh ads. Setting up, executing, and managing this advertising workflow is a time-consuming and costly process.
How does mpx simplify this advertising workflow?
mpx simplifies your advertising workflow with dynamic ad insertion (DAI), which allows a programmer to replace any ad in the original program without needing to replace the original as-aired asset. During the C3/C7 window, mpx will present any ads as originally broadcast, and will prevent users from skipping over those ads. Once the C3/C7 window closes, mpx takes the in and out points of each ad break and dynamically inserts new ads in place of existing ads. mpx is also flexible enough to either remove all ads or selectively replace individual ads. This DAI functionality allows you to monetize content outside of the C3/C7 window with different ads that are more targeted/relevant to viewers.
mpx also lets programmers seamlessly switch between server- and client-side ad insertion. Because mpx intelligently overlays the existing ad load of an asset with a new ad from your ad network, programmers don’t have to manage the specifics of the asset’s ad spots. By knowing the in and out points of each ad break, mpx knows when to restart the program after dynamically inserted ads. Viewers can use a player’s fast forward and rewind controls, but not during the ads. Further, mpx’s server-side ad insertion solution ensures that the ad breaks survive any ad blockers that may have been installed on a viewer’s device.
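The C3/C7 decision described above can be summarized in a few lines of pseudologic: inside the window, serve the as-aired ads unskippably; outside it, fill each break's in/out points with freshly decisioned ads. This is a simplified sketch with hypothetical names, not mpx's implementation:

```python
def build_ad_plan(days_since_air, ad_breaks, window_days=3, pick_ad=None):
    """ad_breaks: list of (in_point, out_point) pairs, in seconds, marking
    each ad break in the as-aired asset. window_days is 3 for C3, 7 for C7."""
    if days_since_air <= window_days:
        # Inside C3/C7: present ads exactly as originally broadcast,
        # and prevent viewers from skipping over them.
        return [{"break": b, "source": "as-aired", "skippable": False}
                for b in ad_breaks]
    # After the window closes: dynamically insert a fresh ad per break.
    return [{"break": b, "source": pick_ad(b), "skippable": False}
            for b in ad_breaks]

breaks = [(600, 690), (1500, 1620)]  # two breaks with in/out points in seconds
plan = build_ad_plan(5, breaks, window_days=3, pick_ad=lambda b: "targeted-ad")
```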
thePlatform’s Simon McGrath Named Top VOD Professional
We are proud to announce that our General Manager EMEA, Simon McGrath, was recently named one of the United Kingdom’s top 50 most influential people working in Video-on-Demand (VOD) by VODProfessional. According to this “trade website for people who work in VOD,” the site received over 100 nominations for the designation and extensively reviewed each one via a three-judge panel.
Congratulations Simon!
Content Distribution Technologies Integrate to Create a New Paradigm
In the past few years, content distribution has become increasingly complicated as assets need to be prepared, managed and delivered for both broadcast and digital workflows. In order to accommodate all platforms, a single asset needs to be acquired, moved, stored, converted and transported using multiple files, technologies and expertise to ensure that it can be seen by viewers, when and where they want it. These disparate workflows are complex, causing frustration at a minimum and significant costs or duplication of efforts in the worst cases.
As online video consumption moves into the mainstream and the technologies are proven, content owners are looking for ways to streamline the workflow from asset creation and acquisition to distribution and playout at its destination.
While ideal, this panacea remains elusive. Many content owners have either set out to build their own unified workflow or settled for a piecemeal solution composed of a network of partners. Some specialize in the preparation and transmission of linear or on demand content. Others have made their mark in the digital space, focusing on online content management, streaming or player capabilities. Yet others offer a network on which to move all of the content. And, though the market is crowded with partners promising an end-to-end solution, very few offer a complete approach to the complexities of multiplatform content distribution.
Rather than cobbling together a broadcast and digital solution, consider a content distribution partner who can offer a holistic solution that includes:
Seamless acquisition capabilities – The first step in managing multiple file types and delivery formats is getting all of them in the same place. This may seem obvious, but can be a challenge. Before selecting a partner, be sure they can accept your assets in any format you have them and let them handle conversion. This will save you time and money.
Connections to diverse destinations – No matter how you manage your assets, they cannot get consumed until they reach a distribution point. Cable operators, VOD systems, broadcast stations, radio sites and online channels are only a few potential destinations. To simplify your workflow, select partners with maximum reach amongst your target audience. The more places a single partner can get your content, the greater economy of scale you receive.
Infrastructure – Some partners have a large dish farm, others a vast network of leased fiber, and others still have robust data centers or content management capabilities. Imagine how much simpler distribution would be if you could access all of this in one place. It would be like going to a single station where you could choose to take a plane, train, car or hovercraft depending on what was going to get you to your destination the fastest.
Innovation – Technology is changing at a pace unprecedented in the history of broadcast television, cable or online content distribution. The platforms built today may not sustain the business models of tomorrow. Every time you switch there are costs in both money and time to reestablish your library or business requirements. Look for partners with the platforms to support you today and the insights to help you succeed tomorrow.
In today’s content distribution landscape, there are more players and more variables than ever before. Finding a way to navigate the complexities can be overwhelming. Building your own solution is a gargantuan undertaking; identifying a partner, only slightly less daunting.
Managing Your Physical Ad Assets
Physical asset management is one of the most frequently overlooked aspects of producing, delivering, and distributing video ads. This is particularly the case with advertisers and agencies that have produced a high volume of ads. For those that have changed distributors, the problem is compounded by the fact that assets are now in multiple locations. And no matter where they are, you want them protected, archived, easily accessible, consolidated, and most importantly, under your control.
Consolidating physical assets is no easy task, and it loses priority simply because it is a task few take a shine to. The longer your assets are neglected, the messier the task becomes.
But it doesn’t have to be.
In fact, it does not even need to be your task alone. Full-service ad delivery vendors will help facilitate the transfer of your physical assets—wherever they may be—to a single storage and archiving facility. As an added bonus, this process has minimal impact on you, leaving you with time to focus on business-critical tasks.
The Benefits of Consolidated Storage
Advertisers are usually faced with the prospect of moving assets when they are in the process of choosing or have already chosen a new distribution partner. But there are certain advantages in consolidating physical assets even if you are not changing distribution vendors. Selecting a storage vendor that is independent of your ad delivery partner streamlines your distribution now and in the future. Consequently, you gain flexibility in selecting your ad delivery vendor without concern for moving your assets again.
The Costs and Your Responsibilities
Before exploring costs, note that consolidating physical assets is the perfect time to purge outdated and irrelevant assets. Not only does this help organize your library, it also helps lower overall costs. Two types of costs, both of which are often negotiable, are typically associated with asset consolidation:
The storage cost at the new storage vendor (often a per-item, per-month cost).
The transfer costs from the ad delivery vendor to move the assets from their current location(s).
While an ad delivery vendor will help facilitate the transfer project, you will need to provide logs and inventories on all current assets and their locations. Delivery vendors can also work with current storage locations to negotiate the lowest possible transfer costs for you. However, you will have to work with your new storage vendor to finalize these costs.
Considering how much of the process is handled behind the scenes, transferring and consolidating physical assets is not as burdensome as it may first appear. And once completed, you stand to gain much in the way of simplicity and choice. Assets are in one location, which gives you the flexibility to choose and change distribution partners. But most importantly, you will be in control of your physical assets.
Is HEVC the Next Step In The Evolution of Cable Network Architecture?
When it made its debut in the mid-1990s, digital video technology represented an opportunity for cable system operators and other MVPDs to deliver more content to their customers without rebuilding their physical plant. Could HEVC, the H.265 digital encoding standard, represent that same quantum leap for plant-based video architecture today?
Some industry experts would say the answer to that question is “yes.” MPEG-2 digital video encoding allowed an MVPD to deliver 12 standard definition digital video channels or two to three HDTV channels using the same bandwidth as one 6MHz analog channel. H.264/MPEG-4 doubled that ratio. The new H.265 HEVC standard provides an additional 50% improvement over current platforms, delivering 24 SD channels or six HD channels in the same 6 MHz spectrum.[i]
With many MVPDs still using MPEG-2 headend architecture, HEVC could provide an opportunity to take advantage of a compression standard that’s also being used for delivering IP video to connected media devices. Additionally, HEVC can support 4K, or “Ultra HD” video, as well as HDR (high dynamic range), an emerging video format that is capturing the interest of video producers as a means of achieving contrast and brightness closer to what the human eye perceives.[ii]
Online Video and HEVC
While much of the cable industry’s discussion about HEVC has centered on its role in supporting Ultra HD, online video distributors and other broadband services providers are embracing HEVC as a means to lower their bandwidth costs. [iii] With studies showing as many as a billion or more viewing devices are capable of HEVC playback, experts say adoption of the technology will allow a pure OTT provider to deliver 720p HD content over streams that were previously limited to 480p SD content.[iv] Taking advantage of the larger base of HEVC-enabled devices, the OTT community has also been among the first to market Ultra HD content, including last year’s product launches from Netflix and Amazon.[v]
HEVC and the Migration to All-IP
Considerations in adopting Ultra HD include the costs of equipment at each headend, lack of Ultra HD sets in market and long-term plans for a full-IP launch. However, some analysts are saying it’s not too soon to consider incorporating HEVC technology as part of the roadmap toward an all-IP network architecture. For example, an analysis conducted by IBB Consulting finds that the benefits of early HEVC deployments can include: a.) Reduced backbone and regional IP network demands; b.) Minimized cable modem termination systems (CMTS) and access network bandwidth requirements; and c.) Increased quality of experience on all screens.[vi]
The recent INTX show demonstrated that the supplier community is introducing technology to support HEVC. In addition to the availability of HEVC HD encoders, a growing number of industry suppliers are also offering multi-format integrated receiver-decoders (IRDs) for supporting MPEG-2, MPEG-4 and HEVC, as well as IPTV and Ultra HD set-top boxes. Cable MSOs have also developed video streaming apps that will work with connected Ultra HD televisions.[vii]
[i] Don’t wait for 4K: The case for deploying HEVC today, Erica Robinson, managing consultant, IBB Consulting, for CED Magazine, September 2014
[ii] HDR: welcome to the next big shift in home entertainment, TechRadar, April 2015
[iii] What Is HEVC (H.265)?, Streaming Media Magazine, February 14, 2014
[iv] HEVC: Raising All Resolution Boats?,
[v] Netflix Shifts 4K Video to Premium Tier, Light Reading, October 13, 2014
[vi] Don’t wait for 4K: The case for deploying HEVC today (see endnote #1)
[vii] Comcast Tees Up 4K Box, Bigger 4K Service, Multichannel News, May 6, 2015
The Challenges of Ad Insertion Across Multiple Channels
The methods of ad insertion vary depending on the channel, whether it be linear broadcast, on demand, or online. The expanded and varied reach between the channels and demographic-based advertising is beneficial to agencies and advertisers. However, having discrete and disparate platforms presents certain challenges for programmers and traffic managers.
An overview of the methods below illustrates the nuances and challenges of inserting ads into separate channels:
For linear broadcast, ads are delivered to an origination point or a broadcaster, where they are loaded into a play-to-air server. Master Control then schedules the ad, adding it to the rotation and inserting it into commercial breaks based on the traffic instructions from the agency or advertiser.
Video on demand assets are captured with the original ad load baked into the asset, which contains Nielsen metadata for tracking purposes. These VOD assets remain in this state, and only this state, during the C3 period: the first three days after the asset was initially distributed, including the original airing.
On the fourth day (known as D4), after the C3 period ends, the VOD asset is stripped of its ad load and markers are applied to the asset. Typically a programmer or network transcodes the asset according to the destination’s format requirements. Ads are then sent to the edge at the MVPD’s or MSO’s plants, where distributors capable of Dynamic Ad Insertion (DAI) can insert ads into VOD assets dynamically. Decision engines linked to a campaign manager insert the ads into a VOD asset to fulfill the instructions and impression count the agency or advertiser purchased. For carriers that cannot dynamically insert ads, D4 assets will have ads “baked in” and will be delivered as-is.
Online ad insertion operates in much the same way as VOD DAI, where ads are transcoded into the necessary format and held in ad servers within a content delivery network (CDN). A decision engine, in conjunction with a campaign manager, will insert ads in an online stream to fulfill the impression count purchased by the advertiser.
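At the center of both the VOD and online workflows above is a decision engine fulfilling a purchased impression count. A toy version of that loop, with purely illustrative campaign fields, might look like:

```python
def decide(campaigns):
    """Pick an ad from the first campaign that still has impressions
    remaining, decrementing its goal; fall back to a house ad when
    every purchased impression count has been fulfilled."""
    for c in campaigns:
        if c["remaining"] > 0:
            c["remaining"] -= 1
            return c["ad"]
    return "house-ad"

campaigns = [{"ad": "spot-A", "remaining": 2},
             {"ad": "spot-B", "remaining": 1}]
# Four viewers request the stream; each request triggers one decision.
served = [decide(campaigns) for _ in range(4)]
```

Real decision engines weigh targeting data, pacing, and pricing rather than simple priority order, but the bookkeeping against an impression goal works in this spirit.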
The most obvious hurdles are the separate channels into which ads must be placed for media buys that span multiple channels. Additionally, destinations within each channel require various file formats. This often requires agencies and advertisers to deliver the master high-resolution file or multiple formats to various vendors in order to distribute the ad over the appropriate avenues, and thus to the consumer. In addition to the administrative burden, the fragmentation in the workflow leads to file management issues and potential for mistakes or missed deliveries.
While multichannel advertising does present certain workflow challenges, advertisers still stand to gain more from a cross-channel ad campaign. As advertisers and agencies consider the details of a media buy, it is important to keep these channels and their unique advertising advantages in mind. Multichannel advertising provides several avenues that increase the life span of the ad and maximize ad spend.
Much like the complexities of content distribution, navigating the advertising landscape requires careful consideration of not only the requirements of each channel, but also the growing number of devices on which consumers are viewing content. This demand adds additional support and workflows along the entire advertising supply chain and increases the time required to process and deliver the assets. Finding a solution that can handle the conversion and placement of these various ad formats can simplify the entire process, reduce costs and shorten an ad’s time to market.
The HD Effect in Ad Distribution
Statistics around HD and SD TV spot delivery reveal an interesting story, with real impact for agencies and advertisers. But first, let’s consider the data. At Comcast AdDelivery, the percentage of HD-only destinations has increased from 3% to 6% since the beginning of 2014. In the same timeframe, the percentage of SD-only destinations decreased from 15% to 8%. The remainder, at approximately 85%, is comprised of hybrid sites that accept and air both HD and SD spots.
In addition, HD spot uploads grew almost 140% from 2013 to 2014. When coupled with the evidence that HD spots outperform SD spots in audience retention by about 18%*, it becomes plausible that HD sites and HD spots will soon dominate as the industry standard.
There is clearly a trend towards HD advertising, but the delivery landscape still has one foot planted in SD workflows and delivery. Herein lies the problem: how to coordinate your spot to satisfy as many destination requirements as possible. Advertisers and agencies are left with a few choices. You can produce HD-only spots and distribute to HD-only and hybrid sites, or conversely, produce SD-only spots and distribute to SD-only and hybrid sites. The latter is a less-than-optimal solution, given the apparent trend towards HD advertising. Another common option, albeit the most costly, is to produce and deliver HD and SD spots.
While time, cost, reach, and quality are of primary concern in the advertising industry, one or more may suffer from inefficiencies with the aforementioned production options. HD- or SD-only limits your reach, but SD also has the additional issue of inferior quality compared to HD. Providing both formats adds time and cost to your production, as well as file management inefficiencies. But there is another option that can save time, the hassle of coordinating multiple resources, and ultimately money with your spot delivery.
Some advertising distribution platforms are capable of automatically down-converting your HD spot to SD when the spot needs to be delivered to SD-only destinations. This additional service, which can typically be enabled in your account or through your account manager, simplifies the ad distribution process using an auto convert feature that seamlessly prepares and delivers your spot in the format (or formats) required by receive sites. Secondary to this benefit is that only one mezzanine file is necessary, which cuts production time and costs. Finally, knowing that you can provide HD and SD spots on-the-fly from your delivery service simplifies the media buying and order entry process.
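The routing logic behind such an auto-convert feature can be sketched in a few lines. The destination categories come from the statistics above; the function and return values are illustrative, not any particular platform's API:

```python
def deliverable(spot_format, destination):
    """Decide what to deliver for a spot, given the destination type:
    'hd_only', 'sd_only', or 'hybrid' (accepts both HD and SD)."""
    if destination == "sd_only":
        # Auto-convert: down-convert an HD mezzanine when SD is required.
        return "SD (down-converted)" if spot_format == "HD" else "SD"
    if destination == "hd_only":
        # An SD spot can't be up-converted to true HD, so it can't be placed.
        return "HD" if spot_format == "HD" else None
    return spot_format  # hybrid sites take the spot as produced
```

Under this model a single HD mezzanine reaches every destination type, which is the benefit the post describes: one production master, with the platform handling format conversion per receive site.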
* http://www.hawthornedirect.com/tim_hawthorne_articles/?post=we_want_our_hdtv_ads
Managing bandwidth growth
Recent estimates put Internet usage at 3.4 billion users by 2016.* When combined with increased demand from streaming video, software downloads, 4K content delivery and application usage, the strain on a cable operator’s network can be substantial. Because Internet services are generally a critical portion of the triple play, meeting this demand, managing the growth of your network, and meeting the expectations of your subscribers can be vital to the success of your business.
No matter what method you employ to manage bandwidth demand, there will be an investment outlay. Leasing fiber and laying your own lines are the two most common methods used to increase bandwidth. The latter will obviously require a substantial capital investment and takes longer to achieve efficiencies, but you ensure direct connectivity in the long run. The former, and more common, method gives you a much faster way to expand your bandwidth and satisfy customer demand; however, it also comes with variable costs, back-haul, and premiums for local traffic. A third, more effective option offers the benefits of both.
The most effective way to improve your network, manage bandwidth and account for growth is to connect directly to a network backbone. This requires you to deploy a dedicated fiber connection to one of the 20 interconnect stations throughout the nation. The benefits of this deployment are substantial in terms of cost savings, scalability, and customer satisfaction. A direct connection to the backbone enables operators to optimize bandwidth usage and scale services as your subscriber base grows and demand increases. In addition, circumventing backhaul reduces transit costs while increasing capacity for newer IP technologies, such as VoIP and TVE. You’re able to offer more Internet-based solutions and products, and most importantly, the speed and stability of your network improve the subscriber experience and satisfaction, which leads to greater customer retention and attraction.
Projections show a marked increase in Internet usage and demand in the near future, with no signs of slowing. It is never too late to take stock of your network’s capability and capacity for growth and to explore options that keep you competitive and align with your business goals. The investment in a dedicated connection pays off in time and lays the groundwork for growth and viability.
*http://www.itbusinessedge.com/slideshows/show.aspx?c=96015
Simplifying your content management and distribution - the case for a single source file
Since 2010, “TV Everywhere” has become a technology, a process, a goal and a destination. We are all trying to get “there,” to the ubiquitous place where one file can be viewed on any device, authenticated, without buffering and monetizable through advertising. Sounds good, right? Unfortunately, we all know this is not as simple as it sounds – for distributors, for content providers, for equipment vendors, or for service providers.
One significant pain point on the content development and distribution side has been how to create, manage, and send all of the source files required to enable playback on all operating systems and devices at different resolutions. Most content owners are used to providing a single title in formats such as HD, SD, MPEG-2 or MPEG-4 to be ingested on multiple platforms and distributed to multiple destinations via satellite and fiber. As you know, handling this number of files requires labor, storage and bandwidth, often involves coordination with several teams inside your organization, and can be costly, not to mention the challenges presented by transcoding and maintaining quality of experience (QoE). Now, imagine a world where you, as a content owner, could submit a single file from which all of your other versions and formats are derived, and you don’t have to do a thing.
While it may seem like a dream, this type of single source content generation is already possible. IP encoding and delivery simplifies file transcoding, creation and distribution using a single contribution-quality asset. Often, these contribution-grade files, optimally 50 Mbps for an MPEG-2 transport stream and 25 Mbps for an AVC H.264 transport stream, come in the form of an on demand asset or a linear feed received via a direct fiber connection. The quality of these source titles allows them to be transcoded into a variety of files that can be used for linear, on demand and streaming applications.
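At those rates, a quick back-of-the-envelope calculation shows why contribution-grade assets demand meaningful storage and bandwidth. This is a minimal sketch in Python; the one-hour duration is purely illustrative:

```python
def asset_size_gb(bitrate_mbps, duration_minutes):
    """Approximate size of a constant-bitrate transport stream file."""
    bits = bitrate_mbps * 1_000_000 * duration_minutes * 60
    return bits / 8 / 1_000_000_000  # bits -> bytes -> decimal GB

# A one-hour contribution-grade MPEG-2 TS at 50 Mbps:
print(round(asset_size_gb(50, 60), 1))  # 22.5 (GB)
```

By the same arithmetic, the 25 Mbps AVC source for the same hour comes in at roughly half that, which is part of the appeal of H.264 contribution feeds.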
This approach allows a content provider to deliver native video assets to one location that can ingest, prepare, store, transform, QC and deliver each file in the appropriate format and to the correct location. Possible outputs include Adaptive Bit Rate (ABR) MPEG-4 packages for delivery to TVE systems; C3 files for VOD; MPEG-2 and MPEG-4 to the set-top-box (STB); online streaming; and future technologies such as H.265 encoding for ultra-HD. In addition, the single asset solution facilitates the seamless transfer of metadata and ad tagging using one derivative file.
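The fan-out from one native asset to many derivative outputs can be sketched as a simple job expansion. The profile names, codecs, and bitrates below are invented for illustration and do not describe any actual product configuration:

```python
# Hypothetical output profiles keyed by delivery target (illustrative only).
PROFILES = {
    "abr_tve": {"codec": "h264",  "renditions_kbps": [400, 1200, 3500]},
    "vod_c3":  {"codec": "mpeg2", "renditions_kbps": [3750]},
    "stb_hd":  {"codec": "h264",  "renditions_kbps": [8000]},
}

def derive_jobs(source_path):
    """Expand one contribution-quality source into a transcode job per output."""
    jobs = []
    for target, profile in PROFILES.items():
        for kbps in profile["renditions_kbps"]:
            jobs.append({"source": source_path, "target": target,
                         "codec": profile["codec"], "bitrate_kbps": kbps})
    return jobs

jobs = derive_jobs("mezzanine/title_0001.ts")
print(len(jobs))  # 5 jobs derived from a single source file
```

The point of the sketch is the shape of the workflow: the content owner supplies one path, and the facility's profile table, not the content owner, decides how many derivatives exist.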
Another advantage of single asset solutions is the ability to control the QoE through a high-end transcoding and monitoring system. Single asset solutions offer an alternative to multiple ABR streams for TVE, which can be fraught with imperfections that affect quality and the viewer experience. As our world becomes more IP-based and dependent upon higher video resolution formats, QoE will become even more important to brand loyalty and revenue generation.
CE trends drive market demand for outsourcing video origination
It doesn’t seem that long ago that creating video content and originating it were closely intertwined, with few processes standing between a TV show and its transmission via broadcast or satellite to viewers. Today, things are far from simple or straightforward, and there are no signs that they will become less complex any time soon.
These complexities and challenges are leading many content providers to invest more of their limited resources into future-proofing their video libraries and then outsourcing the origination of their video programming to industry suppliers.
Here is a look at the market trends driving those decisions:
Video formatting – Market researchers, like Strategy Analytics, are projecting that as many as 50% of U.S. TV households will own 4K “UltraHD” TV sets within the next five years.* A growing number of content providers are already sourcing their content in 4K as a way to future-proof their libraries.
Of course, because it will take at least five years to get to 50% penetration of UltraHD, we will need to support HD and lower resolutions well into the next decade. By outsourcing video content origination, content creators can manage all of their assets in one “native” format which is then handed off to a fully equipped origination services supplier, helping to ensure the HD, IP and other “derivative” formats of the original video asset will have the highest level of video quality.
TVE and multi-platform content delivery – Content providers must also grapple with the need to reach audiences across a widening range of content delivery networks and viewing devices. According to Nielsen’s latest cross-platform report, the average American consumes nearly 60 hours of content each week across platforms including TV, radio, online, and mobile. A deeper dive into those numbers shows that TV users skew older while smart phone video users skew younger and richer. Overall, the amount of time American adults spent using smart phones in the fourth quarter nearly doubled from a year ago, but remained a small fraction of the hours per day they spent watching live television, Nielsen reported.**
As this data helps to illustrate, a TV Everywhere (TVE) approach toward content delivery is becoming increasingly important in order to realize subscription video and advertising revenue goals. This is particularly true when we consider the fact that while the viewing device can change between age groups, their interest in content doesn’t vary all that much. For example, a recent study by the CEA and NATPE found TV shows remain popular with all age demographics, but older Americans watch them on linear TV, Gen Xers prefer using time-shifting tools like VOD and DVRs, and Millennials will stream many of the same TV shows to their smart phones.***
The accelerating pace of technology obsolescence – Another factor driving the value of industry-based solutions for video content origination is the rapid pace with which a technology investment can become obsolete. We can see it reflected in such technology decisions as which type of video encoders or content servers to purchase as well as in transmission gear expenditures.
Keeping up with the accelerated pace of technology can drain a content company’s limited financial and human resources without a “future-proof” guarantee. In contrast, taking a “lease-versus buy” approach toward technology investments, particularly the ones that don’t directly build value for the enterprise, helps to optimize the value of those purchases.
Content origination facilities allow clients to pool their individual technology investments into a shared services solution that can absorb technology upgrades with greater cost efficiency than if they were operating their own facilities.
In the past, this was particularly valuable to content providers during their startup phase, before they built out their distributor base. In today’s video technology environment, it is becoming a valuable tool for content organizations of all sizes as a means of limiting their risk of purchasing technology that may have a shorter-than-expected shelf life.
Finally, it is also important for an organization to seek out a supplier that can meet all of their content origination needs, including QA and DR/business continuity, from one facility. Delivering your content to multiple platforms shouldn’t require multiple (origination) facilities. In addition to being a more cost-effective decision, a full-service solution will help to assure you are giving today’s media-savvy video audience the quality of experience that deserves their loyalty.
*www.prnewswire.com/news-releases/nearly-50-of-us-homes-will-own-a-4k-tv-by-2020-says-strategy-analytics-300044177.html
**www.broadcastingcable.com/blog/currency/nielsen-cross-platform-report-looks-phones/129595
***www.mediapost.com/publications/article/241445/millennials-big-on-streaming-but-still-watch-tra.html
How important is service in ad distribution?
As the technology and business models for ad distribution continue to evolve, there’s a growing trend toward self-service. The convenience makes sense; you can upload your assets, manage your orders and traffic, and review spots any time, from anywhere. It’s convenient for you, the advertiser, and for the distribution vendor. What it lacks, though, is personal service, the opportunity for hands-on troubleshooting, and the sense of accountability that comes with a full-service approach. That’s why many advertisers are looking for more than a distribution vendor. They’re looking for a distribution partner.
With a full-service distribution partner, you will be assigned an Account Manager who orchestrates all of the moving parts, from post house coordination and attending to any final production needs, to order insertion and understanding the billing details. Full-service partners also understand the nuances and complexity of order insertion and can interpret and implement your traffic instructions. For example, given its proximity to the U.S., many advertisers wish to distribute ads in the Canadian market. A trusted partner will be able to navigate the crossover in a timely manner, whereas self-service models may be limited in this capacity.
Personal service is always valuable, but especially in situations where deadlines are tight and time is of the essence. A distribution partner works in tandem with you and your needs. For instance, a full-service partner may ramp up staffing to accommodate your needs during a significant ad campaign that coordinates with a new product launch. The extra hands can help ensure that deadlines are met without sacrificing quality. Even under imposing deadlines and high spot volume, a reliable partner will see your order to the end accurately and on time.
Distribution partners that provide a personalized service model offer a distinct advantage over the self-service model simply because experienced people are handling the details. In contrast to a fully automated process, account managers can coordinate last-minute production needs, quickly manage the delivery of physical assets to outlets that require them, and offer solutions along the way that can cut costs and save time.
Ad distribution is the last critical step in the execution of a well-planned campaign. Given the time and expense involved in developing advertising strategy and creative, you don’t want to leave this stage to chance. Even as ad distribution continues to rely on technology and automation, advertisers still value personal relationships with their partner of choice. It is this personal touch in ad delivery that provides security in knowing that your content and traffic instructions are in capable, experienced hands.
Making the most of your voice service
Since the launch of Voice over IP (VoIP), most providers have offered MVPDs bundled voice products, meaning that they sell voice services as an attractive white label “turnkey” solution. This solution includes a voice network, telephone numbers and long distance. Providers then charge for the service on a per-subscriber basis. At the time, this “plug-n-pay” solution made sense because it naturally fit into their consumer product lineup, required minimal resources to support and maintain, and shortened their time to market. However, as voice service and technology improve and become more accessible to MVPDs of all sizes, many are unbundling their current service in an effort to control costs and improve their customers’ experience and satisfaction.
There are significant benefits to be gained by unbundling voice services. For example, implementing least-cost routing (LCR) enables MVPDs to source multiple providers to terminate a call at the lowest possible cost per minute automatically as the call is initiated. Owning and operating LCR technology will pay off in time, especially if a carrier’s subscriber base voice usage is substantial. In addition, the LCR model allows MVPDs to move from the per-subscriber pricing model to a market-based, competitively-priced per-minute model. This move has resulted in substantial cost savings and increased profit margins for many MVPDs.
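A minimal least-cost routing selection can be sketched as follows. The carriers, dialing prefixes, and per-minute rates are invented for illustration; a production LCR engine would also apply longest-prefix matching, quality scoring, and failover rather than price alone:

```python
# Illustrative per-minute termination rates by dialing prefix (made up).
RATE_DECKS = {
    "carrier_a": {"1212": 0.0062, "1416": 0.0110},
    "carrier_b": {"1212": 0.0058, "1416": 0.0125},
}

def least_cost_route(dialed_number, rate_decks):
    """Pick the provider offering the lowest rate for the dialed number."""
    best = None
    for carrier, deck in rate_decks.items():
        for prefix, rate in deck.items():
            if dialed_number.startswith(prefix):
                if best is None or rate < best[1]:
                    best = (carrier, rate)
    return best  # (carrier, rate_per_minute) or None if no prefix matches

print(least_cost_route("12125551234", RATE_DECKS))  # ('carrier_b', 0.0058)
```

Because the lookup runs as the call is initiated, every call automatically terminates on whichever provider is cheapest for that destination at that moment, which is exactly where the per-minute savings come from.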
Another advantage to unbundling is the ability to control—and improve—the customers’ experience when MVPDs assume network management. In the bundled approach, should a single provider experience an outage, the MVPD is left without any options to complete calls until the provider resolves the issue. MVPDs that have unbundled and manage their own network, again through LCR, have some level of redundancy by receiving service from a variety of providers, and thereby improving the network quality, availability and overall customer experience.
The benefits of unbundling voice services are compelling, but there are some factors for an MVPD to consider when planning the move. First, what are the potential savings based on their subscriber base and the number of minutes per month they service? Second, what resources (money and personnel) will be required to assume more responsibility for the technology? (For example, supporting G.711 and G.729 codecs or interconnectivity to other voice network technologies (TDM, SIP, Public IP and GigE) may require additional expertise or expense.) Finally, what is the payback timeframe for the up-front hardware costs to implement LCR?
When contemplating these considerations, keep in mind that many MVPDs have already found the cost benefit coupled with an improved customer experience to be two key advantages to an unbundled voice solution.
Today’s TV Requires a Strong Backend Video System
To sum up all the previous blog posts about today’s TV, media companies need to have the ability to improve their advertising, windowing, content discovery, and viewer relationships. Why? Because it is the new baseline in offering today’s TV. Few industries are evolving faster than pay TV, and a media company needs a backend video system that will help them meet this baseline and provide enough flexibility to eventually exceed it… today, tomorrow, or even ten years from now.
mpx has been used by the world’s leading media companies to power unique, compelling, and value-generating TV businesses for over a decade.
From media upload to distributing content to websites and all types of devices, there isn’t an aspect of online TV technology that thePlatform hasn’t touched. This experience, coupled with our service-level agreement of 99.99% availability for consumer-facing services, service-oriented architecture, open APIs, and 24×7 customer support, all make mpx an enterprise-class system that can be relied on.
As the industry’s leading back-end video publishing and management system, mpx can fully support today’s TV experience. Through one seamless system, media companies and pay TV operators can:
Use mpx Replay to deliver new forms of content;
Make their advertising more effective with thePlatform’s advertising policies and broad advertising partner network;
Easily test and adopt new business models with Advanced Commerce;
Help viewers quickly discover the content they want to watch the most with Ways to Watch;
Deliver a customized and more satisfying viewing experience with mpx Identity Service; and
Rest assured knowing that mpx is an enterprise-class system that will reliably work.
The growing demand for aligning ad delivery technology with marketing strategy
A study conducted last year by the CMO Council found a growing trend among marketers to ensure their technology decisions align properly with their company’s marketing strategy. “With an increasing number of technologies being added to the marketing toolkit, marketers are beginning to view technology as a key component of their overall strategy,” the report, entitled “Quantify How Well You Unify,” found.*
As the CMO Council report explains, “Marketers are challenged to improve the economics and value of random and disparate marketing cloud investments across multiple platforms and solution areas.”
How important is the decision on ad delivery technology to this objective? Very. “Digital customers now demand targeted, relevant and meaningful engagement at every turn. As more customers consider defection from brands because the information, content, services, promotions and engagements lack relevance and resonance, marketers must look to data-driven strategies that deliver the right message to the right customer at the right time—and in the right channel. Marketing must forge and reinforce the ties between technology, customer data and the customer experience,” the CMO Council reminds us.
This observation illustrates the role played by each technology solution involved in the creation, management, and delivery of an advertisement. Examining those interrelationships from the ad delivery standpoint underscores the role of five elements that link the consumer’s experience of an ad with an organization’s marketing strategy:
Insight – An ad delivery solutions provider needs to demonstrate both knowledge and understanding in order to align its solutions with an advertiser’s marketing strategy. Achieving this level of wisdom about an advertiser’s ad delivery needs requires both market experience and technical expertise.
Technical capability – This wisdom is necessary in order to create and implement the technology that is best suited to meet an advertiser’s needs. In addition to robust asset management and delivery capabilities, ad delivery solutions must integrate with the other partners in the marketing technology ecosystem, including the advertiser’s ad management systems, the content management technology used by its ad agencies and their creative and post houses, and the traffic management systems used by its media providers.
Timeliness – As the CMO study notes, serving the right ad to the right customer also depends upon timeliness. Ad delivery solutions need to align with the growing demand for the near real-time ad-buying decisions that encompass multiple media platforms, including “live” linear media and on-demand content.
Accuracy – A marketer’s ability to determine the effectiveness of an ad campaign depends upon solutions that help to ensure an ad was served where and when it was intended. This requires integration between the reporting solutions used by all collaborators in the media placement process.
Quality – When advertisers spend tens of thousands, if not hundreds of thousands, of dollars on the creative to adequately represent the image they wish to have with consumers, they cannot risk an ad delivery solution that fails to ensure the bandwidth necessary for delivering a high quality of experience (QoE). End-to-end video quality monitoring is essential to ensure every touch-point for marketing creative adheres to high quality standards and to provide assurance that the intended level of quality was maintained throughout the ad delivery process.
As the CMO study concludes, marketers who successfully manage and integrate technology can achieve measurable business and operational gains on their investments as well as better business upside. To accomplish these outcomes, the marketing technology partners a company engages must demonstrate that they understand its marketing strategy and can deliver solutions that align with it.
*https://www.cmocouncil.org
Demand for DAI growing across linear and VOD TV platforms
As Light Reading’s Alan Breznick observed last month, there are a number of signs that dynamic advertising insertion (DAI) is beginning to take off.*
A review of 2014 found the number of viewable VOD ad impressions increased dramatically during each quarter last year, jumping from 803 million in the first quarter to over 2.6 billion by the fourth quarter. Advertiser categories represented in 2014 DAI placements included the auto, QSR, financial services, consumer electronics, pharmaceutical, toy, shipping, hospitality, food, health & beauty, fitness, theatrical, retail, home improvement, beer & wine, spirits, and U.S. Armed Forces categories.**
The sharp uptick in the use of DAI technology illustrates its expanding role in enabling advertisers to insert their marketing campaigns into media placements available via archived VOD content. Rentrak reported 66% of broadcast primetime program viewing occurs after the third day of its original airing. The ability to insert timely ads across all episodes of a TV show represents “the opportunity to generate untold millions in additional advertising dollars for VOD.”***
Advertisers are using DAI to place ads on linear television as well as VOD content. This development illustrates DAI’s importance in supporting advertiser demand for what is referred to as “programmatic” ad buying. Essentially, programmatic buying represents a highly automated system that allows advertisers to make near real-time media buying decisions by using “big data” to help them project results for an ad avail and determine how much they should pay for it.
The industry is already beginning to respond to the demand for automated media buying. In January, ESPN became one of the first TV networks to sell part of its spot inventory via programmatic advertising.**** To illustrate how media providers like ESPN can actually benefit from programmatic selling, a recent posting by Lost Remote uses the example of selling an ad avail in the fourth quarter of the Super Bowl to the highest bidder at the end of the third quarter. As the posting observes, “ESPN certainly does not have an inventory problem. What they see, then, is the potential for programmatic advertising to command much higher prices on the open market – not lower prices.”*****
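The buy-side arithmetic behind that near real-time decision can be sketched in a few lines. This is a toy model, with the margin, impression projection, and per-impression value all invented for illustration:

```python
def max_bid(projected_impressions, value_per_impression, margin=0.20):
    """Cap the bid at the avail's projected value less a target margin."""
    projected_value = projected_impressions * value_per_impression
    return round(projected_value * (1 - margin), 2)

# An avail projected to reach 2M viewers, valued at $0.015 each:
print(max_bid(2_000_000, 0.015))  # 24000.0
```

The "big data" in programmatic buying feeds the two inputs here: projecting how many (and which) viewers an avail will reach, and estimating what each of those impressions is worth to the campaign.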
On the flip side of the inventory spectrum, utilizing unsold slots, or “remnant” inventory, to increase impressions can salvage campaigns that are under-producing over time.
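The pacing check that triggers a remnant buy is simple in principle: compare delivered impressions against where the campaign should be at this point in its flight. A minimal sketch, with the goal and delivery figures invented for illustration:

```python
def impressions_shortfall(goal, delivered, elapsed_fraction):
    """Impressions behind pace, i.e. what remnant inventory must make up."""
    expected_so_far = goal * elapsed_fraction
    return max(0, expected_so_far - delivered)

# Halfway through the flight, a 1M-impression campaign has delivered 400k:
print(impressions_shortfall(1_000_000, 400_000, 0.5))  # 100000.0
```

A campaign at or ahead of pace returns zero; anything positive is the deficit an ad server can route toward unsold slots before the flight ends.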
When we consider the processes required to deliver and insert that ad in such a short time frame, we appreciate the importance of DAI technology to the future of television advertising. In a “TV Everywhere” world, where ads will be delivered across multiple platforms for both linear and on-demand viewing, in near real-time, the ability to dynamically insert advertising will be fundamental to optimizing the value of each ad avail for content providers and advertisers alike.
*http://www.lightreading.com/video/multi-screen-video/ad-insertion-vendor-scores-patent-customers-/d/d-id/713816?itc=lrnewsletter_cabledaily
**http://www.canoe-ventures.com/Canoe_2014_VOD_DAI_Summary.pdf
***http://investor.rentrak.com/releasedetail.cfm?ReleaseID=838792
****http://blogs.wsj.com/cmo/2015/01/12/turbo-tax-becomes-first-programmatic-ad-buyer-on-espns-sportscenter/
*****http://www.adweek.com/lostremote/programmatic-ad-buying-coming-to-a-tv-near-you/49716
Today’s TV Requires Knowing Your Audience
All smart businesses know that they should know their customers… their likes, dislikes, habits, demographics, etc. Observing and acting upon this personal information permits them to provide better service and product offerings. The end result is a more satisfied customer, one who is likely to keep consuming those offerings. Luckily for pay TV businesses, getting to know and satisfy viewers has never been easier thanks to new technologies.
By studying viewer demographics and observing how viewing preferences change over time, a media company will have more success at serving a menu of content that the audience will be inclined to consume. And the more content is discovered and consumed, the more it can be monetized. Today, media companies can get to know their viewers multiple ways. They can track the frequency of a viewer’s online subscription viewing through their anonymous TV Everywhere ID. They can see what types of content a viewer purchases through their commerce activity. And, they can review a viewer’s viewing history. All of this valuable viewer information is now readily available to most media companies.
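A trivial example of turning those anonymous viewing events into a usable insight follows. The event records, IDs, and genres are made up for illustration; a real system would operate on far richer data:

```python
from collections import Counter

# Hypothetical playback events keyed by anonymous TV Everywhere ID.
events = [
    {"tve_id": "anon-42", "genre": "animation"},
    {"tve_id": "anon-42", "genre": "animation"},
    {"tve_id": "anon-42", "genre": "news"},
    {"tve_id": "anon-7",  "genre": "sports"},
]

def top_genre(tve_id, events):
    """Most-watched genre for a viewer, a trivial input to recommendations."""
    counts = Counter(e["genre"] for e in events if e["tve_id"] == tve_id)
    return counts.most_common(1)[0][0] if counts else None

print(top_genre("anon-42", events))  # animation
```

Note that nothing here requires personally identifiable information: the anonymous ID is enough to personalize the experience, which is the point of keeping PII in the CRM and out of the video platform.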
However, without a strong customer relationship management (CRM) or identity management system, this information is not easily analyzed and acted upon. A strong CRM system tracks all of these viewer insights, performs the analysis, and provides helpful observations that can be used to generate a custom pay TV experience for the viewer. This experience might include targeted advertising; promotions (such as rent two movies, get one free); quick content discovery (such as recommending past seasons of The Simpsons); and a UI that reduces the time it takes to start watching a video (such as letting a viewer pause their favorite show on their iPad and resume watching on their iPhone). In the months ahead, custom user experiences based on CRM information will make for a more satisfying viewing experience and a more successful pay TV business. The mpx Identity Service makes integrating and working with CRM systems a straightforward task, allowing media companies and pay TV operators to keep personally identifiable information independent of mpx.