Break the hive mentality: going vendor agnostic with DataBee
It’s easy to get caught up in the hive mentality, and it happens more than you think when purchasing cybersecurity products and services.
Recently, the Federal Trade Commission has been launching investigations into anticompetitive practices in the cybersecurity industry. Anticompetitive practices can result in cybersecurity tools that don’t interoperate, or that are cost-prohibitive to integrate, keeping you “locked in” to a particular vendor. It’s time to break out of the hive mentality to build the security that enterprises deserve.
What is vendor lock-in?
Vendor lock-in is when a customer becomes dependent or over-reliant on a specific vendor’s products or services, making it difficult to break away from or diversify beyond that vendor. This can happen when vendors create proprietary tools, systems, and products that deviate from open-source resources or industry standards, making the product incompatible with others and expertise in that product less transferable. Usually, the longer one is “locked in,” the more challenging and expensive it becomes to transition away from that vendor.
Security teams should invest in tools that are compatible with product ecosystems from a variety of vendors and that can derive meaning and insights across all of them.
How can DataBee help you avoid vendor lock-in?
DataBee’s cloud-native security and compliance data fabric offers users a vendor-agnostic solution that can extract data from various sources and transform it into the desired format to support continuous compliance, SIEM decoupling, simple and advanced threat hunting, and behavioral baselines with anomaly detection.
We offer customers the freedom to leverage:
Data lakes and data sources of choice: DataBee supports an extensive and constantly expanding list of data sources (250+ and counting) and data lakes. Bring in data from disparate sources, and DataBee serves as the glue that pieces it all together. Data flows through DataBee without needing to be stored there, entering our product as raw events and exiting as a normalized, enriched, full time-series dataset into your data storage solution of choice. There is no holding data hostage.
Visibility and compatibility across cloud, hybrid, and on-prem solutions: DataBee centralizes insights for all your data sources regardless of where they sit in your security architecture, enabling customers to extract more value from what they already have.
Data normalization via the Open Cybersecurity Schema Framework (OCSF): OCSF is an implementation-agnostic, open-source framework used for data normalization and standardization. Data normalization helps ensure that your information all speaks the same language, is stored only once, and is updated consistently throughout your database. This makes it easier for DataBee to correlate data, reduce redundancies, and derive insights with reliable results.
Sigma Formatted Rules for Streaming Detections: DataBee’s active detection streams apply Sigma-formatted rules over OCSF-normalized security data while the data is en route to its storage destination. This enables DataBee active detections to integrate into an existing security ecosystem with minimal customization. Sigma rules provide a standardized syntax for defining detection logic, enabling security professionals to comprehensively define parameters for identifying potential security incidents. With Sigma-formatted detections leveraging OCSF in DataBee, organizations can swap out security vendors without needing to update log parsers or security detection content, as illustrated in the sketch below.
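To make the two ideas above concrete, here is a minimal, purely illustrative Python sketch that maps a raw log entry into a simplified OCSF-style event and then evaluates a Sigma-like condition against it. The field names, schema shape, and rule are assumptions for illustration only, not DataBee’s actual pipeline, OCSF mapping, or detection content.

```python
# Illustrative only: a simplified OCSF-style normalization step followed by a
# Sigma-like detection check. Field names, schema, and the rule are assumptions
# for this sketch, not DataBee's actual mapping or detection content.

raw_event = {
    "ts": "2024-10-01T12:34:56Z",
    "user": "jdoe",
    "src": "203.0.113.7",
    "action": "LOGIN_FAILED",
}

def to_ocsf_like(raw: dict) -> dict:
    """Map a vendor-specific log record into a simplified, OCSF-inspired shape."""
    return {
        "time": raw["ts"],
        "class_name": "Authentication",
        "activity_name": "Logon",
        "status": "Failure" if raw["action"] == "LOGIN_FAILED" else "Success",
        "user": {"name": raw["user"]},
        "src_endpoint": {"ip": raw["src"]},
    }

# A Sigma-like rule reduced to a dict of field/value pairs that must all match.
failed_logon_rule = {
    "title": "Failed logon",
    "detection": {"class_name": "Authentication", "status": "Failure"},
}

def matches(event: dict, rule: dict) -> bool:
    """Return True when every field in the rule's detection block matches the event."""
    return all(event.get(field) == value for field, value in rule["detection"].items())

normalized = to_ocsf_like(raw_event)
if matches(normalized, failed_logon_rule):
    print(f"ALERT: {failed_logon_rule['title']} for {normalized['user']['name']}")
```

Because both the schema and the rule live in vendor-neutral formats, swapping the underlying log source changes only the mapping step; the detection logic itself stays intact.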
What are the benefits of a vendor-agnostic approach?
Interoperability, scalability, and flexibility: DataBee brings together disparate and diverse systems under one roof. This enables you to future-proof your organization: Freely expand and evolve by adding or removing systems without impacting your compatibility with DataBee. Scale to up to 10,000 streaming detections applied to petabytes of data a day in near real-time without requiring an overhaul of your infrastructure.
Value-based purchasing: Being vendor agnostic allows you to choose the products that are the best for your needs and the best in the industry, allowing you to adopt tools that are “best-of-breed.” It also gives your employees exposure to industry-standard skills, tools, and techniques that will be transferable across a variety of products.
Cost-effectiveness: Over-reliance on a single product suite or vendor can be expensive. It can make pricing and contracts less competitive. It can also make deriving insights across systems more challenging if your systems do not play well with each other, requiring more time and resources to come to the same conclusion. Being vendor-agnostic enables you to maximize the value of the products you pay for while managing costs across all your systems.
Heightened visibility and control: Centralized monitoring across a variety of solutions allows you to make more intentional choices about the vendors you select and how you integrate them into your cybersecurity infrastructure. Some vendors may see what others do not, increasing the likelihood of a faster response.
Stronger security: Vendor agnosticism reduces overreliance on a single vendor to provide and maintain your suite of products. Vendor lock-in concentrates your resources with one provider, which can create a single point of failure. In the event of a security breach or outage, a diversified set of vendors can limit the blast radius and reduce the negative impact on business operations.
Ready to break the hive mentality and empower your organization with a flexible, resilient security strategy? Request a custom demo to learn how DataBee can fast-track your transition to a vendor-agnostic approach.
Managed Terrestrial Distribution brings more channels, more HD, and more value to customers and operators alike
It’s been three years since MVPDs accommodated the big 5G spectrum reallocation, and three years since we started helping companies answer the question, “what would it look like to move our delivery model from orbit down to earth?” Since then, our Managed Terrestrial Distribution (MTD) model has continued to grow – not just in adoption, but in its scope of offerings.
We’ve recently rolled out a significant expansion to our channel lineup, giving services the opportunity to add even more value and interest to their subscribers:
Lots of premium destinations across genres, including top-tier sports channels and A-list studio channels for movies and original entertainment
More choices for kids and family, shopping, and faith-based programming
More HD content than ever before – three times the number of HD channels versus satellite-only
Explore our channel lineup
A better way to serve the content customers want
For existing MTD customers, this represents not just a wider offering, but also an opportune time to gain more operational efficiency by moving more services to their MTD workflow. CTS is an active partner, helping providers all over the country with a seamless migration and continuous support.
For businesses exploring the move from satellite services toward an eventually fully IP-based delivery model, there’s a lot more to share, as MTD continues to deliver a host of benefits – from reduced operational complexity to expanded service offerings for customers.
Customers share their MTD experience
Managed Terrestrial Distribution was really envisioned as a way for cable providers to continue evolving along with their customers. We all certainly know what it’s like on the customer end: our screens are better, our expectations are higher, and as 4K and 8K become more and more prevalent, HD programming is almost table stakes at this point. The idea here on the Comcast Technology Solutions side was to turn the 5G spectrum situation into a real opportunity for providers to reimagine themselves in a way that wouldn’t disrupt daily operations.
One of the businesses that made the move to MTD was Sister Lakes Cable, a phone, TV, and internet provider in the Michigan area. Tom Wilson, Operations Manager, let us know how things have gone so far:
Curious about more ways that Managed Channel Origination can help keep MVPDs competitive into the 2030s and beyond? There’s plenty more to learn here.
Ad creative production: Centralize or decentralize?
The shift from traditional advertising models to more diverse, digital-focused strategies has brought brand marketers to an interesting crossroads: the choice between centralized production, which offers streamlined efficiency and brand consistency, and decentralized production/fragmentation, a model that can deliver greater flexibility, diverse creative input, and specialized expertise.
For some, choosing between centralization and fragmentation can be daunting, requiring a delicate balancing act between consistency and creativity, the management of complex collaborations, significant cost implications, and the need to mitigate risk. For others, it feels like being a kid in a candy shop with a hundred-dollar bill — full of possibilities and opportunities to explore.
As part of our Content Circle of Life series, we asked industry experts from The Team Companies, APR, and our own AdFusion™ team to weigh the pros and cons of centralization and fragmentation in creative production. Check out their in-depth discussion to explore how marketers are meeting their needs and driving success in an evolving landscape by:
Fostering a culture of creative collaboration
Implementing flexible frameworks
Leveraging composable technology
In the meantime, here are just a few of the highlights from their conversation to get you started…
Foster a culture of creative collaboration
Traditionally, creative and production tasks have been handled in silos, with teams working separately on different aspects of a campaign. This approach, however, can limit the exchange of ideas, reduce flexibility, and lead to inefficiencies. Instead, in today’s fast-paced and dynamic landscape, brands are finding tremendous value in building cross-functional teams that bring together diverse skills and perspectives and take their message to as many screens as possible.
A collaborative environment breaks down barriers, encouraging communication and teamwork across marketing, creative, production, and even procurement. This integration across an end-to-end creative management platform allows teams to share insights, address challenges more quickly, and ensure that every aspect of the project aligns with the overall brand vision.
Encouraging openness also means creating a safe space where team members feel comfortable sharing unconventional or bold ideas without fear of judgment. Creating this type of environment:
Nurtures creativity
Fosters experimentation
Empowers teams to adapt quickly to changing market demands
In a world where the ability to adapt and innovate is crucial, fostering a culture of openness and collaboration is no longer just a good practice — it is essential for brands to stay competitive.
Implement flexible frameworks
Creative experimentation should absolutely be encouraged regardless of whether your business follows a centralized or decentralized production model. Teams need the freedom to test new ideas without the pressure of immediate perfection. Yet everyone wants the freedom and flexibility to adapt and innovate — but not at the cost of losing control over production processes. So, how can brands achieve this ideal balance?
By designing frameworks that offer structure but enable collaboration, rather than restricting it.
Leverage AI to drive flexibility and innovation
The secret to cultivating a flexible framework? Emerging technology, such as:
AI
Virtual production
Collaboration platforms
These types of innovative tech are foundational to building and supporting flexible frameworks that spur creativity and innovation.
Technology is evolving so fast that brands should leverage it not just for ideation, but also for execution in creative production. From streamlining content creation to enhancing data analysis, AI is already transforming creative production. Tools like VideoAI™ can analyze vast amounts of video data to help brands identify and repurpose content, reducing the need for additional production.
Maximize agility with composable technology
One of the biggest misconceptions advertisers have about composable technology is that it introduces unnecessary complexity, making operations even harder to manage. In reality, composable platforms simplify processes in two main ways:
Integrating essential data into a single unified system
Providing better control and oversight across the entire creative production workflow
Instead of forcing a fragmented approach, a true composable ecosystem, such as one enabled by AdFusion, works in symphony with specialized partners like APR and The Team Companies to streamline processes and handle specific workflows. In turn, this reduces the risk of miscommunication or inefficiency and results in greater visibility across marketing operations.
Whether it’s preventing delays, managing legal concerns, or ensuring that media trafficking and distribution are executed as planned, a composable ecosystem supports partnerships that empower brands to identify and mitigate risks early and effectively.
Conclusion: A real-world case study
As with most things in life, there is no one-size-fits-all solution. Brands must find an approach that centralizes for consistency and scale but doesn’t sacrifice the innovation and diversity that come from working with specialized partners. This balance will enable brands to navigate the complexity while remaining agile enough to seize new opportunities.
What might this look like in the real world? Take a look at this case study snapshot to see how The Team Companies + AdFusion integrate business functions to drive creative production.
Challenge:
A major challenge for brands managing creative production is ensuring smooth collaboration between specialized teams while maintaining oversight of critical business functions. Miscommunication, inefficiencies, and delays often arise from siloed systems, leading to compliance issues, missed deadlines, and increased costs.
Solution:
The Team Companies addresses this issue by managing business affairs, talent, and payroll for ads distributed through the AdFusion platform. By integrating key data directly into the AdFusion ecosystem, The Team Companies provides brands with centralized oversight and seamless management of essential business functions without disrupting the specialized workflows of different teams.
Results:
This composable approach, enabled by the AdFusion platform, has proven highly effective. It allows advertisers to maintain transparency and compliance across the entire creative production process, avoiding common pitfalls such as miscommunication and costly delays. The integration facilitates a cohesive strategy across both internal and external teams, enhancing collaboration and ensuring brand safety. Additionally, by streamlining processes and centralizing insights, brands are able to accelerate time to market, mitigate risks, and avoid fines related to compliance issues.
Ultimately, integrating The Team Companies workflows with AdFusion maximizes media investments and drives overall campaign success.
Get in touch to find out how AdFusion can help your brand make faster, smarter, and more data-driven decisions.
Monitoring and logging: the eyes and ears of security
Why do monitoring and logging matter?
Although this is a foundational question in cybersecurity and networking, National Cybersecurity Awareness Month makes this a great time to (re)visit important topics.
Monitoring and logging are very similar to security cameras and home alarm systems. They help you keep an eye on what’s happening in your applications and systems. If – when – something unusual occurs, analysts can leverage information from monitoring and logging solutions to respond and manage potential issues.
In this blog, I explore some tips from my experience as a DevOps and Systems Engineer.
10 tips for effective monitoring and logging:
Set up alerts for unusual activity
Use monitoring tools to set up alerts for machine or human behaviors that don’t seem right. This could be, for example, a user with multiple failed login attempts or a server with a sudden spike in traffic. This way, you can prioritize and quickly investigate suspicious activities.
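As a hedged illustration of this tip, the Python sketch below watches a stream of log events and raises an alert when one user accumulates several failed logins inside a short window. The threshold, window, field names, and sample events are assumptions chosen for the example, not recommended values.

```python
# Illustrative sketch: alert when a user racks up several failed logins within a
# short rolling window. Threshold, window, and field names are assumptions.
from collections import defaultdict, deque
from datetime import datetime, timedelta

THRESHOLD = 5                    # failed attempts before alerting
WINDOW = timedelta(minutes=10)   # rolling time window

recent_failures = defaultdict(deque)  # user -> timestamps of recent failures

def process(event: dict) -> None:
    """Track failed logins per user and print an alert when the threshold is hit."""
    if event["action"] != "login_failed":
        return
    ts = datetime.fromisoformat(event["timestamp"])
    attempts = recent_failures[event["user"]]
    attempts.append(ts)
    # Drop attempts that have aged out of the rolling window.
    while attempts and ts - attempts[0] > WINDOW:
        attempts.popleft()
    if len(attempts) >= THRESHOLD:
        print(f"ALERT: {len(attempts)} failed logins for {event['user']} in the last {WINDOW}")

# Example: six synthetic failures one minute apart trigger the alert.
for minute in range(6):
    process({"timestamp": f"2024-10-01T09:0{minute}:00", "user": "jdoe", "action": "login_failed"})
```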
If it’s important, log it
Adversaries are becoming increasingly clever at hiding their tracks, which makes it important to log key events such as user logins, changes to business-critical data, and system errors. The information gleaned from logs can help shed light on a bad actor’s trail.
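To picture this tip in practice, here is a small Python sketch that uses the standard logging module to write key events as structured, searchable records. The event names and fields are illustrative assumptions; log whatever is important in your environment.

```python
# Illustrative sketch: record key events (logins, changes to critical data,
# errors) as structured audit records. Event names and fields are assumptions.
import json
import logging

logging.basicConfig(filename="security.log", level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
audit_log = logging.getLogger("audit")

def audit(event_type: str, **fields) -> None:
    """Write one structured, JSON-formatted audit record per key event."""
    audit_log.info(json.dumps({"event": event_type, **fields}))

# Key events worth capturing:
audit("user_login", user="jdoe", src_ip="203.0.113.7", success=True)
audit("record_update", user="jdoe", table="payroll", record_id=42)
audit("system_error", service="billing-api", error="timeout contacting database")
```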
Regularly review logs
Don’t just collect logs—make it a habit to review them regularly. Collaborate with your team and experts to capture and understand details from logs. Look for patterns or anomalies that could indicate a security issue.
Leverage SIEMs
Security Information and Event Management (SIEM) tools are great for collecting and analyzing log data from different sources, helping you detect security incidents more efficiently.
Retain logs for digital forensics
Your industry regulations may already require this, but storing your logs will not only keep you compliant but can also help you perform security investigations. SIEMs can be expensive depending on the throughput of your organization. Security data fabrics, such as DataBee, can help you decouple storage and federate security data to a centralized location like a data lake, making it easier to search through raw logs or optimized datasets to help you catch important information.
Establish a response plan
Ideally before a security event occurs, your team should have a plan in place to respond to an incident. This should include who to contact and the steps to contain any potential threats.
Educate your team
Make sure everyone on your team understands the importance of monitoring and logging. Training can help them recognize potential security threats and respond appropriately.
Keep your tools updated
Regularly update your monitoring and logging tools to ensure you’re protected against the latest threats. Outdated tools might miss important security events.
Test your monitoring setup
Running tabletops can help you test your monitoring systems and response plans to ensure they’re working correctly. Simulate incidents to see if your alerts trigger as expected.
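One hedged way to automate part of that testing: inject a synthetic incident and verify the alert actually fires. The simple detector in this Python sketch is a stand-in assumption; in practice you would point a test like this at your real alerting pipeline.

```python
# Illustrative sketch: simulate an incident and check that the alert fires.
# The detector is a stand-in; in practice, target your real alerting pipeline.

def detect_bruteforce(events: list[dict], threshold: int = 5) -> list[str]:
    """Return alert messages for any user with too many failed logins."""
    counts: dict[str, int] = {}
    for event in events:
        if event["action"] == "login_failed":
            counts[event["user"]] = counts.get(event["user"], 0) + 1
    return [f"ALERT: {n} failed logins for {user}" for user, n in counts.items() if n >= threshold]

def test_bruteforce_alert_fires() -> None:
    synthetic = [{"user": "test-user", "action": "login_failed"} for _ in range(6)]
    assert detect_bruteforce(synthetic), "Expected an alert for the simulated brute force"

test_bruteforce_alert_fires()
print("Simulated incident triggered the expected alert.")
```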
Stay informed
Keep up to date with the latest security trends and threats. This knowledge can help you improve your monitoring and logging practices continuously.
By following these tips, you can enhance your organization's security posture and respond more effectively to potential threats. Monitoring and logging might seem like technical tasks, but they play a vital role in keeping your systems safe!
How Continuous Controls Monitoring (CCM) can make cybersecurity best practices even better
Like many cybersecurity vendors, we like to keep an eye out for the publication of the Verizon Data Breach Investigations Report (DBIR) each year. It’s been a reliable way to track the actors, tactics and targets that have forced the need for a cybersecurity industry and to see how these threats vary from year to year. This information can be helpful as organizations develop or update their security strategy and approaches.
All of the key attack patterns reported on in the DBIR have been mapped to the Critical Security Controls (CSC) put out by the Center for Internet Security (CIS), a community-driven non-profit that provides best practices and benchmarks designed to strengthen the security posture of organizations. According to the CIS, these Controls “are a prescriptive, prioritized and simplified set of best practices that you can use to strengthen your cybersecurity posture.”
Many organizations rely on the Controls and Safeguards described in the CIS CSC document to guide how they build and measure their security program. Understanding this, we thought it might be useful to map the Incident Classification Patterns described in the 2024 DBIR report to the guidance provided in the CIS Critical Security Controls, Version 8.1, and then to the CSC Controls and Safeguards that DataBee for Continuous Controls Monitoring (CCM) reports on. As you’ll see, CCM – whether from DataBee or another vendor (😢) – is a highly useful way to measure progress toward effective controls implementation.
The problem, proposed solutions, and how to measure their effectiveness
The 2024 DBIR identifies a set of eight patterns for classifying security incidents, with categories such as System Intrusion, Social Engineering, and Basic Web Application Attacks leading the charge. Included in the write-up of each incident classification is a list of the Safeguards from the CIS Critical Security Controls that describe “specific actions that enterprises should take to implement the control.” These controls are recommended for blocking, mitigating, or identifying that specific incident type. CIS Controls and Safeguards recommended to combat System Intrusion, for example, include: 4.1, Establish and Maintain a Secure Configuration Process; 7.1, Establish and Maintain a Vulnerability Management Process; and 14, Security Awareness and Skills Training. Similar lists of Controls and Safeguards are provided in the DBIR for other incident classification patterns.
Continuous Controls Monitoring (CCM) is an invaluable tool to measure implementation for cybersecurity controls, including many of the CIS Safeguards. These might include measuring the level of deployment for a solution within a population of assets, e.g., is endpoint detection and response implemented on all end user workstations and laptops? Or has a task been completed within the expected timeframe, such as the remediation of a vulnerability, closure of a security policy exception, or completion of secure code development training? While reporting on these tasks individually may seem easy enough, CCM takes it to the next level by reporting on a large set of controls through a single interface, rather than requiring users to access a series of different interfaces for every distinct control. Additionally, CCM supports the automation of data collection and then refreshing report content so that the data being reported is kept current with significantly less effort.
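To make that kind of measurement concrete, here is a hedged, simplified Python sketch of the calculation a CCM dashboard might perform: endpoint detection and response (EDR) coverage across a population of in-scope assets, compared against a KPI target. The asset data, field names, and target are assumptions for illustration, not DataBee’s implementation.

```python
# Illustrative only: compute EDR deployment coverage for in-scope assets and
# compare it to a KPI target. Data, field names, and the target are assumptions.

assets = [
    {"hostname": "wks-001", "type": "workstation", "edr_installed": True},
    {"hostname": "wks-002", "type": "workstation", "edr_installed": False},
    {"hostname": "lap-003", "type": "laptop",      "edr_installed": True},
    {"hostname": "srv-004", "type": "server",      "edr_installed": True},
]

KPI_TARGET = 0.95  # e.g., 95% of end-user devices should have EDR installed

in_scope = [a for a in assets if a["type"] in ("workstation", "laptop")]
covered = [a for a in in_scope if a["edr_installed"]]
coverage = len(covered) / len(in_scope)

print(f"EDR coverage: {coverage:.0%} of {len(in_scope)} in-scope assets")
if coverage >= KPI_TARGET:
    print("KPI met")
else:
    missing = [a["hostname"] for a in in_scope if not a["edr_installed"]]
    print(f"KPI not met; assets missing EDR: {missing}")
```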
Doing (and measuring) “the basics”
The CIS CSCs are divided into three “implementation groups.” The CIS explains implementation groups this way: “Implementation Groups (IGs) are the recommended guidance to prioritize implementation of the CIS Critical Security Controls.” The CIS defines Implementation Group 1 (IG1) as “essential cyber hygiene and represents an emerging minimum standard of information security for all enterprises.” In the CIS CSCs v8.1, there are 56 Safeguards in implementation group 1, slightly more than a third of the total Safeguards. Interestingly, most of the Safeguards listed by Verizon in the DBIR are from implementation group 1, the Safeguards for essential cyber hygiene, that is, “the basics.”
Considering “the basics,” a few years ago, the 2021 Data Breach Investigations Report made this point:
“The next time we are up against a paradigm-shifting breach that challenges the norm of what is most likely to happen, don’t listen to the ornithologists on the blue bird website chirping loudly that “We cannot patch manage or access control our way out of this threat,” because in fact “doing the basics” will help against the vast majority of the problem space that is most likely to affect your organization.” (page 11)
Continuous controls monitoring is ideally suited to help organizations measure their progress when implementing essential security controls. That is, those controls that will help against “the vast majority of the problem space.” These essential controls are the necessary foundation on which more specialized and sophisticated controls can be built.
Moving beyond the basics
Of course, CCM is not limited to reporting on the basics. As Verizon notes, the CIS Safeguards listed in the 2024 DBIR report are only a small subset of those which could help to protect the organization, or to detect, respond to, or recover from an incident. Any control which lends itself to measurement, especially when expressed as a percentage of implementation, is a viable candidate for CCM. Additionally, the measurement can be compared against a target level of compliance, a Key Performance Indicator (KPI), to assess if the target is being met, exceeded, or if additional work is needed to reach it.
The Critical Security Controls from CIS provide a pragmatic and comprehensive set of controls for organizations to improve their essential cybersecurity capabilities. CCM provides a highly useful solution to measure the progress towards effective implementation of the controls, both at the organization level, and the levels of management that make up the organization.
Mapping incident classification patterns to CIS controls & safeguards to DataBee for Continuous Controls Monitoring dashboards
DataBee’s CCM solution provides consistent and accurate dashboards that measure how effectively controls have been implemented, and it does this automatically and continuously. Turns out, it produces reports on many of the Controls and Safeguards detailed in the CIS CSC. Here are some examples:
The DBIR recommends Control 04, "Secure Configuration of Enterprise Assets and Software," as applicable for several Incident Classification Patterns, including System Intrusion and Privilege Misuse. The Secure Configuration dashboard for DataBee for Continuous Controls Monitoring reports on this CSC Control and many of its underlying Safeguards.
Control 10, “Malware Defenses,” is also listed as a response to System Intrusion in the DBIR. The Endpoint Protection dashboard supports this control. It shows the systems protected by your endpoint detection and response (EDR) solutions and compares them to assets expected to have EDR installed. DataBee reports on the assets missing EDR and which consequently remain unprotected.
“Security Awareness and Skills Training,” Control 14, is noted in the DBIR as a response to patterns System Intrusion, Social Engineering, and Miscellaneous Errors. The DataBee Security Training dashboard can provide status on training from all the sources used by your organization.
In addition to supporting the controls and safeguards listed in the DBIR, the DataBee dashboards also report on CSC controls such as Control 01, “Inventory and Control of Enterprise Assets.” While the DBIR does not list Control 01 explicitly, the information reported by the Asset Management dashboard in DataBee is needed to support Secure Configuration, Endpoint Protection, and other dashboards. That is, the dashboards that do support the CIS controls listed in the DBIR.
With the incident patterns in the 2024 Verizon Data Breach Investigations Report mapped to the Critical Security Controls and Safeguards provided by the Center for Internet Security, security teams are given a great start – or reminder – of the best practices and tools that can help them avoid falling ‘victim’ to these incidents. Continuous controls monitoring bolsters an organization’s security posture even more by delivering dashboards that report on the performance of an organization’s controls; reports that provide actionable insights into any security or compliance gaps.
If you’d like to learn more about how DataBee for Continuous Controls Monitoring supports the Controls and Safeguard recommendations provided in the CIS CSC, be in touch. We’d love to help you get the most out of your security investments.
Status Update: DataBee is now an AWS Security Competency Partner
We are proud to announce that DataBee is recognized as one of only 35 companies to achieve the AWS Security Competency in Threat Detection and Response. We have worked diligently to help customers gain faster, better insights from their security data, making today meaningful to us as a team. This exclusive recognition underscores the value and impact that DataBee’s advanced capabilities bring to customers.
Achieving an AWS Security Competency requires us to have deep technical AWS expertise. Inspired by Comcast's internal CISO and CTO organization, the DataBee platform connects disparate security data sources and feeds, enabling customers to optimize their AWS resources. Our AWS Competency recognition validates our ability to leverage our internal, technical AWS knowledge so that customers can achieve the same proven-at-scale benefits.
More importantly, earning this badge is a testament to the success we have achieved in partnering with our customers, and validates that DataBee has enabled customers to transform vast amounts of security data into actionable insights for threat detection and response.
Our continued collaboration with AWS reflects our dedication to driving innovation and delivering high-quality security solutions that meet the evolving needs of our customers. We are proud to be recognized for our efforts and remain committed to helping our customers achieve their security goals efficiently and effectively.
Live sports video: A winning playbook for content delivery and management
For streaming, broadcasting, or advertising, live sports represent some of the most exciting and most complex events to manage — and some of the most lucrative, particularly when it comes to breaking into new markets.
But when you’re only live once, the pressure to deliver flawless real-time content is immense, and the coordination required between technology, vendors, and teams is absolutely critical to bring home the win. As part of our Content Circle of Life series, Bill Calton, VP of Operations for Comcast Technology Solutions, sat down with David Wilburn, Vice President of TVE & VOD Software Engineering, Peacock, and Jorge de la Nuez, Head of Technology and Operations at the Olympic Channel, to discuss some of the key aspects of delivering/managing live sports content.
Read the highlights below or watch the full conversation here for expert perspectives on some of the most critical topics in live sports media, including:
Vendor/technology partner selection
AI in sports
Infrastructure challenges
The role of social media and advertising
The importance of vendor selection in live sports video
Bill Calton, VP of Operations, Comcast Technology Solutions: We’ve just been through the 2024 Summer Olympic Games in Paris, where over 3 billion viewers consumed some portion of this event’s content globally. And we all know that when we prepare for live sporting events, like the Olympics, we often work without a net.
What is your process for getting a vendor in place prior to these events?
Jorge de la Nuez, Head of Technology and Operations, Olympic Channel:
Vendor selection for events like the Olympics is a complex and lengthy process. It often takes years to build relationships with trusted vendors, particularly in such high-stakes environments.
Auditing and competitive bidding processes are key elements, but personal relationships and knowledge of a vendor's track record are just as important.
Preparing for an event like the Olympics requires testing for load capacity and auditing the entire ecosystem to ensure no part of the infrastructure can fail.
David Wilburn, Vice President of TVE & VOD Software Engineering, Peacock:
Vendor relationships are critical, but so is ensuring they are true partners who understand the needs of the business and adapt to them.
It’s important to choose partners who are innovators and are evolving alongside the business, not just following their own roadmaps.
Partnerships should be flexible and scalable, with real-time communication that allows for quick issue resolution.
The role of AI in live sports content management and delivery
The possibilities seem endless in terms of how people consume sports content, from where they're consuming it to what type of device they're consuming the content on. And we're starting to see AI tools be leveraged to support this.
How have you seen AI play a role in the live sports media arena?
David Wilburn, Peacock:
AI is beginning to assist in automating sub-clipping for highlights and identifying players using jersey numbers or facial recognition. However, these systems are still evolving, so it will be interesting to see how this continues to develop. One of the challenges in leveraging AI for live video streaming is latency, especially in high-stakes environments like sports betting, where near-zero latency is a must.
Jorge de la Nuez, Olympic Channel:
AI has been integrated into Olympic content since Tokyo 2020, particularly in highlight creation and video content management.
AI tools are really useful for generating short-form video content for social media and search platforms.
On the back end, AI helps home in on user profiling and content recommendation, which is driven by data analytics.
Infrastructure to ensure reliability for sports video
When the lights come on and fans tune in, your infrastructure needs to be ready. When you’re only live once, you need to effectively deal with redundancies, both from a cloud or on-premises perspective.
What do you do to ensure that these events go off without any interruption?
Jorge de la Nuez, Olympic Channel:
Managing infrastructure for the Olympics is a blend of cloud-based and on-premises systems. While the cloud offers flexibility, there are still challenges with cost and multi-vendor coordination.
Keeping the infrastructure simple when possible and relying on trusted partners for redundancy helps us manage complexity when the stakes are high.
David Wilburn, Peacock:
Having active-active multi-region services and auto-detection of issues is critical for high uptime, particularly during events like the Olympics or NFL games.
Chaos and load testing are essential to ensure that the systems can handle the massive demand of live events. Multi-CDN strategies also ensure that content delivery is never interrupted.
Social media, branding, and advertising for live sports content
People are grabbing sports content and leveraging it on their own social media.
Are you seeing any trends in advertising related to live sports that you think would be relevant?
Jorge de la Nuez, Olympic Channel:
The International Olympic Committee has integrated social media platforms like YouTube into its distribution strategy. It’s a balancing act between delivering free content and driving audiences to their own platforms.
At the same time, branded content is becoming more important than traditional ads, particularly on digital and social platforms.
David Wilburn, Peacock:
Advertising is changing. We’re going to be seeing a lot more of the use of "squeeze-back" ads, where video continues to play while ads are displayed on the screen. This allows viewers to stay engaged with live sports while still being exposed to relevant content from advertisers.
As the live sports media landscape continues to evolve, so will the technologies that support it. At the same time, the importance of strong partnerships and well-prepared teams cannot be overstated. If you’re looking for a teammate to help you adapt, innovate, and maintain excellence in sports content delivery and management, connect with us here.
Mastering DORA compliance and enhancing resilience with DataBee
Recently, DataBee hosted a webinar focused on the Digital Operational Resilience Act (DORA), a pivotal piece of EU legislation that is set to reshape the cybersecurity landscape for financial institutions. The talk featured experts Tom Schneider, Cybersecurity GRC Professional Services Consultant at DataBee, and Annick O'Brien, General Counsel at CybSafe, who delved into the intricacies of DORA, its implications, and actionable strategies for compliance.
5 Key Takeaways for mastering DORA compliance and enhancing resilience:
In an effort to open dialogue and help organisations that need to comply with the DORA legislation, we are sharing the key takeaways from our webinar.
The Essence of DORA: DORA is not just another cybersecurity regulation; it addresses the broader scope of operational risk in the financial sector. Unlike frameworks that focus solely on specific cybersecurity threats or data protection, DORA aims to ensure that organisations can maintain operational resilience, even in the face of significant disruptions. This resilience means not just preventing breaches but also being able to recover swiftly when they occur.
Broad Applicability: DORA's reach extends beyond traditional banks, capturing a wide array of entities within the financial ecosystem, including insurance companies, reinsurance firms, and even crowdfunding platforms. The act emphasizes that any organisation handling financial data needs to be vigilant, especially as DORA becomes fully enforceable in January 2025.
Third-Party Risks: A significant portion of the webinar focused on the risks associated with third-party service providers, particularly cloud service providers. DORA places the onus on financial institutions to ensure that their third-party vendors are compliant with the same rigorous standards. This includes having robust technical and operational measures, conducting regular due diligence, and ensuring these providers can maintain operational resilience.
Concentration of Risk: DORA introduces the concept of concentration risk, which refers to the potential danger when an entire industry relies heavily on a single service provider. The webinar highlighted recent incidents, such as the CrowdStrike and Windows issues, underscoring the importance of not only identifying these risks but also diversifying to mitigate them.
Principles-Based Approach: Unlike prescriptive regulations, DORA is principles-based, focusing on the outcomes rather than the specific methods organisations must use. This approach requires financial institutions to continuously assess and update their operational practices to ensure resilience in a rapidly evolving technological landscape.
Moving Forward:
As the January 2025 deadline approaches, organisations are urged to review their existing compliance frameworks and identify how they can integrate DORA's requirements without reinventing the wheel. Many of the principles within DORA overlap with other frameworks like GDPR and NIST, providing a foundation that organisations can build upon.
For those grappling with the complexities of DORA, the webinar emphasized the importance of preparation, regular testing, and continuous improvement. By leveraging existing policies and procedures, financial institutions can align with DORA's objectives and ensure they are not only compliant but also resilient in the face of future challenges.
DataBee can significantly enhance compliance with DORA by streamlining the management of information and communication technology (ICT) assets. The DataBee for Continuous Controls Monitoring (CCM) offering weaves together data from multiple sources, enabling organisations to automate the creation of a reliable asset inventory. By providing enriched datasets and clear entity resolution, DataBee reduces the complexity of managing and monitoring ICT assets, improves auditability, and ensures that compliance and security measures are consistently met across the enterprise, ultimately supporting the resilience and security of critical business operations.
Watch the recording of the webinar here or request a demo today to discover how DataBee can help you become DORA compliant.
Market opportunities in MENA for streaming, advertising, and sports
With a subscription video on demand (SVOD) services market expected to surpass $1.2 billion by the end of this year, it’s no wonder that entertainment and media companies are looking at the Middle East and North Africa (MENA) with increased focus. While the region presents unique opportunities for expansion and greater content monetization, reaching this diverse and often fragmented audience presents distinct challenges.
At our 2024 MENA Monetization Summit in Dubai, industry leaders discussed the innovative strategies they’ve used to thrive in this dynamic market. Read on to learn the drivers behind their success and gain strategic insights for effective content monetization in this rapidly evolving region.
MENA’s streaming and advertising market: highlights to know
Opportunities in MENA are evolving rapidly, driven by a young, tech-savvy population and increasing digital penetration. Consider the following statistics from global analysts Omdia:
MENA’s SVOD services market generated over $1 billion in revenues in 2023 and is expected to surpass $1.2 billion in 2024.
Online video advertising in MENA is expected to grow by 67% in revenue by 2028 while online video subscription is expected to grow by 19%.
Already, the Free Ad-Supported TV (FAST) market in MENA has topped $7 million — with the potential to quadruple in the next five years.
Saudi Arabia sees the highest consumption of YouTube videos globally.
So, how can businesses capitalize on these burgeoning markets? There are a few key considerations when evaluating opportunities for content monetization in MENA:
First, underserved sports content is a great avenue to explore. The popularity of sports such as cricket, rugby, mixed martial arts, and fighting sports in the region opens up significant opportunities to stake out new territory. Since these sports are not as heavily contested by major players, new market entrants can quickly and effectively carve out a niche.
When expanding into MENA, localized content will be particularly important for success. Content tailored to local tastes, cultural norms, and preferences is crucial. This means finding opportunities to produce and broadcast local sports, creating region-specific reality shows, and emphasizing local celebrities and events. Platforms that offer a mix of local productions and international content stand a better chance of engaging the audience.
Similar to findings from other parts of the world, FAST channels are becoming increasingly popular in MENA, providing an alternative to traditional pay-TV. These channels attract large audiences by offering free content supported by advertising. This CTS webinar dives deeper into FAST channel technology.
FAST channel revenues in MENA reached $7.2 million in 2023 and are projected to quadruple in the next 5 years.
Success stories from MENA
Several companies are successfully navigating the MENA market challenges by leveraging specific strategies and focusing on underserved segments. STARZ PLAY Arabia and Shahid together make up approximately 40% of the total over-the-top (OTT) services market in the region according to Q4 2023 Omdia data.
Shahid and STARZ PLAY lead the MENA streaming video market.
STARZ PLAY: With over 3.5 million subscribers and growing, this highly successful SVOD service has seen tremendous success by focusing on underserved sports and licensed Hollywood content. Here’s the strategy at a glance:
Securing sports rights including UFC, Cricket World Cup, ICC tournaments
Enhancing the user experience with sport-specific UI features
Shahid, part of the MBC Group, has established itself as a leading platform in the MENA region by:
Leveraging its extensive library of premium Arabic content
Growing both its ad-supported video on demand (AVOD) and SVOD services in tandem but with a heavier emphasis on advertising
Key trends to watch across the region
Shahid, STARZ PLAY, AWS, FreeWheel, and other media companies that have seen considerable success in MENA have tapped into key trends in the region.
Shifting toward hybrid models: In the Middle East, pay-TV is still important, but online advertising is growing rapidly. Giant entertainment companies such as Netflix and Amazon Prime are exploring ad-supported content. The expected growth in subscriptions combined with the increasing importance of advertising revenue in the region highlight this trend.
Importance of data and personalization: AI is revolutionizing content monetization in MENA by enhancing personalization and operational efficiency. AI is giving content providers a deeper understanding of user behavior and preferences, allowing for highly targeted and contextual advertising. By using AI to analyze vast amounts of data, companies can predict churn behaviors, personalize content recommendations, and optimize advertising strategies. Employing AI in content production processes, such as automated subtitling in multiple languages, is becoming a cost-effective way to make content more accessible and widen reach.
Rise of sports: The growing popularity of esports and niche sports presents a lucrative opportunity. The addition of new sports in the Olympic program could lead to increased engagement and monetization.
Looking to grow in MENA? Here’s your roadmap.
To thrive in the competitive MENA market, industry leaders recommend adopting the following strategies:
Identify and focus on areas underserved by major players, such as specific sports or localized content. This could mean using unique entertainment formats, such as live sports events, to attract audiences, foster brand loyalty, and maximize monetization potential.
Leverage partnerships and form strategic alliances with local telecom operators, device manufacturers, and opportunities for managed channel origination to boost visibility and distribution opportunities.
Invest in technology and infrastructure to ensure robust technology and infrastructure to handle high concurrent user traffic. In particular, cloud services and scalable solutions are essential for maintaining a seamless user experience and meeting the expectations of the region’s audiences.
Enhance fan engagement by leveraging AI and data analytics to create personalized and interactive experiences. It’s a great way to boost real-time engagement during live events, fantasy sports integration, key moments and highlights, and tailored content recommendations.
Diversify revenue streams by combining subscription services with advertising, and explore opportunities in branded content and sponsorships to maximize revenue potential. The future of content monetization lies in hybrid models that combine subscriptions with advertising.
There is immense potential for success in MENA provided that companies can navigate its complexities and leverage its unique opportunities. Understanding local market dynamics and tailoring strategies accordingly will be key to capitalizing on the opportunities at hand.
Looking for a trusted partner to help support your strategies? Contact us today.
Bee sharp: putting GenAI to work for asset insights with BeeKeeper AI™
Artificial intelligence (AI) and music are a lot alike. When the right components come together, like patterns in melodies and rhythms, music can be personal and inspire creativity. In my experience working on projects that developed AI for IT and security teams, data can similarly reveal patterns in day-to-day activities and frustrations that can be enhanced or automated.
I started working in AI technology development nearly a decade ago. I loved the overlaps between music and programming. Both begin with basic rules and theory, but it is the human element that brings AI (and music) to life.
Recently, we launched BeeKeeper AI™ from DataBee, a generative AI (genAI) tool that uses patent-pending entity resolution technology to find and validate asset and device ownership. Inspired by our own internal cybersecurity and operations teams’ struggles with chasing down ownership, which sometimes added up to 20+ asset owner reassignments, we knew there was a better way forward. Through integrations with enterprise chat clients like Teams, BeeKeeper AI uses your data to speak to your end users, replacing the otherwise arduously manual process of confirming or redirecting asset ownership.
What’s the buzz about BeeKeeper AI from DataBee?
Much like how a good song metaphorically speaks to the soul, BeeKeeper AI’s innovative genAI approach is tuned to leverage ownership confidence scores that prompt it to proactively reach out to end users. Now, IT admins and operations teams don’t have to spend hours each day reaching out to asset owners who often become frustrated over having their day interrupted. Further, by using BeeKeeper AI for ‘filling in the blanks’ of unclaimed or newly discovered assets, you have an improved dataset of who to reach out to when security vulnerabilities and compliance gaps appear.
BeeKeeper AI, part of DataBee for Security Hygiene and Security Threats, uses entity resolution technology to identify potential owners for unclaimed assets and devices based on factors such as authentication logs.
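As a purely illustrative sketch, and emphatically not BeeKeeper AI’s patent-pending algorithm, the Python snippet below shows one naive way an ownership confidence score could be derived from authentication logs: the user responsible for the largest share of logins to an unclaimed device is ranked as the most likely owner. The data, fields, and scoring are assumptions.

```python
# Purely illustrative: rank candidate owners of an unclaimed device by their
# share of logins to it. This is NOT BeeKeeper AI's algorithm; the data,
# fields, and scoring are assumptions made for this sketch.
from collections import Counter

auth_logs = [
    {"device": "LAPTOP-4821", "user": "jdoe"},
    {"device": "LAPTOP-4821", "user": "jdoe"},
    {"device": "LAPTOP-4821", "user": "asmith"},
    {"device": "LAPTOP-4821", "user": "jdoe"},
]

def ownership_candidates(device: str, logs: list[dict]) -> list[tuple[str, float]]:
    """Rank users by their share of logins to the device (a naive confidence score)."""
    logins = Counter(entry["user"] for entry in logs if entry["device"] == device)
    total = sum(logins.values())
    ranked = [(user, count / total) for user, count in logins.items()]
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)

for user, confidence in ownership_candidates("LAPTOP-4821", auth_logs):
    # The highest-confidence candidate is the first person the chatbot would contact.
    print(f"{user}: confidence {confidence:.0%}")
```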
BeeKeeper AI is developed with a large language model (LLM) that features strict guardrails to keep conversations on track and hallucinations at bay when engaging these potential owners. This means that potential asset owners can simply respond “yes” or suggest someone else and move on with their day.
Once users respond, BeeKeeper AI can do the rest – including looking for other potential owners, updating the DataBee platform, and even updating the CMDB, sharing its learnings with other tools.
Automatic updates to improve efficiency and collaboration
Most IT admins and operations teams heave a sigh every time they have to manually update their asset inventories. If you’ve been using spreadsheets to maintain a running, cross-referenced list of unclaimed devices and potential owners, then you’re singing the song of nearly every IT department globally.
This is where BeeKeeper AI harmonizes with the rest of your objectives. When BeeKeeper AI automatically updates the DataBee platform, everyone across the different teams has a shared source of data, including:
IT
Operations
Information security
Compliance
Unknown or orphaned assets are everyone’s responsibility as they can become a potential entry point for security incidents or create compliance gaps. BeeKeeper AI can even give you insights from its own activity, allowing you to run user engagement reports to quantify issues like:
Uncooperative users
Total users contacted and their responses
Processed assets, like validated and denied assets
Since it automatically updates the DataBee platform, BeeKeeper AI makes collaboration across these different teams easier by ensuring that they all have the same access to cleaner and more complete user and asset information that has business context woven in.
Responsible AI for security data
AI is a hot topic, but not all AI is the same. At DataBee, we believe in responsible AI with proper guardrails around the technology’s use and output.
As security professionals, we understand that security data can contain sensitive information about your people and your infrastructure. BeeKeeper AI starts from your clean, optimized DataBee dataset and works within your contained environment. Tuned to each organization’s own data, BeeKeeper AI’s guardrails keep sensitive data from leaking.
This is why BeeKeeper AI sticks to what it knows, even when someone tries to take it off task. Our chatbot isn’t easily distracted; it redirects off-topic attempts back to its sole purpose: finding and validating the right asset owners.
Making honey out of your data with BeeKeeper AI
BeeKeeper AI leverages your security data to proactively reach out to users and verify whether they own assets. With DataBee, you can turn your security data into analytics-ready datasets to get insights faster. Let BeeKeeper AI manage your hive so you can focus on making honey out of your data.
If you’re ready to reduce manual, time-consuming, collaboration-inhibiting processes, request a custom demo to see how DataBee for Security Hygiene can help you sing a sweeter tune.
DataBee: Who do you think you are?
2024 has been a big “events” year for DataBee as we’ve strived to raise awareness of the new business and the DataBee Hive™ security, risk and compliance data fabric platform. We’ve participated in events across North America and EMEA including Black Hat USA, the Gartner Security & Risk Management Summits, FS-ISAC Summit, Snowflake Data Cloud Summit and AWS re:Inforce, and of course, the RSA Conference. At RSA, we introduced to the community our sweet (haha) and funny life-size bee mascot, who ended up being a big hit among humans and canines alike.
Participation in these events has been illuminating on many important fronts. For the DataBee “hive” it’s been invaluable, not only for the conversations and insights we gain from real users across the industry, but also for the feedback we receive as we share the story of DataBee’s creation and how it was inspired by the security data fabric that Comcast’s Global CISO, Noopur Davis, and her team developed. In general, we’ve been thrilled with the response that DataBee has received, but consistently, there’s one piece of attendee feedback that really gives us pause:
“Why would Comcast Technology Solutions enter the cybersecurity solutions space?”
In other words, “what the heck is Comcast doing here?”
This statement makes it pretty clear: Comcast might be synonymous with broadband, video, media and entertainment services and experiences, but may be less associated with cybersecurity.
But it should be. While Comcast and Xfinity may not be immediately associated with cybersecurity, Comcast Business, a $10 billion business within Comcast, has been delivering advanced cybersecurity solutions to businesses of all sizes since 2018. With our friends at Comcast Business, the DataBee team is working hard to change perceptions and increase awareness of Comcast’s rich history of innovation in cybersecurity.
Let’s take a quick look at some of the reasons why the Comcast name should be synonymous with cybersecurity.
Comcast Business
Comcast Business is committed to helping organizations adopt a cybersecurity posture that meets the diverse and complex needs of today’s cybersecurity environment. Comcast Business’ comprehensive solutions portfolio is specifically engineered to tackle the multifaceted challenges of the modern digital landscape. With advanced capabilities such as real-time threat detection and response, Comcast Business solutions help protect businesses. Whether through Unified Threat Management systems that simplify security operations, cloud-based solutions that provide flexible defenses, or DDoS mitigation services that help preserve operational continuity, Comcast Business is a trusted partner in cybersecurity. Comcast Business provides the depth, effectiveness, and expertise necessary to enhance enterprise security posture through:
SecurityEdge™
Offering advanced security for small businesses, SecurityEdge™ is a cloud-based Internet security solution that helps protect all connected devices on your network from malware, phishing scams, ransomware, and botnet attacks.
SD-WAN with Advanced Security
Connect users to applications securely, both onsite and in the cloud.
Unified Threat Management (UTM)
Delivered by industry leading partners, UTM solutions provide an integrated security platform that combines firewall, antivirus, intrusion prevention, and web filtering to simplify management and enhance visibility across the network.
DDoS Mitigation
Protection against disruption caused by Distributed Denial of Service attacks, helping to identify and block anomalous spikes in traffic while preserving the desired functionality of your services.
Secure Access Service Edge (SASE)
Integrating networking and security into a unified cloud-delivered service model, our SASE framework supports dynamic secure access needs of organizations, facilitating secure and efficient connectivity for remote and mobile workers.
Endpoint Detection and Response (EDR)
Help safeguard devices connected to your enterprise network, using AI to detect, investigate, remove, and remediate malware, phishing, and ransomware.
Managed Detection and Response (MDR)
Extend EDR capabilities to the entire network and detect advanced threats, backed by 24/7 monitoring from a team of cybersecurity experts.
Vulnerability Scanning and Management
Helps identify and manage security weaknesses in the network and software systems, a proactive approach that helps close off potential entry points for threat actors.
Comcast Ventures
Did you know that Comcast has a venture capital group that backs early-to-growth stage startups that are transforming sectors like cybersecurity, AI, healthcare, and more?
Some of the innovative cybersecurity, data and AI-specific companies that Comcast Ventures has invested in include:
BigID
SafeBase
HYPR
Resemble AI
Bitsight
Uptycs
Recently, cybersecurity investment and advisory firm NightDragon announced a strategic partnership with Comcast Technology Solutions (CTS) and DataBee that also included Comcast Ventures. As a result of this strategic partnership, CTS, Comcast Ventures and DataBee will gain valuable exposure to the new innovations coming from NightDragon companies.
Comcast Cybersecurity
As I write this, Comcast Corporation is ranked 33 on the Fortune 500 list, so – as you might guess – it has an expansive internal cybersecurity organization. With $121 billion+ in annual revenues, over 180,000 employees around the globe, and a huge ecosystem of consumers and business customers and partners, Comcast takes its security obligations very seriously.
Our cyber professionals collectively hold and are awarded multiple patents each year. We lead standards bodies, and we participate and provide leadership in multiple policy forums. Our colleagues contribute to Open-Source communities where we share our security innovations. We are an integral part of the global community of cybersecurity practitioners – we present at conferences, learn from our peers, hold multiple certifications, and publish in various journals. We are a contributing member of the Communications ISAC, and the CISA Joint Cyber Defense Collaborative. A sampling of internal research and development efforts within Comcast’s cybersecurity organization include:
One-time secure secrets sharing
Security data fabric (Note: the inspiration for DataBee®)
Anomaly detection
AI-based secrets detection in code
AI-based static code analysis for privacy
Crypto-agility risk assessment
Machine-assisted security threat modeling
Scoping of threats against AI/ML apps
Persona-based privacy threat modeling
PKI and token management systems
Certificate lifecycle management and contribution to industry IoT stock
R&D for BluVector Network Detection and Response (NDR) product
The Comcast Cyber Security (CCS) Research team “conducts original applied and fundamental cybersecurity research.” Selected projects that the team is working on include research on security and human behavior, security by design, and emerging technologies such as post-quantum cryptography. CCS works with technology teams across Comcast to identify and explore security gaps in the broader cyber ecosystem.
The Comcast Cybersecurity team’s work developing and implementing a security data fabric platform was the inspiration for what has become DataBee. Although the DataBee team has architected and built its commercial DataBee Hive™ security, risk and compliance data fabric platform from “scratch” (so to speak), it was Comcast’s internal platform – and the great results it has delivered and continues to deliver – that proved such a solution could be a game-changer, especially for large, complex organizations. While DataBee Hive has been designed to address the needs and scale of any type of enterprise or IT architecture, we were fortunate to be able to tap into the learnings that came from the years and countless person-hours of development that went into building Comcast’s internal security data fabric platform, and then operating it at scale.
DataBee Cybersecurity Suite
Besides being home to the DataBee Hive security data fabric platform and products, it’s worth noting that the DataBee business unit of Comcast Technology Solutions is also home to BluVector, an on-premises network detection and response (NDR) platform. Comcast acquired BluVector, which was purpose-built to protect critical government and enterprise networks, in 2019. BluVector continues to deliver AI-powered NDR with visibility across network, devices, users, files, and data to discover and hunt skilled and motivated threats.
Comcast and cybersecurity? Of course.
So, the next time you come across DataBee, from Comcast Technology Solutions, and you think to yourself “why is Comcast in the enterprise security market with DataBee?!” – think again.
From small and mid-size organizations to large enterprises and government agencies; from managed services to products and solutions; and from on-premises to cloud-native… Comcast’s complete cybersecurity “portfolio” runs the gamut.
Want to connect with someone to determine what’s right for your organization? Contact us, and in “Comments”, let us know if you’d like to evaluate solutions from both DataBee and Comcast Business. We’ll look forward to exploring options with you!
Compliance Takes a Village: Celebrating National Compliance Officer Day
If the proverb says it takes a village to raise a child, then the corollary in the business world is that it takes a village to get compliance right. And in this analogy, compliance officers are the mayors of the village. Compliance officers schedule audits, coordinate activities, oversee processes, and manage documentation. They are the often-unsung heroes whose work acts as the foundation of your customers’ trust, helping you achieve certifications and mitigate risk.
While your red teamers and defenders get visibility because they sit at the frontlines, your compliance team members are strategizing and carving paths to reduce risk and enable programs. For this National Compliance Officer Day, we salute these mayors of the compliance village in their own words.
Feeling Gratitude
There is a great amount of pride when compliance officers are able to help you build trust with your customers, but there is also an immense amount of gratitude from the compliance teams for the internal relationships built within the enterprise.
Yasmine Abdillahi, Executive Director of Security Risk and Compliance and Business Information Security Officer at Comcast, expressed gratitude for executive leader Sudhanshu Kairab, whose ability to grasp the core business fundamentals has allowed Comcast to implement robust compliance frameworks that mitigate risks and support growth and trust.
“[Sudhanshu] consistently demonstrates a keen awareness of industry trends, enabling us to stay ahead of emerging challenges and opportunities. His ability to sustain and nurture a strong network, both internally and externally, has proven invaluable in fostering collaboration and ensuring we remain at the forefront of GRC best practices. His multifaceted approach to leadership has not only strengthened our risk posture but has also positioned our GRC function as a key driver of innovation and business growth.”
Compliance professionals rely on their strategic internal business partners to succeed. When enterprise leaders empower the GRC function, compliance and risk managers can blossom into their best business enabling selves.
In return, compliance leaders allow the enterprise to provide customers with the assurance they need. In today’s “trust but verify” world, customers trust the business when the compliance function can verify the enterprise security posture.
Collaboration, Communication, and Education
At its core, your compliance team acts as the communications glue that binds together the various cybersecurity functions.
For Tom Schneider, who is a part of the DataBee team as a Cybersecurity GRC Professional Services Consultant, communication has been essential to his career. When working to achieve compliance with a control, communicating clearly and specifically is critical, especially when cybersecurity is not someone’s main responsibility. Clear communication educates both sides of the compliance equation.
“Throughout my career, I have learned from the many people I’ve worked with. They have included management, internal and external customers, and auditors. I’ve learned from coworkers that were experts in some specific technology or process, such as vulnerability management or identity management, as well as from people on the business side and how things appear from their perspective.”
GRC’s cross-functional nature makes compliance leaders some of the enterprise’s most impactful teachers and learners. Compliance officers collaborate across different functions - security, IT, and senior leadership. As they learn from their internal partners, they, in turn, educate others.
Compliance officers are so much more than the controls they document and the checklists they review. They facilitate collaboration because they can communicate needs and build a shared language.
Compliance Officers: Keeping It All Together
A compliance officer’s role in your organization goes far beyond their job descriptions. They are cross-functional facilitators, mentors, learners, leaders, enablers, and reviewers. They are the ones who double check the organization’s cybersecurity work. Every day, they work quietly in the background, but for one day every year, we have the opportunity to let them know how important they are to the business.
DataBee from Comcast Technology Solutions gives your compliance officer a way to keep their compliance and business data together so they can communicate more effectively and efficiently. Our security data fabric empowers all three lines of defense - operational managers, risk management, and internal audit - so they can leave behind spreadsheets and point-in-time compliance reporting, relics of the past. By leveraging the full power of your organization’s data, compliance officers can implement continuous controls monitoring (CCM) with accurate compliance dashboards and reports for measuring risk and reviewing controls’ effectiveness.
From our Comcast compliance team to yours, thank you for all you do. We see you and appreciate you - today and every day.
Best practices for PCI DSS compliance...and how DataBee for CCM helps
For planning compliance with the Payment Card Industry Data Security Standard (PCI DSS), the PCI Security Standards Council (SSC) supplies a document that provides excellent foundational advice for both overall cybersecurity and PCI DSS compliance. Organizations may already be aware of it, but regardless, it is a useful resource, and it is interesting to read with Continuous Controls Monitoring (CCM) in mind.
The document lists 10 recommended best practices that are useful not just for PCI DSS compliance, but for overall security and for compliance with organizational policies as well as the frameworks and regulations to which the entity is subject. The best practices place a strong emphasis on ongoing, continuous compliance. That is, for organizations “to protect themselves and their customers from potential losses or damages resulting from a data breach, they must strive for ways to maintain a continuous state of compliance throughout the year rather than simply seeking point-in-time validation.”
While the immediate goal may be to attain a compliant Report on Compliance (ROC), that immediate goal, and the longer-term viability of the security program, are aided by establishing a program around continuous compliance and the ability to measure it.
Here are the SSC’s 10 Best Practices for Maintaining PCI DSS Compliance:
Develop and Maintain a Sustainable Security Program
Develop Program, Policy, and Procedures
Develop Performance Metrics to Measure Success
Assign Ownership for Coordinating Security Activities
Emphasize Security and Risk Management to Attain and Maintain Compliance
Continuously Monitor Security Controls
Detect and Respond to Security Control Failures
Maintain Security Awareness
Monitoring Compliance of Third-Party Service Providers
Evolve the Compliance Program to Address Changes
Some detail around the 10 recommendations
The first recommendation, “Develop and Maintain a Sustainable Security Program” is short, but notes that, “Any cardholder data not deemed critical to business functions should be removed from the environment in accordance with the organization’s data-retention policies… In addition, organizations should evaluate business and operating procedures for alternatives to retaining cardholder data.” Outsourcing the processing of cardholder data to entities that specialize in this work is an option that many organizations take. When that is not a viable option, minimizing the amount of data collected, and securely deleting it as specified in the organization’s data retention policy is the next best option.
“Develop Program, Policy, and Procedures” is the second recommendation. Along with developing and maintaining these documents, accountability must be assigned “to ensure the organization's sustainable compliance.” Additionally, PCI DSS v4.0 has a requirement under each of the twelve principal requirements stating that “Roles and responsibilities for performing activities” for each principal requirement “are documented, assigned, and understood.” If this role does not already exist, something for organizations to consider would be designating a “compliance champion” for each business unit. The compliance champions could work with their management to assume accountability for the control compliance for assets and staff assigned to the business unit.
“Develop Performance Metrics to Measure Success” follows. This recommendation includes “Implementation metrics” (which measure the degree to which a control has been implemented, and are usually described as percentages), and “Efficiency and Effectiveness Measures” (which evaluate attributes such as completeness, consistency, and timeliness). These metrics show if a control has been implemented over the expected range of the organization’s assets, if it has been implemented consistently, and is being executed when expected. These metrics play a key role in assessing compliance in a continuous way.
Measurement of implementation metrics and of effectiveness metrics for completeness and consistency is a core component of DataBee for CCM. For example, in the case of Asset Management, users can see if assets in scope for PCI DSS are correctly flagged as being in scope, if the asset owner is accurate, and if other data points such as physical location are present. The ability to see continuously refreshed data on a CCM dashboard, as opposed to having to create a point-in-time report or know how to access this data through a product-specific portal, makes it practical for teams to see accurate metrics in an efficient way.
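As a rough sketch of what computing an implementation metric might look like, the snippet below calculates the share of in-scope assets that have a named owner. The asset records and field names are hypothetical and for illustration only; they are not DataBee's data model.

```python
# Hypothetical asset inventory records; field names are illustrative only.
assets = [
    {"name": "pos-term-01", "pci_scope": True,  "owner": "retail-it"},
    {"name": "pos-term-02", "pci_scope": True,  "owner": None},
    {"name": "hr-app-01",   "pci_scope": False, "owner": "hr-it"},
]

in_scope = [a for a in assets if a["pci_scope"]]
with_owner = [a for a in in_scope if a["owner"]]

# Implementation metric: percentage of in-scope assets that have an assigned owner.
coverage = 100 * len(with_owner) / len(in_scope)
print(f"Asset ownership coverage (PCI scope): {coverage:.0f}%")  # 50%
```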
The fourth recommendation is to “Assign Ownership for Coordinating Security Activities.” An “individual responsible for compliance (a Compliance Manager)” is the main point of this recommendation. However, the recommendation notes that Compliance Manager should be “given adequate funding and resources… and granted the proper authority to effectively organize and allocate such resources.” The effective organization of resources could include delegating tasks throughout the organization to managers over units within the larger organization. This recommendation ends by noting that the organization must ensure that “the goals and objectives of its compliance program are consistently achieved despite changes in program ownership (i.e., employee turnover, change of management, organization merger, re-organization, etc.). Best practices include proper knowledge transfer, documentation of existing controls and the associated responsible individual(s) or team(s).”
Using the DataBee for CCM dashboards to assign accountability for assets and staff to the appropriate business units helps with this recommendation. It clarifies the delegation of responsibility for assets and staff to the business unit’s management, and it helps drive the effective achievement of the compliance program’s objectives during transitions in the Compliance Manager role. Delegating control compliance to the business unit’s management enables them to continue their tasks while a new Compliance Manager is hired and while that person adjusts to the role.
The fifth recommendation, “Emphasize Security and Risk Management to Attain and Maintain Compliance,” asserts that “PCI DSS provides a minimum set of security requirements for protecting payment card account data…,” and that “Compliance with industry standards or regulations does not inherently equate to better security.”
This point cannot be emphasized highly enough: “A more effective approach is to focus on building a culture of security and protecting an organization’s information assets and IT infrastructure and allow compliance to be achieved as a consequence.” The ongoing measurement of control implementation by CCM supports a culture of security. Organizations can use the information provided by DataBee for CCM to not only enable continuous reporting, but through it to support continuous remediation of control failures.
The next recommendation, “Continuously Monitor Security Controls,” describes how “the use of automation in both security management and security-control monitoring can provide a tremendous benefit to organizations in terms of simplifying monitoring processes, enhancing continuous monitoring capabilities, and minimizing costs while improving the reliability of security controls and security-related information.”
Ongoing monitoring of data that is frequently refreshed can be a core component for ongoing compliance. Ultimately, implementing a continuous controls monitoring program will help reduce extra workload as the PCI DSS assessment date approaches. DataBee for CCM is a tool that supports the necessary continuous monitoring.
The seventh recommendation, “Detect and Respond to Security Control Failures,” applies to two situations:
controls which have failed, but with no detectable consequences, and
control failures that escalate to security incidents.
PCI SSC notes that, “The longer it takes to detect and respond to a failure, the higher the risk and potential cost of remediation.” Continuous monitoring can help the organization to reduce the time it takes to detect a failed control.
Recommendation eight, “Maintain Security Awareness” speaks to the need to train the workforce, especially regarding how to respond to social engineering. Security training, both for the staff in general and role-based training for specific teams, is one of the requirements that DataBee for CCM reports on through its dashboards.
Recommendation nine is “Monitoring Compliance of Third-Party Service Providers,” and ten is “Evolve the Compliance Program to Address Changes.” A robust compliance program that is in place throughout the year is more capable of evolving and adapting to change than an assessment-focused program that allows controls to drift out of compliance between assessments. Continuous monitoring is key for combating compliance drift once an assessment has been completed.
After the ten recommendations, the main body of the document concludes with a section about the “Commitment to Maintaining Compliance.” Two of the key actions for maintaining continuous compliance are, “Assigning responsibility for ensuring the achievement of their security goals and holding those with responsibility accountable,” and “Developing tools, techniques, and metrics for tracking the performance and sustainability of security activities.” DataBee for CCM enables both these tasks.
The main theme of the “Best Practices for Maintaining PCI DSS Compliance” is that continuous compliance with PCI DSS that is maintained throughout the year is the goal. Ultimately, this helps improve the overall security posture of the organization. Making the required compliance activities business as usual tasks that are continuous throughout the year can also help with the specific goal of achieving a compliant result for a PCI DSS assessment when it comes due.
How DataBee for CCM fits in
We envisioned and realized DataBee for CCM as a fantastic fit for an evolving compliance program. Using the DataBee dashboards, with their continuously updated information that can be accessible to everyone who needs to see it, helps free up time for GRC and other teams to focus on the evolution of the cybersecurity program. Given the rapid change in the cyber-threat landscape, and the frequent changes in security controls and regulatory requirements, turning report creation over to CCM to give time back to your people for higher-value work is a win for your organization.
DataBee for CCM helps by providing consistent data to all teams, GRC, executive management, business management, IT, etc., so that everyone is working from the same information. This helps to delegate control compliance, and clearly identify accountable and responsible parties. Furthermore, DataBee for CCM shows executives, GRC, business managers and others content for multiple controls, from many different tools, through a single interface (as opposed to GRC needing to create multiple reports, or business managers and others having to create their own, possibly erroneous, reports). Additional dashboards can be created to report on other controls that are in scope for PCI DSS, such as secure configuration, business continuity, and monitoring the compliance of third-party service providers. Any control for which data is available to create useful dashboard content is a candidate for a DataBee for CCM dashboard.
Enter the golden age of threat detection with automated detection chaining
During my time as a SOC analyst, triaging and correlating alerts often felt like solving a puzzle without the box or knowing if you had all the pieces.
My days consisted of investigating alerts in an always-growing incident queue. An investigation could start with a single high or critical alert and then require hunting through other log sources to piece together what happened. I had to ask myself (and my team) whether this alert and that alert had any identifiable relationships or patterns with the ones we had investigated that day, even though the alerts looked unrelated by themselves. Most investigations inevitably relied on institutional knowledge to find the pieces of the puzzle, searching by IP address in one data source and by computer name in another. Finding the connections between low and slow attacks in near real-time was a matter of chance, often discovered via threat-hunting efforts after slipping through the cracks of security operations. This isn’t an uncommon story, and it’s not new either – the same problems surfaced in the 2013 Target breach and the 2024 National Public Data breach.
That’s why we launched automated detection chaining as part of the DataBee for Security Threats solution. Using a patent-pending approach to entity resolution, the security data fabric platform can chain together alerts from disjointed tools that could be potentially tied to an advanced persistent threat, insider threat, or compromised asset. What I like to call a “super alert” is presented in DataBee EntityViews™, which aggregates alerts into a time-series, or chronological, view. Now it’s easier to find attacks that span security tools and the MITRE ATT&CK framework. With our out-of-the-box detection chain, you can automatically create a super alert before the adversary reaches the command-and-control phase.
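To make the chaining idea concrete, here is a minimal, hypothetical sketch (not DataBee code) of grouping alerts from different tools by a resolved entity, ordering them in time, and raising a "super alert" once enough distinct tactics appear. The field names, the toy entity resolution, and the threshold are all assumptions for illustration.

```python
from collections import defaultdict
from datetime import datetime

# Illustrative only: a simplified stand-in for entity resolution and
# detection chaining, not DataBee's implementation.

# Hypothetical alerts from different tools, already normalized.
alerts = [
    {"time": "2024-06-01T09:02:00", "entity": "jdoe-laptop", "tool": "email_gw",
     "tactic": "Initial Access"},
    {"time": "2024-06-01T09:40:00", "entity": "JDOE-LAPTOP.corp", "tool": "edr",
     "tactic": "Execution"},
    {"time": "2024-06-02T22:15:00", "entity": "jdoe-laptop", "tool": "firewall",
     "tactic": "Lateral Movement"},
]

def resolve_entity(name: str) -> str:
    """Toy entity resolution: collapse naming variants to one canonical key."""
    return name.lower().split(".")[0]

# Group alerts by resolved entity, then sort each group chronologically.
chains = defaultdict(list)
for alert in alerts:
    chains[resolve_entity(alert["entity"])].append(alert)

THRESHOLD = 3  # e.g., raise a "super alert" once 3 distinct tactics are seen
for entity, items in chains.items():
    items.sort(key=lambda a: datetime.fromisoformat(a["time"]))
    tactics = {a["tactic"] for a in items}
    if len(tactics) >= THRESHOLD:
        print(f"Super alert for {entity}: {sorted(tactics)}")
```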
Break free from vendor-specific detections with Sigma Rules
Once a security tool is fully deployed in the network and environment, it becomes nearly impossible to change vendors without significant operational impact. The impact is more than just replacing the existing solution; it also means updating all upstream and downstream integration points, such as custom detection content or log parsers. This leads to potential gaps in coverage due to the differences between the tooling deployed and the tools desired. The way out is to standardize logging to a vendor-agnostic schema and then apply an open-source detection framework on top of it.
The DataBee platform automates migration to the Open Cybersecurity Schema Framework (OCSF), which has become increasingly popular with security professionals and is gaining adoption in some tools. Its vendor-agnostic approach standardizes disparate security logs and data feeds, giving SOC teams the ability to use their security data more effectively. Active detection streams in DataBee apply Sigma-formatted rules over security data that is mapped to a DataBee-extended version of OCSF, allowing them to integrate into the existing security ecosystem with minimal customizations. DataBee handles the translation from the Sigma taxonomy to OCSF to lower the level of effort needed to adopt it and to support organizations on their journey to vendor-agnostic security operations. Sigma-formatted detections are imported and managed via GitHub to enable treating detections as code. By breaking free of proprietary formats, teams can more easily use vendor-agnostic Sigma rules to gain security insights from across all their tools, including data stored in security data lakes and warehouses.
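As a rough illustration of what applying a Sigma-style rule to normalized data involves, the sketch below evaluates a hand-rolled, simplified rule against an OCSF-style event. Real Sigma rules are YAML and are normally handled by a proper Sigma toolchain; the rule, matcher, and field names here are assumptions for the example, not DataBee's or OCSF's actual definitions.

```python
# Simplified illustration of evaluating a Sigma-style detection against an
# OCSF-normalized event. Field names and the rule itself are hypothetical;
# production systems would use a real Sigma backend rather than this toy matcher.

rule = {
    "title": "Multiple failed logins",
    "detection": {
        "selection": {"class_name": "Authentication", "status": "Failure"},
        "condition": "selection",
    },
}

event = {  # one OCSF-style normalized record
    "class_name": "Authentication",
    "status": "Failure",
    "user": {"name": "jdoe"},
    "src_endpoint": {"ip": "203.0.113.7"},
}

def matches(selection: dict, record: dict) -> bool:
    """Return True if every selection field equals the record's value."""
    return all(record.get(field) == value for field, value in selection.items())

if matches(rule["detection"]["selection"], event):
    print(f"Rule fired: {rule['title']} for user {event['user']['name']}")
```

Because the rule references normalized field names rather than a vendor's raw schema, the same detection logic keeps working when the underlying log source changes.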
The accidental insider threat
Accidental insider threats often begin with a phishing attack containing a malicious link or download that tricks the user. The malware is too new, or has morphed enough, to evade your endpoint detection. Then it spreads to whatever other devices it can authenticate to. Detecting the scope of the malware’s lateral movement is challenging because there is so much noise to search through. With DataBee EntityViews, SOC teams can easily review the historical information connected to the organization’s real-world people and devices, giving them a way to trace the progression of events.
Looking at a user’s profile shows relevant business contexts that can aid the investigation:
Job Title to hint at what is normal behavior
Manager to know who to go to for questions or if action needs to be taken
Owned assets that may be worth investigating further
The Event Timeline shows the various types of OCSF findings associated with the user.
By scrolling through the list of findings, a SOC analyst can quickly identify several potential issues, including malware present on the workstation. Most notably, the MITRE ATT&CK detection chain has triggered. In this instance, multiple data sources alerted on different parts of the ATT&CK chain, producing a super alert. The originating events are maintained as evidence and are easily accessible to the analyst.
EntityViews also bring in events from devices that the current user owns, helping simplify the process of pulling together the whole story. In our example the device is the user’s laptop, so it’s likely that all of the activity was carried out by the user.
The first thing of note is the unusual number of authentication attempts to devices that seem atypical for a developer, such as a finance server. As we continue to scroll through the user’s timeline, reviewing events from a variety of data sources, we finally come across our smoking gun: the phishing email containing the link the user clicked, which was our initial point of compromise.
It’s clear the device has malware on it, and the authentication attempts imply that the malware was looking to spread further in the network. To visualize this activity, we can leverage the Related Entities graphical view in the Activity section of EntityViews. SOC analysts can use a graphical representation and animation of the activity to visualize the connections between the compromised user and the organization. The graph displays other users and devices that appear in security findings, authentication events, and ownership events. In our example, we can see that the user has attempted to authenticate to some atypical devices, such as an HR system.
Filtering enables more targeted investigations, like focusing on only the successful authentication attempts.
Visualizations such as this in DataBee enable more accurate, timely, and complete investigations. From this view, SOC analysts can select any entity to see its EntityView with the activity associated with the related users and devices. Rather than pivoting between multiple applications or waiting for data to be reprocessed, they have real-time access to information in an easy-to-consume format.
Customizing detection chains to achieve organizational objectives
Detection Chains are designed to enable advanced threat modeling in a simple solution. Detection Chains can be created in the DataBee platform leveraging all kinds of events that flow through the security data fabric. DataBee ships with two detection chains to get you started:
MITRE ATT&CK Chain: Detect advanced low and slow attacks that span the MITRE ATT&CK chain before reaching Command & Control.
Potential Insider Threat: Detect insider threats who are printing out documents, emailing personal accounts, and tampering with files in the file share.
These chains serve as a starting point. The intent is that organizations add and remove chains based on their specific needs. For example, you may want to extend the potential insider threat rule to include more potential email domains or limit file share behavior to accessing files that contain trade secrets or sales information.
Automated detection chains are nearly infinitely flexible. By chaining together detections from different data sources that align to different parts of the attack chain specific to a user or device, DataBee enables building advanced security analytics for hunting elusive APTs and getting ahead of pesky ransomware attacks.
Building a better way forward with DataBee
Every organization is different, and every SOC team has unique needs. DataBee’s automated detection chaining feature gives SOC analysts a faster way to investigate complex security incidents, enabling them to rapidly and intuitively move through vast quantities of historical data.
If you’re ready to gain the full value of your security data with an enterprise-ready security, risk, and compliance data fabric, request a custom demo to see how DataBee for Security Threats can turn static detections into dynamic insights.
You've reduced data, so what's next?
Organizations often adopt data tiering to reduce the amount of data that they send to their analytics tools, like Security Information and Event Management (SIEM) solutions. By diverting data to an object store or a data lake, organizations are able to manage and lower costs by minimizing the amount of data that their SIEM stores. Although they achieve this tactical objective, the process creates data silos. While people can query the data in isolation, they often fail to glean collective insights across the silos.
Think of the problem like a large building with cameras across its perimeters. The organization can monitor each camera’s viewpoint, but no individual camera has the full picture, as any spy movie will tell you. Similarly, you might have different tools that see different parts of your security picture. Although SIEMs originally intended to tie together all security data into a composite, cloud applications and other modern IT and cybersecurity technology tool stacks generate too much data to make this cost-effective.
As organizations balance saving money with having an incomplete picture, a high-quality data fabric architecture can enable them to build more sustainable security data strategies.
From default to normalized
When you implement a data lake, the diverted data remains in its default format. When you try to paint a composite picture across these tools, you rely on what an individual data set understands or sees, leaving you to pick out individual answers from these siloed datasets.
Instead of asking a question once, you need to ask fragments of the question across different data sets. In some cases, you may have a difficult time ensuring that you have the complete answer.
With a security data fabric, you can normalize the data before landing it in one or more repositories. DataBee® from Comcast Technology Solutions uses extract, transform, and load processes to automatically parse security data, then normalizes it according to our extended Open Cybersecurity Schema Framework (OCSF) so that you can correlate and understand what’s happening in the aggregate picture.
By normalizing the data on its way to your data lake, you optimize compute and storage costs, eliminating some of the constraints arising from other data federation approaches.
Considering your constraints
Federation reduces storage costs, but it introduces limitations that can present challenges for security teams.
Latency
When you move data from one location to another, you introduce various time lags. Some providers limit how often, or at what times of day, you can transfer data. For example, if you want data in a specific format, some repositories may only manage this transfer once per day.
Meanwhile, if you want to stream the data into a different format for collection, the reformatting can also create a time lag. A transformation and storage process may take several minutes, which can impact key cybersecurity metrics like mean time to detect (MTTD) or mean time to respond (MTTR).
When you query security datasets to learn what happened over the last hour, a (near) real-time data source will contribute to an accurate picture, while a delayed source may not have yet received data for the same period. As you attempt to correlate the data to create a timeline, you might need to use multiple data sources that all have different lag times. For example, some may be mostly real-time while another sends data five minutes later. If you ask the question at the time an event occurred, the system may not have information about it for another five minutes, creating a visibility gap.
Such gaps can create blind spots as you scale your security analytics strategy. The enterprise security team may be asking hundreds of questions across the data system, and the time delay can create a large gap between what you can see and what happened.
Correlation
Correlating activities from across your disparate IT and security tools is critical. Data gives you facts about an event while correlation enables you to interpret what those facts mean. When you ask fragments of a question across data silos, you have no way to automate the generation of these insights.
For example, a security alert will give you a list of events including hundreds of failed login attempts over three minutes. While you have these facts, you still need to interpret whether they describe malicious actors using stolen credentials or a brute force attack.
To improve detections and enable faster response times, you need to weave together information like:
The IP address(es) involved over the time the event occurred
The user associated with the device(s)
The user’s geographic location
The network access permissions for the user and device(s)
You may be storing this data in different repositories without correlation capabilities. For example, you may have converged all DNS, DHCP, firewall, EDR, and Proxy data in one repository while combining user access and application data in another. To get a complete picture of the event, you need to make at least, although likely more than, two single-silo queries.
While you may have reduced data storage costs, you have also increased the duration and complexity of incident investigations, which gives malicious actors more time in your systems and makes it more difficult to locate them and contain the threat.
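As a rough sketch of the kind of cross-silo join an analyst ends up doing by hand (the repositories, records, and field names below are assumptions, not any particular product's schema), correlating failed logins with user context might look like this:

```python
# Illustrative only: manually joining two hypothetical data silos to add
# user and location context to failed-login events.

auth_events = [  # e.g., pulled from a network/security repository
    {"ip": "198.51.100.23", "result": "failure", "count": 312, "window": "3m"},
]

identity_records = [  # e.g., pulled from a separate access/identity repository
    {"ip": "198.51.100.23", "user": "jdoe", "geo": "Denver, US",
     "network_access": ["vpn", "finance-app"]},
]

identity_by_ip = {rec["ip"]: rec for rec in identity_records}

for event in auth_events:
    context = identity_by_ip.get(event["ip"], {})
    print({**event, **context})  # the analyst now has facts plus context
```

Every additional silo adds another manual join like this one, which is exactly the overhead a unified, normalized dataset is meant to remove.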
Weaving together federated data with DataBee
Weaving together data tells you what and when something happened, enabling insights into activity rather than just a list of records. With a fabric of data, you can interpret it to better understand your environment or gain insights about an incident. With DataBee, you can focus on protecting your business while achieving tactical and strategic objectives.
At the tactical level, DataBee fits into your cost management strategies because it focuses on collecting and processing your data in a streamlined, affordable way. It ingests security and IT logs and feeds, including non-traditional telemetry like organizational hierarchy data, from APIs, on-premises log forwarders, Amazon S3 buckets, or Azure Blob storage, then automatically parses and maps the data to OCSF. You can use one or more repositories, aligning with cost management goals. Simultaneously, data users can access accurate, clean data through the platform to build reliable analytics without worrying about data gaps.
The platform enriches your dataset with business policy context and applies patent-pending entity resolution technology so you can gain insights based on a unified, time-series dataset. This transformation and enrichment process breaks down silos so you can efficiently and effectively correlate data to gain real-time insights, empowering operational managers, security analysts, risk management teams, and audit functions.
The value of OCSF from the point of view of a data scientist
Data can come in all shapes and sizes. As the “data guy” here at DataBee® (and the “SIEM guy” in a past life), I’ve worked plenty with logs and data feeds in different formats, structures, and sizes delivered using different methods and protocols. From my experience, when data is inconsistent and lacks interoperability, I’m spending most of my time trying to understand the schema from each product vendor and less time on showing value or providing insights that could help other teams.
That’s why I’ve become involved in the Open Cybersecurity Schema Framework (OCSF) community. OCSF is an emerging but highly collaborative schema that aims to standardize security and security-related data to improve consistency, analysis, and collaboration. In this blog, I will explain why I believe OCSF is the best choice for your data lake.
The problem of inconsistency
When consuming enterprise IT and cybersecurity data from disparate sources, most of the concepts are the same (like an IP address or a hostname or a username) but each vendor may use a different schema (like the property names) as well as sometimes different ways to represent that data.
Example: How different vendors represent a username field
Vendor A (Firewall): user.name
Vendor B (SIEM): username
Vendor C (Endpoint): usr_name
Vendor D (Cloud): identity.user
Even if the same property name is used, sometimes the range of values or classifications might vary.
Example: How different vendors represent “Severity” with different value ranges
Vendor A (Firewall): severity, with possible values low, medium, high
Vendor B (SIEM): severity, with possible values 1 (critical), 2 (high), 3 (medium), 4 (low)
Vendor C (Endpoint): severity, with possible values info, warning, critical
Vendor D (Cloud): severity, with possible values 0 (emergency) through 7 (debug)
In a non-standardized environment, these variations require custom mappings and transformations before consistent analysis can take place. That’s why data standards can be helpful to govern how data is ingested, stored, and used, maintaining consistency and quality so that it can be used across different systems, applications, and teams.
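To make those custom mappings and transformations concrete, here is a minimal sketch of normalizing the username field and the severity scales from the hypothetical vendors above into one common representation. The mapping tables, the normalized 1-4 severity scale, and the record shapes are illustrative assumptions.

```python
# Minimal normalization sketch for the hypothetical vendors above.

USERNAME_FIELDS = {
    "vendor_a": "user.name",
    "vendor_b": "username",
    "vendor_c": "usr_name",
    "vendor_d": "identity.user",
}

# Normalized severity scale used here: 1 = low ... 4 = critical.
SEVERITY_MAPS = {
    "vendor_a": {"low": 1, "medium": 2, "high": 3},
    "vendor_b": {4: 1, 3: 2, 2: 3, 1: 4},            # vendor: 1 = critical ... 4 = low
    "vendor_c": {"info": 1, "warning": 2, "critical": 4},
    "vendor_d": {7: 1, 6: 1, 5: 2, 4: 2, 3: 3, 2: 3, 1: 4, 0: 4},  # syslog 0-7
}

def get_nested(record: dict, dotted: str):
    """Follow a dotted path like 'user.name' through nested dictionaries."""
    for key in dotted.split("."):
        record = record.get(key, {})
    return record or None

def normalize(vendor: str, record: dict) -> dict:
    return {
        "username": get_nested(record, USERNAME_FIELDS[vendor]),
        "severity": SEVERITY_MAPS[vendor].get(record.get("severity")),
    }

print(normalize("vendor_a", {"user": {"name": "jdoe"}, "severity": "high"}))
# {'username': 'jdoe', 'severity': 3}
```

Multiply this by dozens of sources and hundreds of fields and the maintenance burden of per-vendor mappings becomes clear, which is exactly the problem a shared standard is meant to solve.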
How can a standard help?
In the context of data modeling, a "standard" is a widely accepted set of rules or structures designed to ensure consistency across systems. The primary purpose of a standard is to achieve normalization: ensuring that data from disparate sources can be consistently analyzed within a unified platform like a security data lake or a security information and event management (SIEM) solution. From a cybersecurity standpoint, this becomes evident in at least a few common scenarios:
Analytics: A standardized schema enables the creation of consistent rules, models, and dashboards, independent of the data source or vendor. For example, a rule to detect failed login attempts can be applied uniformly, regardless of whether the data originates from a firewall, endpoint security tool, or cloud application.
Threat Hunting - Noise Reduction: With normalized fields, filtering out irrelevant data becomes more efficient. For instance, if every log uses a common field for user identity (like username), filtering across multiple log sources becomes much simpler.
Threat Hunting - Understanding the Data: Having a single schema instead of learning multiple vendor-specific schemas reduces cognitive load for analysts, allowing them to focus on analysis rather than data translation.
For log data, several standards exist. Some popular ones are: Common Event Format (CEF), Log Event Extended Format (LEEF), Splunk's Common Information Model (CIM), and Elastic’s Common Schema (ECS). Each has its strengths and limitations depending on the use case and platform.
Why existing schemas like CEF and LEEF fall short
Common Event Format (CEF) and Log Event Extended Format (LEEF) are widely used schemas, but they are often too simplistic for modern data lake and analytics use cases.
Limited Fields: CEF and LEEF offer a limited set of predefined fields, meaning most log data ends up in custom fields, which defeats the purpose of a standardized schema.
Custom Fields Bloat: In practice, most data fields are defined as custom, leading to inconsistencies and a lack of clarity. This results in different interpretations of the same data types, complicating analytics.
Overloaded Fields: Without sufficient granularity, crucial data gets overloaded into generic fields, making it hard to distinguish between different event types.
Example: Overloading a single field like “message” to store multiple types of information (e.g., event description, error code) creates ambiguity and reduces the effectiveness of automated analysis.
The limits of CIM and ECS: vendor-specific constraints
Splunk CIM and Elastic ECS are sophisticated schemas that better address the needs of modern environments, but they are tightly coupled to their respective ecosystems.
Proprietary Optimizations:
CIM: Although widely used within Splunk, CIM is proprietary and lacks an open-source community for contributions to the schema itself. Its design focuses on Splunk’s use cases, which can be limiting in broader environments.
ECS: While open-source, ECS remains heavily influenced by Elastic’s internal needs. For instance, ECS optimizes data types for Elastic’s indexing and querying, like the distinction between keyword and text fields. Such optimizations can be unnecessary or incompatible with non-Elastic platforms.
Field Ambiguity:
CIM uses fields like src and dest, which lack precision compared to more explicit options like source.ip or destination.port. This can lead to confusion and the need for additional context when performing cross-platform analysis.
Vendor-Centric Design:
CIM: The field definitions and categories are tightly aligned with Splunk’s correlation searches, limiting its relevance outside Splunk environments.
ECS: Data types like geo_point are unique to Elastic’s product features and capabilities, making the schema less suitable when integrating with other tools.
How OCSF addresses these challenges
The OCSF was developed by a consortium of industry leaders, including AWS, Splunk, and IBM, with the goal of creating a truly vendor-neutral and comprehensive schema.
Vendor-Neutral and Tool-Agnostic: OCSF is designed to be applicable across all logs, not just security logs. This flexibility allows it to adapt to a wide variety of data sources while maintaining consistency.
Open-Source with Broad Community Support: OCSF is openly governed and welcomes contributions from across the industry. Unlike ECS and CIM, OCSF’s direction is not controlled by a single vendor, ensuring it remains applicable to diverse environments.
Specificity and Granularity: The schema’s granularity aids in filtering and prevents the overloading of concepts. For example, OCSF uses specific fields like identity.username and network.connection.destination_port, providing clarity while avoiding ambiguous terms like src.
Modularity and Extensibility: OCSF’s modular design allows for easy extensions, making it adaptable without compromising specificity. Organizations can extend the schema to suit their unique use cases while remaining compliant with the core model.
In DataBee’s own implementation, we’ve extended OCSF to include custom fields specific to our environment, without sacrificing compatibility or requiring extensive custom mappings. For example, we added the assessment object, which can be used to describe data around third-party security assessments or internal audits. This kind of log data doesn’t come from your typical security products but is necessary for the kinds of use cases you can achieve with a data lake.
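As a hedged illustration (not the actual DataBee or OCSF schema definitions), an OCSF-style record carrying a custom assessment extension might look roughly like the following; every field name here is an assumption made for the example.

```python
# Hypothetical, simplified example of an OCSF-style record with a custom
# "assessment" extension object. Field names are illustrative only.

finding = {
    "class_name": "Compliance Finding",
    "time": "2024-09-12T14:30:00Z",
    "device": {"name": "pay-gw-03", "owner": "payments-platform"},
    "compliance": {"control": "PCI DSS 8.3.6", "status": "Fail"},
    # Custom extension: context from a third-party assessment or internal audit.
    "assessment": {
        "source": "internal_audit",
        "assessor": "GRC team",
        "scope": "cardholder data environment",
        "finding_ref": "AUD-2024-117",
    },
}

# Downstream analytics can rely on the shared core fields (device, compliance)
# while the extension adds context that typical security products don't emit.
print(finding["device"]["name"], finding["compliance"]["status"],
      finding["assessment"]["finding_ref"])
```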
Having shared some data points about my own experiences with the industry’s most common schemas, it’s natural to compare them side by side. The matrix below compares OCSF with two leading schemas.
OCSF Schema Comparison Matrix
Openness: OCSF is open-source and community and multi-vendor-driven; Splunk CIM is proprietary and Splunk-driven; Elastic ECS is open-source but Elastic-driven.
Community Engagement: OCSF has a broad, inclusive, vendor-neutral community; Splunk CIM is limited to the Splunk community and apps; Elastic ECS has a strong Elastic community with centralized control.
Flexibility of Contribution: OCSF is contributable, modular, and actively seeks community input; Splunk CIM accepts no direct community contributions; Elastic ECS is contributable, but Elastic makes the final decisions.
Adoption Rate: OCSF adoption is early but growing rapidly across multiple vendors; Splunk CIM adoption is high within the Splunk ecosystem; Elastic ECS adoption is high within the Elastic ecosystem.
Vendor Ecosystem: OCSF has broad support and is designed for multi-vendor use; Splunk CIM is Splunk-centric and limited outside of Splunk; Elastic ECS is Elastic-centric with some third-party integrations.
Granularity and Adaptability: OCSF is structured and specific but modular, balancing adaptability with detailed extensibility; Splunk CIM is moderately structured with more generic fields, offering broad compatibility but less precision; Elastic ECS is highly granular and specific with tightly defined fields, but has limited flexibility outside Elastic environments.
Best For: OCSF is best for flexible, vendor-neutral environments needing both detail and adaptability; Splunk CIM is best for broad compatibility in Splunk-centric environments; Elastic ECS is best for consistent, detailed analysis within Elastic environments.
The impact of OCSF at DataBee
In working with OCSF, I have been particularly impressed with the combination of how detailed the schema is and how extensible it is. We can leverage its modular nature to apply it to a variety of use cases to fit our customers' needs, while re-using most of the schema and its concepts. OCSF’s ability to standardize and enrich data from multiple sources has streamlined our analytics, making it easier to track threats across different platforms and ultimately helping us deliver more value to our customers. This level of consistency and collaboration is something that no other schema has provided, and it’s why OCSF has been so impactful in my role as a data analyst.
If we have ideas for the schema that might be usable for others, the OCSF community is receptive to contributions. The community is already brimming with top talent in the SIEM and security data field and is there to help guide us in our mapping and schema extension decisions. The community-driven approach means that I’m not working in isolation; I have access to a wealth of knowledge and support, and I can contribute back to a growing standard that is designed to evolve with the industry.
Within DataBee as a product, OCSF enables us to build powerful correlation logic which we use to enrich the data we collect. For example, we know we can track the activities of a device regardless of whether the event came from a firewall or from an endpoint agent, because the hostname will always be device.name.
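As a small, hedged illustration of that kind of correlation (the events below are fabricated and simplified, and only the consistent device.name key is the point), grouping activity for a device across sources needs no per-vendor logic once the hostname always lives in the same field:

```python
from collections import defaultdict

# Fabricated, simplified events normalized so the hostname is always device.name.
events = [
    {"source": "firewall", "device": {"name": "eng-ws-42"}, "activity": "outbound_conn"},
    {"source": "endpoint", "device": {"name": "eng-ws-42"}, "activity": "process_start"},
    {"source": "endpoint", "device": {"name": "fin-srv-01"}, "activity": "login_failure"},
]

# Because the key is consistent, grouping by device requires no per-vendor mapping.
by_device = defaultdict(list)
for event in events:
    by_device[event["device"]["name"]].append((event["source"], event["activity"]))

for device, activity in by_device.items():
    print(device, activity)
```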
Whenever our customers have any questions about how our schema works, the self-documenting schema is always available at ocsf.databee.buzz (which includes our own extensions). This helps to enable as many users as possible to gain security and compliance insights.
Conclusion
As organizations continue to rely on increasingly diverse and complex data sources, the need for a standardized schema becomes paramount. While CEF, LEEF, CIM, and ECS have served important roles, their limitations—whether in scope, flexibility, or vendor-neutrality—make them less ideal for a comprehensive data lake strategy.
For me as a Principal Cybersecurity Data Analyst, OCSF has been transformative and represents the next evolution in standardization. With its vendor-agnostic, community-driven approach, OCSF offers the precision needed for detailed analysis while remaining flexible enough to accommodate the diverse and ever-evolving landscape of log data.
Political Advertising: the race to every screen moves fast
If there’s one thing that’s self-evident in this U.S. presidential election year, it’s this: political advertising moves fast.
"For agencies representing the vast number of campaigns across state and national campaigns, it’s not just about the sheer volume, but also about the need to respond quickly,” explains Justin Morgan, Sr. Director of Product Management for the CTS AdFusion team. “Just think about how the news cycle – in just the last few days alone – has necessitated rapid changes in messaging that needs to be handled accurately across every outreach effort.
According to GroupM, political advertising for 2024 is expected to surpass $16 billion – over four percent of the entire ad revenue for the year. As consumers we’re certainly aware that campaign messaging is constant, and constantly changing, but what does that mean behind the scenes from a technology standpoint?
“Political advertisers have to maintain a laser-focus on messaging and results,” explains Morgan. “This makes accuracy and agility even more crucial, so that agencies can make changes fast and trust that every spot is not just placed accurately, but also represents their candidate(s) in the best possible quality, no matter where or how that ad gets seen.”
For the AdFusion team, it’s important to make ad management simple and streamlined so our partners can focus on the most important goal: winning the race. That’s why we focus on three areas to help our clients get better, faster, and smarter:
Better: Precisely target the right voters with the right message by leveraging program-level automated traffic and delivery
Faster: Quickly shift campaign strategy as the race evolves by delivering revised ads and traffic in minutes
Smarter: Drive compliance with one-hour invoicing and in-platform payments.
Learn more about how AdFusion improves the technology foundation for driving political campaigns here.
The challenges of a converged security program
It’s commonplace these days to assume we can learn everything about someone from their digital activity – after all, people share so much on social media and over digital chats. However, advanced threats are more careful about their digital footprint. To catch them, combining insights from their actual day-to-day activities in the physical world with their digital communications and activity can provide a better sense of whether there’s an immediate and significant threat that needs to be addressed.
Let’s play out this insider threat scenario. While it is set in the financial services sector, with a little imagination a security analyst could see its applicability to other sectors. An investment banking analyst, Sarah, badges into a satellite office on a Saturday at 7 pm. Next, she logs onto a workstation and prints 200 pages of materials. Taken alone, these activities could look innocuous. But taken together, could there be something more going on?
As it turns out, Sarah tendered her resignation the prior Friday with 14 days’ notice. She leaves that Saturday night with two paper bags of confidential company printouts in tow to take to her next employer – a competing investment bank – to give her an edge.
A complete picture of her activity can be gleaned with logs from a few data sources:
HR data showing her status as pending termination, from a system like Workday or SAP
Badge reader logs
Sign in logs
Print logs
Video camera logs, from the entry and exit way of the building
While seemingly simple, piecing all this information together and taking steps to stop the employee’s actions or even recover the stolen materials is non-trivial. Today, companies are asking themselves, what type of technology is required to know that her behavior was immediately suspicious? And what type of security program can establish the objectives and parameters for quickly catching this type of insider threat?
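As a rough sketch of what piecing this information together might involve technically, the snippet below correlates the data sources listed above for this scenario. The records, field names, and the off-hours and print-volume thresholds are all assumptions made for illustration, not a prescribed detection.

```python
from datetime import datetime

# Illustrative correlation over the data sources listed above; records,
# field names, and thresholds are hypothetical.

hr = {"sarah": {"status": "pending_termination"}}          # HR system
badge_logs = [{"user": "sarah", "site": "satellite-office",
               "time": "2024-08-10T19:02:00"}]              # Saturday, 7 pm
print_logs = [{"user": "sarah", "pages": 200,
               "time": "2024-08-10T19:25:00"}]

def is_off_hours(ts: str) -> bool:
    t = datetime.fromisoformat(ts)
    return t.weekday() >= 5 or t.hour < 7 or t.hour >= 19   # weekend or night

for badge in badge_logs:
    user = badge["user"]
    pages = sum(p["pages"] for p in print_logs if p["user"] == user)
    if (hr.get(user, {}).get("status") == "pending_termination"
            and is_off_hours(badge["time"]) and pages >= 100):
        print(f"Flag for review: {user} badged into {badge['site']} off-hours "
              f"and printed {pages} pages while pending termination")
```

None of the individual signals is alarming on its own; it is the combination, across physical and digital sources, that makes the behavior immediately suspicious.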
What is a converged security program?
In the above scenario, sign-in logs and print logs alone aren’t necessarily suspicious. The suspicion level materially increases when you consider the combined context of her employment status with the choice of day and time to badge into the office. As such, converged security dataset analysis brings together physical security data points, such as logs from cameras or badge readers in the above example, and digital insights from activity on computers, computer systems or the internet. If these insights are normalized into the same dataset with clear consistency across user and device activity, they can be analyzed by physical security or cybersecurity analysts for faster threat detection. Furthermore, such collaboration can give way to physical and cybersecurity practitioners establishing a converged set of policies and procedures for incident response and recovery.
In his book, Roland Cloutier describes three important attributes of a converged security function:
One Table: everyone sitting together to discuss issues and create a sense of aligned missions, policies, and procedures for issue detection and response.
Interconnected Issue Problem Solving: identifying the problem as a shared mission and connecting resources in a way that resolves problems faster.
Link Analysis: bringing together data points about an issue or problem and correlating them to gain insights from data analytics.
Challenges of bringing together physical and information security
In today’s environment, the challenges of intertwining physical and digital security insights are substantial. Large international enterprises have campuses scattered across the world and a combination of in-office and remote workers. They may face challenges when employee data is fragmented across different physical and digital systems. Remote workers often don’t have physical security log information associated with their daily activity because they work from the confines of their homes, out of reach of corporate physical monitoring.
The modern workforce model further complicates managing physical and digital security as organizations contend with the:
Rise of remote work
Demise of the corporate network
Usage of personal mobile devices at work
Constant travel of business executives
A worker can no longer be tracked by movement in the building and on the corporate network. Instead, the person’s physical location and network connections change throughout the day. Beyond the technical challenges, organizations face hierarchical structure and human element challenges.
Many companies separate physical security from cybersecurity. One reason for this is that a different skillset may be required to stop the threats. Yet, there is value in the two security leaders developing an operating model for collaboration that centers on a global data strategy with consistent and complete insights from the physical security and cybersecurity tools.
Consider a model that revolves around three principles:
Common data aggregation and analysis across physical and cybersecurity toolsets
Resource alignment for problem solving and response across the physical security organization and the cybersecurity group
A common set of metrics for accountability across the converged security discipline
Diverse, disconnected tools
Disconnected tooling is a problem cybersecurity already knows well, but here it appears on a wider scale. Each executive purchases tools for monitoring within their purview, and they get data. However, they either fail to gain insights from it, or the insights they do achieve are limited to the problem each technology solves.
Access is a good example:
Identity and Access Management (IAM): sets controls that limit and manage how people interact with digital resources.
Employee badges: set control for what facilities and parts of facilities people can physically access.
Returning to the insider threat that Sarah Smith poses: the CISO’s information security organization has visibility into sign in and print logs but would have to collaborate with the CSO’s physical security group for badge logs. This process takes time and, depending on organizational politics, potentially requires convincing.
The siloed technologies could potentially create a security gap when the data remains uncorrelated:
The sign-in and print logs alone may not sufficiently draw attention to Sarah’s activities in the CISO’s organization.
Badge-in logs in the CSO’s organization may not draw an alert.
HR data, such as employment/termination status, may not be correlated in either the CISO’s or CSO’s available analytical datasets.
Without weaving the various data sources together into one story, Sarah’s behavior has a high risk of going completely undetected – by anyone.
In a converged program, IAM access and badge access would be correlated to improve visibility. In a converged program with high security data maturity, the datasets would provide a more complete picture with insights that correlate HR termination status, typical employee location, and more business context.
Resource constraints
The challenge of resource alignment often begins by analyzing constraints. Both physical and digital security costs money. Many companies view these functions as separate budgets, requiring separate sets of technologies, leadership, and resources.
Converged security contemplates synergies where possible overlaps can potentially reduce costs. For example:
Human Resources data: identifying all workforce members who should have physical and digital access.
IT system access: determining user access where HR is the source underlying Active Directory or IGA birthright provisioning and automatic access termination.
Building access: badges provision and terminate physical access according to HR status
The HR system, sign-on system, and badge-in system each serve a separate recordation purpose, which can then provide monitoring functionality. However, by keeping insights from daily system usage separate, the data storage and analysis can grow redundant. As Cloutier notes, “siloed operations tend to drive confusion, frustration, and duplicative work streams that waste valuable resources and increase the load on any given functional area.” (24)
Instead, imagine if diverse recordation systems output data to a single location that parsed, correlated, and enriched data to create user profiles and user timelines so cross-functional teams with an interdisciplinary understanding of threat vectors could analyze it. In such an organization, this solution could:
Reduce redundant storage.
Eliminate manual effort in correlating data sources from different systems.
Save analysts time by having all the data already in one spot (no need to gather it in the wake of an incident).
Allow for more rapid detection and response.
Metrics and accountability
Keeping physical security separate from cybersecurity can create the risk of disaggregated metrics and a lack of accountability. People must “compare notes” before making decisions, and the data may have discrepancies because everyone uses different technologies intended to measure different outcomes.
These data, tool, and operations silos can create an intricate, interconnected set of overlapping “blurred lines” across:
Personal/Professional technologies
Physical/Digital security functions
In the wake of a threat, the last thing people want to do is increase the time making decisions or argue over accountability, which can quickly spiral into conversations of blame.
Imagine, instead, a world in which the enterprise can make security a trackable metric. Being able to track an end goal – such as security, whether physical or digital – makes it easier to
Hold people accountable.
Make clear decisions.
Take appropriate action.
A trackable metric is only as good as the data that can back it up. Converged security centers around the concept of a global security data strategy that provides an open architecture for analyses that answer different questions while using a commonly accessed, unified data set that diverse security professionals accept as complete, valid, and the closest thing they can get to the “source of truth”.
Weaving together data for converged security with DataBee®
DataBee by Comcast Technology Solutions fuses together physical and digital security data into a data fabric architecture, then enriches it with additional business information, including:
Business policy context
Organizational hierarchy
Employment status
Authentication and endpoint activity logs
Physical badge and entrance logs
By weaving this data together, organizations achieve insights using a transformed dataset mapped to the DataBee-extended Open Cybersecurity Schema Framework (OCSF). DataBee EntityViews™ uses patent-pending entity resolution software that automatically unifies disparate entity pieces across multiple sources of information. This enables many analytical use cases at speed and low cost. One poignant use case includes insider threat monitoring with a comprehensive timeline of user and device activity, inside a building and when connected to networks.
The DataBee security data fabric architecture solves the Sarah problem by weaving together, on one timeline (a simplified correlation sketch follows the list):
Her HID badge record from that Saturday’s office visit
The past several months of HR records from Workday showing her termination status.
Her Microsoft user sign-in to a workstation in the office
The HP print logs associated with her network ID and a time stamp.
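To make the idea tangible, here is a minimal, hypothetical sketch of what merging those four sources into one chronological timeline could look like. The record layouts, system names, timestamps, and user ID are invented for illustration and are not DataBee’s actual schema or implementation.

```python
from datetime import datetime

# Hypothetical, simplified records from four separate systems.
badge_logs = [
    {"person": "ssmith", "event": "badge_in", "time": "2024-03-16T09:42:00", "source": "HID badge reader"},
]
hr_records = [
    {"person": "ssmith", "event": "termination_notice", "time": "2024-03-01T17:00:00", "source": "Workday"},
]
signin_logs = [
    {"person": "ssmith", "event": "workstation_sign_in", "time": "2024-03-16T09:51:00", "source": "Microsoft sign-in"},
]
print_logs = [
    {"person": "ssmith", "event": "print_job", "time": "2024-03-16T10:05:00", "source": "HP print server"},
]

def unified_timeline(*sources):
    """Merge events from disparate systems into one chronological timeline."""
    events = [event for source in sources for event in source]
    return sorted(events, key=lambda e: datetime.fromisoformat(e["time"]))

for event in unified_timeline(badge_logs, hr_records, signin_logs, print_logs):
    print(f'{event["time"]}  {event["source"]:<18} {event["event"]}')
```

Seen together on one timeline, a termination notice followed by a weekend badge-in, sign-in, and print job tells a story that none of the individual logs tells on its own.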
DataBee empowers all security data users within the organization, including compliance, security, operations, and management. By creating a reliable, accurate dataset, people have fast, data-driven insights to answer questions quickly.
Read More
Vulnerabilities and misconfigurations: the CMDB's invasive species
“Knowledge is power.” Whether you attribute this to Sir Francis Bacon or Thomas Jefferson, you’ve probably heard it before. In the context of IT and security, knowing your assets, who owns them, and how they’re connected within your environment are fundamental first steps in understanding your environment and the battle against adversaries. You can’t place security controls around an asset if you don’t know it exists. You can’t effectively remediate vulnerabilities to an asset without insight into who owns it or how it affects your business.
Maintaining an up-to-date configuration management database (CMDB) is critical to these processes. However, manually maintaining the CMDB is unrealistic and error-prone for the thousands of assets across the modern enterprise, and cloud technologies, complex networks, and devices distributed across in-office and remote workforce users complicate the task further. To add excitement to these challenges, the asset landscape is ever-changing for entities like cloud assets, containers, and virtual machines, which can be ephemeral and become lost in the noise generated by the organization's hundreds of security tools. Additionally, most automation fails to link business users to assets, and many asset tools struggle to prioritize assets correlating to security events, meaning that companies can easily lose visibility and lack the ability to prioritize asset risk.
Most asset management, IT service management (ITSM), and CMDBs focus on collecting data from the organization’s IT infrastructure. They ingest terabytes of data daily, yet this data remains siloed, preventing operations, security, and compliance teams from collaborating effectively.
With a security data fabric, organizations can break down data silos to create trustworthy, more accurate analytics that provide them with contextual and connected security insights.
The ever-expanding CMDB problem
The enterprise IT environment is a complex ecosystem consisting of on-premises and cloud-based technologies. Vulnerabilities and misconfigurations are an invasive species of the technology world.
In nature, a healthy ecosystem requires a delicate balance of plants and organisms who all support one another. An invasive species that disrupts this balance can destroy crops, contaminate food and water, spread disease, or hunt native species. Without controlling the spread of invasive species, the natural ecosystem is at risk of extinction.
Similarly, the rapid adoption of cloud technologies and remote work models expands the organization’s attack surface by introducing difficult-to-manage vulnerabilities and misconfigurations. Traditional CMDBs and their associated tools often fail to provide the necessary insights for mitigating risk, remediating issues, and maintaining compliance with internal controls.
In the average IT environment, the enterprise may combine any of the following tools:
IT Asset Management: identify technology assets, including physical devices and ephemeral assets like virtual machines, containers, or cell phones
ITSM: manage and track IT service delivery activities, like deployments, builds, and updates
Endpoint Management: manage and track patches, operating system (OS) updates, and third-party installed software
Vulnerability scanner: scan networks to identify security risks embedded in software, firmware, and hardware
CMDB: store information about devices and software, including manufacturer, version, and current settings and configurations
Software-as-a-Service (SaaS) configuration management: monitor and document current SaaS settings and configurations
Meanwhile, various people throughout the organization need access to the information that these tools provide, including the following teams:
IT operations
Vulnerability management
Security
Compliance
As the IT environment expands and the organization collects more security data, the delicate balance between existing tools and people who need data becomes disrupted by newly identified vulnerabilities and cloud configuration drift.
Automatically updating the CMDB with enriched data
In nature, limiting an invasive species’ spread typically means implementing protective strategies for the environment that contain and control the non-native plant or organism. Monitoring, rapid response, public education, and detection and control measures are all ways that environmentalists work to protect the ecosystem.
In the IT ecosystem, organizations use similar activities to mitigate risks and threats arising from vulnerabilities and misconfigurations. However, the time-consuming manual tasks are error-prone and not cost-efficient.
Connect data and technologies
A security data fabric ingests data from security and IT tools, automating and normalizing the inputs so that the organization can gain correlated insights from across a typically disconnected infrastructure. With a vendor agnostic security data platform connecting data across the environment, organizations can break down silos created by various schemas and improve data’s integrity.
Improve data quality and reduce storage costs
By applying extract, transform, and load (ETL) pipelines to the data, the security data fabric enables organizations to store and load raw and optimized data. Flattening the data can reduce storage costs since companies can land it in their chosen data repository, like a data lake or warehouse. Further, the data transformation process identifies, and can fix, issues that lead to inaccurate analytics (a minimal sketch of such checks follows the list), like:
Data errors
Anomalies
Inconsistencies
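As a rough illustration of the kinds of checks listed above, the sketch below normalizes obvious inconsistencies and flags errors and anomalies in a handful of made-up asset records. The field names, values, and rules are assumptions for illustration only, not DataBee’s actual transformation logic.

```python
from datetime import datetime

# Invented raw records with the three problem types called out above.
raw_events = [
    {"device_id": "HOST-001", "last_seen": "2024-05-01T12:00:00", "os": "Windows 11"},
    {"device_id": "host-001", "last_seen": "2024-05-01T12:03:00", "os": "windows 11"},   # inconsistency
    {"device_id": "HOST-002", "last_seen": "not-a-timestamp",     "os": "Ubuntu 22.04"}, # data error
    {"device_id": "HOST-003", "last_seen": "1994-05-01T12:00:00", "os": "Ubuntu 22.04"}, # anomaly
]

def clean(event):
    """Normalize simple inconsistencies and report records that need attention."""
    issues = []
    event["device_id"] = event["device_id"].upper()   # consistent identifiers
    event["os"] = event["os"].title()                 # consistent casing
    try:
        seen = datetime.fromisoformat(event["last_seen"])
        if seen.year < 2000:
            issues.append("anomalous last_seen date")
    except ValueError:
        issues.append("unparseable last_seen timestamp")
    return event, issues

for raw in raw_events:
    cleaned, issues = clean(raw)
    print(cleaned["device_id"], issues or "ok")
```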
Enrich CMDB with business information
Connecting asset data to real-world users and devices enables organizations to assign responsibility for configuration management. Organizations need to correlate their CMDB data with asset owners so that they can assign security issue remediation activities to the right people. By correlating business information, like organizational hierarchy data, with device, vulnerability scan, and ITSM data, organizations can streamline remediation processes and improve metrics.
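The sketch below shows one simplified way that kind of enrichment could work: joining vulnerability findings to a device-owner mapping and an organizational hierarchy so critical findings can be routed to a named owner and manager. All identifiers, names, and structures here are hypothetical.

```python
# Illustrative enrichment: route remediation to the right people by joining
# findings -> device owner -> reporting chain. Data is invented for the example.
org_hierarchy = {
    "adavis": {"manager": "jlee", "department": "Platform Engineering"},
    "jlee":   {"manager": "cfox", "department": "Technology"},
}
device_owners = {"HOST-001": "adavis", "HOST-002": "adavis"}
findings = [
    {"device_id": "HOST-001", "cve": "CVE-2024-0001", "severity": "critical"},
    {"device_id": "HOST-002", "cve": "CVE-2024-0002", "severity": "low"},
]

def enrich(finding):
    owner = device_owners.get(finding["device_id"])
    org = org_hierarchy.get(owner, {})
    return {**finding,
            "owner": owner,
            "owner_manager": org.get("manager"),
            "department": org.get("department")}

# Critical findings first, so the most urgent work is assigned immediately.
for enriched in sorted((enrich(f) for f in findings),
                       key=lambda f: f["severity"] != "critical"):
    print(enriched)
```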
Gain reliable insights with accurate analytics
Configuration management is a critical part of an organization’s compliance posture. Most security and data protection laws and frameworks incorporate configuration change management and security patch updating. With clean data, organizations can build analytics models to help improve their compliance outcomes. To enhance corporate governance, organizations use their business intelligence tools, like Power BI or Tableau, to create visualizations so that senior leadership teams and directors can make data-driven decisions.
Maintain Your CMDB’s delicate ecosystem with DataBee®
DataBee from Comcast Technology Solutions is a security data fabric that ingests data from traditional sources and feeds, then supplements it with business logic and people information. The security, risk, and compliance platform engages early and often throughout the data pipeline, leveraging metadata to adaptively collect, parse, correlate, and transform security data to align it with the vendor-agnostic DataBee-extended Open Cybersecurity Schema Framework (OCSF).
Using Comcast’s patent-pending entity resolution technology, DataBee suggests potential asset owners by connecting asset data to real-world users or devices so organizations can assign security issue remediation actions to the right people. With a 360-degree view of assets and devices, vulnerability and remediation management teams can identify critical and low-priority entities to help improve mean time to detect (MTTD) and mean time to respond (MTTR) metrics. The User and Device tables supplement the organization’s existing CMDB and other tools, so everyone who needs answers has them right at their fingertips.
Read More
Continuous controls monitoring (CCM): Your secret weapon to navigating DORA
Financial institutions are a critical backbone of local, regional, and global economies. As such, the financial services industry is highly regulated and often faces new compliance mandates and requirements. Threat actors target the industry because it manages and processes valuable customer personally identifiable information (PII) such as account, transaction, and behavioural data.
Maintaining consistent operations is critical, especially in an interconnected, global economy. To standardise processes for achieving operational resilience, the European Parliament passed the Digital Operational Resilience Act (DORA).
What is DORA?
DORA is a regulation passed by the European Parliament in December of 2022. DORA applies to digital operational resilience for the financial sector. DORA entered into force in January of 2023, and it applies as of January 17, 2025.
Two sets of rules, or policy products, provide the regulatory and implementation details of DORA. The first set of rules under DORA was published on January 17, 2024, and consists of four Regulatory Technical Standards (RTS) and one Implementing Technical Standard (ITS). It is worth noting that not all the RTSes contain controls that financial entities need to implement. For example, JC 2023 83, the “Final Report on draft RTS on classification of major incidents and significant cyber threats,” provides criteria for entities to determine if a cybersecurity incident would be classified as a “major” incident according to DORA. The public consultation on the second batch of policy products is completed, and the feedback is being reviewed prior to publishing the final versions of the policies. Based on the feedback received from the public, the finalised documents will be submitted to the European Commission by July 17, 2024.
What is Continuous Controls Monitoring (CCM), and how can it help?
DORA has a wide-ranging set of articles, many of which require the implementation and monitoring of controls. Organisations can use a continuous controls monitoring (CCM) solution, which is an emerging governance, risk and compliance technology, to automate controls monitoring and reduce audit cost and stress. When choosing a CCM solution for DORA, consider a data fabric platform that brings together data from enterprise IT and cybersecurity tools and enriches it with business data to help organisations apply data analytics for measuring and reporting on the effectiveness of internal controls and conformance to laws and regulations. The following are examples of how CCM could be used to support DORA compliance.
Continuous Monitoring:
Article 9 of DORA, Protection and prevention, explains that to adequately protect Information and Communication Technologies (ICT) systems and organise response measures, “financial entities shall continuously monitor and control the security and functioning of ICT systems.” Similarly, Article 16, Simplified ICT risk management framework, requires entities to “continuously monitor the security and functioning of all ICT systems.”
Additionally, Article 6 requires financial entities to “minimise the impact of ICT risk by deploying appropriate strategies, policies, procedures, ICT protocols and tools.” It goes on to require setting clear objectives for information security that include Key Performance Indicators (KPIs) and implementing preventative and detective controls. Reporting on the implementation of multiple controls, combining compliance data with organisational hierarchy, and reporting on KPIs are all tasks that CCM excels at. When choosing a CCM solution for DORA, consider one that supports uninterrupted oversight of multiple controls by automating the ingestion of data, formatting it, and then presenting it to users through the business intelligence solution of their choice.
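As a rough, hypothetical example of the kind of KPI such reporting might surface, the sketch below computes the percentage of assets compliant with a single control, broken down by business unit. The control name, asset identifiers, and business units are invented; they are not a prescribed DORA metric or DataBee content.

```python
from collections import defaultdict

# Invented control-check results for a handful of assets.
control_results = [
    {"asset": "HOST-001", "business_unit": "Retail Banking", "control": "EDR installed", "compliant": True},
    {"asset": "HOST-002", "business_unit": "Retail Banking", "control": "EDR installed", "compliant": False},
    {"asset": "HOST-003", "business_unit": "Payments",       "control": "EDR installed", "compliant": True},
]

totals = defaultdict(lambda: {"compliant": 0, "total": 0})
for result in control_results:
    bucket = totals[result["business_unit"]]
    bucket["total"] += 1
    bucket["compliant"] += result["compliant"]

for unit, counts in totals.items():
    kpi = 100 * counts["compliant"] / counts["total"]
    print(f"{unit}: {kpi:.0f}% compliant with 'EDR installed'")
```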
The Articles of JC 2023 86, the “Final report on draft RTS on ICT Risk Management Framework and on simplified ICT Risk Management Framework,” contain many ICT cybersecurity requirements that are a natural fit to be measured by CCM. Here are some examples of these controls:
Asset management: entities must keep records of a set of attributes for their assets, such as a unique identifier, the owner, business functions or services supported by the ICT asset, whether the asset is or might be exposed to external networks, including the internet, etc.
Cryptographic key management: entities need to keep a register of digital certificates and the devices that store them and must ensure that certificates are renewed prior to their expiration (a minimal illustration of this check follows the list).
Data and system security: entities must select secure configuration baselines for their ICT assets as well as regularly verifying that the baselines are in place.
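The following sketch illustrates the certificate-register check mentioned above: flagging certificates in a register that expire within a renewal window. The register contents, renewal window, and field names are assumptions made for the example, not requirements taken from the RTS text or DataBee functionality.

```python
from datetime import date, timedelta

RENEWAL_WINDOW = timedelta(days=30)  # illustrative renewal lead time

# Hypothetical certificate register entries.
certificate_register = [
    {"common_name": "api.example.bank", "stored_on": "HSM-01", "expires": date(2025, 1, 10)},
    {"common_name": "vpn.example.bank", "stored_on": "FW-03",  "expires": date(2024, 7, 1)},
]

def certificates_needing_renewal(register, today):
    """Return certificates that expire within the renewal window."""
    return [cert for cert in register if cert["expires"] - today <= RENEWAL_WINDOW]

for cert in certificates_needing_renewal(certificate_register, today=date(2024, 6, 15)):
    print(f"Renew {cert['common_name']} (stored on {cert['stored_on']}), expires {cert['expires']}")
```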
A CCM solution that is built on a platform that correlates technical and business data supports security, risk, and compliance teams for building accurate, reliable reports to help measure compliance. It provides consistent visibility into control status across multiple teams throughout the organisation. This reduces the need for reporting controls in spreadsheets and in multiple dashboards, helping business leaders make more immediate and data-driven governance decisions about their business.
Executive Oversight:
Financial entities are required to have internal governance to ensure the effective management of ICT risk (Article 5, Governance and organisation). CCM solutions that integrate with business intelligence solutions, like Power BI and Tableau, to build executive dashboards and data visualizations can provide an overview of multiple controls through a single display.
Roles and Responsibilities:
DORA Article 5(2) requires management to “set clear roles and responsibilities for all ICT-related functions and establish appropriate governance arrangements to ensure effective and timely communication, cooperation and coordination among those functions.” A CCM solution that combines organisational hierarchy with control compliance data makes roles and responsibilities explicit, which helps improve accountability across risk, management, and operations teams. That is, a manager using CCM does not have to guess which assets or people belonging to their organisation are compliant with corporate policy or regulations. Instead, they can easily view their compliance status.
CCM dashboards and detail views provide the specifics about any non-compliant assets such as the asset name, and details of the controls for which the asset is non-compliant. Similarly, CCM can communicate details about compliance for a manager’s staff, such as if mandatory training has been completed by its due date, or who has failed phishing simulation tests.
Coordination of multiple teams:
As the FS-ISAC DORA Implementation Guidance notes, “DORA introduces increased complexity and requires close cross-team collaboration. Many DORA requirements cut across teams and functions, such as resilience/business continuity, cybersecurity, risk management, third-party and supply chain management, threat and vulnerability management, incident management and reporting, resilience and security testing, scenario exercising, and regulatory compliance. As a result, analysing compliance and checking for gaps is challenging, particularly in large firms.”
CCM helps with cross-team collaboration by providing a common, accurate, and consistent view of compliance data, which can reduce overall compliance costs. GRC teams are not tasked with creating and distributing multiple reports for various teams while trying to keep those reports consistent and timely. Nor are business teams responsible for pulling their own reports, which avoids inconsistent or inaccurate reporting caused by inexperience with the reporting product, reports run with different parameters or on different dates, or other discrepancies and errors. CCM resolves these issues by making the same content, built from consistent source data from the same point in time, available to all users.
5 ways DataBee can help you navigate DORA
The requirements of DORA are organised under the following five pillars. How does DataBee help enterprises comply with each of the five?
1. Information and Communication Technologies (ICT) risk management requirements (which include ICT Asset Management, Vulnerability and patch management, etc.)
DataBee’s Continuous Controls Monitoring (CCM) delivers continuous risk scores and actionable risk mitigation, helping financial entities to prioritize remediation for at-risk resources.
2. ICT-related incident reporting
DORA identifies what qualifies as a “major incident” that must therefore be reported to competent authorities. This is interesting compared to cybersecurity incident reporting requirements from the U.S. Securities and Exchange Commission (SEC), which are based on materiality but do not provide details about what is or is not material. DORA includes criteria to determine if the incident is “major.” Some examples are if more than 10% of all clients or more than 100,000 clients use the affected service, or if greater than 10% of the daily average number of transactions are affected. Additionally, if a major incident does need to be reported, DORA includes specific information that financial entities must provide. These include data fields such as the date and time the incident was detected, the number of clients affected, and the duration of the incident. A security data fabric such as DataBee can help to provide many of the measurable data points needed for the incident report.
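To show how measurable those example thresholds are, here is a minimal sketch that applies only the criteria cited above (share of clients, absolute client count, and share of daily transactions). DORA’s full classification criteria are broader than this, and the function name and inputs are assumptions for illustration.

```python
def is_major_incident(clients_affected, total_clients,
                      transactions_affected, avg_daily_transactions):
    """Apply only the example thresholds cited in this article."""
    affected_client_share = clients_affected / total_clients
    affected_txn_share = transactions_affected / avg_daily_transactions
    return (affected_client_share > 0.10
            or clients_affected > 100_000
            or affected_txn_share > 0.10)

# Example: 120,000 affected clients trips the absolute-count threshold,
# even though that is only 6% of the client base.
print(is_major_incident(clients_affected=120_000, total_clients=2_000_000,
                        transactions_affected=5_000, avg_daily_transactions=400_000))  # True
```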
3. ICT third-party risk
DataBee for CCM provides dashboards to report on the controls used for the management and oversight of third-party service providers. These controls are implemented to manage and mitigate risk due to the use of third parties.
4. Digital operational resilience testing (Examples include vulnerability assessments, open-source analyses, network security assessments, physical security reviews, source code reviews where feasible, end-to-end testing, and penetration testing.)
DORA emphasizes digital operational resilience testing. DataBee supports this by aggregating and simplifying the reporting for control testing and validation. DataBee’s CCM dashboards provide reporting for multiple controls using an interface that is easily understood, and which business managers can use to readily assess their unit’s compliance with controls required by DORA.
5. Information sharing
As with incident reporting, the data fabric implemented by DataBee supports information sharing. DataBee can economically store logs and other contextual data for an extended period. DataBee makes this data searchable, providing the ability to locate and, at the organisation’s discretion, exchange cyber threat information and intelligence with other financial entities.
Read More
Channels are changing: Cloud-based origination and FAST
Linear television isn’t dead, but it certainly isn’t the same as it was even five years ago. As content delivery and consumption shift toward digital platforms, content owners and distributors find themselves navigating a rapidly evolving landscape. The rise of streaming services has further intensified competition for viewership, challenging traditional linear TV models to adapt or risk becoming obsolete.
In response, content owners and operators are rethinking their distribution strategies and exploring innovative approaches to engage audiences and adapt to end-user preferences. However, thriving in this new ecosystem demands more than just reactivity — it requires innovation, agility, and a keen understanding of emerging technologies and industry trends. Read on to learn the role that Managed Channel Origination (MCO) plays in the evolution of content delivery or watch experts Gregg Brown, Greg Forget, David Travis, and Jon Cohen dive deeper in the full webinar here.
Global reach in the digital media landscape
MCO, which leverages cloud-based technologies and advanced workflows to streamline and scale content delivery, is at the forefront of the content delivery revolution. With cloud infrastructure, service providers can seamlessly scale up or down based on demand, optimizing resource allocation and adapting to evolving consumer preferences and distribution platforms.
Thanks to the cloud, barriers and complexities that once hindered global distribution are either significantly reduced — or no longer exist at all — allowing businesses to operate at a global scale and deliver content across multiple regions and availability zones. This newfound flexibility enables providers to deploy channels easily in any region, eliminating the need for physical infrastructure and simplifying the distribution process.
Beyond flexibility, cloud solutions can also lower the total cost of ownership, accelerate deployment speed and time to market, and lower security and support costs. When this is considered with the greater levels of flexibility and scalability offered by cloud-based origination, it’s clear that with the right MCO solution, content providers can tap into emerging markets, capitalize on growing viewership trends, and drive revenue growth on a global scale.
Unlocking the value of live content
Live content, such as sporting events and concerts, remains a cornerstone of linear television, engaging audiences with unique viewing experiences and a sense of immediacy that other forms of media just can’t quite replicate. Yet, operators must strike a balance between the expense of acquiring live content, effectively monetizing it through linear channels, and delivering it to global markets. Cloud-based origination makes it easier to achieve this crucial balance and highlights the importance of collaboration and strategic partnerships in scalable, cost-effective delivery.
Within the context of MCO, live content translates to greater opportunities for regionalization and niche content catering to diverse viewer preferences. As the industry continues to evolve, leveraging technological advancements and embracing innovation will be essential for content owners to navigate the complexities of global distribution, advertising, and monetization while delivering compelling content experiences to audiences around the world.
FAST forward: Transforming the perception of ad-supported channels
FAST channels represent another way that linear is undergoing a transformation. These free, ad-supported channels offer content to viewers over the internet and cater to the growing demand for alternative viewing options. Yet content owners must carefully consider the role of FAST channels within their overall offerings.
The current state of the FAST market feels like halftime in a major sporting event, prompting stakeholders to reassess their strategies and make necessary adjustments to adapt to shifting viewer preferences and navigate the complexities of content distribution. By leveraging advanced technologies, cloud-based channel origination can act as a catalyst for the creation, curation, and monetization of FAST channels.
The simplicity of onboarding onto FAST platforms with pre-enabled implementations has made entry into the market more accessible for content owners. This accessibility, coupled with the growing consumer demand for high-quality content, underscores the importance of FAST channels meeting broadcast-quality standards to remain competitive and reliable.
There's a noticeable transition in the FAST channels landscape, where channels previously seen as lower value are now transforming to increase their sophistication and content quality. This shift reflects a growing demand for higher-quality FAST channels, signaling a departure from the perception of FAST channels as merely supporting acts to traditional linear TV.
With many content providers expanding their reach beyond regionalized playouts to cater to international markets, a cloud-based approach can help navigate the opportunities and challenges associated with expanding FAST channels into emerging markets, particularly concerning advertising revenue and economic viability in new territories.
Advertising within cloud-based origination is still evolving. Currently, distributors wield substantial data and control, managing ad insertion, networks, and sales. At the same time, FAST channels represent the next evolution of monetization, particularly with technologies like server-side ad insertion (SSAI) and AI making it easier to leverage data to create more personalized experiences. As the flow of open data increases, and the use of AI and ML expands, it will only serve to foster greater efficiency and innovation and ensure that ads reach the right global users at the right time.
Crafting a customized channel origination strategy
While channel origination offers content owners an innovative and cost-effective way to launch and manage linear channels more efficiently, it can’t be approached as a one-size-fits-all solution. For instance, does the business want to add a new channel or expand distribution? Is the content file-based, or does it include live events? If it's ad-supported, are there graphic requirements, or is live data needed?
To address these variabilities, content owners must create a scalable strategy that is flexible enough to adapt to regional requirements. By leveraging cloud-based solutions, content owners can effectively manage fluctuations in demand and adapt their channel offerings to meet the needs of global audiences.
In a marketplace where entertainment options abound, cloud-based origination helps businesses make the most impact, not just by increasing the number of channels, but also by ensuring that they have the right offerings in the right places. By adopting a holistic approach to channel origination that incorporates internet-based delivery services, content owners can maximize their reach, enhance viewer engagement, and position themselves for success in an ever-changing landscape.
Learn how Comcast Technology Solutions’ Managed Channel Origination can help your business expand your offerings or break into new markets in Europe, the Middle East, Africa, and beyond.
Read More
100% exciting, 100 people strong (and growing)
When you join an organization the size of Comcast, it’s easy for outsiders to see only “Fortune 29 Company!” or “huge enterprise!”. But tell that to the DataBee team – we’ve been in start-up land since October 2022. And it’s been an exciting ride!
DataBee® is a security, risk and compliance data fabric platform inspired by a platform created internally by Comcast’s global CISO, Noopur Davis, and her cybersecurity and GRC teams. They got such amazing results from the platform that they built and operated for 5 years—from cost savings to faster threat detection and compliance answers, to name a few—that Comcast executives saw the potential to create a business around this emerging technology space. What better product-market fit signal could there be than real business outcomes from a large, diversified global enterprise?
That was back in 2022, and the beginning of the DataBee business unit, which was created within Comcast Technology Solutions to bring this security data fabric platform to market. Initially funded with just enough to begin testing the market, it didn’t take long to realize that the opportunity for DataBee was substantial, and additional financing was provided by Comcast’s CEO in May 2023. One year after that additional raise from Comcast, I’m proud to say that DataBee has passed the 100-team-member milestone (and growing). I believe it's one of the most exciting cybersecurity start-ups out there. Times ten. (Ok, that’s my enthusiasm spilling over 😉.)
As challenging as fundraising is, staffing a team as large and talent-diverse as DataBee has been no small feat, and we’ve done it relatively quickly, across three continents and five countries, no less. Part of that challenge has been overcoming preconceived notions of who and what Comcast is, especially when you’re recruiting people from the cybersecurity industry. “Wait, what??? Comcast is in the enterprise security business??!!” Yes!
Here's what I’ve learned about hiring on the scale-up journey:
Focus on the mission. Your mission can be the most attractive thing about your business, especially when you’ve zeroed in on addressing an unmet need in the market. In the case of DataBee, our mission has been to connect security data so that it works for everyone. So many talents on the DataBee team have been compelled by the idea of solving the security data problem – too much data, too dispersed, in too many different formats to provide meaningful insights quickly to anyone who needs them. To play a role in fixing this problem? It’s intoxicating.
Look for people who seek the opportunity for constant learning and creativity. “Curiosity” appears at the top of the list of essential qualities in a successful leader, and I understand why. I think of curiosity as a hunger for constant learning and creativity, as well as the courage to explore uncharted territory, and these have been qualities I’ve been looking for as I staff up the DataBee team. These traits aren’t reserved for leaders only; I see them in practice across the whole team, from individual contributors all the way up to department leaders, and I know it’s having a big impact on our ability to rapidly innovate and build solutions that matter. (High energy doesn’t hurt either!)
Be a kind person. The number one thing I hear from my new hires is that they are taken aback by what a nice team I have. This makes me very happy and a little sad all at the same time: happy because I love knowing that the “be a kind person” rule is being put into practice across the DataBee team, but sad because it means that some of our new hires have not experienced kindness in previous jobs. Kindness and competency are not mutually exclusive things, and an environment of benevolence can foster creativity and success.
The funding we’ve received, and the strength, size, and rapid growth of the DataBee team is a testament to the fact that we’ve identified and are working to solve a very real problem being faced by many organizations – data chaos. It’s a problem that exists especially in security, but the rabid interest in artificial intelligence (AI) reminds us that we need to look beyond security data to all data, to ensure that good, clean, quality data is feeding AI systems. For AI to really work its magic, it needs quality data training so that it can deliver amazing experiences.
DataBee is 100 strong and growing, and our security data fabric platform, which is already helping customers manage costs and drive efficiencies, keeps evolving so it can help organizations continue the “shift left” to the very origins of their data. The goal is that quality data woven together, enriched, and delivered by DataBee will ultimately fuel AI systems and help business and technology leaders across the spectrum glean the insights they need for security, compliance, and long-term viability.
Read More
DataBee's guide to sweetening your RSAC experience
Noopur Davis, Comcast’s Chief Information Security Officer recently talked about bringing digital transformation to cybersecurity, where she stated, “The questions we dare to ask ourselves become more audacious each day.” In cybersecurity, this audacity is the spark that ignites innovation. By embracing bold questions answered by connected security data, we can unlock new ideas that transform how we defend our digital world.
This is particularly fitting as the theme for this year's RSA Conference is “The Art of Possible.” With data so abundant, it’s essential to have immediate and continuous insights so you can be data-directed when making business decisions, stay ahead of today’s threats, and better foresee tomorrow’s challenges.
Organizations, however, still struggle to pull together and integrate security data into a common platform. This year, Techstrong Research surveyed cybersecurity professionals for a Pulsemeter report that shares a glimpse into security data challenges and how security data fabrics herald a new era in security data management. According to the survey, 44% of respondents have 0-50% of their security data integrated. This can result in:
Lack of visibility into security and compliance programs
Data explosions and silos
Challenges with evolving regulations and mandates
Difficulties performing analytics at scale
Exorbitant data storage and computing costs
DataBee® from Comcast Technology Solutions can help overcome these challenges and assist in regaining control over your security data. The DataBee Hive Platform has brought to market technology that can help automate data ingestion, validate data quality, and lower processing costs. This can lead to:
Improved collaboration by working on the same data set
More immediate and connected security data insights
More consistent and accurate compliance dashboards
Better data to power your AI initiatives and capabilities
DataBee is excited to participate in RSAC 2024 and help you achieve these outcomes for your organization.
Where to meet DataBee in Moscone Center
We're thrilled to showcase the DataBee Hive, a security, risk, and compliance data fabric platform designed to deliver connected security data that works for everyone. Buzz by Moscone North Hall and visit our booth #5278. We’ll be showcasing:
How to cost-optimize your SIEM
How to reduce security audit stress
How to make your CMDB more complete
How to get a 360-degree activity view of any asset or user
And more!
Is the expo floor too busy or noisy? You can meet us one-on-one at Marriott Marquis by reserving a more personal conversation with the team.
Make your RSA Conference a reality
Here's the deal: we want you there! DataBee has two fantastic options to help you attend. Claim your ticket here at the RSA Conference website:
🎟️ Free RSA Conference Expo Pass: 52ECMCDTABEXP
🎟️ Discounted Full RSA Conference Pass - Save $150 with code: 52FCDCMCDTABE
Snowflake, DataBee, and Comcast
But wait, there's more! Join us for an evening of conversations, networking, and relaxation at our reception on May 8th, co-hosted with Comcast Ventures, Comcast Business, and Snowflake.
Date: Wednesday, May 8th, 2024
Time: 5:00 pm - 7:00 pm
After you register, keep an eye out for your confirmation email, which will include all the insider details on where to find us. This is your chance to unwind, network with fellow attendees, and forge valuable connections in a more relaxed setting. Register here to secure your spot and receive the location details.
Let's make your RSAC experience the best yet!
The DataBee team is eager to meet you and explore ways we can contribute to your organization's security data maturity journey. We invite you to visit us at booth #5278, network with us at the private reception, grab a photo with our mascot, and snag some bee-themed giveaways! We look forward to seeing you there!
Read More
All SIEMs Go: stitching together related alerts from multiple SIEMs
Today’s security information and event management (SIEM) platforms do more than log collection and management. They are a critical tool used by security analysts and teams to run advanced security analytics and deliver unified threat detection and search capabilities.
A SIEM’s original purpose was to unify security monitoring in a single location. Security operation centers (SOCs) could then correlate event information to detect and investigate incidents faster. SIEMs often require specialized skills to fine-tune logs and events for correlation and analysis, typically written in vendor-specific, proprietary query languages and schemas.
For some enterprises and agencies, it is not uncommon to see multiple SIEM deployments that help meet unique aspects of their cybersecurity needs and organizational requirements. Organizations often need to:
federate alerts
connect related alerts
optimize their SIEM deployments
Managing multiple SIEMs can be a challenge even for the most well-funded and skilled security organizations.
The business case for multiple SIEMs
For a long time, SIEMs worked. However, cloud adoption and high-volume data sources like endpoint detection and response (EDR) tools threw traditional, on-premises SIEMs a curveball, especially when considering ingestion-based pricing and the cost of on-premises SIEM storage.
While having one SIEM to rule them all (your security logs) is a nice-to-have, organizations often find themselves managing a multi-SIEM environment for various reasons, including:
Augmenting an on-premises SIEM with Software-as-a-Service (SaaS) solutions to manage high-volume logs.
Sharing a network infrastructure with a parent company managing its SIEM while needing visibility into subsidiaries managing their own.
Acquiring companies through mergers and acquisitions (M&A) that have their own SIEMs.
Storing log data’s personally identifiable information (PII) in a particular geographic region to comply with data sovereignty requirements.
Multiple SIEM aggregation with DataBee
Complexity can undermine security operations by causing missed alerts or security analyst alert fatigue. DataBee® from Comcast Technology Solutions ingests various data sources, including directly from SaaS and on-premises SIEMs, stitching together related event context or alerts to help streamline the identification and correlation process. Alerts are enriched with additional logs and data sources, including business context and asset and user details. With a consistent and actionable timeline across SIEMs, organizations can optimize their investments and mature their security programs.
Normalization of data
An organization with SaaS and on-premises SIEM deployments may want to federate alerts and access to data across the multiple SIEMs, but doing so can make it difficult to aggregate notable events during investigations.
DataBee can receive data from multiple sources including SaaS and on-premises SIEMs. DataBee normalizes the data and translates the original schema into an extended Open Cybersecurity Schema Framework (OCSF) format before sending it to the company’s chosen data repository, like a security data lake. By standardizing the various schemas across multiple SIEMs, organizations can better manage their detection engineering resources by writing rules once to cover multiple environments.
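As a rough illustration of why a common schema helps, the sketch below maps two invented, vendor-specific sign-in events into one simplified, OCSF-inspired shape so a detection can be written once. The field names are stand-ins chosen for the example; they are not the full OCSF schema or DataBee’s actual mapping.

```python
# Toy normalization: two vendor formats converge on one simplified event shape.
def normalize_vendor_a(event):
    return {"class_name": "Authentication", "user": event["UserName"],
            "src_ip": event["ClientIP"], "status": event["Result"].lower(),
            "time": event["EventTime"]}

def normalize_vendor_b(event):
    return {"class_name": "Authentication", "user": event["actor"]["name"],
            "src_ip": event["network"]["ip"],
            "status": "success" if event["outcome"] == 0 else "failure",
            "time": event["ts"]}

raw_a = {"UserName": "ssmith", "ClientIP": "10.1.2.3", "Result": "SUCCESS",
         "EventTime": "2024-05-01T08:00:00Z"}
raw_b = {"actor": {"name": "ssmith"}, "network": {"ip": "10.9.8.7"},
         "outcome": 1, "ts": "2024-05-01T08:05:00Z"}

normalized = [normalize_vendor_a(raw_a), normalize_vendor_b(raw_b)]
print(normalized)
```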
Enhanced visibility and accountability
With proprietary and patent-pending entity resolution technology, DataBee EntityViews ties together the various identifiers for entities, like users and devices, then uses a single identifier for enrichment into the data lake. With entity resolution, organizations can automatically correlate activity across numerous sources together and use that both in entity timelines as well as UEBA models.
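The idea behind entity resolution can be sketched very simply: different sources refer to the same user by different identifiers, and linking them through a shared attribute yields one canonical entity with a set of aliases. The example below uses a deliberately naive rule (matching on email) and invented identifiers; real entity resolution, including DataBee’s, is far more sophisticated.

```python
# Naive illustration: link per-source identifiers through a shared email address.
records = [
    {"source": "SIEM-A", "identifier": "ssmith",            "email": "sarah.smith@example.com"},
    {"source": "SIEM-B", "identifier": "CORP\\smith_sarah", "email": "sarah.smith@example.com"},
    {"source": "HR",     "identifier": "employee-10482",    "email": "sarah.smith@example.com"},
]

entities = {}   # email -> canonical entity id
aliases = {}    # canonical entity id -> set of per-source identifiers
for record in records:
    entity_id = entities.setdefault(record["email"], f"entity-{len(entities) + 1}")
    aliases.setdefault(entity_id, set()).add(record["identifier"])

# e.g. {'entity-1': {'ssmith', 'CORP\\smith_sarah', 'employee-10482'}} (order may vary)
print(aliases)
```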
Cost optimization for high-volume data sources
The adoption of cloud technologies and endpoint detection and response (EDR) have considerably impacted SIEM systems, creating a notable challenge due to the sheer volume of data generated. This surge in data stems from EDR's comprehensive coverage of sub-techniques, as outlined in the MITRE ATT&CK framework. While EDR solutions offer heightened visibility into endpoint activities, this influx of data overwhelms traditional SIEM architectures. This leads to performance degradation and the inability to effectively process and analyze events. However, leveraging the cost-effective storage solutions offered by data lakes presents a viable solution to this conundrum.
By utilizing data lakes, organizations can store vast amounts of EDR data at scale. This approach can not only alleviate the strain on SIEM systems, but also facilitate deeper analysis and correlation of security events. This empowers organizations to extract actionable insights and bolster their cyber defense strategies. Thus, integrating EDR with data lakes emerges as a promising paradigm for managing the deluge of security data while maximizing its utility in threat detection and response.
Real-time detection streams
DataBee uses vendor-agnostic Sigma rules that allow organizations to get alerts for data, including forwarded data like DNS records. If the SOC team wants to receive alerts without having to store the data in the SIEM, the detections can be output into the data lake or any SIEM or security orchestration, automation, and response (SOAR) solution, including the original SIEM forwarding the data. By building correlated timelines across user activity between multiple SIEMs, organizations can gain flexibility across SIEM vendors so they can swap between them or trade them out, choosing the best technologies for their use cases rather than the ones that integrate better with current deployments.
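For readers new to Sigma, a rule is essentially a set of field/value selections expressed in YAML that can be evaluated against normalized events. The sketch below is a tiny, hypothetical evaluator applying one such selection to already normalized events; real Sigma supports far richer matching, and this is not DataBee’s detection engine.

```python
# Minimal Sigma-style matching over normalized events (illustrative only).
rule = {
    "title": "Sign-in failures",
    "detection": {"selection": {"class_name": "Authentication", "status": "failure"}},
}

events = [
    {"class_name": "Authentication", "user": "ssmith", "status": "failure"},
    {"class_name": "Authentication", "user": "jlee",   "status": "success"},
]

def matches(event, selection):
    """True when every field in the selection equals the event's value."""
    return all(event.get(field) == value for field, value in selection.items())

alerts = [event for event in events if matches(event, rule["detection"]["selection"])]
print(f'{rule["title"]}: {len(alerts)} alert(s)')
```

Because the rule references normalized field names rather than any vendor’s raw log format, the same detection logic can run regardless of which SIEM or source forwarded the data.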
Ready to bring together related alerts from multiple SIEMs? Let's talk!
Read More
Seeing the big (security insights) picture with DataBee
The evolution of digital photography mimics the changes that many enterprise organizations face when trying to understand their cybersecurity controls and compliance posture. Since the late 1990s, technology has transformed photograph development from an analog, manual process into a digital, automated field. These images hold our memories, storing points in time that we can look back on and learn from. Cybersecurity, in turn, is experiencing a similar transformation.
When you consider the enterprise data pipeline problem that DataBee® from Comcast Technology Solutions aims to solve through the everyday lens of creating, storing, managing, and retrieving personal photos, the platform’s evolutionary process and value makes more sense.
Too many technologies generating too much historical data
Portable, disposable Kodak cameras were all the rage in the 1990’s; but it could be days or weeks before you could see what you snapped because films needed to be sent for processing.
Over the 2000s, however, these processes increasingly turned digital, accelerating results dramatically. While high-quality, professional-grade digital cameras aren’t in danger of becoming obsolete, once cell phones with integrated cameras hit the market, they became an easy, on-the-go way to capture life’s historical moments without waiting days or weeks (or forever) for a roll of film to be developed. Today, smartphones lend even more depth and quality as we instantly capture everything from family gatherings to vacation selfies.
At Comcast, we faced a similar enterprise technology and security data problem. Just as people handle different kinds of images and the technologies that produce them, we have vast amounts of technologies that generate security data. It’s a fragmented, complicated environment that needs to handle rapidly expanding data.
Across the enterprise, Comcast stores and accesses increasingly larger amounts of data, including:
8000 month-by-month scans
1.7 million IPs targeted monthly for vulnerability scanning
7 cloud and hybrid cloud environments
10 petabytes worth of data in our cybersecurity data lake
109 billion monthly application transactions
Finding the right moment in time
Let’s play out a scenario: You let your friend in on a stage in your life where you had bright red hair. Their response? “Pics or it didn’t happen.” To track down the historic photo, it takes immense effort to:
Figure out the key context about where and when it was captured.
Find the source of where the photo could be – is it in a hard drive? A cloud photo album? A tagged image in your social media profile?
Identify the exact photo you need within the source (especially if it is not labeled).
Comcast faced a similar data organization and correlation problem in its audits and threat hunting. While we were drowning in data, we were starved for insights. We were trying to connect relevant data to help build a timeline of activity for a user or device, but as the data kept growing and security tools kept changing, we found the data was incomplete or took weeks of work to normalize and correlate.
We faced many challenges when trying to answer questions, and fractured data sources compounded the problem. Some of the questions we were asking: Do all employees have their EDR solution enabled? Which user has the highest number of security severities associated with them across all their devices? Obstacles to answering these questions quickly and accurately included:
People maintaining spreadsheets that become outdated as soon as they’re pulled.
People building Power BI or Tableau reports without having all the necessary data.
Reports that could only be accessed from inside an applications console, limiting the ability to connect them to other meaningful security telemetry and data.
Auditing complex questions can be unexpectedly expensive and time-consuming because data is scattered across vast, siloed datasets.
Getting security insights
Going back to the scenario where pictures are stored on all these disparate devices, it initially seems like a reasonable solution to just consolidate everything on an external hard drive. But, to do that, you must know each device’s operating system and how to transfer the images over. They differ in file size, filetype, image quality, and naming convention. While one camera dates a photo as “Saturday January 1, 2000,” another uses “1 January 2000.” In some cases, the images contain more specific data, like hour, minute, and second. Consolidating the pictures in cloud-based storage platforms only solves the storage issues – you still have to manage the different file formats and attached metadata to organize them by the actual date a picture was taken rather than date a batch of photos were uploaded.
Translating this to the security data problem, many organizations find that they have too much data, in too many places, created at too many different times. And said data are in different file types, unique formats, and other proprietary ways of saying the same thing. Consolidating and sorting data becomes chaotic.
As a security, risk, and compliance data fabric platform, DataBee ingests, standardizes, and transforms the data generated by these different security and IT technologies into a single, connected dataset that’s ready for security and compliance insights. It’s surprisingly like adding a picture to your “Favorites” folder for easy access: organizations can quickly and accurately answer questions about their security and compliance.
The objective at Comcast was to solve the challenge of incomplete and inaccurate insights caused by siloed data stores. DataBee provides the different security data consumers access to analytics-derived insights. The end result enables consistent, data-driven decision making across teams that need accurate information about data security and compliance, including:
Chief Information Security Officer (CISO)
Chief Information Officer (CIO)
Chief Technology Officer (CTO)
Chief Data Officer (CDO)
Governance, Risk, and Compliance (GRC) function
Business Information Security Officer (BISO)
While those people need the insights derived from the platform, we also recognized that the regular users would span many roles:
Threat hunters
Data engineers
Data scientists
Security analytics and engineering teams
To achieve these objectives, we started by looking at the underlying issue: the data, its quality, and its accessibility. At its core, DataBee delivers ready-to-use content and is a transformation engine that ingests, autoparses, normalizes, and enriches security data with business context. DataBee’s ability to normalize the data and land it in a data lake enables organizations to use their existing business intelligence (BI) tools, like Tableau and Power BI, to leverage analytics and create visualizations.
Transforming data creates a common “language” across:
IT tools
Asset data
Organizational hierarchy data
Security semantics aren’t easy to learn – it can take years of hands-on knowledge on a variety of toolsets. DataBee has the advantage of leveraging learnings from Comcast to create proprietary technology that parses security data, mapping columns and values to references in the Open Cybersecurity Schema Framework (OCSF) while also extending that schema to fill in currently existing gaps.
Between our internal learnings and working with customers, DataBee delivers pre-built dashboards that accelerate the security data maturity journey. Meanwhile, customers who already have dashboards can still use them for their purposes. For example, continuous controls monitoring (CCM) dashboards aligned to the Payment Card Industry Data Security Standard (PCI DSS) and National Institute of Standards and Technology Cybersecurity Framework (NIST CSF) offer a “quick start” for compliance insights.
DataBee can help customers achieve various security, compliance, and operational benefits, including:
Reduced security data storage costs by using Snowflake and Databricks
Gaining insights and economic value by leveraging a time-series dataset
Real-time active detection streams with Sigma rules that optimize SIEM performance
Asset discovery and inventory enrichment to identify and suggest appropriate ownership
Weave together data for security and compliance with DataBee
Want to see DataBee in action and how we can help you supercharge your security, operations, and compliance initiatives? Request a custom demo today.
Read More
Bridging the GRC Gap
Misconceptions and miscommunications? Not with continuous controls monitoring from DataBee. Learn how you can reduce audit costs and improve security outcomes.
Read More
The DataBee Hive hits Orlando for the 2024 Gartner® Data & Analytics Summit
In a great blog on bringing digital transformation to cybersecurity, Comcast CISO Noopur Davis wrote, “Data is the currency of the 21st century — it helps you examine the past, react to the present, and predict the future.” Truer words were never written.
No doubt that’s why we think Gartner created the Gartner Data & Analytics Summit — to bring together data and analytics leaders “aspiring to transform their organizations through the power of data, analytics, and AI.”
DataBee® from Comcast Technology Solutions is all about facilitating that transformation. The DataBee Hive™, a cloud-native security, risk, and compliance data fabric platform, helps organizations transform how they collect, correlate, and enrich data to deliver the deep insights needed to drive efforts to remain competitive, secure, and compliant. It works by connecting disparate data sources and feeds to merge them with an organization’s data and business logic to create a shared and enhanced dataset that delivers continuous compliance assurance, proactive threat detection with near-limitless hunts, and improved AIOps — all while optimizing data costs.
This is why we’re incredibly excited to be participating in the 2024 Gartner Data & Analytics Summit for the first time. We’re looking forward to the conference sessions but even more to engaging with attendees to hear firsthand what their data, analytics, and AI challenges are and to share what we’ve learned on the path to creating the DataBee security data fabric platform.
Below are details on DataBee’s involvement in the conference — please pay us a visit at the show.
Speaking, exhibiting, and giving out the cutest plushie at the show.
The Gartner Data & Analytics Summit takes place March 11‒13, 2024, in Orlando at the Walt Disney World Swan and Dolphin Resort in Lake Buena Vista.
The Comcast Speaking Session
Comcast DataBee: Democratizing AI With a Foundation in Enterprise & Security Data
March 12 at 12:45‒1:05 pm
Location: Theater 4, Exhibit Showcase
Speaker: Rick Rioboli, Executive Vice President & Chief Technology Officer, Comcast Cable
Session description:
For more than a decade, Comcast has incorporated AI into our products and operations, going beyond personalized content to increase customer accessibility and protect against cyberattacks. To keep pace with industry disruptors and drive more AI innovations, Comcast launched AI Everywhere. The initiative aims to reduce friction when developing safe, secure, and scalable AI solutions throughout the business. Part of that is getting data AI-ready.
Join this session to learn three key components of bringing AI to more people and how combining security data can drive more value across the enterprise.
DataBee exhibit
Be sure to visit the Comcast Technology Solutions DataBee exhibit. We can be found in Booth 618, in the Data Management Tools and Technology Village. We’ll have some of our top “bees” on hand to talk data, analytics, AI, and how a security data fabric platform can help you connect security and compliance data for insights that work for everyone in your organization.
Plus, who doesn’t love a cute plushie to take home? We’ll be giving these away at our booth.
To learn more about DataBee in just a few short minutes, check out this quick and fun explainer video or download the DataBee Hive datasheet.
If you’d like to schedule a specific time to meet during the event, we’d love it. Contact us, and we’ll follow up right away to book a time.
We hope to see you in Orlando!
GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved. While Gartner is hosting the Gartner Conference, Gartner is not in any way affiliated with Exhibiting Company or this promotion, the selection of winners or the distribution of prizes. Gartner disclaims all responsibility for any claims that may arise hereunder.
Read More
Our 2024 forecast for media and advertising trends
As we approach the halfway point of the decade, events of the past year underscore just how appropriate the name the “Roaring 20s” really is for the current era of media and entertainment. From dramatic advances in AI and ML technology and crumbling cookie-first ad strategies, to the rise of ad-supported channels and labor strikes across the industry, 2023 was truly a year of disruption to the status quo. Are these trends passing fads or seismic shifts? Read on for insights into how these trends will impact media and advertising in the coming years, and watch the full webinar here.
Sports content and the quest to overcome fragmentation
Sports rights have become a crucial factor in driving consumer interest and platform sign-ups. At the same time, fans continue to change how they find and consume content. With the rise of streaming services and social media platforms, we now have access to a vast array of sports content at our fingertips.
Highlights:
Sports streaming services expected to reach $22.6 billion in 2027
Consumer access to content now spread across multiple streaming services
Shifting reliance on regional sports networks
This abundance of choice has created a new challenge: discoverability.
Viewers currently navigate a deeply fragmented landscape of different platforms and services, each with its own exclusive content and user interface.
This challenge becomes particularly pronounced as the regional sports network (RSN) model undergoes forced adaptation in the face of shifting consumer preferences. Moreover, the financial dynamics of sports rights are undergoing scrutiny, with tier two platforms reassessing the viability of their investments in sports content.
So, how can sports platforms and traditional entertainment provide seamless discoverability and ensure that subscribers can easily find the sports content they want without navigating multiple platforms and user interfaces?
It requires:
Collaboration between different stakeholders, including rights holders, operators, content providers, and advertisers
Providing opportunities for compelling, customizable, and immersive viewing experiences
One way to do this is with FAST channels.
Life in the FAST channel lane
FAST (Free Ad-Supported Streaming TV) channels are a type of streaming service that offers free access to a wide range of content, supported by advertisements. Unlike traditional cable or satellite TV, FAST channels are delivered over the internet, allowing viewers to watch when and how they want. These new channels, such as World Rally Championship’s 24x7 channel for rally fans, can feature a mix of live and on-demand content, creating incredibly immersive and personalized viewing experiences.
In the past year, these channels have become increasingly popular thanks to their accessibility, convenience, and affordability. They offer viewers a way to access a wide range of content without subscription fees, making them an attractive option for cord-cutters and budget-conscious consumers. Additionally, the targeted advertising capabilities of streaming platforms allow advertisers to reach specific audiences more effectively, making FAST channels a critical advertising platform and revenue generator.
Yet, even though FAST channels represent a growing segment of the streaming market — and are expected to continue expanding — market saturation will pose an ongoing challenge. Personalizing the FAST channel experience will be crucial to maintain viewer engagement and drive higher CPMs.
Bundle up: Indirect wholesale SVOD to exceed stand-alone SVOD
Research shows that the average U.S. household has eight+ streaming services. Content may still be king, but the way viewers access it is certainly changing.
As aggregation increases, content wars are giving way to bundling wars, signifying yet another fundamental shift in how entertainment and media are consumed and accessed. Bundling models, where multiple services or channels are packaged together for a single subscription fee, have emerged as a powerful strategy for attracting consumers. Convenient and cost-effective, bundling will continue to capture viewers, especially as the landscape becomes increasingly fragmented and streaming services proliferate.
The shift of streaming wars to bundling wars means:
More subscription video on demand (SVOD) sign-ups will come from the bundled segment than from the direct-via-streamer segment.
Super aggregation bundles are emerging with operators and broadband, most notably in EU markets.
Stand-alone SVOD will still dominate, but by net additions, indirect wholesale SVOD subscriptions will, for the first time, exceed direct retail subscriptions.
Aggregators play a central role in these bundling wars, and those that effectively leverage data analytics to understand consumer preferences and behavior are better equipped to “own the customer” by curating tailored content suggestions for their individual users. By offering personalized recommendations, savvy aggregators enhance the value proposition of bundled services, helping consumers discover new content that aligns with their interests and preferences. This personalized approach not only improves the overall consumer experience but also increases engagement and retention rates within bundled ecosystems.
Ad revenue continues to rise. Advanced advertising strategies support success with streamers.
Advertising is expected to reach 21% of revenue for hybrid models. Our experts predict that services will strive to reach more than 50% of their revenue through advertising to match the traditional pay-TV model.
With this trend in mind, how can brands ensure they get the most out of their advertising efforts within advertising video on demand (AVOD)/hybrid models, driving more targeted and focused advertising while also improving the viewer experiences?
Enter unified data management and contextual advertising. By unifying data management and analyzing metadata, brands can strategically place ads that align with the viewer’s interests and intent at that moment — all while staying on the right side of privacy regulations.
While we don’t expect AVOD to surpass SVOD in the near future, it is catching up. And as AVOD continues growing in popularity and contributes a bigger chunk of revenue for streamers than we've seen before, we can expect AVOD and SVOD to coexist alongside pay-TV.
AI acceleration: Use cases and business needs
Though we’ve only scratched the surface of its capabilities, the pace of advancement in generative AI is accelerating rapidly and shaping the next chapter of media and entertainment. Already, AI applications are unlocking new opportunities for greater personalization and targeting. We’re quickly shifting from “wanting” to incorporate AI to “needing” to incorporate the technology to keep up.
While many brands express apprehension around brand safety, the potential of AI to drive efficiencies and enhance creativity is undeniable. As content providers, operators, and advertisers seek ways to navigate the complexities of a fragmented media landscape, generative AI offers a promising solution by streamlining production workflows and enabling the creation of personalized, thematic content.
Already we’re seeing generative AI produce synopses and segments with expected growth over the next five years within these top use cases:
Audio generators: +60% growth
Video generators: +61% growth
Image generators: +66% growth
Code generators: +73% growth
What does this mean?
Companies that treat AI as table stakes will come out on top in the coming years. From generating clips and metadata to detecting sentiment and creating hyper-specific ad campaigns, AI will only continue to revolutionize how content is conceptualized, produced, and distributed.
To 2030 and beyond…
No matter what the next part of the decade holds, innovation and adaptation will continue to be key to accelerating growth in the media and entertainment landscape. Industry players must continue navigating evolving consumer preferences and embracing technological advancements to overcome challenges and embrace new opportunities to deliver compelling, personalized experiences to audiences.
Read More
3 Hot Takes on Cybersecurity: SIEMs, GenAI, and more
Missed our Cybersecurity Hot Takes webinar? Lucky for you, you can watch the on-demand recording now!
We dove into cybersecurity’s burning questions with Roland Cloutier, former CISO at TikTok and ADP, Edward Asiedu, Senior Principal of Cybersecurity Solutions at DataBee, and Amy Heng, Director of Product Marketing at DataBee®. They shared their hot takes and insights related to tools, data, and people in the cybersecurity landscape. As they indulged in hot wings, the discussions ranged from the relevance of Security Information and Event Management (SIEM) tools to the transformative potential of Generative Artificial Intelligence (Gen AI).
Here’s what you can expect to learn more about:
The Decline of SIEM Tools: Roland predicted the decline of traditional SIEM tools. He argued that the current SIEM solutions are becoming obsolete due to the evolving nature of cybersecurity. The complexity of cyber risks and privacy issues demands a more sophisticated approach that goes beyond the capabilities of conventional SIEM tools. Instead, Roland emphasized the need for a data fabric platform to fill the void left by traditional SIEMs.
Vendor-Specific Query Languages: To Learn or Not to Learn? Edward highlighted the drawbacks of forcing engineers to learn multiple query languages for different security tools. Both Edward and Roland expressed a preference for standardized approaches like SQL or Sigma rules, which make threat detection faster and help close the cybersecurity skills gap.
Gen AI and how to operationalize it for cybersecurity: Gen AI is a rising star in the cybersecurity landscape. Contrary to the fear of a robot revolution, Roland envisioned Gen AI as a powerful tool for enhancing security, risk, and privacy operations. He emphasized its potential to automate tasks such as incident response, analytics, and investigation, significantly reducing the time and resources required for these activities.
Besides the hot wings being spicier with each question, it’s clear that the cybersecurity landscape is evolving rapidly. Traditional tools like SIEM are on the decline, making way for more adaptable solutions that bring security data insights to more people. The shift towards standardized query languages and the integration of AI, particularly Gen AI, promises a future where cybersecurity operations are more efficient, automated, and capable of handling the ever-growing volume of data.
Why not hear more hot takes? Catch the webinar on-demand today.
Read More
Squeaky Clean: Security Hygiene with DataBee
Enterprises have an ever-growing asset and user population. As organizations become more complex thanks to innovation in technology, it becomes increasingly difficult to track all assets in the environment. It's difficult to secure a user or device you may not be aware of. DataBee can augment and complement your configuration management database (CMDB) by enhancing the accuracy, relevance, and usability of your asset, device, and user inventory, unlocking 3 primary use cases for security teams:
Security Hygiene
Owner & Asset Discovery
Insider Threat Hunting
Security Hygiene
DataBee for Security Hygiene can help deliver more accurate insights into the assets in your environment while automatically keeping your asset inventory up to date and contextualized. Bringing more clarity to the users and devices in your environment enables your business to enhance its security coverage, reduce manual processes, increase alert accuracy, and respond to incidents more rapidly.
Security hygiene uses entity resolution, a patent-pending technology from Comcast, to create a unique identifier from a single data source or across multiple data sources that refer to the same real-world entity, such as a user or device. This technique, performed by DataBee, helps reduce the manual entity correlation effort required of analysts, and the unique identifiers can be used to discover assets and suggest missing owners.
DataBee supports ingestion from multiple data sources to help keep your security hygiene in check, including a variety of traditional sources for asset management such as your CMDB, directory services, and vulnerability scanner. It also can learn about your users and devices from non-traditional data sources such as network traffic, authentication logs, and other data streamed through DataBee that contains a reference to a user or device.
DataBee can also exclude from entity resolution any data sources and feeds that provide little fidelity for entity-related context, and it supports feed source prioritization. These priorities are used when there is a collision or conflicting information. For example, if a device is first seen in network traffic, preliminary information about the device, such as its hostname and IP, will be added to DataBee. Another example is when your CMDB updates a device’s IP: DataBee will use this update to overwrite the associated IP if the CMDB feed is prioritized over the network traffic information.
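As a rough sketch of how feed prioritization can resolve conflicting attributes for the same entity, the Python below merges per-source observations and lets higher-priority feeds win collisions. The source names, priorities, and attributes are hypothetical; DataBee's patent-pending entity resolution is considerably more sophisticated.

```python
# Lower number = higher priority when attributes conflict (hypothetical ordering).
FEED_PRIORITY = {"cmdb": 1, "directory": 2, "network_traffic": 3}

def merge_observations(observations: list[dict]) -> dict:
    """Merge observations of the same entity, preferring higher-priority feeds."""
    ranked = sorted(observations, key=lambda o: FEED_PRIORITY.get(o["source"], 99))
    merged: dict = {}
    for obs in reversed(ranked):          # apply lowest-priority feeds first...
        merged.update(obs["attributes"])  # ...so higher-priority feeds overwrite conflicts
    return merged

device = merge_observations([
    {"source": "network_traffic", "attributes": {"hostname": "lab-07", "ip": "10.0.0.8"}},
    {"source": "cmdb", "attributes": {"ip": "10.0.0.42", "owner": "jdoe"}},
])
print(device)  # {'hostname': 'lab-07', 'ip': '10.0.0.42', 'owner': 'jdoe'}
```

Here the hostname first observed in network traffic is retained, while the CMDB's higher-priority IP overwrites the earlier value, mirroring the example above.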
Owner and Asset Discovery
Organizations often struggle to assign responsibility for resolving security issues on assets or devices, often due to unclear ownership caused by gaps in the asset management process, such as Shadow IT and orphaned devices. DataBee can provide a starting point for identifying and validating the owner of a device. When a new device is discovered, or its owner is not indicated in the CMDB, DataBee leverages the events streaming through the platform to suggest potential owners of the asset. The system tracks who logs into the device for seven days after discovery and uses statistical analysis to suggest up to three of the most likely owners.
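A toy version of that statistical suggestion might simply count logins during the seven-day window and surface the top three users, as sketched below. The event fields and the simple frequency count are illustrative assumptions, not DataBee's actual algorithm.

```python
from collections import Counter
from datetime import datetime, timedelta

def suggest_owners(login_events: list[dict], discovered_at: datetime) -> list[str]:
    """Return up to three users who logged in most often in the week after discovery."""
    window_end = discovered_at + timedelta(days=7)
    counts = Counter(
        event["user"]
        for event in login_events
        if discovered_at <= event["time"] <= window_end
    )
    return [user for user, _ in counts.most_common(3)]
```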
Potential owners are listed in the Entity Details section of the Entity View page, which can be expanded to quickly validate ownership. Once security analysts validate the owner, the system of record can be updated, and DataBee will receive the update.
As organizations become more complex thanks to innovation in technology, it can become increasingly difficult to track all assets in the environment. Orphaned assets remain unknown and often unpatched, creating unmonitored and growing risk, and Shadow IT is a growing problem as more cloud-based solutions become easily accessible. DataBee provides User and Device tables to supplement existing tools, and entity resolution maintains an inventory of known assets in your organization, enabling continuous discovery, based on events streamed through DataBee, of assets that would otherwise slip through the cracks.
Insider Threat Hunting
Insider threats are people, such as employees or contractors, with authorized access to or knowledge of an organization’s resources, who can cause harm through purposeful or accidental misuse of that access. This can negatively affect the integrity, confidentiality, and availability of the organization, its data, personnel, or facilities. Insider threats are often able to hide in plain sight for several reasons: cross-correlating logs of an individual’s activities across various tools and products is complex, and the data is siloed, making it difficult for security analysts to see the full picture. DataBee’s security, risk, and compliance data fabric platform weaves events, as they stream through the platform, together with the business context needed to identify insider threats.
DataBee leverages entity resolution to create an authoritative and unique entity ID using data from across your environment that is mapped to real people and devices to enable hunting for insider threats in your organization. Entity resolution aggregates information from multiple data sources, merges duplicate entries, and suggests potential owners for devices and assets. By correlating and enriching the data before storing it in your data lake, DataBee creates an entity timeline, associating each event with the correct entity at the time of its activity.
These views enable insider threat hunting by allowing security analysts to see the activities conducted by a user, along with the related business context, in a single view to identify potentially malicious behavior. The interactive user experience is intended to make the Open Cybersecurity Schema Framework (OCSF) formatted logs more accessible to all security professionals. Within the Event Timeline, security analysts can filter events by type to focus investigations. Clicking on an event shows the mission-critical fields needed to decide whether the event is interesting, and clicking on the magnifying glass icon lets you inspect the full event. DataBee enables one-click pivoting to related entities to simplify diving deeper into an investigation. Because the views are powered by the data in the data lake, the data is also available in OCSF format for threat hunters to continue their investigations in traditional tools like Jupyter notebooks, meeting hunters where they are with their data.
Take a look under the hood of DataBee v2.0 and DataBee for Security Hygiene
Are you ready for an enterprise-ready security, risk, and compliance data fabric? Request a custom demo to see how DataBee uses a unique identifier to augment your CMDB and help deliver more accurate insights into the assets in your environment.
Read More
Finding security threats with DataBee from Comcast Technology Solutions
Last week, DataBee® announced the general availability of DataBee v2.0. Alongside a new strategic technology partnership with Databricks, we released new cybersecurity and Payment Card Industry Data Security Standard (PCI DSS) 4.0 reporting capabilities.
In this blog, we’ll dive into the new security threat use cases that you can unlock with a security, risk, and compliance data fabric platform.
DataBee for security practitioners and analysts
In security operations, detecting incidents in a security information and event management (SIEM) tool is often described as looking for a needle in a haystack of logs. Another fun (or not-so-fun) SIEM metaphor is a leaky bucket.
In an ideal world, all security events and logs would be ingested, parsed, normalized, and enriched into the SIEM, and then the events would be cross-correlated using advanced analytics. Basically, logs stream into your bucket (the SIEM), and all breaches are detected.
In reality, there are holes in the bucket that allow for undetected breaches to persist. SIEMs can be difficult to manage and maintain. Organization-level customizations, combined with unique and ever-changing vendor formats, can lead to detection gaps between tools and missed opportunities to avert incidents. Additionally, for cost-conscious organizations, there are often trade-offs for high-volume sources that leave analysts unable to tap into valuable insights. All these small holes add up.
What if we could make the security value of data more accessible and understandable to security professionals of all levels? DataBee makes security data a shared language. As a cloud-native security, risk, and compliance data fabric platform, DataBee engages early in the data pipeline to ingest data from any source, then enriches, correlates, and normalizes it to an extended version of the Open Cybersecurity Schema Framework (OCSF) before delivering it to your data lake for long-term storage and your SIEM for advanced analytics.
Revisiting the haystack metaphor, if hay can be removed from the stack, a SIEM will be more efficient and effective at finding needles. With DataBee, enterprises can efficiently divert data, the “hay,” from an often otherwise cost-prohibitive and overwhelmed SIEM. This enables enterprises to manage costs and improve the performance of mission-critical analytics in the SIEM. DataBee uses active detection streams to complement the SIEM, identifying threats through vendor-agnostic Sigma rules and detections. Detections are streamed with necessary business context to a SIEM, SOAR, or data lake. DataBee takes to market a platform inspired by security analysts to tackle use cases that large enterprises have long struggled with, such as:
SIEM cost optimization
Standardized detection coverage
Operationalizing security findings
SIEM cost optimization
Active detection streams from DataBee provide an easy-to-deploy solution that enables security teams to send their “needles” to their SIEM and their “hay” to a more cost-effective data lake. Data that would often otherwise be discarded can now be analyzed en route. Enterprises need only retain the active detection stream findings and security logs needed for advanced analytics and reporting in the SIEM. By removing the “hay,” enterprises can reduce their SIEM operating costs.
The optimized cloud architecture enables security organizations to gain insights into logs that are too high volume or contain limited context to leverage in the SIEM. For example, DNS logs are often considered too verbose to store in the SIEM. They contain a high volume of low-value logs due to limited information retained in each event. The limited information makes the DNS logs difficult to cross-correlate with the disparate data sources needed to validate a security incident.
Another great log source example is Windows Event Logs. There are hundreds of validated open-source Sigma detections for Windows Event Logs to identify all kinds of malicious and suspicious behavior. Leveraging these detections has traditionally been difficult due to the scale required both for the number of detections and volume of data to compare it to. With DataBee’s cloud-native active detection streams, the analytics are applied as the data is normalized and enriched, allowing security teams new insights into the potential risks facing their organization. DataBee’s power and scale complement the SIEM’s capabilities, plugging some of the holes in our leaky bucket.
Analyst fatigue can be lessened by suppressing security findings for users or devices that are known to reduce the reliability of a finding. With DataBee’s suppression capability, you can filter and take action on security findings based on the situation. Selecting “Drop” for the action ignores the event, which is ideal for events that are known to be false positives in the organization. Alternatively, applying an “Informational” action reduces the severity and risk level of the finding to Info while still allowing the finding to be tracked for historical purposes; the Informational level is ideal for tuning that requires long-term auditability. The scheduling option gives you a way to account for recurring known events, like change windows, that might fire alerts or otherwise lead to false positives.
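To illustrate how those three actions might behave, the sketch below applies hypothetical suppression rules to a finding: dropping known false positives, downgrading others to Info, and only suppressing during a recurring change window. The rule format, detection names, and field names are invented for the example and are not DataBee's configuration syntax.

```python
from datetime import datetime

def in_change_window(now: datetime) -> bool:
    """Hypothetical recurring change window: Saturday 22:00 through Sunday 06:00."""
    return (now.weekday() == 5 and now.hour >= 22) or (now.weekday() == 6 and now.hour < 6)

# Illustrative suppression rules (invented detection names and device IDs).
SUPPRESSIONS = [
    {"match": {"rule": "encoded_powershell", "device": "build-agent-01"}, "action": "drop"},
    {"match": {"rule": "service_restart"}, "action": "informational"},
    {"match": {"rule": "mass_file_change"}, "action": "drop", "when": in_change_window},
]

def apply_suppression(finding: dict, now: datetime) -> dict | None:
    """Return None to drop the finding, or the finding (possibly downgraded) otherwise."""
    for rule in SUPPRESSIONS:
        if not all(finding.get(k) == v for k, v in rule["match"].items()):
            continue
        if "when" in rule and not rule["when"](now):
            continue                                   # outside the scheduled window
        if rule["action"] == "drop":
            return None                                # known false positive: ignore it
        if rule["action"] == "informational":
            finding = {**finding, "severity": "Info"}  # keep for auditability at Info level
    return finding
```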
By applying the analytics and tuning to the enriched logs as they are streamed to more cost-effective long-term storage in the data lake, security teams can detect malicious behavior like PowerShell activity or DNS tunneling. Additionally, DataBee’s Entity Resolution not only enriches the logs but learns more about your organization from them, discovering assets that may be untracked or unknown in your network.
Standardized detection coverage
With the ever-evolving threat landscape, detection content is constantly updated to stay relevant. As such, security organizations have taken on more of a key role in content management between solutions. With the popularization of Sigma-formatted detections among both security researchers and vendors, many large enterprises are beginning to migrate existing custom detections to open-source formats managed via GitHub. Sigma detection rules managed in GitHub can be imported into DataBee to quickly operationalize detection content, so security organizations can centralize and standardize content management for all security solutions, not just DataBee.
Active detection streams apply Sigma rules, an open-source signature format, over security data that is mapped to a DataBee-extended version of OCSF to integrate into the existing security ecosystem with minimal customizations. DataBee handles the translation from Sigma to OCSF to help lower the level of effort needed to adopt and support organizations on their journey to vendor-agnostic security operations. With Sigma-formatted detections leveraging OCSF in DataBee, organizations can swap out security vendors without needing to update log parsers or security detection content.
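As a simplified picture of what applying Sigma rules over OCSF-normalized data means, the sketch below checks a Sigma-style selection against a normalized event. The rule, field paths, and matching logic are deliberately minimal, illustrative assumptions, not a real Sigma engine or the exact DataBee/OCSF field names.

```python
# A Sigma-style selection: exact matches plus a simple "contains" modifier.
rule = {
    "title": "Suspicious encoded PowerShell command line",
    "selection": {
        "process.name": "powershell.exe",
        "process.cmd_line|contains": "-EncodedCommand",
    },
}

def matches(event: dict, selection: dict) -> bool:
    """Return True if the event satisfies every condition in the selection."""
    for field, expected in selection.items():
        if field.endswith("|contains"):
            actual = event.get(field.removesuffix("|contains"), "")
            if expected not in actual:
                return False
        elif event.get(field) != expected:
            return False
    return True

ocsf_event = {
    "class_name": "Process Activity",
    "process.name": "powershell.exe",
    "process.cmd_line": "powershell.exe -EncodedCommand SQBFAFgA...",
}
print(matches(ocsf_event, rule["selection"]))  # True -> a finding would be emitted downstream
```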
Operationalizing security findings
One of DataBee’s core principles is to meet you where you are with your data. The intent is to integrate into your existing workflows and tools and avoid amplifying the “swivel chair” effect that plagues every security analyst. In keeping with the vendor-agnostic approach, DataBee security findings generated by active detection streams can be output in OCSF format to S3 buckets, where they can be configured for ingestion and immediate use in major SIEM platforms.
Leveraging active detection streams with Entity Resolution in DataBee enables organizations to identify threats with vendor-agnostic detections, enriched with all the necessary business context, as the data streams toward its destination. Used in conjunction with the SIEM, DataBee gives security teams out-of-the-box visibility into the potential risks facing their organization, without the noise.
Read More
DataBee and Databricks: Business-ready datasets for security, risk, and compliance
In today's fast-paced and data-driven world, businesses are constantly seeking ways to gain a competitive edge. One of the most valuable assets these businesses have is their data. By analyzing and deriving insights from their data, organizations can make informed decisions, manage organizational compliance, optimize resource allocation, and improve operational efficiency.
Better together: DataBee and Databricks
As part of DataBee® v2.0, we’re excited to announce a strategic partnership with Databricks that gives customers the flexibility to integrate with their data lake of choice.
DataBee is a security, risk, and compliance data fabric platform that transforms raw data into analysis-ready datasets, streamlining data analysis workflows, ensuring data quality and integrity, and fast-tracking organizations’ data lake development. In the medallion architecture, businesses and agencies organize their data in an incremental and progressive flow that allows them to achieve multiple advanced outcomes with their data. From the bronze layer, where raw data lands as is, to the silver layer, where data is minimally cleansed for some analytics, to the gold layer, where advanced analytics and models can be run on data for outcomes across the organization, let DataBee and Databricks get your data to gold.
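As a rough, hypothetical sketch of that bronze-to-gold progression, the PySpark snippet below promotes raw events to a cleansed silver table and then to a business-ready gold aggregate. Table names, columns, and transformations are invented for illustration.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: raw security events landed as-is (hypothetical table name).
bronze = spark.read.table("security.bronze_raw_events")

# Silver: minimally cleansed -- drop malformed rows, standardize timestamps, deduplicate.
silver = (
    bronze
    .filter(F.col("event_time").isNotNull())
    .withColumn("event_time", F.to_timestamp("event_time"))
    .dropDuplicates(["event_id"])
)
silver.write.mode("overwrite").saveAsTable("security.silver_events")

# Gold: an analysis-ready aggregate, e.g., failed logins per user per day.
gold = (
    silver
    .filter(F.col("activity") == "failed_login")
    .groupBy("user_id", F.to_date("event_time").alias("day"))
    .count()
)
gold.write.mode("overwrite").saveAsTable("security.gold_failed_logins_daily")
```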
In the past, creating gold-level datasets was a challenging and time-consuming process. Extracting valuable insights from raw data required extensive manual effort and expertise in data aggregation, transformation, and validation. Organizations had to invest significant resources in developing custom data processing pipelines and dealing with the complexities of handling large volumes of data. Lastly, legacy systems and traditional data processing tools struggled to keep up with the demands of big data analytics, resulting in slow and inefficient data preparation workflows. This hindered organizations' ability to derive timely insights from their data and make informed decisions.
DataBee's integration with Databricks empowers customers to take their gold-level datasets up a notch by leveraging advanced data transformation capabilities and sophisticated machine learning algorithms within Databricks. Regardless of whether the data is structured, semistructured, or unstructured, Databricks' unique lakehouse architecture provides organizations with a robust and scalable infrastructure to store and manage vast amounts of data and insights in SQL and non-SQL formats. The lakehouse architecture from Databricks allows businesses to leverage the flexibility of a data lake and the analysis efficiency of a data warehouse in a unified platform.
The integration between DataBee and Databricks involves two key components: the Databricks Unity Catalog and the Auto Loader job.
The Databricks Unity Catalog is a unified governance solution for data and AI assets within Databricks that serves as a centralized location for managing data and its access.
The Auto Loader automates the process of loading data from Unity Catalog-governed sources to the Delta Lake tables within Databricks. The Auto Loader job monitors the data source for new or updated data and copies it to the appropriate Delta Lake tables. This ensures that the data is always up to date and readily available for analysis within Databricks. When integrating DataBee with Databricks, the data is loaded from the Databricks Unity Catalog data source using the Auto Loader, ensuring that it is easily accessible and can be leveraged for analysis.
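For readers who want to picture the plumbing, a typical Auto Loader job looks roughly like the PySpark below; the paths and table name are placeholders rather than DataBee-provided values.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

source_path = "s3://example-databee-output/ocsf/"     # placeholder location of normalized files
checkpoint = "s3://example-checkpoints/ocsf_events/"  # placeholder checkpoint/schema location
target_table = "main.security.ocsf_events"            # placeholder Unity Catalog table

(
    spark.readStream.format("cloudFiles")              # Auto Loader source
    .option("cloudFiles.format", "json")               # format of incoming files
    .option("cloudFiles.schemaLocation", checkpoint)   # where the inferred schema is tracked
    .load(source_path)
    .writeStream
    .option("checkpointLocation", checkpoint)
    .trigger(availableNow=True)                        # process all new files, then stop
    .toTable(target_table)                             # write to a Delta table governed by Unity Catalog
)
```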
This seamless integration, combined with DataBee's support for major cloud platforms like AWS, Google Cloud, and Microsoft Azure, enables organizations to easily deploy and operate Databricks and DataBee in their preferred cloud environment, ensuring efficient data processing and analysis workflows.
Connecting security, risk, and compliance insights faster with DataBee
It’s time to start leveraging your security, risk, and compliance data with DataBee and Databricks.
DataBee joins large security and IT datasets and feeds close to the source, correlating with organizational data such as asset and user details and internal policies before normalizing it to the Open Cybersecurity Schema Framework (OCSF). The resulting integrated, time-series dataset is sent to the Databricks Data Intelligence Platform where it can be retained and accessible for an extended period. Empower your organization with DataBee and Databricks and stay ahead of the curve in the era of data-driven decision-making.
Read More
Five Benefits of a Data-Centric Continuous Controls Monitoring Solution
For as long as digital information has needed to be secured, security and risk management (SRM) leaders and governance, risk and compliance (GRC) leaders have asked: Are all of my controls working as expected? Are there any gaps in security coverage, and if so, where? Are we at risk of not meeting our compliance requirements? How can I collect and analyze data from across all my controls faster and better?
From “reactive” to “proactive”
Rather than assessing security controls at infrequent points in time, such as while preparing for an audit, a more useful approach is to implement continuous monitoring. However, it takes time to manually collect and report on data from a disparate set of security tools, making “continuous” a very challenging goal. How can SRM and GRC teams evolve from being “reactive” at audit time, to “proactive” all year long? Implement a data-centric continuous controls monitoring (CCM) solution.
According to Gartner®, CCM tools are described as follows:
“CCM tools offer SRM leaders and relevant IT operational teams a range of capabilities that enable the automation of CCM to reduce manual effort. They support activities during the control management life cycle, including collecting data from different sources, testing controls’ effectiveness, reporting the results, alerting stakeholders, and even triggering corrective actions in the event of ineffective controls or anomalies. Furthermore, the automation they support enables SRM leaders and IT operational teams to gain near real-time insights into controls’ effectiveness. This, in turn, improves situational awareness when monitoring security posture and detecting compliance gaps.” Gartner, Inc., Innovation Insight: Cybersecurity Continuous Control Monitoring, Jie Zhang, Pedro Pablo Perea de Duenas, Michael Kranawetter, 17 May 2023
The use of a CCM solution offers significant advantages over point-in-time reviews of multiple data sources and reports. This blog identifies five of the key benefits of using a CCM solution.
Share the same view of the data with all teams in the three lines of defense.
A shared and consistent view of data facilitates better coordination between operations teams that are accountable for compliance with organizational security policy, the process owners who manage the tools and data used to measure compliance, and the GRC team that oversees compliance.
A set of CCM dashboards can provide that common view. Without a shared view of compliance status, teams may be looking at different reports, or reports created similarly, but at different points in time, resulting in misunderstandings; in effect, a cybersecurity “Tower of Babel.” Consistent reporting based on a mutually recognized source of truth for compliance data is an essential first step.
Furthermore, without a consistent view of compliance data, it will be challenging to have a productive conversation about the quality of the data and its validity. If operations teams are pulling their own reports, or even if they are consuming reports provided by the process owners or GRC team, inconsistencies in data are likely to be attributed to differences in report formats, or differences in the dates when the reports were run. If all the teams are looking at the same set of CCM dashboards displaying the same data, it is easier to resolve noncompliance issues that may be assigned to the wrong team, or to find other errors, such as missing or incorrect data, that need to be fixed.
Bring clarity to roles and responsibilities.
Job descriptions may include tasks such as, “Ensure compliance with organizational cybersecurity policy.” But ultimately, what does that mean, especially to a business manager for whom cybersecurity is not their primary responsibility? In contrast, a set of CCM dashboards that an operations level manager can access to see what specifically is compliant or noncompliant for their department provides an easily understood view of that manager’s responsibilities. Managers do not need to spend unproductive time trying to guess what their role is, or trying to find the team that can provide them with information about what exactly is noncompliant for the people and assets in their purview.
Compliance documents and frameworks typically include requirements for documenting “roles and responsibilities,” for example, the n.1.2 controls (e.g., 1.1.2, 2.1.2, etc.) and 12.1.3 in PCI-DSS v4.0. Similarly, the “Policy and Procedures” controls, such as AC-01, AT-01, etc. in NIST SP 800-53 state that the policy “Addresses… roles, [and] responsibilities.”
Ultimately, roles and responsibilities for operations managers and teams can be presented to them in an understandable format by displaying compliant and noncompliant issues for the people and assets that they manage. This is not to say that cybersecurity related roles and responsibilities should not be listed in job descriptions. However, a display of what is or is not compliant for their department will complement their job description by making the manager’s responsibilities less abstract and more specific.
Making compliance and security a shared responsibility
Cybersecurity is Everyone’s Job according to the National Initiative for Cybersecurity Education (NICE), a subgroup on Workforce Management at the National Institute of Standards and Technology (NIST). At the operations level, a manager’s primary responsibility for the business may be to produce the product that the business sells, to sell the product, or something related to these objectives. But the work of the business needs to be done with cybersecurity in mind. Business operations managers and the staff that report to them have a responsibility to protect the organization’s intellectual property, and to protect confidential data about the organization’s customers. So, even if cybersecurity is not someone’s primary job responsibility, cybersecurity is in fact everyone’s job.
At times, business managers may take a stance that “cybersecurity is not my job,” and that it is the job of the CISO and their team to “make us secure.” Or business managers may accept that they do have cybersecurity responsibilities, but then struggle to find a team or a data source that can provide them with the specifics of what their responsibilities are.
A CCM solution can give business managers a clear understanding of what their cybersecurity “job” is without requiring them to track down the information about the security measures they should be taking, as the data alerts them to security gaps they need to address.
Enhance cybersecurity by ensuring compliance with regulations and internal policies
Compliance may not equal security, but the controls mandated by compliance documents are typically foundational requirements that, if ignored, are likely to leave the organization both noncompliant and insecure. An organization that has good coverage for basic cybersecurity hygiene is likely to be in a much better position to achieve compliance with any regulatory mandates to which they are subject. Or, conversely, if the organization has gaps in their existing cyber hygiene, working to achieve compliance with their regulatory requirements, or an industry recognized set of security controls, will provide a foundation on which the organization can build a more sophisticated, risk-based cybersecurity program.
The basics are the basics for a reason. Using a CCM tool to achieve consistent coverage for the basics when it comes to both compliance and cybersecurity provides a more substantial foundation for the cybersecurity program.
Creating a progressive and positive GRC feedback loop using CCM
A CCM solution does not take the place of or remove the need for a GRC team and a GRC program. But it is a tool that, if incorporated into a GRC program, can help by saving time formerly used to manually create reports, and by facilitating coordination and cooperation by providing teams a consistent view of their compliance “source of truth.” Implementing a CCM solution may uncover gaps in data (missing or erroneous data), or gaps in communication between teams, such as the business teams that are accountable for compliance, and the process owners who are managing the tools and data used to track compliance. Uncovering any such gaps provides the opportunity to resolve them and to make improvements to the program. As gaps in data, policy or processes are uncovered and resolved, the organization is positioned to make continuous improvement in its compliance posture.
If there are aspects of the organization’s current GRC program that have not achieved their intended level of maturity, a CCM solution like DataBee can help by providing a consistent view of compliance data that all teams can reference. CCM can be the focus that teams use to facilitate discussions about the current state, and how to move forward to a more compliant state. Over time, the organization can draw on additional sources of compliance data and display it through new dashboards to continue to build on their compliance and cybersecurity maturity.
Get started with DataBee CCM
For more insights into how a CCM solution can ease the burden on GRC teams while improving an organization’s security, risk, and compliance posture, read the recent interview with Rob Rose, Sr. Manager on the Cybersecurity and Privacy Compliance team here at Comcast. Rob and the Comcast GRC team use the internally developed data fabric platform that DataBee is based on, and they’ve achieved some remarkable results.
The DataBee CCM offering delivers the five key benefits described here and more. If your organization would like to evolve its SRM and GRC programs from being “reactive” to “proactive” with continuous, year-round controls monitoring, get in touch and let us show you how DataBee can make a difference.
Download the CCM Solution Brief to learn more, or request a personalized demo.
GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.
Read More
DataBee is a Leader in Governance, Risk, and Compliance in Snowflake's The Next Generation of Cybersecurity Applications
Today, Snowflake recognized DataBee, part of Comcast Technology Solutions, as a Leader in the Governance, Risk & Compliance (GRC) Category in Snowflake’s The Next Generation of Cybersecurity Applications. As the Director of Strategic Sales and Go-to-Market Strategy, I am proud to help joint customers achieve fast, accurate, and data-driven compliance answers and resolutions that measure risks and controls effectiveness.
The inaugural, data-backed report identified five technology categories that security teams may consider when building their cybersecurity strategy. In addition to the GRC category, the other categories included: Security Information and Event Management (SIEM), Cloud Security, Data Enrichment and Threat Intelligence, and Emerging Segments.
DataBee puts your data at the center for dynamic, detail-rich compliance metrics and reports. The cloud-native security, risk and compliance data fabric platform weaves together security data sources with asset owner details and organizational hierarchy information, breaking down data silos and adding valuable context to cyber-risk reports and metrics.
By being a connected application Powered by Snowflake partner, DataBee makes continuous controls monitoring (CCM) a reality by enabling customers to securely and quickly access large, historical datasets in Snowflake while driving down costs and maintaining high performance. DataBee’s robust analytics enables teams across the organization to leverage the same dataset for high fidelity analysis, decisioning, response and assurance outcomes without worrying about retention limits. From executives to governance, risk and compliance (GRC) analysts, DataBee on Snowflake delivers a dynamic and reliable single source of truth.
Thank you to Snowflake for partnering with DataBee! As Nicole Bucala mentioned in our press release, DataBee makes it faster, easier and more cost effective for GRC teams to combine and share the security and business data and insights that their constituents need to stay compliant and mitigate risk. Our strategic partnership with Snowflake is an essential part of our solution, providing a powerful, flexible, and fully managed cloud data platform for our customers’ data regardless of the source.
Read More
DataBee appoints Ivan Foreman for EMEA expansion leadership
You may be surprised to know this, but the security data issues that challenge US-based security teams are issues felt ’round the world. Of course, I’m kidding: These challenges have been worked on, talked about, and written about for years and continue to eat up news cycles because it’s still too hard to correlate and analyze all of the security data generated by the tools and technologies that live in most enterprise security stacks.
DataBee® from Comcast Technology Solutions (CTS) has been created to help bring order, ease, and clarity to security data chaos in the enterprise. Having focused this first year of business on our US home turf, we’re very happy now to expand into EMEA with a cybersecurity veteran at the helm — Ivan Foreman. Based out of London with work and life experience in Israel and South Africa, Ivan brings to his new role as Executive Director and Head of EMEA Sales, DataBee a deep understanding of the unique needs of customers and partners from across EMEA, and a true passion for ensuring the security of both people and organizations.
Let’s learn a little bit more about Ivan and his background…
[LC]: Ivan, you’ve joined CTS DataBee to lead Sales and Business Operations in EMEA. Talk to us about your charter there and what you hope to accomplish in your first several months.
[IF]: I’m very excited to join CTS DataBee and the opportunity to build the business in EMEA. My first month was spent learning as much as possible about Comcast, the value a security data fabric has brought to the organization, and the commercialization of this innovation through DataBee. I was fortunate enough to travel to HQ in Philadelphia and meet the leadership team who have been building DataBee for the past year. My next couple of months will be spent building the DataBee brand in the EMEA region and helping organizations there get more from their security data. So far, the response has been fantastic, and almost everyone I’ve spoken to about DataBee is keen to learn more and seems to have an interest in putting me in touch with their colleagues.
[LC]: How did you learn about the opportunity with CTS DataBee, and what ultimately attracted you to this position?
[IF]: I worked with Nicole Bucala (VP & GM of DataBee) at a previous company, and during the summer, I saw her post on LinkedIn about DataBee and was intrigued. As I was in the process of looking for a new role, I reached out to learn more.
There were ultimately three things that attracted me to the role:
The company: DataBee is part of Comcast Technology Solutions, whose parent company, Comcast, is one of the largest companies in the US. Having worked for small cybersecurity startups in the past, I understand the challenges of building a brand from scratch; here, I have the support of the Comcast brand and an amazing internal use case that validates how well equipped we are to solve the enterprise security data problem.
The product: Listening to the story of how Noopur Davis, Comcast’s CISO, built a security data fabric internally to help her answer the very difficult questions asked by Comcast’s board and regulators, whilst at the same time saving money and improving the company’s security posture, was very compelling.
The people: It has always been very important for me to work with people I like and respect. Nicole has done an amazing job of hiring some of the best and brightest talent in the industry. Throughout my interviews, I was impressed by the quality of the people I met and was excited by the prospect of so many successful people all working together.
[LC]: What is it about DataBee the product that excites you and that you think will resonate with enterprises in EMEA?
[IF]: What is resonating is our Continuous Controls Monitoring (CCM) offering — the ability to see real-time dashboards relating to the company’s security, risk, and compliance posture. Every single company has different data sources and security metrics that they need to monitor, and our CCM capability provides both standard and customizable dashboards that make it possible for an organization to track their specific security controls and compliance requirements.
In EMEA in particular, keeping up with regulations is such a challenge, and every industry and country seems to have different cybersecurity-related regulations they need to adhere to. In the UK, the Network and Information Systems (NIS) regulations are the main framework to look out for. Telecommunications companies, or Telcos, are grappling with the UK’s Telecommunications (Security) Act, whose requirements need to be in place for Tier 1 Telcos by March 2024. PCI DSS 4.0, which applies globally to any organization that processes payment cards, is another one to review. DataBee can ease the challenge of keeping up with these and other regulations by giving CISOs and GRC teams an easier way to continuously monitor their controls and keep ahead of their annual audits.
[LC]: Tell us a bit about your background in cybersecurity — you’ve been in this industry for most, if not all, of your career, correct? Share with us some of your experiences and what’s kept you hooked on the security space.
[IF]: I grew up in South Africa and graduated from university in Durban, but my professional cybersecurity experience began when I went to live in Israel. There, I worked for a company called Aladdin, which, at that time, was the market leader in combatting software piracy.
Eventually I moved to the UK, and other roles there included:
Business development manager for Softwrap, which had a very innovative secure envelope solution for helping to securely distribute software online (long before there were App Stores).
Progressive channel management and leadership roles at ScanSafe, a pioneer in SaaS web security. I was one of the first employees and helped the company grow and expand until it was sold to Cisco in 2009 for $183 million. I stayed with Cisco for another four years and was promoted to lead the company’s security business in the UK, selling its full security portfolio (firewalls, IDS, email security, web security, VPN, identity services, etc.).
VP of sales EMEA and VP of sales Asia Pacific for Wandera, where I was reunited with the original founders of ScanSafe, who were focused this time on enterprise mobility security and data management.
VP of sales EMEA for Illusive Networks, an Israeli deception security company. I started as their first EMEA hire, helping them grow and expand the business there.
Senior director of global channel sales for Nozomi Networks (an OT security company), where I led their global channel business and was purely focused on developing relationships with hundreds of partners around the world.
Working in the security industry is not just about selling products; you actually feel as if you are positively contributing to society by helping to keep companies and people safe from bad actors. I guess that’s what has really kept me interested in this space and why I believe I’ll probably stay in cybersecurity until the end of my career.
[LC]: What are some of the key data and/or cybersecurity challenges that are unique to enterprises in EMEA?
[IF]: One of the most interesting challenges I have seen in the UK specifically is the very short tenure of CISOs. I recently read a Forrester report, which highlighted that the average tenure for a UK CISO (working for the FTSE 100 companies) was 2.8 years. This means that they are not likely able to invest in long-term projects, but rather focus on short-term wins before they move on to a new challenge. It’s therefore critical to understand where the CISO is in their tenure as a key success factor, which may make or break a potential sale.
[LC]: Looking ahead into 2024, what are some of your security and/or security business predictions for the year ahead in EMEA? Any threats/challenges/opportunities you see on the near-term horizon?
[IF]: No doubt AI and ML is going to play a huge role in 2024 and beyond. Ensuring these technologies are used correctly and morally is going to be a huge challenge as bad actors and malicious hackers can also use them to attack enterprises and states.
The other challenge I see is finding skilled cybersecurity professionals who are available to help implement policies and keep companies safe. As reported by ISC2 in their 2023 Cybersecurity Workforce Study, there are roughly four million empty cybersecurity positions in companies and organizations globally. The people who work in the industry need to find a way to ensure children at school learn about the importance of these jobs and are encouraged to consider careers in this field.
[LC]: Will you be working to build channel partnerships in EMEA? If so, what types of partners are you hoping to create relationships with?
[IF]: Yes, DataBee is a channel-friendly organization, and we love working with our partners to help our customers achieve fast time-to-value. The only way to really grow and scale our business quickly throughout EMEA is to embrace the channel. I believe, however, that it is critical to focus on a few key strategic partners; it’s not quantity, it’s quality, and ensuring that there’s a good overlap of our target customers and the customers served by our partners.
I’ve already started discussions with a few strategic partners who have expertise in this space and see the value of what DataBee is bringing to market. The most critical element from my perspective is finding partners who can help deliver the professional services that will ensure a successful DataBee implementation and faster time-to-value.
[LC]: Who is resonating with the DataBee story and value proposition right now?
[IF]: Initially, anyone involved in security, risk, and compliance management. Our CCM solution is ideal for CISOs and GRC and compliance executives because of the real-time reporting it can provide, and it’s great for GRC analysts and audit teams who need that ‘single source of truth’ — connected and enriched data.
We have an aggressive product roadmap for the DataBee data fabric platform that we hope will make it very relevant and important to other cybersecurity, privacy, data management, and business intelligence roles early in 2024 and beyond. Within Comcast, the data fabric platform that DataBee is based on is being used by everyone from the Comcast CISO and CTO to GRC analysts, data scientists, data engineers, threat hunters, security analysts, and more.
[LC]: Where are you based, and what’s the easiest way for people to reach you?
[IF]: I’m based in London. It is best to reach me via email ivan_foreman@comcast.com or via LinkedIn.
Additional information
We believe that DataBee is truly unique, providing a comprehensive approach to bringing together security and enterprise IT data in a way that improves an organization’s security, risk, and compliance posture.
As Comcast CISO Noopur Davis has said, “Data is the currency of the 21st century — it helps you examine the past, react to the present, and predict the future.” It is a universal currency that all organizations should be able to use, whether for deep security insights and improved protection, or to propel the business forward with a better understanding of customer needs.
Learn how your organization can take full advantage of its security data by requesting a personalized demo of DataBee or reaching out to Ivan. He can’t wait to talk to you.
Read More
Putting your business data to work: threat hunting edition
Detectives, bounty hunters, investigative reporters, threat hunters. They all share something in common: When they’re hot on a scent, they’re going to follow it. In the world of cybersecurity, threat hunters can use artifacts left behind by a bad actor or even a general hunch to start an investigation. Threat hunting, as a practice, is a proactive approach to finding and isolating cyberthreats that evade conventional threat detection methods.
Today’s threat hunters are technologists. They are using an arsenal of tools and triaging alerts to pinpoint nefarious behaviors. However, technology can also be a barrier. Pivoting between tools, deciphering noisy datasets and duplicative fields, assessing true positives from alerts, and waiting to access cold data repositories can slow down hunts during critical events.
Threat hunters that I have worked with here at Comcast and at other organizations have shared that data, when enriched and connected, can be a crucial advantage. Data helps paint a picture about users, devices, and systems, and the expansive lens enables threat hunters to have a more accurate investigation and response plan. However, data is expensive to store long term, and large, disparate datasets can be overwhelming to sift through to find threat signals.
Threat hunting in the AI age
The broad adoption of artificial intelligence (AI) and machine learning (ML) opens the door to data-centric threat hunting, where a new generation of hunters can execute more comprehensive and investigative hunts based on the continuous, automated review of massive data. Threat hunters can collaborate with data engineers and analysts to build AI/ML models that can quickly and intelligently inspect millions of files and datasets with the accuracy, scale, and pace that manual efforts cannot match.
When companies are generating terabytes and petabytes of data every day, using AI/ML can help security teams:
Collect data from multiple security tools and aggregate it with non-security insights.
Scrutinize network traffic data and logs for indicators of compromise.
Detect unknown threats or stealthy attacks, including the exploitation of zero-day vulnerabilities and lateral movement activities.
Alert on multiple failed login attempts or brute-force activity and identify unauthorized access (a minimal sketch of this kind of detection follows this list).
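To make that last point concrete, here is a small illustration of how an analytics pipeline might flag brute-force-style login activity in normalized authentication data. The field names, time window, and threshold are hypothetical and would differ in a real environment; this only shows the kind of logic that AI/ML-assisted tooling can automate at scale.

```python
# Minimal sketch: flag possible brute-force activity in normalized authentication events.
# Field names, the 10-minute window, and the threshold are illustrative assumptions.
import pandas as pd

# Hypothetical normalized authentication events (e.g., exported from a security data lake).
events = pd.DataFrame([
    {"time": "2024-01-01T10:00:05", "user": "alice", "src_ip": "10.0.0.8", "status": "failure"},
    {"time": "2024-01-01T10:00:09", "user": "alice", "src_ip": "10.0.0.8", "status": "failure"},
    {"time": "2024-01-01T10:00:14", "user": "alice", "src_ip": "10.0.0.8", "status": "failure"},
    {"time": "2024-01-01T10:00:21", "user": "alice", "src_ip": "10.0.0.8", "status": "success"},
    {"time": "2024-01-01T11:15:00", "user": "bob",   "src_ip": "10.0.0.9", "status": "success"},
])
events["time"] = pd.to_datetime(events["time"])

# Count failed logins per user and source IP in 10-minute windows.
failures = (
    events[events["status"] == "failure"]
    .set_index("time")
    .groupby(["user", "src_ip"])
    .resample("10min")
    .size()
    .rename("failed_attempts")
    .reset_index()
)

# Surface windows that exceed a (hypothetical) threshold for a threat hunter to review.
ALERT_THRESHOLD = 3
print(failures[failures["failed_attempts"] >= ALERT_THRESHOLD])
```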
At Comcast, having clean, integrated data allows AI/ML to improve operational efficiency and fidelity. For the cybersecurity team, operationalizing AI/ML to scrub large datasets led to a 200% reduction in false positives; for the IT team, AI/ML highlighted single-use and point solutions that could be reduced or eliminated, leading to a $10 million cost avoidance.
Creating more effective threat hunting programs with your data
Threat hunters want access to data and logs — the more the merrier. This is because clever malware developers delete or modify artifacts, such as clearing the Windows Event Log or removing files, to evade detection. Fortunately for us, threat hunters know packets don't lie.
Analyzing all that data can quickly become a challenging task. DataBee® takes on the security data problem early in the data pipeline to give data engineers and security analysts a single source of truth with cleaner, enriched time-series datasets that can accelerate AI operations. This enables them to utilize their data to build AI/ML models that can not only automate and augment the review of data but also achieve:
Speed and scale: Security data from different tools that have duplicative information and no common schemas can now be analyzed quickly and at scale. DataBee parses and deduplicates multiple datasets before analysis. This gives data engineers clean data to build effective AI/ML models directly sourced from the business, increasing visibility and early detection across the threat landscape.
Business context: Threat hunting needs more than just security data. Security events without business context require hours of event triaging and prioritization. DataBee weaves security data with business context, including org chart data and asset owner details, so data engineers and threat hunters can create more accurate models and queries. For Comcast, employing this model has led to more informed decision-making and fewer false positives. (A short sketch of this kind of enrichment follows this list.)
Data and cost optimization: The time between when a security event is detected and when a bad actor gains access to the environment may be days, months, or years. This makes data retention important — but expensive. Traditional analytical methods and SIEMs put tremendous pressure on CIO and CISO budgets. DataBee optimizes data, retaining its quality and integrity, so it can be stored long term and cost-effectively in a data lake. Data is highly accessible, allowing threat hunters to conduct multiple compute-intensive queries on demand that can better protect their organization.
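To make the "speed and scale" and "business context" points above concrete, here is a minimal sketch, with entirely hypothetical field names and values, of deduplicating events and joining them with asset-owner details so that a hunter or a model works from one enriched record per event rather than noisy, duplicative rows.

```python
# Minimal sketch: deduplicate security events and enrich them with business context.
# All field names and values are hypothetical; a real pipeline would be schema-driven.
import pandas as pd

raw_events = pd.DataFrame([
    {"event_id": "e1", "host": "web-01", "signature": "Suspicious remote exec", "severity": 7},
    {"event_id": "e1", "host": "web-01", "signature": "Suspicious remote exec", "severity": 7},  # duplicate
    {"event_id": "e2", "host": "db-02",  "signature": "New admin account",      "severity": 9},
])

# Hypothetical asset inventory carrying owner and organizational details.
assets = pd.DataFrame([
    {"host": "web-01", "owner": "alice", "business_unit": "E-commerce", "criticality": "high"},
    {"host": "db-02",  "owner": "bob",   "business_unit": "Payments",   "criticality": "critical"},
])

# Deduplicate, then weave in business context so triage can be prioritized by impact.
enriched = (
    raw_events.drop_duplicates(subset="event_id")
              .merge(assets, on="host", how="left")
              .sort_values("severity", ascending=False)
)
print(enriched)
```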
Bad actors are evolving. They're using advanced methods and AI/ML to improve their success rates. But cybersecurity teams are smart. Advanced threat hunters are expanding beyond generic out-of-the-box detections and using AI/ML of their own to improve detection rates and operational efficiency. Plus, using AI/ML effectively also saves money by enabling threat hunting teams to scale, completing more hunts with the same resources in the same time frame.
Take your interest into practice and download the data-centric threat hunting guide that was created through interviews and insights shared by Comcast’s cybersecurity team.
Read More
Trick or threat: 5 tips to discovering — and thwarting — lateral movement with data
We know ghouls and ghosts aren’t the only things keeping you up this spooky season. Bad actors are getting smarter with their attacks, using tactics and techniques that baffle even the most seasoned cyber professionals.
Discovering — and thwarting — lateral movement can be particularly difficult because of disjointed but established software security tools that cannot always identify unwarranted access or privilege escalation. Many behaviors, like pivoting between computer systems, devices and applications, can appear as if they’re from a legitimate user, allowing bad actors to go undetected in environments.
Threat hunters are critical to exposing lateral movement activities. But much like hunting monsters in the dark, threat hunting using manual detection processes against large datasets is a scary task — one that is time-consuming and tedious. With the help of advanced tools like AI and machine learning (ML), hunters can analyze massive amounts of data quickly to pick up the faintest signals of nefarious activities. Data breach lifecycles at organizations that use some form of AI/ML in their practice have proven to be up to 108 days shorter than at organizations that do not.1
Best practices for using AI/ML to detect lateral movement
At the end of the day, your threat hunters can still have the advantage. No one knows your environment better than you do. By building AI/ML models fueled by data from your environment, your threat hunters can detect — and ultimately thwart — lateral movement before the bad actors escalate further in the cyber kill chain.
Models, processes, and procedures are often bespoke, but a few time-tested best practices can accelerate threat detections and response. For lateral movement, this might look like using data about your users, their assets, and their business tool access to identify activities that indicate data exfiltration and espionage. Let's take a look at these best practices in the context of a lateral movement use case:
Store as much relevant data as possible for as long as possible. Investigating and finding evidence of lateral movement may require analyzing months or years of data because adversaries can be present but undetected for days, months — or even years. Raw and processed data, which has been deduplicated and contextualized, should be stored in an accessible, cost-effective data storage repository for threat hunters to run their queries.
Create baselines based on business facts and historical actions. Data scientists who work with business data should collaborate with threat hunters to develop and define baselines based on the hypothesis for a given use case. Typically, this means describing the environment or situation 'right now' and searching for deviations that indicate malicious activity. Creating proper baselines requires expertise to know what attributes and data points to use and how to use them. For lateral movement, baselines should be based on factual and historical data reflecting business goals, past scenarios, hypotheses or triggers, and infrastructure conditions. Baselines created without context are meaningless. (A minimal baselining sketch follows this list.)
Use the data with the best tools. Even with AI/ML, human interaction and judgment are still required. But data analysis doesn’t happen by itself. Data is often compiled and aggregated in a data lake, only to be ignored or underutilized. SIEMs can provide short-term storage and analysis of security data, but when you are threat hunting, you need more than just noisy security data. To get the best of both worlds, data transformation needs to be performed early in the pipeline so threat hunters have clean, enriched data they can trust and tools they are familiar with.
Produce accurate, data-driven reports. Producing meaningful KPIs and reports helps executive sponsors find value in threat hunting activities and encourages ongoing program investment. KPIs also help validate the efficacy of hunts even if nothing is found. For example, investigating a suspected lateral movement breach may have found no bad actor activity; proper reporting underscores that the hunt was conducted soundly and validates the baselines and KPIs behind it.
Allocate a budget. Threat hunting can be an expensive and active cyber defense activity. When a trail is hot, hunters want to follow it. It's important to allocate a budget for data storage, internal and outsourced resources, and multiple, compute-intensive queries. Creating a budget ensures that security teams have the resources they need when they need them most. "After the fact" prioritization once a breach or lateral movement has been detected will not only leave the organization at risk but will likely be a slow process or produce inaccurate findings. So, planning, as with any cybersecurity initiative, pays off.
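As a minimal illustration of the baselining practice above, the sketch below builds a per-user baseline of the hosts each account normally authenticates to and flags first-time access outside that baseline. The data shape and field names are hypothetical, and a production baseline would fold in far more context (time of day, peer groups, asset criticality, and so on).

```python
# Minimal sketch: baseline which hosts each user normally logs into, then flag deviations
# that could indicate lateral movement. Data and field names are illustrative only.
from collections import defaultdict

# Historical (assumed benign) authentication records: (user, host).
history = [
    ("alice", "web-01"), ("alice", "web-02"), ("alice", "web-01"),
    ("bob",   "db-02"),  ("bob",   "db-03"),
]

# Build the baseline: the set of hosts each user has touched historically.
baseline = defaultdict(set)
for user, host in history:
    baseline[user].add(host)

# New activity to evaluate against the baseline.
new_activity = [
    ("alice", "web-02"),   # within alice's baseline
    ("alice", "dc-01"),    # first-time access to a domain controller -> worth a look
    ("bob",   "hr-05"),    # first-time access outside bob's usual systems -> worth a look
]

for user, host in new_activity:
    if host not in baseline[user]:
        print(f"Possible lateral movement: {user} accessed {host} for the first time")
```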
Read More
Expert Insights: GRC and the Role of Data
Since I joined Comcast Technology Solutions (CTS) and the DataBee® team back in late March, I’ve been awed and challenged by how many different roles the DataBee data fabric platform is relevant to. Is it for security analysts? Threat hunters? Data scientists? GRC professionals? Yes, yes, yes and yes… essentially, DataBee is relevant to anyone in an organization who needs data to understand, protect and evolve the business.
Let’s get to know some of the amazing people who rely on data every day to do their jobs and help their organization be successful!
Governance, Risk & Compliance (GRC)
The function of GRC is critical – when it is correctly implemented, it is a business enabler and revenue-enhancer; when it is poorly managed or even non-existent, it can be a business inhibitor, leaving an organization vulnerable to compliance violations and increased cyberthreats.
GRC programs are a set of policies and technologies that align IT, privacy, and cybersecurity strategies with business objectives.
I recently had the good fortune of meeting Rob Rose, a Manager on the Cybersecurity and Privacy Compliance team here at Comcast, and I enjoyed my conversation with him so much that I immediately hit him up to be one of our spotlight experts.
Working to achieve a more secure privacy and cybersecurity risk posture
[LC] Rob, tell us about what you do as Manager, Cybersecurity and Privacy Compliance here at Comcast.
[RR] At the highest level, my role at Comcast is to help the company achieve a more secure privacy and cybersecurity risk posture. This takes shape as leading an initiative called ‘Controls Compliance Framework’ (CCF) which is broken down into two sister programs: ‘Security Controls Framework’ (SCF) and ‘Privacy Controls Framework’ (PCF). The team I lead creates a continuous controls monitoring (CCM) product that business units across Comcast can use to monitor their adherence to privacy and cybersecurity-related controls.
Creating and maintaining this product includes collaboration with multiple teams in the company, starting with the process owners of each control activity that helps to mitigate risk (e.g., the Corporate User Access Review team):
Collaboration with process owners: My team first works with the process owners to understand how the process is designed and should operate, and what actions the various Business Units across Comcast are expected to complete.
Document requirements: We then learn from process owners how and where they store the data to support their control (e.g., Oracle Databases, ServiceNow, etc.) and from there we document requirements to bring to our development team.
Report on privacy and security posture: Once our development team has ingested all the disparate data from multiple process owners and developed our product based on the requirements we documented, we bring this product to Business Units, with the goal of providing them a single pane of glass view into their overall privacy and security posture.
GRC Challenges
[LC] What would you say are the 3 biggest challenges faced by GRC and compliance experts?
[RR] There are a few key challenges that we run into as a GRC function, and fortunately, compliance data leveraged in the CCF program can address them.
Clean Data – the first issue is the cleanliness and usability of the data. As a GRC function, we rely on process owners throughout the company to provide us with data. Frequently, however, we see issues with both data cleanliness and ownership in the data that is provided to us. When we bring this data to Business Units, who work day in and day out with the assets they own, they often provide the feedback that there is something amiss in the data. This can be turned into a positive as we can bring this feedback back to the process owners, who can then clean up the data on their end, making the data quality better for the entire company.
Awareness – Oftentimes, when we alert Business Units of compliance actions that need their attention, we discover that they were unaware of these requirements. This gap creates 'knowledge transfer' work that we must do to inform Business Units of the need for the actions and the importance of those actions. This helps to increase the overall cybersecurity and privacy awareness of the Company.
Prioritization – Related to the previous point, since Business Units are often not aware of the privacy and security related actions they need to take, they have not planned the resource capacity into their roadmap to complete those activities. Helping to prioritize which actions are 'must do' right now, as opposed to which actions can wait, has been something we've been working on with our program. To help with this, we've been driving towards risk-based compliance, noting that while all systems and assets need to meet cybersecurity requirements, certain systems and assets are a higher priority to meet these requirements based on the types of data that the system utilizes.
The rewards of the job
[LC] What are some of the most rewarding aspects of your job?
[RR] We’ve had some real success stories over the past few quarters where Business Units have gone from a non-compliant state to a highly compliant state. This has come from a combination of presenting Business Units with insights through data so they are aware of where they have gaps, and from helping them understand what actions they need to take. Seeing a Business Unit make this transition from non-compliant to compliant is incredibly rewarding, as we know that we’ve helped make the company more secure.
Collaboration
[LC] What other teams or roles do you interact with the most as you go about your job day-to-day?
[RR] As GRC professionals, we're in a unique position where we interact with individuals at all levels of the business. This includes the first line of defense (Business Units), the second line of defense (process owners, Legal, other teams within Comcast Cybersecurity), and the third line of defense (Comcast Global Audit). We have the benefit of working with both very technical teams of developers and system architects, as well as very process-driven teams who define what requirements the company should be meeting. In addition, our team works with top-level executives from a reporting standpoint, as well as the front-line workers who are actually implementing changes to systems and applications to make the company more secure!
The role of data in GRC
[LC] How do you use data in your job, and what type of data do you rely on?
[RR] As a GRC professional, the ability to have clean data provided to us is the keystone to our success. The Controls Compliance Framework product my team and I work on is dependent on data coming in from disparate data sources so we can cleanse and aggregate it to provide meaningful insights to Business Units.
At the highest level, you can break down the types of data that we need into a few different categories:
Application data: What applications exist in the environment, who owns those applications, and what level of risk does the application present -- e.g., does the application use customer data, proprietary Comcast data, etc? Does the application go through User Access Reviews on a set frequency, and has it been assessed to confirm it was developed in a secure way?
Infrastructure: What underlying assets or infrastructure support those applications? What servers are used to run the application? Are they in the cloud or on-prem? What operating system is the server running? Is the server hardened, scanned for vulnerabilities, and does the server have the necessary endpoint agents on it (e.g., EDR)?
Vendor information: What vendors do we engage with as an aspect of our business, what data do we share with the vendor, and what assessments have been completed to confirm the vendor will handle that data securely?
For each of these types of data, we also need data to support the completion of the processes around that data. Have all assessments been done on the application, asset, vendor, and do they meet all the required controls?
As you can imagine, these data sources are all in different locations. One of the key aspects of the program we run is to pull all of this data into one location and provide it in a single pane of glass view for Business Units so they have one easy place to go to understand their risk posture.
The GRC questions that data helps answer
[LC] Can you give us an example of the kinds of questions you’re looking to answer [or problems you’re looking to solve] with data?
[RR] The biggest question I’m looking to answer as a risk professional is what the overall risk posture of the company is. Do we have a lot of unmitigated risk that could expose us to issues, or are we generally covered? This question is most easily answered with the data that was described above (the role of data in GRC).
GRC vs. Compliance
[LC] How would you describe the difference between GRC and compliance? Or are they one-and-the-same?
[RR] The ‘C’ in ‘GRC’ stands for Compliance. Compliance is a huge piece of what I do as a GRC professional. Making sure that the company is complying with the necessary standards and policies (compliance) and lessening the exposure we have as a company and the impact of that exposure (risk) is pretty much what my entire day is filled with! We then present these insights to executives so they can be aware and take action to adhere to standards and policies as needed (governance).
A random data point about our GRC expert
[LC] Rob, if you could go back in time and meet any historical figure, who would it be and why?
[RR] Wow, this is a tough one that I think could change day by day. I recently took a trip to Rome, and we visited the Sistine Chapel while we were there. I was in such awe of the work that Michelangelo did. I think I'd like to meet someone like Michelangelo, as the arts are an area that is so foreign to me and how I think. I'd love to pick his brain to find out whether he knew what he was creating would be revered and visited by millions of people for centuries to come, to understand his artistic thought process, and to learn more about his life.
Parting words of wisdom
[LC] Any other words of wisdom to share?
[RR] To my other GRC colleagues out there, I am sure you run into the same struggles of trying to get Business Units to comply with internal company policies and standards. The implementation of data-driven tools has made it significantly easier to assist BUs in becoming compliant. The feedback that we've gotten is that utilizing data, and putting it in a simple and straightforward tool, has helped to 'make compliance easy'. Let's all keep striving to find ways to make it as easy as possible for people whose main job is not compliance to fit compliance into their work!
DataBee for GRC
As Rob mentions, the Comcast GRC team is charged with pulling all kinds of data – application, infrastructure and vendor data -- into one location, essentially providing a “single pane of glass view” to Business Units, making it easy for these Business Units to see and understand their risk posture. The GRC team uses the internally developed data fabric platform that DataBee is based on to do this, and its use has helped to drive those improved compliance rates that Rob mentioned, as well as other improvements in Comcast’s overall compliance and security posture. (An aside – I think that’s pretty cool.)
DataBee v1.5 was recently launched, and with it a continuous controls monitoring (CCM) capability that provides deep insights into the performance of controls across the organization, identifying control gaps and offering actionable remediation guidance. Check out the CCM Solution Brief to learn more.
Thanks to Rob for a great interview!
Read More
Terrestrial Distribution for MVPDs?
Allison Olien, Vice President and General Manager for Comcast Technology Solutions (CTS), is witnessing firsthand how the changes, challenges, and new opportunities for MVPDs and content providers are evolving the industry faster than ever. She took some time to reflect on how the 2020’s have impacted companies, and on how new technological approaches are opening new doors to growth and profitability.
The 2020s have already proven to be a decade of sweeping change for MVPDs and cable operators around the U.S., starting with the repurposing of C-band spectrum for 5G services. We’re three years in – what’s it like now?
(AO) Well, the C-band reallocation had a definite material impact for satellite-based services; I mean, how could it not? Many of them had to physically transition to a new satellite - for example, our services that had operated from the SES-11 satellite had to migrate to the SES-21 satellite in response. That said, it wasn’t the only thing changing for providers; it was more of a catalyst for a hard look at a) how technology was evolving, b) how device improvements, high-def video and subscription models were changing the competitive landscape, and c) how to compete more effectively in a more dynamic market.
Are these the reasons why CTS recently introduced a new terrestrial distribution model?
(AO) Precisely. MVPDs don't operate in a bubble; they've paid attention to the ways the competitive landscape has changed – changes that have happened on both sides of the screen. For CTS, these are partners we've worked with for a long, long time. Satellite delivery is trusted and reliable, but it has limitations and it takes a lot of equipment – and, most importantly, it used to be the only option for many of them. Managed Terrestrial Distribution, or MTD, replaces most (or all) satellite feeds to a cable plant with broadband IP networking. MTD uses a system's existing internet connection and doesn't require a dedicated point-to-point connection to any specific PoP. This not only liberates MVPDs from the limitations of a fixed transponder bandwidth, but also paves the way for the entire content-to-consumer pathway to evolve into full IP delivery.
So, the net result essentially gives these businesses a way to improve services and attract more subscribers in two ways – the ability to offer more services, but also more HD content?
(AO) Ultimately, it's an opportunity for MVPDs to reimagine the way consumers can engage with their content and clear the way for an evolution to full IP delivery to customers. To your point, yes – with the managed terrestrial distribution we've rolled out, a distributor can effectively double the number of channels they offer and triple the amount of MPEG-4 HD content; but the actual hardware investment is drastically reduced as well. For our current customers as well as other providers looking for a new technology foundation to grow on, we think it's worth having a conversation about.
Click here to learn more about Managed Terrestrial Distribution.
Continue reading the rest of ICN Issue 10 - September 2023 here.
Read More
Making cybersecurity continuous controls monitoring (CCM) a reality with DataBee 1.5
Exciting innovations are a-buzz! Today, DataBee® v1.5 is generally available, featuring a host of new continuous controls monitoring (CCM) capabilities on top of the DataBee security data fabric that put your data at the center for dynamic, detail-rich compliance metrics and reports.
As more businesses become digital-first, business leaders are leaning in and placing more importance on cyber-risk programs. In addition to pressure from regulators, internal auditors and KPIs are keeping security and risk management teams up at night. The lack of reporting capabilities that can show compliance trends over time, in real time, on a consistent data set has analysts scrambling to collect data and test controls for the latest audit instead of focusing on sustainable programs and insights.
Putting continuous in continuous controls monitoring
DataBee 1.5 introduces a data-centric approach to continuous controls monitoring (CCM) for the security, risk, and compliance data fabric platform. By focusing upstream on the data pipeline, DataBee weaves together security data sources with asset owner details and organizational hierarchy information, breaking down data siloes and adding valuable context to cyber-risk reports and metrics.
From executives to governance, risk, and compliance (GRC) analysts, DataBee delivers a dynamic and reliable single source of truth by connecting and enriching data insights to measure CCM outcomes. Comcast has experienced first-hand the security data fabric journey, and DataBee 1.5 brings to market the innovations from our internal tool – including feeds, dashboards, and visualizations – to your organization so you can scale your continuous controls services program.
In this example, Will Hollis, an Executive VP of ACME Studios, views the security posture of his organization using DataBee’s Executive KPI Dashboard.
Verifiable data trust
The robust platform features 14 pre-built CCM dashboards, aligned to the NIST Cybersecurity Framework, and the ability to self-define KPI values – or use DataBee-recommended values. Risk scores are populated using underlying data sources collected and enriched by DataBee. Users can see in detail where and how the data is used when hovering over the score. The completeness, accuracy, and timeliness of the dashboards build trust in CCM reporting and lead to accountability among business leaders. After all, data trust gives you wider adoption of your cyber-risk program throughout the business.
DataBee gives you insights into how the scores are provided
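For a sense of how a score like this can be traced back to underlying data, here is a simple sketch that derives one hypothetical KPI, EDR agent coverage per business unit, from an asset inventory. The field names, values, and target are illustrative only and are not drawn from DataBee's actual dashboards.

```python
# Minimal sketch: derive a control-coverage KPI (EDR agent coverage per business unit)
# from underlying asset data. Field names, values, and the target are hypothetical.
import pandas as pd

assets = pd.DataFrame([
    {"host": "web-01", "business_unit": "E-commerce", "edr_installed": True},
    {"host": "web-02", "business_unit": "E-commerce", "edr_installed": False},
    {"host": "db-01",  "business_unit": "Payments",   "edr_installed": True},
    {"host": "db-02",  "business_unit": "Payments",   "edr_installed": True},
])

coverage = (
    assets.groupby("business_unit")["edr_installed"]
          .mean()                      # fraction of assets with an EDR agent
          .mul(100)
          .round(1)
          .rename("edr_coverage_pct")
          .reset_index()
)

TARGET_PCT = 95.0  # hypothetical KPI target
coverage["meets_target"] = coverage["edr_coverage_pct"] >= TARGET_PCT
print(coverage)
```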
Operational efficiency
A benefit of having data at the center of your CCM program is that it streamlines engagement models with control owners. Previously, GRC teams had the tedious task of scanning a variety of data sources – nearly all of which they did not control – and holding fragmented conversations with different stakeholders. With DataBee, instead of hearing from your GRC teams infrequently and only during urgent events, there is a continuous feedback loop built on quality data and actionable insights. Another benefit is the ability to measure the effectiveness of cyber-risk investments and programs.
Proactive risk management
The ever-evolving threat landscape and regulatory constraints are a nightmare to deal with for any GRC team at any scale. DataBee’s CCM capabilities deliver deeper insights about risks and inefficiencies, providing recommendations for resolution and hierarchy information to proactively reach out to control and asset owners. In the screenshot below, users can drill down to the granular details from their vulnerability management dashboards and find solution recommendations to resolve issues quickly. Teams throughout the business can focus on closing gaps instead of finding them, enabling the business to remain in compliance with internal and external requirements.
Drill into the vulnerability management details to find out how to resolve issues.
Get started with DataBee
Continuous controls monitoring is a game changer for security transformation and outcomes. DataBee, from Comcast Technology Solutions, is thrilled to deliver data-centric CCM capabilities that scale for businesses of all sizes. Watch our DataBee 1.5 announcement to hear from Nicole Bucala, Yasmine Abdillahi, and Erin Hamm about the product journey. Want to get started right away? Email us at CTS-Cyber@comcast.com and check out our AWS Marketplace Listing.
Read More
Elevating Ad Management: 5 Strategies for Success
A successful advertising campaign is more than the sum of its parts. That’s the goal, right? Campaigns need to justify themselves by delivering measurable results. That said, there are a lot of moving parts to a modern campaign, and they don’t all move at the same speed. Even here in the 2020s, there are still manual processes in some workflows that have not been kissed by innovation (yes, spreadsheets still exist). Coordination and control across a complex ad ecosystem can be achieved, but the speed and accuracy needed from today’s advertising operations require a centralized, unified, fully automated approach. Imagine your advertising operations as a terminal, from where you move your content to destinations all over the globe, often in real time. Is there an easier way to achieve a better, faster, smarter ad creative process at scale?
Let’s take a look at some of the key elements that will enable advertisers to uplevel ad management in the second half of this decade (and beyond).
#1: The unified ad platform “really ties the room together”
Advertising is a dynamic exercise where time is almost always of the essence. Efficiency in workflow management makes or breaks the success of a campaign. One of the key strategies for streamlining workflows at scale is the adoption of a unified platform that can act as a centralized hub.
With a centralized architecture, advertisers can oversee the entire campaign lifecycle without switching between different interfaces. From final media buys to traffic instruction creation and asset delivery, previously disjointed tasks become a single cohesive process. Every step benefits from the integration of tools and data within a single ecosystem.
A unified platform also brings teams into harmony. Collaboration among team members is more organic, drawing teams, departments, and stakeholders into greater alignment. When everyone can access the same resources and insights, real-time sharing minimizes miscommunication and reduces errors. Think of it as having your whole team playing on the same field — response to change is more agile, assets can be optimized on the fly, and more resources can be focused on campaign effectiveness instead of just “completion.”
#2: Safeguarding content: Automated usage rights integration
Images, music, and other creative assets bring the magic — but each asset requires careful consideration of usage rights and permissions. Violating these rights is a red flag that can lead to legal complications and reputational damage, not to mention costly fines. A better, faster, and smarter ad creative process at scale must incorporate automated and integrated usage rights data.
Incorporating usage rights information into the ad creative workflow helps to ensure that every piece of content used is compliant with legal and contractual obligations. Automated systems can cross-reference assets against a database of usage rights, preventing the inadvertent use of content that hasn't been properly licensed. This not only mitigates legal risks but also saves time by eliminating manual rights checks. Ultimately, usage rights management is essential; it helps to ensure efficiency and process/policy adherence, elevating the consistency — and results — of the creative process.
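As a rough sketch of the kind of automated check described above, the example below cross-references a campaign's assets against a usage-rights record before trafficking. The asset IDs, rights fields, and rules are hypothetical and do not reflect any particular platform's implementation.

```python
# Minimal sketch: cross-reference creative assets against a usage-rights database before
# a campaign is trafficked. Asset IDs, rights fields, and the rules are hypothetical.
from datetime import date

# Hypothetical rights records keyed by asset ID.
usage_rights = {
    "img-001": {"licensed_regions": {"US", "CA"}, "expires": date(2025, 6, 30)},
    "trk-042": {"licensed_regions": {"US"},       "expires": date(2024, 1, 15)},
}

campaign = {
    "region": "CA",
    "launch_date": date(2024, 3, 1),
    "assets": ["img-001", "trk-042", "img-999"],
}

def check_asset(asset_id: str, region: str, launch: date) -> str:
    """Return 'ok' or a human-readable reason the asset cannot be used."""
    rights = usage_rights.get(asset_id)
    if rights is None:
        return "no rights record on file"
    if region not in rights["licensed_regions"]:
        return f"not licensed for region {region}"
    if launch > rights["expires"]:
        return "license expired before launch"
    return "ok"

for asset_id in campaign["assets"]:
    print(asset_id, "->", check_asset(asset_id, campaign["region"], campaign["launch_date"]))
```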
#3: Ensuring consistency across destinations at scale
With centralized command-and-control, advertisers can more effectively manage creative distribution across a staggering number of media destinations. One of the biggest benefits to this approach is that more focus can be applied to optimizing results and the refinement of campaign components to maximize impact at the screen level. Advertisers can improve outcomes and adjust objectives while still adhering to brand guidelines across destinations.
Speaking of brand guidelines, enter the automation of industry-standard identification. Automating the use of these unique identifiers streamlines tracking and distribution of creative assets. Standardized identifiers act as "digital passports" for creative elements, facilitating their seamless movement across platforms, regions, and languages. Advertisers can leverage this additional layer of automation to further reinforce data consistency and accuracy throughout the global delivery process. This consistency not only bolsters brand integrity but also expedites the localization of content, as each asset's specifications and requirements are readily accessible.
#4: Drawing a straighter, shorter line between ad creative and ROI
The goal “make the most of every ad dollar across every media experience” is a common one, whether you’re on the advertising side or you’re a media destination looking to attract more viewers and more ad revenue.
A platform that combines real-time creative conditioning and asset management, along with APIs to major ad decisioning engines, offers content owners and distributors a path to addressability and targeting across TV, digital, and social channels, and again, reduces the amount of time required to place campaigns from days to minutes, so quality can be reestablished as job one.
For instance, when the in-house advertising team at Lowe’s, the home improvement chain, wanted to expand audience reach and reap the benefits of top-quality content across platforms and ad ecosystems, they knew they would have to find innovative ways to execute campaigns with increased efficiency across both spot creation and distribution.
#5: AdFusion: Solving advertising’s biggest challenges at scale
Achieving a better, faster, and smarter ad creative process at scale demands a paradigm shift. From streamlined workflows and automated usage rights management to the automation of standardized creative data through industry IDs, advertisers need a platform that does it all, empowering them to overcome the logistical and geographical complexities of global mobile media consumption.
This is the challenge that Comcast Technology Solutions' AdFusion™ was built to answer. AdFusion is a unified platform that centralizes data and automates processes upstream and downstream. The platform is the result of Comcast’s dedicated research and investment and ensures consistency and accuracy across the entire ad process, making it easier to find and place the right creative, ensure that usage rights are in compliance, and track campaigns across broadcast, digital, and radio.
Justin Morgan, the head of product for AdFusion, expressed his excitement about the platform's potential: "In an industry that's constantly accelerating, AdFusion provides a technology approach that evolves and scales to meet companies' needs. It saves time and resources while enhancing data accuracy, ultimately helping our clients achieve better results in their campaigns."
Discover how AdFusion can support you in your quest to make the ad lifecycle better, faster, smarter at scale here.
Read More
Comcast (DataBee) at Black Hat? Yes!
The DataBee® team can't help but have a little fun with the fact that Comcast is not exactly one of the first companies you think of when you think "cybersecurity". Comcast has attended Black Hat in the past, but this is the first time we are debuting a cybersecurity solution for large enterprises – the DataBee security data fabric platform, which is poised to transform, for the better, the way enterprises currently collect, correlate and enrich security and compliance data.
This was the 26th year of Black Hat USA, but despite the number of security vendors serving the market – a reported 3,500 in the US alone, 300 of which were on the show floor – "security data chaos" (a term we love to use because it's such an accurate description) remains a very real and difficult problem. Our discussions with booth visitors validated that it's still very labor- and cost-intensive to bring together the security data teams need to understand the threats that might be imminent or already wreaking havoc. When we would tell the DataBee story, there was a lot of head nodding.
From booth discussions to participation in a major industry announcement and the Dark Reading News Desk, the DataBee team took advantage of being at Black Hat to raise awareness of the security data problem and how we’re uniquely addressing it. A few highlights include:
The Open Cybersecurity Schema Framework (OCSF) announcement
On Tuesday, August 8, DataBee was included in the announcement, OCSF Celebrates First Anniversary with the Launch of a New Open Data Schema:
The Open Cybersecurity Schema Framework (OCSF), an open-source project established to remove security data silos and standardize event formats across vendors and applications, announced today the general availability of its vendor-agnostic security schema. OCSF delivers an open and extensible framework that organizations can integrate into any environment, application or solution to complement existing security standards and processes. Security solutions that utilize the OCSF schema produce data in the same consistent format, so security teams can save time and effort on normalizing the data and get to analyzing it sooner, accelerating time-to-detection.
OCSF is a schema that DataBee has standardized on to make data inherently more usable to a security analyst. It also enables out-of-the-box relationships and correlations within a customer’s preferred visualization tool, such as Power BI or Tableau. (For more on this, check out the DataBee product sheet.)
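To give a feel for what normalization into a common schema looks like in practice, here is a simplified sketch that maps a vendor-specific login record into an OCSF-style event. The output field names approximate the framework's conventions rather than reproducing the exact schema, and the mapping function itself is hypothetical.

```python
# Simplified sketch: map a vendor-specific login record to an OCSF-style event.
# The output fields approximate OCSF conventions and are not an exact schema rendering.
from datetime import datetime, timezone

vendor_record = {
    "EventTime": "2024-01-01 10:00:05",
    "UserName": "alice",
    "SourceIP": "10.0.0.8",
    "Result": "FAILED",
}

def to_ocsf_style(record: dict) -> dict:
    """Normalize a proprietary log record into a consistent, OCSF-inspired shape."""
    ts = datetime.strptime(record["EventTime"], "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc)
    return {
        "class_name": "Authentication",                 # event class
        "activity_name": "Logon",                       # what happened
        "time": int(ts.timestamp() * 1000),             # epoch milliseconds
        "status": "Failure" if record["Result"] == "FAILED" else "Success",
        "user": {"name": record["UserName"]},
        "src_endpoint": {"ip": record["SourceIP"]},
    }

print(to_ocsf_style(vendor_record))
```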
Matt Tharp, who leads field architecture for DataBee, has contributed to the OCSF framework and was quoted in the announcement alongside leaders from Splunk, AWS and IBM, among others. Coverage of the announcement included this piece in Forbes.
Dark Reading News Desk
At the Dark Reading News Desk, Matt was joined by Noopur Davis, EVP and Chief Information Security & Product Privacy Officer at Comcast, for a great discussion with contributing editor Terry Sweeney on the topic of the big data challenge in security. Noopur and her cybersecurity team developed the security data fabric platform that DataBee is based on, and Matt—as an architect of DataBee—is part of the team bringing the commercial solution to market.
They discussed topics including: the challenge that big data creates for security teams; how Comcast has gone about addressing this issue; what a security data fabric is and how this approach differs from other solutions such as security information and event management (SIEM) systems; where and how a security data fabric and a data lake intersect; and what the customer response to DataBee has been so far.
The video of this discussion is a great way to understand DataBee’s origin story and the very real benefits that Comcast has gotten from building and using a security data fabric platform. Following in the internal solution’s footsteps, DataBee—unlike other security products—is designed to handle environments on the scale of large enterprises like Comcast.
DataBee in a minute and 39 seconds
The concept of a security data fabric platform is new and it’s a little on the complex side. So leading up to the Black Hat show, the DataBee team created an animated “explainer” video that brings to life what DataBee is, how it works and the key benefits it brings to different roles:
GRC teams can validate security controls and address non-compliance
Data teams can accelerate AI initiatives and unlock business insights
Security teams can quickly discover and stop threats
If you were at Black Hat and need a refresher, or if you’re learning about DataBee for the first time, this short video provides a great high-level introduction.
While DataBee has other use cases besides security, this is a market with a critical need for a better way to manage all of the data that's relevant to understanding an organization's real security, risk and compliance posture.
Will we be back at Black Hat in 2024? You betcha.
In the meantime, learn more or schedule a customized demo of DataBee today.
Additional resources:
Read Noopur’s blog It’s Time to Bring Digital Transformation to Cybersecurity
See how DataBee can be used for continuous controls assurance
Check out the DataBee website
Read More
It’s Time to Bring Digital Transformation to Cybersecurity
Recently, I’ve been thinking a lot about cybersecurity in the age of digital transformation.
As enterprises have implemented digital transformation initiatives, one of the great—albeit challenging—outcomes has been data: tons and tons of data, often referred to as "big data". Storing, managing and making all of that data accessible to many different users can be expensive and non-trivial, but all that big data is gold. I know first-hand how valuable big data is for understanding the health of the business; the insights provided by all this data enable an enterprise to continually adapt as needed to meet customer needs, to remain competitive, and to innovate.
Security has been left behind
Security has largely been left behind when it comes to digital transformation. While our counterparts in other areas of the business are using data lakes and mining rich data sets for actionable intelligence, security leaders and teams are still having to work way too hard to piece together a comprehensive view of threats across the organization. Data is the currency of the 21st century – it helps you examine the past, react to the present, and predict the future. Yet too many security products are producing too much security data in silos; it's difficult, at best, to bring all of this data together for a unified view of what's really happening. Because of the constantly changing threat landscape—exacerbated by everything from the global pandemic to the Russia/Ukraine war to new technology developments such as generative AI (which can also be used for good)—new security tools and capabilities to address the latest threats just keep getting added to the mix, like band-aids on new wounds.
If you do a search for “the average number of security products in the enterprise security stack”, you’ll get answers that range from 45, to 76, to 130 – the number is large and the tools are many. (Tempting as it is, I won’t share how many externally developed security tools we’re using in the Comcast environment.) There are products for data protection, risk and compliance, identity management, application security, security operations, network, endpoint and data center security, cloud security, IoT and more. (While a few years old, the Optiv cybersecurity technology map provides a glimpse at how big and daunting this space is.) A security information and event management (SIEM) solution can help by collecting and analyzing the log and event data generated by many of these security tools. SIEMs are wonderful and provide essential functions, but they are expensive, not ideal for simultaneous, parallel compute, and do not really “set the data free” for long term storage, elastic expansion, and use by multiple personas.
This is the age of digital transformation for cybersecurity
“Digitalisation is not, as is commonly suggested, simply the implementation of more technology systems. A genuine digital transformation project involves fundamentally rethinking business models and processes, rather than tinkering with or enhancing traditional methods.” (ZDNet)
No more security data silos… it’s time for us to apply the tenets of digital transformation to security. We tackled that challenge here at Comcast and the results have been impressive. With a workforce of over 150,000, tens of millions of customers, and critical infrastructure at scale, we have a lot to secure. We also have millions of sensors deployed through our ecosystem, providing potentially rich insights. As digital transformation demands, we took a fresh look at our cybersecurity program and came up with a new approach to consolidating, analyzing and managing our security data that has resulted in millions of dollars in cost savings and, even more importantly, the ability to bring together vast amounts of clean, actionable and easy-to-use security data. And not just security data – we enrich security data with lots of other enterprise data to enable even deeper insights.
Applying the data fabric approach to security
How did we do it? We built on the idea of a data fabric – “an emerging data management design for attaining flexible, reusable and augmented data integration pipelines, services and semantics.” The outcome is a cloud-native security data fabric that has integrated security data from across our security stack and millions of sensors, enriched by enterprise data, enabling us to cost-effectively store over 10 petabytes of data with hot retention for over a year and allowing us to provide a ‘single source of truth’ to all of the functions that need access to this data for security, risk and compliance management.
“By 2024, data fabric deployments will quadruple efficiency in data utilization, while cutting human-driven data management tasks in half.”
Source: Gartner® e-book, Understand the Role of Data Fabric, Guide 4 of 5, 2022
Comcast's security data fabric solution ingests data from multiple feeds, then aggregates, compresses, standardizes, enriches, correlates, and normalizes that data before transferring full time-series datasets to our security data lake built on Snowflake. Once that enriched data is available, the use cases are endless: continuous controls assurance, threat hunting, new detections, understanding and baselining behaviors, useful risk models, asset discovery, AI/ML models, and much more. The questions we dare to ask ourselves become more audacious each day.
My dream was to have vast amounts of relevant security data easily actionable within minutes and hours—not days or weeks—and we’ve achieved that using a security data fabric. We’re able to do continual “health checks” on our business and security posture and adapt quickly as threats and business conditions change. Security is a journey, and we are always looking for ways to improve – our security data fabric helps us in that journey. And in doing so, it has made us a part of the digital transformation journey of our entire corporation.
Fast-track the transformation of your security program
The security data fabric that has proven so beneficial to Comcast’s cybersecurity program is being commercialized and offered to large enterprises in a solution we call DataBee®, available through Comcast Technology Solutions. If you’re interested in taking a fresh look at your cybersecurity program with an eye towards digital transformation, check out DataBee and consider using a security data fabric to deliver the big data insights you need to stay ahead of the ever-evolving threat and compliance landscape.
GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.
Read More
DataBee is growing fast – join us!
Let’s get right to the point: we’re growing fast and looking for great minds to come with us.
In April, CTS launched DataBee® – Comcast's innovative cloud-native security, risk and compliance data fabric platform. The original concept of DataBee was designed by Comcast's CISO organization to gain more security insights using data generated throughout the business and to validate the efficacy of our security programs. And we're excited to share the benefits of DataBee with other organizations. As Nicole Bucala, our VP and GM of Cybersecurity, said in this blog, DataBee "is solving a problem I'd seen vendors try to address, unsuccessfully, for years: the desire to centralize security, compliance, and business data neatly, sorted and combined elegantly in a single place, from which insights could be derived and actions to be taken."
Since launch, not only has DataBee gotten a lot of buzz (pun intended), but we continue to identify more use cases that can benefit from it. As a result, Comcast is investing heavily into the future of DataBee, and we’re inviting you to explore a place on our expanding team.
Comcast: a career – and company – you can be proud of
Comcast Technology Solutions is a fantastic place for bright minds and forward-thinkers looking to innovate and evolve with a company they can be proud of. As part of Comcast NBCUniversal, you’ll be joining one of the strongest, most forward-thinking companies in the world, with a multitude of ways to collaborate and evolve your own career path. We’re not just resting on our laurels as a Fortune 40 company – Comcast NBCUniversal has a proud heritage of employee satisfaction, environmental action, and community support:
93% of our employees rate us as a Great Place to Work.
2023 is our tenth consecutive year on the Points of Light Civic 50 list of community-minded companies.
DiversityInc. has recognized us as a Top 20 company for inclusion.
LinkedIn ranks us as #13 in their list of top companies to work for.
Open positions: just a start
Creative thinkers and great “people people” are expanding DataBee’s reach across complex industries that need a better way to not only protect themselves, but to do more with their data. Our engineering and sales teams are thriving with professionals who are building the future of data.
Click here to check out our current open positions.
Strategic Account Executive (Virtual)
Cybersecurity Field GRC Architect (Colorado, Virtual)
Cybersecurity Software Developer (Virtual)
Principal DevOps Engineer (Virtual)
Principal DevOps Engineer (Virtual)
UI Full Stack Developer (Virtual)
UI Full Stack Developer (Virtual)
Cybersecurity Platform Engineer (Virtual)
Sr. Manager Development Operations (Virtual)
Data Solutions Engineer (Virtual)
Cybersecurity Principal SaaS Architect (Virtual)
Cybersecurity Data Scientist Manager (Virtual)
Cybersecurity Data Analysis Manager (Virtual)
Cybersecurity Dataflow Software Engineering Manager (Virtual)
Senior Director Product Manager, Data Fabric Platform (Virtual)
Director Product Manager, Data Fabric (Virtual)
Please don’t hesitate to reach out to me personally on LinkedIn – we can talk about DataBee’s plans, your career goals, and where we might find a place to work together. There’s a lot more to come; DataBee is just starting to take flight.
Read More
Introducing DataBee: Sweetening the Security, Risk and Compliance Challenges of the Large Enterprise
Today, the Comcast Technology Solutions (CTS) cybersecurity business unit announced DataBee®, a cloud native security, risk and compliance data fabric platform. DataBee marks the first “home grown” product from this business unit and – like the other great products and platforms offered across the CTS suite – brings to market a solution originally developed for Comcast’s own use.
A security data fabric built by security professionals for security professionals
At its essence, DataBee weaves together disparate security data from across your technology stack into a single fabric where it is standardized, sharable, and searchable for analyses, monitoring, and reporting at scale. While the concept of a data fabric has been around for some time, you may not have yet come across a “security data fabric.” That’s because it’s time to bring security into the data fabric equation.
DataBee was inspired by Comcast's internal security and compliance teams. The proliferation of cybersecurity tools and the voluminous amounts of data they generate made it difficult to combine that data into a unified view, creating silos that were costly to store and analyze. In much the same way a data fabric is used for streamlining access to and sharing data in distributed data environments, the DataBee security data fabric combines data sources, data sets, and controls from various security tools to bring security data into the organization's global data strategy.
After data is taken in from all the various feeds, DataBee aggregates, compresses, standardizes, enriches, correlates and normalizes it before transferring a full historical, time-series dataset to a data lake where data is stored. Enter Snowflake…
DataBee delivers a security data fabric for customers on the Snowflake Data Cloud
With Comcast Technology Solutions’ launch of DataBee, we’re proud to announce a strategic partnership with Snowflake, the Data Cloud company. DataBee is integrated with Snowflake, enabling customers to quickly and easily connect DataBee to their Snowflake instance where data is stored and processed.
The unique architecture of the Snowflake platform separates “compute” from “storage”, enabling organizations to scale up or down as needed, and store and analyze large volumes of data in a cost-effective way. This not only reduces costs but also provides flexibility, speed, and scalability, making it an ideal choice for storing security data.
After DataBee parses, flattens, and normalizes data for analysis, Snowflake's platform is able to store substantial volumes of data for an extended period of time—historically a big challenge for cybersecurity solution providers—while driving down costs and maintaining high performance. The robust analytics enable teams across the organization to leverage the same dataset for high-fidelity analysis, decisioning, response, and assurance outcomes without worrying about retention limits.
Normalized and enriched data from Snowflake can be exported into a customer’s business intelligence (BI) tool such as Tableau or PowerBI, generating more actionable reporting and metrics. Threat hunters also experience enhanced capabilities by using the same, clean data with tools of their choice, such as Jupyter Notebooks, enabling them to identify real threats faster as they conduct their investigation across large-scale datasets. Further, the enriched data from DataBee can be joined with additional datasets from the Snowflake Marketplace to derive additional insights.
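As a minimal sketch of how an analyst might pull this normalized data into a notebook for ad hoc hunting, the example below queries a Snowflake-backed data lake with the Snowflake Python connector. The connection details, database, schema, table, and column names are placeholders rather than DataBee's actual data model, and real deployments would use proper secrets management instead of inline credentials.

```python
# Minimal sketch: pull normalized, enriched events from a Snowflake-backed data lake
# into a notebook for ad hoc hunting. All names below are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="your_account",      # placeholder
    user="your_user",            # placeholder
    password="your_password",    # placeholder; prefer key-pair auth or SSO in practice
    warehouse="ANALYTICS_WH",    # placeholder
    database="SECURITY_LAKE",    # placeholder
    schema="NORMALIZED",         # placeholder
)

query = """
    SELECT time, user_name, src_ip, status
    FROM authentication_events                    -- hypothetical normalized table
    WHERE status = 'Failure'
      AND time >= DATEADD(day, -30, CURRENT_TIMESTAMP())
"""

cur = conn.cursor()
try:
    cur.execute(query)
    df = cur.fetch_pandas_all()   # requires the connector's pandas extra
    print(df.head())
finally:
    cur.close()
    conn.close()
```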
DataBee provides security, risk and compliance capabilities for customers looking to create a security data lake strategy with their cloud data platform.
Cloud-native security and compliance data fabric at scale
Enabling a unified global data strategy with DataBee
DataBee combines the business context needed by security, risk and compliance teams to protect an organization’s people and assets. These teams include threat hunters, data scientists, security operations center (SOC) analysts, compliance and audit specialists, and incident responders. This unified view of critical security data with business context enables people in these roles to rapidly identify real threats and manage compliance.
Some use cases include:
Compliance: For continuous controls assurance for security controls such as Endpoint Detection and Response (EDR) coverage, asset management, vulnerability management, and more. DataBee provides near real-time visibility into an organization’s compliance and risk posture.
Threat Hunting: Designed for faster time-to-detection by enabling threat hunters to conduct automated and deeper searches with the ability to run multiple hunts at once.
Data Modeling: For supporting and building machine learning models. DataBee provides threat detection teams with time-series analytics to create machine-learning based detection.
SIEM Decoupling: Separate the storage of data that typically goes into a SIEM solution from your analytical layer. Cleansing data upstream results in SIEM cost reduction and highly performant analysis.
Behavior Baselining with Anomaly Detection: With your data in a clean, sharable, and usable format, security teams can easily understand user and device behavior and rapidly detect and act on any anomalies (see the sketch after this list).
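As an illustration of what behavior baselining can look like once data is in that clean, shareable shape, the sketch below flags days on which a user's activity spikes well above that user's own rolling baseline. The column names, window length, and z-score threshold are assumptions made for the example, not DataBee functionality.

```python
# Minimal illustration of behavior baselining: flag days where a user's event
# count spikes well above that user's own rolling baseline. Column names and
# thresholds are assumptions, not DataBee specifics.
import pandas as pd

def flag_anomalies(events: pd.DataFrame, window: int = 14, z_threshold: float = 3.0) -> pd.DataFrame:
    """events has columns: user, day, event_count (one row per user per day)."""
    events = events.sort_values(["user", "day"]).copy()
    grouped = events.groupby("user")["event_count"]
    # baseline excludes the current day (shift by one) and needs at least a week of history
    baseline_mean = grouped.transform(lambda s: s.rolling(window, min_periods=7).mean().shift(1))
    baseline_std = grouped.transform(lambda s: s.rolling(window, min_periods=7).std().shift(1))
    events["z_score"] = (events["event_count"] - baseline_mean) / baseline_std
    return events[events["z_score"] > z_threshold]
```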
The real-world benefits of a security data fabric
The security data fabric architecture built and implemented by Comcast’s security team has yielded impressive results for the broader security, risk and compliance teams across the organization. In our own use we saw:
Daily data throughput reductions in our SIEM resulting in a 30% decrease in the cost of our security operations
3x faster threat detection
35% noise reduction in the data sets users work with
Faster compliance answers as a result of streamlined compliance reporting and automated queries
These results validate the positive impact that bridging the worlds of data and security can have on an organization, and it is an impact we want other enterprises to realize through DataBee. When organizations have clean, sharable data that adds business context to security events, security teams can identify and detect real threats quickly, and compliance teams can validate and achieve continuous compliance assurance while reducing costs for data storage and SIEM throughput. By bringing a data fabric to the enterprise security tool chest, DataBee improves an organization’s security, risk, and compliance posture.
Indeed, security is now all about the data. Businesses have made significant investments in their security teams and the solutions they use to protect the business. However, if all of these tools are working in silos and independent of the larger business context, they will still be inadequate at detecting and protecting an organization from cyberthreats.
By bringing security under their global data strategy, organizations gain more actionable insights, fewer false-positive findings, the ability to conduct threat hunting across large-scale data sets, and near real-time visibility into their compliance and risk posture.
Meet DataBee at RSA
The DataBee team will be hosting exclusive events and meetings during the RSA Conference in San Francisco. Check out our itinerary:
A Recipe for Security Data Lake Success, a breakfast event on April 26, 2023 at 8:30am
Request a meeting with the DataBee executive team
Learn more about DataBee and download our data sheet
The Future of Cybersecurity Brought Me to Comcast, a blog by Nicole Bucala, VP & GM, Cybersecurity Suite
Read More
The Future of Cybersecurity Brought Me to Comcast
Comcast Technology Solutions is proud to announce that Nicole Bucala has been brought on board to lead our new Cybersecurity business unit as its Vice President and General Manager. We asked Nicole to share the reasons why, as a cybersecurity leader, she made the choice to join us – and to set the stage as we bring Comcast’s scale and innovation to bear for the security needs of industries around the world.
When I was approached to join Comcast last summer to lead its new SaaS enterprise cybersecurity business unit, I was both skeptical and intrigued. Skeptical for the obvious reasons: there aren’t many stellar success stories about a diversified global conglomerate getting into enterprise cybersecurity, which is a complex and challenging market. Yet at the same time, Comcast was approaching its foray into enterprise cybersecurity in a way I hadn’t seen anyone try before. Comcast was, in fact, choosing to bring its own chief information security officer’s (CISO) inventions to market. This meant that Comcast was commercializing proven, accepted product concepts in use by experts, internally, and at scale.
Why did this scream out to me as powerful? Well, for those of us who’ve sold security to Comcast before, we know their security organization is staunch. The stakes are high when trying to secure critical infrastructure at scale. Staffed with over a thousand security professionals, Comcast builds many of its own security technologies to cater to its size and scale, as well as to address the advanced threats and increasing regulatory scrutiny it must withstand. In addition, Comcast is a Fortune 30, highly profitable company that has a substantial, wisely allocated security investment profile. So, anything built internally has to improve security efficacy, work at Comcast scale AND be cost-effective enough to save the organization money overall – a key value proposition that can immediately differentiate a commercial security offering.
At the time, I was employed at Zscaler, in charge of building innovative partnerships with the largest global conglomerates, which entailed designing and launching Zscaler’s zero trust innovations into the next major adjacencies: 5G, operational technology (OT), IoT, and application security. I was working at the fastest growing public company in cyber, leading a phenomenal team, working for incredibly inspiring executives, and I had zero intention of leaving. Yet, to the surprise (and perhaps caution) of many around me, I made the leap, and I don’t regret it.
Here are five reasons why Comcast is poised to win in its new adventure, and I consider myself fortunate to be a leader of this amazing team:
The solutions we are commercializing are imagined, designed, built and used by Comcast itself. I have been surprised to learn that this translates to a solution that both saves an organization money and can be sold in a differentiated manner. Comcast is a financially driven company with a high bar for the effectiveness of its technologies; so for any internally built solution to survive, it has to be efficient and cost-effective at scale. In addition, from a commercialization perspective, I’ve found that practitioner-to-practitioner conversations are naturally laden with inherent trust. When we are talking with the CISO of another Fortune 100 company about our solution, we are able to get to a much more ‘real’ level of talk, faster, when compared generally to customer calls I attended as an employee of a pure-play cybersecurity vendor. There’s a simple reason for this inherent trust: When you’re a practitioner looking to help other practitioners, you have a grounded level of understanding in the pros and cons of the solution you’re discussing. You know the essence of the pain points it addresses. You know, with precision, the outcomes you saw. And yes, you have experienced issues with the solution, because nothing is perfect – and you have resolutions to them that worked. What this means is that other practitioners can trust that your knowledge of the solution is accurate and pertinent.
Comcast’s solution is a novel architectural approach that solves a long-standing pain point for security and compliance teams. The proposed solution solves a problem I’d seen vendors try to address, unsuccessfully, for years: the desire to centralize security, compliance, and business data neatly, sorted and combined elegantly in a single place, from which insights could be derived and actions taken. Back in 2019, RSA Security’s annual industry conference had a theme of Business-Driven Security. That was also the first year that Extended Detection and Response (XDR) became a “real thing.” Vendors had dreams of combining enterprise and security data and controls to create a single source of truth and single pane of glass for investigations and compliance assurance, replete with business-oriented risk analytics for C-suites and boards. It’s now 2022, and XDR still hasn’t caught on the way practitioners hoped. I had seen other companies try to create similar solutions, but they were unable to identify an architecture that didn’t skyrocket storage costs, unable to make the product reliably scale for Fortune 50 accounts, or left the system so “closed” that only certain security personas could use it in prescribed ways. And so, I was delighted to see that Comcast had figured out an elegant new scalable, open architecture that saved money, while also being built on all the latest cloud-based tech.
This cloud-native platform was a new class of tech: a security, risk and compliance data fabric platform. It sat between a customer’s data sources, data lake, and analytical tools, achieving a true decoupling of security and compliance data ingest from data storage and from the analytical layer. It ingests and transforms data such that a compressed, normalized, enriched, time-series dataset is stored in a single data lake, in a way that reduces your SIEM costs by at least 30% and yields detections 3x faster. With approximately 100 data feeds from today’s top security and compliance tools, protecting an enterprise with a roughly 150K-person workforce and millions of endpoints while saving $15M+ in costs, the Comcast solution has proven to have a profound impact in several ways. For a company with a huge amount of data to contend with, this single invention has changed how Comcast does security, for the better.
Comcast offers a proven model for commercializing in-house inventions. The new cybersecurity business unit is housed in Comcast Technology Solutions (CTS), the division within Comcast that takes internally developed technology and makes it available to other enterprises. CTS has successfully commercialized several other internal inventions, including those in content and streaming, advertising, communications infrastructure, and other technology. CTS offers its business units much flexibility and autonomy, which means that as the leader of the cybersecurity BU, I’d have the ability to set up new sales channels, billing and payment methods, and bring on new types of roles and talent.
Comcast offers excellent security career development. Believe it or not, I considered the chance to work with a large team of security practitioners to be a rare learning opportunity for someone whose career was largely on the commercial security vendor side. For years, I’d worked at security vendors who’d revered CISOs, trying to understand their motives, desires, anxieties and doubts… but never had I worked directly with one or one’s team closely over an extended period of time. I also had hired a few leaders in the past who’d worked as SOC analysts early in their careers, and saw they had a really grounded orientation around security products that everyone else on the security vendor side seemed to lack. By being embedded with practitioners, you simply get a better innate sense of the real problems, and can discern which security innovations will provide lasting value. It eliminates the element of guesswork that, no matter how many customer conversations you have, is otherwise unavoidably part of the product development process at traditional security vendors, unless you are also a stereotypical customer yourself.
Passion, and the benefit of being naive. The employees at Comcast who are working on commercializing this offering are without a doubt, by and large, some of the most passionate people I have worked with in my career. There is often a stereotype that employees of big companies work 9-5, but I have found the opposite to be true at Comcast: everyone is routinely working at all hours of the day to make this program a success. I can’t remember who said it, but there’s a famous saying out there that passionate people almost always succeed at their mission. They almost always achieve more than someone who lacks passion. Furthermore, the fact that Comcast is newish to selling SaaS security solutions to large enterprises is, in many ways, a blessing. The battle scars of “experience” can sometimes be a negative: they can cloud judgment, cause risk intolerance, and dampen innovation. Grit, determination, and a willingness to learn can almost always make up for a lack of experience.
Over the next year, we have set some really exciting and daunting goals. We are launching a Generally Available first version of DataBee®, along with iterative releases every few months. We are going to build our brand in enterprise security. We are going to sign some fantastic strategic design partners. We will build relationships with strategic technology and go-to-market partners. I’ll be building out my team of product development, marketing, sales, business development and operations professionals, and I would be very excited to bring along some of the top talent in cyber. Feel free to contact me if you want to learn more about how to become part of the amazing team here at Comcast!
Upcoming virtual event: Top 5 Predictions For the Future of Security Data Lakes Webinar
In the past year, security teams have had to endure many new cybersecurity challenges—from managing hybrid work environments to sustaining major ransomware attacks, catastrophic vulnerabilities, and supply chain risks. As an industry, we must take a step back and look at these challenges from a different angle. We should rethink the technologies and data architectures that are in place today to understand why they are no longer serving their purpose.
Register here.
Interested in joining the CTS Cybersecurity team? We are hiring for the following roles.
United States Based Positions
India Based Positions
Sr Manager, Marketing (Brand & Thought Leadership)
QA Manager
BluVector (Federal Government) ATD/ATH Customer Support & Professional Services Lead
Software Development Manager:
Manager, Software Development
Manager, Software Development
Development Engineer 4:
Development Engineer 4
Development Engineer 4
Development Engineer 4
Development Engineer 4
Development Engineer 3:
Development Engineer 3
Development Engineer 3
Development Engineer 3
Development Engineer 3
Development Engineer 3
Development Engineer 3
Development Engineer 3
SDET Engineer 3:
Quality & Automation Engineer 3
Quality & Automation Engineer 3
Quality & Automation Engineer 3
Quality & Automation Engineer 3
Learn more about the DataBee Hive.
Read More
A new ground game for broadcasters?
The 2020s have been a decade of big changes, but for broadcasters and MVPDs it’s already been a time of particular transformation. Last year saw the reallocation of C-band spectrum for 5G services and the subsequent migration of providers, either to a new satellite, or to a reimagined approach to delivery – namely moving away from satellite delivery entirely and into a terrestrial-based delivery model that can pave the way for full IP-based delivery. Now that the 5G spectrum transition is, for the most part, in the rear-view mirror, is this “ground game” approach still a viable path forward for MVPDs?
Terrestrial delivery is something that we’ve been talking about actively since well before the C-band transition (you can read more about it here). “The ground-based option is something we developed in response to the FCC mandate that affected so many of our customers,” explains James Toburen, Affiliate Sales executive for Comcast Technology Solutions. “At first, impacted MVPDs were really in a situation where they absolutely had to make a decision on what to do; but now it’s a decision that’s grounded more firmly in the benefits to the business, which are numerous.”
From “doing dishes” to a path towards full IP delivery
We get into the weeds on the technical aspects of terrestrial distribution in this blog, but as a quick recap:
Terrestrial delivery replaces the satellite feeds with broadband IP networking, typically fiber, at the cable plant.
These feeds are still aggregated and assembled into experiences that are delivered to customers.
Although technology to pre-compress and bundle satellite signals has lowered infrastructure costs from the “dish farms” of the past, fixed bandwidth issues still exist that make it challenging for providers to offer new services; plus, satellite is a uni-directional delivery method that doesn’t allow for the interactive experiences that customers expect and enjoy today.
With a move to internet delivery, MVPDs gain more scalability and flexibility, and the ability to offer more dynamic services.
“With Managed Terrestrial Distribution, it’s not a completely new delivery model; programming can still be delivered to customers via QAM,” Toburen continues. “But our terrestrial distribution paves the way for transitioning away from managing that satellite dish-based headend. With IP-delivered video, you’re gaining not just a lot of flexibility and redundancy, but you’re delivering even more content that folks want.”
Five reasons to transition to terrestrial: a deeper dive
MVPDs and operators are adopting Managed Terrestrial Distribution as a foundational step in their digital transformation; it’s a transition that not only lowers risk but also leads to new dynamic content experiences for an evolving, more competitive marketplace.
For more information beyond the blog links above, read our guide: Five Reasons to Transition to Terrestrial Distribution. It outlines the immediate practical benefits to reimagining your delivery model – gaining quality, efficiency, and reliability – while also providing insights into the long-term benefits to services, business plans, and ultimately your bottom line.
Download the guide here.
Read More
Global Advertising: the connected, collaborative ecosystem
In this business, thoughts about “agility,” “time-to-market,” and “personalization” are used every day, but the rapid pace of upheaval in 2020 and 2021, whether it be from pandemic or social unrest, gave advertisers real-world opportunities to test their mettle; to see if their processes and technologies could:
shift at the speed of their audiences,
deliver ads on-time to every screen, and
collaborate with providers and emerging media platforms in order to inform data-driven content decisions more effectively.
If all the world’s a stage, it’s incumbent on advertisers and agencies to be positioned for success before the curtain lifts.
“The international advertising ecosystem has really evolved dramatically in a very short time,” says Simon Morris, Head of Enterprise Sales for CTS’ Advertising Suite. “On one hand, from a content standpoint you’ve got to be prepared with media and messaging for different consumers, and be able to execute or pivot at a moment’s notice. On the other hand, the complexity of executing accurately, from country to country, is an enormous undertaking. This is precisely why we’ve developed such solid foundational partnerships with companies like Peach, Innovid, and The Team Companies.”
Peach: Connected data and local support to “de-frag” the ad ecosystem
If you’ve ever de-fragmented your computer’s hard drive, then you’ve cleaned up your data so the entire system can run more smoothly. Peach is a linchpin in a comprehensive global ad operation, currently enjoying its 25th year serving advertisers, agencies, broadcasters, and publishers by bridging the data divide across a global customer base. Together, we offer a full-service solution to deliver spots to viewers in more than 100 countries. This partnership results in a comprehensive video ad management solution with a global network of local experts to provide reliable, trusted, centralized control of increasingly complex ad operations.
The combination of Peach’s local expertise and global capabilities delivers benefits that extend all the way into reporting that capitalizes on the connected data to bring back insights across the global ecosystem. Their agnostic API integrations across ad ecosystems, and collaboration with companies like Innovid, another one of our partners and one of the world’s largest independent ad platforms, elevate the benefits even further:
Innovid: Innovations for every ad experience
Earlier this year, Peach announced their platform integration with Innovid. This successful integration leverages the Innovid Creative Bridge to make moving content between platforms and customer folders seamless and straightforward. It’s more than just a process improvement: Peach stewards creative assets through rigorous accuracy and quality controls, managing distribution to eliminate the need for duplicative file uploads and downloads. From there, Innovid’s ad serving and hosting, as well as their creative toolsets, allow clients to customize content, get it to the right screen, track it, and then optimize performance – a continuous improvement loop.
The TEAM Companies (TTC) – a strategic alliance for metadata mastery
The complexity of modern advertising doesn’t just apply to the fragmented device / viewer ecosystem. Advertising is a creative endeavor, which means contracts, agreements, and talent / usage rights that vary wildly from asset to asset. Violations in rights or permissions aren’t just costly in financial terms but can also damage valuable relationships that need to be protected. Global advertising efforts take an already-complex problem and give it more to contend with. CTS’ Ad Management Platform is deeply integrated with the TTC platform to solve for all of it, providing end-to-end control of commercial rights management. Advertising companies can now track an asset’s use across media channels to ensure adherence to rights and policies no matter when or where an ad gets played.
Last year, TTC joined forces with Cast & Crew, a powerhouse that has served the entertainment industry for decades with financial services like residuals, payroll, and taxes. The union bolsters TTC’s strength, and its ability to expand services and global expertise.
Ultimately, it’s all about campaign performance
To be an advertiser in today’s global ecosystem is sort of like being an action hero on top of a bullet train racing at top speed. The real-time demands and the need for accurate execution are all built on top of technology that’s moving at a breakneck pace. Fortunately, there are experts in the disciplines, tools, and techniques needed to inform and act accurately on today’s decisions, and illuminate the path forward. As advertisers, agencies and providers learn to build a more efficient, simplified, and intelligent operation, they increase the amount of time and resources they can focus on understanding and optimizing performance across every active campaign.
Read More
What's Next For Media and Entertainment Technology in the New Roaring '20s?
It takes more than just a killer catalog or cutting-edge creative to stand out in today’s media markets. The winners are harnessing the power of emerging technologies to simplify operations, drive more value from data, and bring analytics to the forefront of decision-making. There’s a lot to take in, and to pay attention to.
Realizing goals for 2022 in an agile and cost-effective way will depend in large part on advertisers, MVPDs, operators, and content and streaming providers’ willingness to look beyond the current ways of creating and delivering media to new possibilities provided by emerging media and entertainment technology.
As advertisers and media and entertainment providers of all types seek to create highly personalized viewing experiences for customers, they will move toward embracing technologies that allow them to increase engagement, deepen relationships between audience and brand, and find new ways to monetize.
Smarter workflows with AI/ML
Artificial intelligence (AI) and machine learning (ML) attract plenty of science fiction and misapplication, but these powerful technologies underpin the next evolution of video workflow automation.
As content and video providers search for new avenues to broaden their reach, AI represents an exciting opportunity for businesses to do more with data. In fact, analysts like ABI Research expect revenue from AI and machine learning in video ad tech to surpass $9.5 billion.
Operators and providers need a better way to understand how to get the highest performance out of their content investments. Emerging solutions will address these opportunities, and VideoAI™ is a new framework of learning technologies designed to do just that. VideoAI scrutinizes video content (video, audio, closed captions) to create actionable metadata that streamlines operations and improves content ROI. This rich metadata can be applied in creative ways to improve workflows, give advertisers contextual information to target ads more effectively, and create new experiences for live and on-demand viewers.
As AI technology proves itself in real-world video applications, more companies will move to adopt it to drive revenue growth and create tailored experiences for consumers, particularly as the end of third-party cookies approaches in 2023. One report suggests that AI-powered audience solutions will grow by as much as 20%. Are you ready for this sea change?
From soup to nuts, automation is on the menu
Automation is a buzzword across all tech sectors, and for good reason. Not only are AI and ML solutions the next step in the evolution of automation; savvy businesses will also recognize that taking a comprehensive and holistic approach to automation will make all the difference in their ability to execute transformation. The right automation strategy saves time and money, allowing businesses to reallocate resources from mundane, routine tasks to strategic undertakings.
Automation in advertising will become especially important as third-party cookies become more challenging for campaigns to rely upon. In today’s media landscape, many advertisers operate with a patchwork of solutions across multiple media sources, data providers, and technology partners. Under these conditions, it’s a serious challenge to gain a clear view into the ad creation and delivery process – and advertisers are unable to use new audience targeting solutions to tailor their messages. Even in businesses where automation has a foothold, plenty of people are still poring over spreadsheets manually.
That’s why a more holistic and unified approach to automation will enable faster and more seamless management of creative, media buying, and distribution. Instead of relying on manual processes and vendor-specific platforms, businesses can deploy a centralized, automated tool to support ad targeting and versioning.
A well-conceived and comprehensive automation strategy will ultimately allow advertisers to truly connect the dots across the world of channels and formats and across internal workflows and external partners. Instead of limping along with piecemeal automation, an across-the-board strategy will bring together distribution, creative customization, and reporting to help reinstate control over a fragmented process, increase ROI, and deliver measurable results.
Redefining the ecosystem with aggregation and terrestrial distribution
In response to an abundance of content destinations, consumers are faced with a new challenge: How do I manage all my choices? Moving from device to device, app to app, service to service – it’s a 21st-century problem in search of a more streamlined solution.
To address this, successful pay TV operators will find a way to transform into “super aggregators.” Not only will this dramatically reengineer the viewing experience to prioritize consumer preferences, but it will also require operators to find solutions that work across multiple services and platforms and provide customers with the content they want, when and how they want it.
For operators that have traditionally relied on satellite to get content onto screens, fiber-based delivery represents an exciting opportunity to stay ahead of the curve. Not only does terrestrial distribution allow providers to reduce their footprint and lower overhead, it provides a convenient path to IP while maintaining clear, high-quality service and expanding channel offerings.
Ultimately, in the evolving landscape, the winners will be willing to explore new ways of doing business and embrace opportunities such as emerging media and entertainment technologies to expand their vision and enhance the viewing experience for their customers in exciting and innovative ways.
Read More
Implementing Zero Trust Principles at Scale
The recently released OMB memo M-22-09 is a step toward bringing federal agencies in line with what many organizations in the private sector have been working toward for a while now – constructing a Zero Trust Architecture.
We discussed the strategic implications of this federal directive in a previous article and would like to also offer a tactical perspective on meeting these requirements from a decade of experience developing machine learning cybersecurity technology with the U.S. Government.
Beyond adopting a new buzzword, embracing Zero Trust means remastering the tested principle of trust nothing, verify everything in order to secure a network with untold endpoints geographically dispersed across a landscape fraught with increasingly sophisticated cyberattacks.
Encrypt data in motion
This includes internal and external data flows, as well as applications – even email. A vital first step is to encrypt all DNS traffic using DoH/DoT today and to encrypt all HTTP traffic using TLS 1.3. While doing so, you don’t want to compromise security monitoring, so make sure to implement local DNS resolvers whose logs can be used to analyze the clear-text requests.
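As a small illustration of that monitoring idea, the sketch below summarizes the most-queried domains from a local resolver's query log. It assumes a simple whitespace-delimited "timestamp client qname qtype" log format, which is an assumption for the example; real resolvers such as Unbound or BIND use their own layouts, so adapt the parsing accordingly.

```python
# Illustrative sketch: summarize clear-text DNS queries from a local resolver's
# query log. The whitespace-delimited "timestamp client qname qtype" format is
# an assumption; adapt the parsing to your resolver's actual log layout.
from collections import Counter

def top_queried_domains(log_path: str, top_n: int = 20) -> list[tuple[str, int]]:
    counts = Counter()
    with open(log_path) as fh:
        for line in fh:
            parts = line.split()
            if len(parts) >= 3:
                qname = parts[2].rstrip(".").lower()
                counts[qname] += 1
    return counts.most_common(top_n)

if __name__ == "__main__":
    # hypothetical log path; point this at your resolver's query log
    for domain, hits in top_queried_domains("/var/log/dns/queries.log"):
        print(f"{hits:8d}  {domain}")
```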
From a castle to the cloud
As the requirement to support remote users increases, embracing the security features resident in cloud computing is certainly a critical first step to safely enable access from the Internet for both employees and partners. The macro-level migration trend among large IT organizations clearly recognizes that many cloud providers have already adopted some measure of Zero Trust Architecture, but there is no one-size-fits-all answer for every organization and its cybersecurity risks. Of course, no transition happens overnight, and some services may never move. For those use cases, we’ll still need to secure and monitor the needed on-premises resources as part of the broader security posture.
Logical segmentation
The federal directive to develop and implement logical micro-segmentation or network-based segmentation is clear in the memo. The challenge then becomes finding ways to limit and, if necessary, quickly identify the lateral movement of any adversary who might gain a foothold within your network.
AI enabled hunting
Endpoint security certainly plays a role in moving toward Zero Trust, but as EO 14028 emphasizes, this also involves developing a real-time hunting capability rooted in machine learning. To hunt effectively government-wide, and most critically to address the rise of zero-day attacks, collecting telemetry from systems without EDRs installed is invaluable, since an adversary will simply hide where the defenses are weakest. Achieving Zero Trust isn’t just about prevention, but comprehensive and continuous monitoring.
Hundreds of thousands of new malware variants are being developed daily. Global ransomware attacks rose by 151% last year [1] and the average recovery cost per incident more than doubled to $1.8 million. [2] This memo codifies what those stats make clear – in order to secure our nation’s vital infrastructure, we need intelligent solutions that go beyond monitoring systems for known threats.
BluVector provides advanced machine learning cybersecurity technology to commercial enterprise and government organizations. Our innovative AI empowers frontline professionals with the real time analytics required to secure the largest systems at scale.
Learn more about our philosophy of Zero Trust and understand how we deploy within existing security stacks. For more BluVector thought leadership on Zero Trust, read Zero Trust: A Holistic Approach by Travis Rosiek.
Read More
Zero Trust: A Holistic Approach
The President’s Office of Management and Budget (OMB) released M-22-09 last month, announcing it will require all federal agencies to move toward a Zero Trust Architecture (ZTA) by FY24.
BluVector has worked closely with large government agencies since our inception, and we’re encouraged to see clear direction toward a common cybersecurity strategy.
After more than a decade of offering our machine learning technology to secure our nation’s most sensitive data and helping to protect one of the largest and most advanced ISPs in the western world, we’d like to offer our perspective on Zero Trust as a holistic philosophy toward cybersecurity.
Trust nothing, verify everything is a time-tested principle that has been crucial to designing modern IT networks. It’s also a tall order and we invite agencies and large-scale enterprise organizations to view Zero Trust as a continuous journey, not an end state. We continue to recommend implementing a layered defense that increases visibility across the entire network and connected endpoints to analyze all data for any malicious code.
Perimeters aren’t what they used to be
Zero Trust doesn’t mean zero perimeter, but like it or not, as remote work becomes the new normal for more federal and civilian employees, we are moving away from the security of trusted networks. New identities and devices join our expanded networks daily, running new applications and pushing more data from unknown places. In this brave new world, it’s more important than ever for the cybersecurity teams protecting critical infrastructures to assume other users, systems and networks are already compromised. These practices have been adhered to by cybersecurity professionals from time immemorial and must not be forgotten as our industry adopts a new buzzword into our lexicon. The task of hardening these infrastructures against growing threats is paramount, and all who undertake it should choose their approach early enough to ensure it aligns with their planned IT investment.
Encrypted doesn’t mean protected
Encrypted data can be deleted, accessed later in memory after it has been decrypted, re-encrypted, or stolen and saved for future decryption with more advanced technology. In a ZTA, it’s imperative to encrypt all network traffic in transit, both DNS and HTTP. But operating on an open network gives the adversary many vantage points to monitor your network traffic as well. And if they can gain access through a backdoor left open on a legacy system with pre-existing accounts, whether your DNS is encrypted or not is the least of your worries. In an IoT environment, securing all the endpoints you can won’t secure your system, but might simply create a false sense of security. Remember, every single point of failure has the potential to become target number one.
Finding the adversary already in the system
Some large technology ecosystems may be currently compromised with threats that can lie dormant for months, maybe years. Putting new locks on a compromised system won’t mitigate this threat, but instead may obscure it, further enabling entrenched bad actors.
The move toward a ZTA must be phased in, with networks swept clean and continuously monitored for malicious behavior while new technology and processes are adopted to better protect systems that will be open to everything; any human error or patch delay will result in immediate vulnerability.
Considering the current threat environment addressed in this memo, we recommend building a layered cybersecurity defense in order to effectively institute Zero Trust principles across any large enterprise – especially those defending our nation’s vital institutions. This will require developing a high level of cybersecurity maturity across your entire organization, investing in technology that empowers your security team to hunt effectively while protecting them from alert fatigue, and adopting policies that will allow your broader workforce to operate safely in the Wild West of the open Internet.
BluVector provides advanced machine learning cybersecurity technology to commercial enterprise and government organizations. Our innovative AI empowers frontline professionals with the real time analytics required to secure the largest systems at scale.
Learn more about our philosophy of Zero Trust and understand how BluVector can be deployed to enhance your security stack. For more BluVector guidance on implementing Zero Trust, read Implementing Zero Trust Principles at Scale by Scott Miserendino, Ph.D.
Read More
Converging Media Technologies, Continued
In November we were proud to host the CTS Connects Summit for the second year. We frequently talk about the convergence of broadcast and digital workflows, and this year it was even more evident that this decade will be marked by a convergence of media and advertising technologies, as real-time services, security, and monetization techniques all become more intertwined. This year’s summit had a natural focus on creating unity between industries, fostering collaboration, flexibility, and security, with particular emphasis on experiences that improve the value equation on both sides of the screen.
Read More
See spot run the world
The story of advertising in today’s world is a tale of incomprehensible and increasing complexity. We can look back with nostalgia at the time when ad spots were all the same size, the same length, the same broadcast format; but in today’s world of real-time media placements and creative optimization, brands need a complete end-to-end approach not just to reach the world, but also to understand what’s working and what’s not.
Read More
Digital-First Pay-TV: Four Takeaways
Pay-TV is going through a significant change as the market adapts to new consumption models and competitors. Operators have responded in a variety of ways, but so far, no consensus has emerged as to which models work, when they work, or why. Organizations who pivot to a digital-first approach — specifically, thinking from a converged broadcast and OTT perspective — not only create opportunities for themselves but can also respond more effectively to capitalize on them.
Read More
Customer Connections Newsletter July
Advertising production is an increasingly complex component of delivery, and many layers of specialization are required.
From file prep and media management to quality control and distribution, you need access to a team that provides personalized solutions and evolving technologies and products.
Professional post-production services combined with centralized ad management simplify and unify creative, talent, and distribution workflows across channels. Production Solutions from Comcast Technology Solutions offers advertisers and agencies visibility and control of their campaigns, along with real-time creative intelligence that enables creative optimization, targeting, and ROI.
Read More
MVPDs: Down-To-Earth Distribution For Rural Areas
Multichannel video programming distributors (MVPDs) in smaller, rural areas have more options than ever before to stay competitive – including creative approaches to their media delivery and distribution models.
Read More
What is a Next Generation Network Intrusion Detection System?
Intrusion detection entered mainstream use more than two decades ago with tools such as Snort and quickly became a key cybersecurity control.
Deployed behind a firewall at strategic points within the network, a Network Intrusion Detection System (NIDS) monitors traffic to and from all devices on the network for the purposes of identifying attacks (intrusions) that passed through the network firewall.
In its first incarnation, NIDS used misuse-based (rules and signatures) or anomaly-based (patterns) detection engines to analyze passing traffic and match the traffic to the library of known attacks. Once the attack was identified, an alert was sent to the security operations team.
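A toy version of that misuse-based approach looks something like the sketch below: match payload bytes against a small library of known-bad signatures and raise an alert on a hit. The signatures shown are placeholders for illustration, not real detection rules.

```python
# Toy illustration of the classic misuse-based engine: match payload bytes
# against a library of known-bad signatures and raise an alert on a hit.
# Signatures shown are placeholders, not real rules.
SIGNATURES = {
    b"/etc/passwd": "path traversal attempt",
    b"' OR '1'='1": "SQL injection probe",
    b"\x90\x90\x90\x90\x90\x90\x90\x90": "NOP sled (possible shellcode)",
}

def inspect_payload(payload: bytes, src: str, dst: str) -> list[str]:
    """Return one alert string per signature found in the payload."""
    alerts = []
    for pattern, name in SIGNATURES.items():
        if pattern in payload:
            alerts.append(f"ALERT {name}: {src} -> {dst}")
    return alerts
```

The obvious limitation, and the reason the next generation exists, is that any attack not already in the signature library passes through silently.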
While the technology continues to play a key role in the majority of enterprises, the Network Intrusion Detection System has fallen out of favor for two key reasons:
The rules-based engines used for detection were subsumed into the Next Generation Firewall (NG-FW), making it more cost effective for some organizations to deploy a unified capability;
Threat actors are adept at executing attacks that evade the signatures/rules/patterns used by both the traditional Network Intrusion Detection System as well as the unified NG-FW.
Machine Learning and the Rise of Next Generation Network Intrusion Detection Systems
Like a traditional NIDS, the function of a Next Generation IDS/IPS is to detect a wide variety of network-based attacks perpetrated by threat actors and contain these attacks, where feasible, using appropriate controls. Unlike a traditional NIDS, this technology leverages machine learning powered analytics engines that are capable of identifying attacks that evade traditional misuse-based and anomaly-based engines.
Network attack types that should be addressed by an NG-NIDS include:
Malware Attacks:
Malware is malicious software created to “infect” or harm a target system for any number of reasons, ranging from simple credential access and data theft to data or system disruption or destruction. Today, an estimated 30% of malware in the wild is capable of evading traditional signature-based technologies. Most organizations have addressed this through the deployment of endpoint detection and response (EDR) technologies. In relatively homogenous environments with firm control over the endpoint, endpoint controls may be adequate. For organizations with an array of client and server technologies, limitations on patch and update frequencies, IoT devices, or end users over whom there is limited control, a strategically deployed NG-NIDS acts as a primary defense against “unknown” malware.
Below are common use cases wherein a Next Generation Network Intrusion Detection System can play a protective role:
Malicious websites – These attacks generally start at legitimate websites that have been breached and infected with malware. When visitors access the sites via web browser, the infected site delivers malware to the endpoint. Alternatively, doppelganger sites can be used to disguise malware as legitimate downloads.
Phishing/Spearphishing emails – Threat actors trick end users into downloading attachments that turn out to be malware. Alternatively, threat actors trick users into clicking a seemingly legitimate link to visit a website, from which malware is delivered.
Malvertising – In this case, threat actors use advertising networks to distribute malware. When clicked, ads redirect users to a malware-hosting website.
In each of these cases, the NG-NIDS sits between an internal user and an external site and is capable of detecting malware and issuing a block request to a firewall or endpoint manager to contain the threat. In situations where the enterprise uses a split tunnel architecture or allows mobile workers to access the Internet without restriction, the NG-NIDS will see suspicious activity emanating from an infected device once it reconnects to the corporate network. NOTE: While an NG-NIDS can be highly effective and easier to deploy than endpoint technologies, it is highly recommended that organizations use both; the combination is the most effective means of protecting an organization with a mobile workforce.
Attacks that “Live off the Land”:
One of the more frightening and rapidly emerging categories of attack is known as a “living off the land”, fileless malware, or “in-memory” attack. These attacks are specifically created to start or complete an action that is untraceable by today’s security tools. Rather than downloading a file to a host’s computing device, the attack occurs in the host’s memory (RAM), leaving no artifact on disk. Powering down or rebooting an infected system removes all artifacts of the attack; only logs of legitimate processes running remain, thereby defeating forensic analysis.
How is this attack accomplished? The attacker injects malware code directly into a host’s memory by exploiting vulnerabilities in unpatched operating systems, browsers and associated programs (like Java, JavaScript, Flash and PDF readers). Often triggered by a phishing attack, the victim clicks on an attachment or a link to a malware-infected website or a compromised advertisement on a reputable site.
Once the malware is in memory, attackers can steal administrative credentials, attack network assets, or establish backdoor connections to remote command and control (C2) servers. Fileless attacks can also turn into more traditional file-based attacks by downloading and installing malicious programs directly to computer memory or to hidden directories on the host machine. The threat actor can also employ a variety of tactics to remain in control of the system after a shutdown or reboot.
The role of the NG-NIDS is to intercept suspicious machine code (usually in the form of obfuscated JavaScript, PowerShell, or VBScript) and emulate how the malware would behave when executed. Operating at line speed, the NG-NIDS determines what an input can do if executed and to what extent those behaviors might initiate a security breach. By covering all potential execution chains and focusing on malicious capacity rather than malicious behavior, the NG-NIDS vastly reduces the number of execution environments and the quantity of analytic results, thereby reducing the number of alerts that must be investigated.
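Full speculative execution of a script is beyond a short example, but a much simpler static heuristic conveys the flavor of scoring suspicious scripts: measure character entropy and count obfuscation-related tokens. This is a deliberately simplified stand-in, not the emulation-based approach described above, and the token list and thresholds are assumptions.

```python
# A deliberately simple static heuristic, not the emulation approach described
# above: score a script for obfuscation markers (high character entropy plus
# suspicious tokens). Tokens and thresholds are illustrative assumptions.
import math
from collections import Counter

SUSPICIOUS_TOKENS = ["eval(", "unescape(", "fromCharCode", "-EncodedCommand", "DownloadString"]

def shannon_entropy(text: str) -> float:
    counts = Counter(text)
    total = len(text)
    return -sum((c / total) * math.log2(c / total) for c in counts.values()) if total else 0.0

def obfuscation_score(script: str) -> float:
    entropy = shannon_entropy(script)
    token_hits = sum(script.count(tok) for tok in SUSPICIOUS_TOKENS)
    # weight entropy above roughly 5 bits/char plus any suspicious token occurrences
    return max(0.0, entropy - 5.0) + 0.5 * token_hits
```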
Worms:
Worms are a form of self-propagating malware that does not require user interaction. WannaCry, for example, targeted a widespread Windows vulnerability to infect a machine. Once infected, the malware moved laterally, infecting other vulnerable hosts. Once the target is infected, any number of actions can be taken, such as holding the device for ransom, wiping user files or the OS, stealing credentials, or scanning the network for vulnerabilities.
A strategically deployed NG-NIDS, sitting in an internal network, is capable of detecting lateral spread of the worm and issuing a block request to a firewall or endpoint manager to contain the threat.
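A simplified way to picture lateral-spread detection is to flag any internal host that suddenly contacts an unusually large number of distinct internal peers on a single port, as in the sketch below. The flow-record shape and fan-out threshold are assumptions made for illustration.

```python
# Simplified illustration of lateral-spread detection: flag an internal host
# that contacts an unusually large number of distinct internal peers on one
# port in a short window. Thresholds and the flow-record shape are assumptions.
from collections import defaultdict

def detect_lateral_spread(flows, max_fanout: int = 50):
    """flows: iterable of (src_ip, dst_ip, dst_port) tuples captured in one window."""
    fanout = defaultdict(set)
    for src_ip, dst_ip, dst_port in flows:
        fanout[(src_ip, dst_port)].add(dst_ip)
    return [
        (src_ip, dst_port, len(peers))
        for (src_ip, dst_port), peers in fanout.items()
        if len(peers) > max_fanout
    ]
```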
Web attacks:
In a web attack, public facing services – like web servers and database servers – are directly targeted for a variety of reasons: to deface the web server, to steal or otherwise manipulate data, or to create a launching pad for additional attacks. The most common means of attack in this category include:
Cross-Site Scripting (XSS) – An attacker injects malicious code into the web server which, in turn, is executed in an end user’s browser as the page loads.
SQL Injection (SQLi) – An attacker enters SQL statements to trick the application into revealing, manipulating, or deleting its data.
Path Traversal – Here, threat actors custom craft HTTP requests that can circumvent existing access controls, thus allowing them to navigate to other files and directories.
An NG-NIDS, sitting behind the firewall and in front of a web or database server, is capable of detecting these attacks and issuing block requests to an application firewall.
Scan Attacks:
Scans are generally used as a means to gather reconnaissance. In this case, threat actors use a variety of tools to probe systems to better understand the available targets and exploitable vulnerabilities.
An NG-NIDS, sitting behind the network firewall, is capable of detecting these probes and issuing block requests to the network firewall.
Brute force attacks:
The threat actor attempts to uncover the password for a system or service through trial and error. Because this form of attack takes time to execute, threat actors often use software to automate the password-cracking attempts. These passwords can be used for any number of purposes, including modification of system settings, data theft, financial crime, etc.
An NG-NIDS, sitting behind the network firewall and/or at strategic points within the network, is capable of detecting brute force attacks and issuing block requests to the network firewall.
Denial-of-service attacks:
Often carried out as distributed denial-of-service (DDoS) attacks, these attacks try to overwhelm their target – typically a website or DNS servers – with a flood of traffic. In this case, the goal is to slow or crash the system.
An NG-NIDS, sitting behind the network firewall, is capable of detecting DDoS attacks and issuing block requests to the network firewall.
Read More
Why Network Detection and Response (NDR) Solutions are Critical to the Future of Federal Agencies’ Cybersecurity
Executive Order 14028, “Improving the Nation’s Cybersecurity,” signed by President Biden, shows the US federal government’s growing focus on cybersecurity.
The 2021 Executive Order requires U.S. federal government departments and agencies to focus on their cybersecurity framework and address the risks of malicious cyber campaigns.
This increased focus comes as internal audits of federal agencies’ security have indicated that a number of federal agencies’ cybersecurity programs are not adequately or fully protecting them against modern cyber threats.
Like businesses and nongovernmental organizations, federal agencies face a rapidly evolving cyber threat landscape that requires an equally fast-evolving cyber security framework. In the current environment, signature-based detection is limited, especially when it comes to detecting zero-day threats and malicious activity that has never been seen before. Known hackers can even take actions to disguise themselves, like changing their techniques or processes, to avoid detection from an NCPS. (hsgas.senate.gov report, page 11)
To protect themselves against these evolving threats, government agencies can benefit greatly from security solutions that can detect intrusions in time for government security teams to respond and manage the harm that they could do to the organization. Since most cyberattacks occur over the network, network visibility is essential to government agencies’ ability to protect against evolving threats. This makes an advanced network detection and response (NDR) solution an essential component of government agencies’ cybersecurity defenses. Individual agencies should implement the NDR while a broader replacement for NCPS is considered.
Federal Agencies Face Increasing Cyber Threats
US government agencies are one of the biggest targets of advanced persistent threats (APTs). These threat actors have significant resources and expertise dedicated to the development and use of various tactics to infiltrate and cause damage to target organizations.
Federal agencies face various cyber threats from these advanced threat actors. Some of the most sophisticated and dangerous attacks these organizations face include ransomware infections, fileless and in-memory malware attacks, and the exploitation of unpatched zero-day vulnerabilities.
Ransomware and RaaS
Ransomware has emerged as one of the greatest threats faced by most organizations, including federal government agencies. In fact, DHS’s Cybersecurity and Infrastructure Security Agency (CISA) has launched the Stop Ransomware project specifically to provide coordinated prevention and response to ransomware attacks across the entire US government. In February 2022, CISA reported that 14 of 16 critical infrastructure sectors have been targeted by sophisticated ransomware attacks.
With the emergence of the Ransomware as a Service (RaaS) business model, the ransomware threat has grown significantly. With RaaS, ransomware developers put their malware in the hands of other cyber threat actors for a split of the profits.
The ability to identify the malware before it gains access to an organization’s systems or to detect attempted data exfiltration is essential to minimizing the cost incurred by a ransomware attack.
Fileless Malware and In-Memory Attacks
Many traditional antimalware systems are file focused. The typical antivirus scans each file on the computer, looking for a match to its library of signatures. If it finds a file matching the signature, it quarantines and potentially deletes it from the disk.
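The sketch below shows a toy version of that file-focused, signature-based scan: hash each file on disk and compare the digest against a set of known-bad hashes. The hash list is a placeholder; the point is that fileless attacks never write such a file to disk, so a check like this cannot see them.

```python
# Toy version of signature-based file scanning: hash each file and compare the
# digest against a set of known-bad hashes. The hash below is a placeholder;
# real scanners use large signature databases. Fileless attacks never write
# such a file, which is exactly why this check misses them.
import hashlib
from pathlib import Path

KNOWN_BAD_SHA256 = {
    "0" * 64,  # placeholder, replace with real indicator-of-compromise hashes
}

def scan_directory(root: str) -> list[Path]:
    hits = []
    for path in Path(root).rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            if digest in KNOWN_BAD_SHA256:
                hits.append(path)
    return hits
```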
Fileless and in-memory malware are designed to evade detection by these solutions by avoiding writing malicious code to a file on disk. Instead, the malware operates entirely within running applications.
With fileless malware attacks growing significantly year-over-year in 2021, the percentage of attacks that traditional antivirus solutions can detect is shrinking quickly. Threat detection techniques based on identifying anomalous behavior or network traffic are essential to detecting these types of attacks.
Exploits
New zero-day vulnerabilities and exploits pose a growing threat to federal agencies and their security. These exploits can bypass security solutions that depend on signatures to detect known attacks and are blind to novel threats until new indicators of compromise (IoCs) are released.
New vulnerabilities such as Log4j have caused a scramble to patch vulnerable systems, and cyber threat actors often move quickly to exploit vulnerable organizations after a new vulnerability is discovered and disclosed. In fact, half of new vulnerabilities are under active exploitation within a week.
In recent years, the rate at which zero-day vulnerabilities are discovered and exploited has increased significantly – with more than twice as many zero days exploited in 2021 as in 2020. Without the ability to identify and block attempted exploitation of these novel threats, federal agencies are continuously playing catchup after the latest vulnerability is discovered.
While these novel attacks cannot be detected using signature-based techniques, other approaches can identify these threats. For example, network traffic analysis that identifies command and control traffic between the malware and attacker-controlled servers can tip off a security team to an undetected malware infection.
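One simple way to illustrate that kind of traffic analysis is beacon detection: command-and-control implants often call home at near-regular intervals, so a low spread in the gaps between connections from one host to one destination is a useful signal. The thresholds below are assumptions for the example, not a product's detection logic.

```python
# Rough sketch of beacon detection over connection logs: a low coefficient of
# variation in inter-connection gaps for a (source, destination) pair suggests
# automated, regular call-home traffic. Thresholds are assumptions.
import statistics

def looks_like_beacon(timestamps: list[float], min_events: int = 10, max_cv: float = 0.2) -> bool:
    """timestamps: sorted epoch seconds of connections from one host to one destination."""
    if len(timestamps) < min_events:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mean_gap = statistics.mean(gaps)
    if mean_gap <= 0:
        return False
    cv = statistics.pstdev(gaps) / mean_gap   # coefficient of variation
    return cv < max_cv
```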
How NDRs Can Help Solve Federal Security Challenges
As cyber threat actors expand and refine their tactics and techniques, detecting and mitigating attacks becomes much more difficult. Additionally, as government IT systems expand and evolve, protecting them against cyber threats becomes more challenging for security teams, especially as government agencies struggle to attract and retain scarce cybersecurity talent.
Managing the threat of ransomware, in-memory malware, zero-day exploits, and other cyber threats requires federal agencies to have tools that enable them to detect and respond to potential threats rapidly. Network detection and response (NDR) solutions are an essential component of a federal agency’s cybersecurity architecture and can be used with EDR to get a more complete solution, often with NDR solutions taking the lead in initial attack detection.
“While EDR can provide a more granular view into the processes running on the endpoint and in some cases more finely tuned response options, NDR is critical for maintaining consistent visibility across the entire network.” – ESG’s “NDR’s role in Supporting the Executive Order on Cybersecurity”
NDR solutions provide multiple features that can help an agency’s security operations center (SOC) to identify and respond to potential threats, including:
Network Visibility: Visibility into federal agencies’ networks is essential to detecting potential threats early within the cyber kill chain. NDR solutions can help to collect, aggregate, and analyze data about network traffic to help SOC analysts to identify and respond to potential threats.
AI/ML Threat Detection: Security analysts commonly face an overwhelming number of security alerts, making it difficult to differentiate between true threats to the organization and false-positive detections. Artificial intelligence (AI) and machine learning (ML) solutions can help analyze and triage alerts, reducing false-positive rates and enabling security personnel to focus their time and attention on true threats to the organization.
Creating a holistic approach with Zero Trust: All federal agencies are required to move toward a Zero Trust architecture by 2024. Successful ZTA implementation requires robust and flexible tools that work quickly and can scale, like the BluVector platform. It’s important to remember that ZTA does not address threats already in a network, and that weaponized data can elude ZTA security. Having complete visibility – including network, device, user, file, and data visibility – is critical to ZTA.
Recommended Responses: During a cybersecurity incident, a prompt and complete response is essential to minimizing the cost and impact of the attack but can be difficult to achieve under tense circumstances. BluVector’s NDR solution can provide recommendations about how to handle certain types of incidents, enabling security analysts to quarantine or remediate the threat more efficiently.
Support for Automation: Security teams commonly struggle to meet expanding workloads as organizational infrastructure becomes more sprawling and complex and cyber threats grow more sophisticated. With the ability to automate repetitive tasks and responses to common threats, NDR solutions can help to ease the burden on overstressed personnel and enable them to focus their time and attention where it is most needed.
The advanced threat detection and response capabilities of an NDR solution can help security teams to identify these subtle threats and respond quickly and effectively to minimize the risk to sensitive government data and critical systems.
BluVector Leading the Way in a High Threat Era
US federal agencies continue to face significant cyber threats from skilled and well-resourced threat actors. These cyber threats use various techniques to gain access and cause damage to government systems, including the use of ransomware, in-memory malware, and the exploitation of novel vulnerabilities. Often, these attacks are specially designed to fly under the radar and avoid detection by widely used security solutions.
While these attacks may be designed to be subtle and difficult to detect on the endpoint, they still need to gain access to agency systems and perform command and control communications over the network. BluVector’s Advanced Threat Detection and Automated Threat Hunting solutions can monitor an agency’s network traffic and sort out genuine threats from the noise, while efficiently supporting SOC analysts’ ability to detect and respond to advanced cyber threats.
BluVector has worked closely with large government agencies since our inception and is here to support the journey toward a common federal cybersecurity strategy. For over a decade, our solutions have enabled government agencies to achieve essential visibility into their environments while rapidly and accurately identifying threats using AI and ML. Learn more about BluVector’s government solutions and our over 99% threat detection rate.
Read More
Fast-Food Ads Get Faster Service
“Getting the order right” – it’s job one for every quick-service restaurant (QSR), no matter if the order is coming from the front counter, an app or service, or the drive-thru window – and no order is the same. The end results might be different on every order, but some things have to be there every time to keep customers coming back. Every order has to be served accurately, on time, and at the highest quality. Hmm… where’s the advertising analogy here?
Read More
Customer Connections Newsletter May
Comcast Technology Solutions and PremiumMedia360, the leading company in TV advertising data automation, announced a new strategic relationship. The integration with Comcast Technology Solutions’ Ad Management Platform enables TV advertising buyers and sellers to automatically synchronize and resolve reconciliation discrepancies throughout the ad buy lifecycle. These new capabilities stand to prevent revenue loss associated with dropped spots due to order execution errors and invoicing delays. The integration can help save work hours typically spent resolving data discrepancies between buyer and seller systems.
Read More
Customer Connections Newsletter April
TV isn’t going anywhere — it’s going everywhere, transitioning from a broadcasting-only industry to a multiplatform video market comprising linear and digital, streaming and on-demand offerings, available across a growing range of platforms and devices. TV and video advertising are uniquely valuable, and as the market continues to shift, the ad industry has a big opportunity to lead the pace of change.
Read More
MVPDs: Moving From Orbit Into The Cloud?
The 2020s are already proving to be a decade of sweeping change for MVPDs and cable operators around the U.S. The repurposing of C-band spectrum for 5G services is a plan that was accelerated last year after an agreement between the FCC and satellite operators. Although it’s been on the industry radar for a while, it feels like the whole thing is picking up pace as companies work to position themselves — and their technology — for a successful future. That future isn’t necessarily going to need a satellite.
“The two main considerations for MVPDs right now are: a) What does our distribution look like immediately post-C-band reallocation, and b) Where can we go from there?” explains Jill May, Manager of Product Development for Comcast Technology Solutions. “The first step is to deal with the immediate: Two-thirds of the C-band spectrum has been reallocated to 5G carriers. This means that existing services need to transition to a new satellite. For example, our services that have operated from the SES-11 satellite will migrate to the AMC-11 satellite this year. It’s not just a matter of keeping businesses whole, as there are some technology upgrades and efficiencies that come along with the move; but from there, the questions we started asking here were ‘Is a satellite feed really required for every business?’ or ‘What would it look like for a satellite delivery model to ultimately include and/or move to an IP-based model?’ That’s a big focus for us as MVPDs think about what their new big-picture plans could look like.”
MANAGED DISTRIBUTION GAINS NEW FLEXIBILITY
Satellite video distribution has clearly been around a long time. It’s extremely reliable, and it also used to be the only option for a lot of companies. Today, network advancements and build-outs, especially over the last few years, result in some seriously attractive options for media distributors looking to remain competitive in both price and features. “A move into a terrestrial distribution model has a lot of upside,” explains May. “Speaking from a Comcast network perspective, MVPDs can leverage a network that benefits from deep annual investment and evolution. Not only is there the security and encryption that a company needs to protect its investments and revenue, but also the speed, scale, and reliability that its customers can count on.”
Another big benefit that MVPDs can get from terrestrial distribution is a reduction in headend equipment and operating expenses across the operation. Headend equipment can be reduced by as much as half, along with a corresponding reduction in rack space, power, and cooling. On-premises equipment would need to be upgraded to a new platform, but a customer migration doesn’t need to happen in one gargantuan make-it-or-break-it effort. “What we’re planning is more of a hybrid model during the transition,” says May, “so business continuity can be maintained through a customer migration strategy that works for the partner’s needs.”
IP-BASED MVPDs — A NATURAL EVOLUTION
Once the operations of an MVPD or operator have left low-earth orbit for an earth-based distribution and content delivery network, the foundation is there to keep the customer experience in a state of continuous evolution. “We see an IP platform really as the natural next step,” says May. “There are technology approaches that we’re developing as a white-label way for a terrestrial operator to offer more services and high-tech features. It’s a compelling way to improve an operator’s role as a content and streaming hub for their customers’ multiple streaming subscriptions.”
Listen to Jill May’s presentation on Managed Distribution during the CTS Connects Summit to learn more.
Read More
New Ways To Monetize SCTE 224
The SCTE 224 standard is fantastic for video operations, enabling blackout management, web embargos, time-shifts, regionalized playout, and a host of other content rights-based, program-level video switching capabilities. The applications don’t end there; some new applications make the standard really attractive for bolstering revenue generation opportunities for both programmers and operators.
Read More
Customer Connections Newsletter March
Do you have the meaningful insights you need to optimize creative and drive business results?
Creative Intelligence enables you to look at your omnichannel campaigns based on intelligent metrics from across platforms, aimed at engaging your target audience. Armed with information on how your campaigns and messages are performing in all channels, you can maximize your creative investment, optimize creative messaging, and deliver the most relevant ad to the right customer.
Read More
An unpredictable year brings renewed focus to (and on) media businesses
For an industry as diverse as media and entertainment technology, there were some strong, common themes during the CTS Connects Summit last year that are only more relevant as 2021 starts to kick into high gear.
Read More
Navigating Content Rights for Operators
There is no denying that Operators — MVPDs and virtual MVPDs (vMVPDs) — are in a complex place in the video value chain. The requirement to manage rights across all content providers can be overwhelming, time consuming, and expensive. The need to streamline this function is critical so Operators can focus on delivering quality content at the right time to the right place for their customers.
No matter their size, location, or whether they are virtual, every Operator has to determine how they handle content restrictions, on-the-fly changes, and regionalization requirements from their programmer partners. In the past, Operators would receive content pre-switched, delivered directly to each region via satellite to the IRD (integrated receiver/decoder). This was a relatively hands-off, but costly, approach. Today, with the transition from satellite to terrestrial distribution and the increasing capabilities of higher-bandwidth video (4K/UHD, 8K, etc.), responsibility for that control and switching is moving to Operators. This creates tremendous opportunities for cost savings and efficiency by leveraging technology that automates the process through machine-to-machine communication.
KEY BENEFITS FOR OPERATORS
For Operators, this transition enables some key benefits with the right technology and partner choice. First, by enabling this switching environment internally, they centralize the solution and then use their own infrastructure to distribute it to their regions. This applies to both vMVPDs and traditional MVPDs, since both need a central system to manage content switching. For MVPDs, this offers the ability to shut down legacy facilities, remove the reliance on hundreds of IRDs, and scale faster with their IP modernization. In addition to these massive cost savings, Operators can scale their infrastructure faster, enable improved auditing capabilities, increase uptime compared with their old infrastructure, and reduce operational costs by no longer manually managing their content switching system. Lastly, with the right solution in place, Operators can easily take advantage of some unique linear addressable ad insertion use cases.
RISK AND OPPORTUNITY
With all these benefits, there are still some risks to take into account. What are the risks, you ask? Let’s start with the most significant one – delivering the wrong program to the wrong audience at the wrong time. Not only can it damage a brand through negative PR, but it also creates tension between the Operator and content providers, and can ultimately lead to significant legal actions and carriage disputes. There are numerous examples of this happening today, and blackout management is getting more prevalent in today’s complicated video distribution world of diverse device types and potential global distribution. Because content rights are becoming more valuable and exclusive, especially for live programming, regionalization and personalization more critical, and Operator/Programmer relationships more of a differentiator, the need for an automated system to manage these rights scenarios is crucial. Essentially, the system operates as an insurance policy against the potentially devastating risk of showing the wrong programming to the wrong audience. Additionally, the manpower required to manually manage this growing complexity increases exponentially if an automated system is not in place.
The other key risk to consider is technology choice. Once an Operator decides to implement a centralized switching solution, it’s critical that they go down a path that’s scalable at every level – channel count, decisioning, ease of use, etc. If not, they will find themselves wasting resources across the board. The SCTE 224 standard has become the best standardized language for managing the automated machine-to-machine communication that controls rights, web embargos, and program substitution, and it is growing in acceptance for managing ad insertion. It’s critical for Operators to use a standardized approach to help manage metadata and rights communication with programmer partners. SCTE 224 is ideal in this environment. In addition to providing a clearly defined messaging and communications protocol, it enables Operators to layer on linear rights management to handle all of the nuances of implementing the actions carried by the SCTE 224 data. This is critical to making decisioning work at scale.
To successfully implement an automated rights management solution, it’s important that the solution lets the user build a metadata ingest platform that scales to handle complex SCTE 224 ingest scenarios. It should support a variety of source types and formats without requiring a unique ingest adapter/parser to be created or rebuilt for each programming partner. While SCTE 224 offers a great standard way to communicate, there can be significant variations in the way it’s implemented. One content provider could use the 2015 version of the standard, another the 2020 version, and yet another could use the same year’s standard but communicate different metadata fields.
It is important to avoid these compatibility issues, or Operators will always be playing a costly game of catch-up with their development teams. For example, if a system accepts a variety of sources and data structures (“easy in”) and manages everything with strict rules and structure internally, it will behave reliably and consistently when working with a range of data sources from various content providers. However, if each ingest point is hardcoded to the lowest common denominator for each partner, Operators greatly limit their flexibility as content partners make changes over time. The sketch below illustrates this “easy in, strict internally” pattern.
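As an illustration only, here is a minimal sketch of that pattern in Python. The partner payloads and field names are invented for this example and are not drawn from any real SCTE 224 feed: each partner gets a thin adapter, and everything downstream works against one strict internal model.

```python
from dataclasses import dataclass

@dataclass
class Policy:
    # Canonical internal model: every partner feed is normalized to this shape,
    # regardless of which SCTE 224 revision or field names the source uses.
    media_id: str
    audience: str
    action: str      # e.g. "BLACKOUT", "SUBSTITUTE", "ALLOW"
    start: str
    end: str

def from_partner_a(raw: dict) -> Policy:
    # Hypothetical partner using older, flat field names.
    return Policy(raw["MediaId"], raw["Audience"], raw["Action"],
                  raw["Start"], raw["End"])

def from_partner_b(raw: dict) -> Policy:
    # Hypothetical partner on a newer revision with nested, differently named fields.
    return Policy(raw["media"]["id"], raw["viewingPolicy"]["audience"],
                  raw["viewingPolicy"]["action"],
                  raw["window"]["begin"], raw["window"]["finish"])

ADAPTERS = {"partner_a": from_partner_a, "partner_b": from_partner_b}

def ingest(source: str, raw: dict) -> Policy:
    """Easy in: accept whatever the partner sends; strict internally: always return a Policy."""
    return ADAPTERS[source](raw)
```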
SCALE TO SUPPORT MILLIONS OF DECISIONS
It is common to have hundreds to thousands of different policies, audiences, and rules coming in from various content providers, each requiring automated decisions about whether content should be played and where. For the system to work correctly, a decisioning engine must be implemented that can scale to support millions of decisions daily. This is essentially the brains of the operation, executing video switching decisions based on the metadata rules, policies, and audiences received from the content programmers. Implementing this functionality requires seamless integration with the adapter/parser as metadata enters the system, and then integration with your video infrastructure at the encoder, vIRD, packager, or manifest manipulator level to facilitate video changes based on the audience and policy rules in the metadata. The decision manager notifies the Operator’s video supply chain whether a program should be played, as well as where and by whom it can be seen, automatically, based on the rules defined in the SCTE 224 data stream. If necessary, the decision manager can also “talk” to the Operator’s Ad Decision System to facilitate programmer-by-programmer ad rules and insertion instructions. A simplified sketch of this decision step follows.
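To make the decision step concrete, here is a minimal, hypothetical sketch in Python. The channel names, audience keys, and policy fields are invented for illustration; a production decision manager would index policies rather than scan them, and this is not a description of any particular vendor's implementation.

```python
from datetime import datetime

# Hypothetical normalized policy records derived from SCTE 224 metadata.
policies = [
    {"channel": "RSN-East", "audience": "zip:19103", "action": "BLACKOUT",
     "start": datetime(2024, 5, 1, 19, 0), "end": datetime(2024, 5, 1, 22, 0)},
]

def decide(channel, audience, at_time, policies, default="PLAY"):
    """Return the action the video infrastructure should take for this audience right now."""
    for p in policies:
        if (p["channel"] == channel and p["audience"] == audience
                and p["start"] <= at_time < p["end"]):
            return p["action"]
    return default

# Called once per trigger, per audience; at scale this adds up to millions
# of evaluations per day, which is why the lookup must be fast and centralized.
print(decide("RSN-East", "zip:19103", datetime(2024, 5, 1, 20, 30), policies))  # BLACKOUT
```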
Comcast Technology Solutions (CTS) is uniquely positioned to enable Operator content switching requirements by leveraging our Linear Rights Management (LRM) solution. It is provided as a SaaS (Software as a Service) offering, running in the public cloud, minimizing the need for physical hardware, and making it quick to start implementing and using. We normalize all sources that drive our customers’ systems to a standard SCTE 224 format and then leverage a backend rules engine and decision manager to enable the switching of video content from a centralized control system. It is designed with an easy-to-use, web-based UI that enables operations personnel to track, audit, and make changes as necessary. We integrate seamlessly with your video stitching infrastructure or can provide that functionality as a service ourselves. As your needs change over time, it is simple to make changes and roll them out immediately across your whole network.
Read More
Customer Connections
Dear Valued Customers,
When I started last January, I had hoped to meet most of you personally, but obviously this wasn’t possible. In place of in-person meetings, your sales and account teams have been connecting virtually to help you with your business needs. And, we have hosted several educational webinars, lunch and learns, and our CTS Connects Summit in November to share the latest in ad tech along with our Comcast Advertising partners FreeWheel and Effectv.
2020 was a year unlike any other. We are seeing buy- and sell-sides working together more closely than ever before, with the creative taking center stage. Comcast Technology Solutions is here to help you succeed in this new environment with creative management, media activation, and automation technologies that will power the next generation of advertising technology.
In 2021, we are dedicated to continuing to connect with you to ensure we’re exceeding expectations as your trusted partner and keeping you informed of new products and services. To this end, we are pleased to share our new Customer Connections newsletter. Each month, we will bring you the latest news, data, and developments from Comcast Technology Solutions and the industry. We hope you’ll find it an informative tool to help drive results and get the most out of our solutions.
Thank you for choosing us as your advertising technology partner. We look forward to working with you this year and to continuing to provide the technology and solutions your business needs to adapt and grow.
Wishing you a happy and healthy 2021!
Cheers,
Simon Morris,
Head of Sales, Advertiser Solutions
NEW TECHNOLOGY MAKES AD DELIVERY EVEN EASIER TO USE
Comcast Technology Solutions is proud to introduce a new technology enhancement that further simplifies our Ad Delivery and Creative Management tools.
The WalkMe application provides end-to-end support for a quick and easy product experience.
WalkMe is designed to enhance your experience and streamline all the activities and tasks your team performs every day, across the application. Through on-screen step-by-step instructions you can immediately find answers to your questions and get your orders out quickly.
Try it out for yourself or request a demo.
INDUSTRY LEADING POST-PRODUCTION & VERSIONING
As you know, post-production and asset customization are often the last steps in the production process, but can end up impacting the success of your campaign. If you don’t have the right message, the right version, encoding or accessibility baked into your spot, you could end up missing out on vital opportunities.
Comcast Technology Solutions makes post-production easy with our Production Services. Voice-overs, reslates, versioning, closed captioning and more are done in our in-house studios. All assets can then be sent to thousands of linear and digital destinations automatically.
And, when combined with our centralized ad management platform, you can simplify and unify creative, talent, and distribution workflows across channels. Plus, get creative intelligence into how your ad performed, allowing you to further personalize, improve targeting, and ultimately drive greater ROI.
Let us help you with your post-production needs. Learn more about Production Services.
Read More
CTS Connects: Allison Olien Talks Business Continuity
In this video, Allison Olien, VP & GM of Communication and Technology Provider Solutions, discusses the importance of business continuity for her customers and how Comcast Technology Solutions is supporting customers.
Read More
Video Trends Vlog Episode 6: On Virtual Channels
Content providers and programmers are always on the lookout for new ways to monetize their content. Non-linear, on-demand streaming experiences may seem to dominate the conversation, but it would be completely wrong to interpret the rise of on-demand as a consumer rejection of the linear, “always on” broadcast experience, which is more popular than ever. More accurately, it’s a story about the increasing power of consumers to choose what they want to watch. What if you could put them together, and launch a channel that’s always full of stuff that a specific viewer wants to watch? It’s a compelling offer that’s now more attainable than ever.
Comcast Technology Solutions gives businesses a way to implement virtual channels that complement non-linear experiences with cool new niche channels that can attract and maintain like-minded audiences. Keep watching as Peter Gibson, Executive Director of Product Management and Joe Mancini, Director of Product Development talk with Matt Smith, Executive Director of Business Development and Strategy about how, with the right technology approach, businesses can quickly and cost-effectively augment their consumer offerings with tailored linear experiences.
Read More
Video Trends Vlog Episode 5: On Content Delivery — Getting the content to the user
How you deliver content to the user can have big implications on quality of experience. How fast, and reliably, content is delivered is often a major factor in signal interruptions, latency issues, and buffering.
In episode 5 of our series, Peter Gibson, Executive Director of Product Management, and Joe Mancini, Director of Product Development, join Matt Smith, Executive Director of Business Development and Strategy, to discuss getting the content to the user, finding the best performance for your content, and the value of the Comcast experience and infrastructure for delivery.
Read More
Video Trends Vlog Episode 4: Content Monetization Strategies and Technologies
New features and greater interactivity add up to a richer experience for viewers – and stronger returns from your investments in content and technology. To maintain an operation that evolves with audiences, it’s important to stay centered on what experiences you want to provide to consumers – and how to make those experiences more relevant at the individual level. It’s not just about enabling content, but how to blend monetization approaches and merchandising to optimize the revenue potential of every program.
Join Matt Smith, Executive Director of Business Development and Strategy, Joe Mancini, Director of Product Development and Peter Gibson, Executive Director of Product Management as they talk about content management strategies and technologies that drive stronger profitability for multi-platform video commerce.
Read More
Video Trends Vlog Episode 3: Go-To-Market Strategy
2019 is proving to be another huge year for the industry, as some of the biggest entertainment brands are poised to launch new streaming and on-demand destinations of their own. Where business models used to be split up more clearly between subscriber-based approaches (SVOD), ad-supported models (AVOD), or shopping-cart transactions (TVOD), businesses are pivoting to a hybrid approach with the flexibility to incorporate all three – with the power to shift based on viewer insights and consumer trends.
In this video, Peter Gibson, Executive Director of Product Management and Joe Mancini, Director of Product Development both share some insights on how to go to market with an advanced, cost-effective technology plan that helps companies reach revenue targets.
Read More
Video Trends Vlog Episode 2: Getting Nerdy With SCTE 224
SCTE 224 – known affectionately as “Scuddy 224” – refers to an expanded Event Scheduling and Notification Interface (ESNI) standard that was launched in 2015 by the Society of Cable Telecommunications Engineers (SCTE). Simply put, SCTE 224 was borne from the increased need for metadata to provide a deeper set of instructions on how and when a piece of content can be used.
SCTE 224 expands on previous standards, enabling content distribution based on attributes that are now critical to our mobile world, such as device type and geographic location. Audience-facing electronic programming guides (EPGs) are also created with this communication in order to set accurate expectations with viewers as to what’s going to be available and when.
Listen to Joe Mancini, Director of Product Development for Comcast Technology Solutions, explain why it’s so crucial to implement a way to manage the increasing complexity of linear rights across a complex sea of device types, delivery platforms, geographies, and legal requirements.
Read More
Video Trends Vlog Episode 1: Live and VOD Trends
A consistently fresh blend of live and on-demand content puts businesses on a strong path to attract viewers and keep them engaged. We’re seeing a shift in the industry where more content is being served via terrestrial delivery, while still working to preserve a broadcast-quality experience – an experience that’s not just high-quality, but a consistently reliable one regardless of the viewer’s device type or location.
Join Matt Smith, Executive Director of Business Development and Strategy, as he sits down with Peter Gibson, Executive Director of Product Management and Joe Mancini, Director of Product Development in the first of a five-part series of short videos where they cover some of the biggest VOD trends and critical business challenges for the video industry.
Read More
Channel Origination Overview
Channel origination has come a LONG way in just the last two decades. Allison Olien, Executive Director at Comcast Technology Solutions, provides a peek behind the curtains to see what video delivery used to be like, how it looks today, and where it's headed.
Read More
Metadata Makes Linear Video Exciting Again
Linear streaming services and alternative video distribution paths continue to increase in both diversity and reach. So, where is the next-level growth catalyst for linear video going to come from over the next 10–15 years? What’s going to keep it attractive in the eyes of viewers? It’s all in the (meta)data. The metadata associated with video can power complex use cases, enable more intelligence and automation, and extend users’ video experience beyond their first screen. Let’s dive deeper into some of these use case examples.
Read More
Regionalized/Time shifted Playout, Affiliate Distribution, RSNs, and Player Entitlements
Managing a broadcast network at scale is a difficult task even for the most knowledgeable broadcaster. One of your biggest challenges is efficiently and economically creating your live channel without having to invest in unnecessary infrastructure. This is especially true if your channel requires significant time shifting or regionalization. Add in requirements for Regional Sports Networks (RSNs), Local Affiliates, time shifted channel playout, the transition from satellite distribution to IP, TV Everywhere (TVE)/Over The Top (OTT) and you’ve got a handful.
Read More
Taking Control of Regionalization with SCTE 224
Media and entertainment companies around the world deliver content to all screens that spans multiple time zones and regions. To create a more compelling viewing experience and keep viewers watching for longer, they may deliver time-delayed content across time zones and allow for local programs to be inserted in the regions they serve. There is a growing trend to add greater regionalization capabilities and make video consumption even more relevant to consumers in different geographies. The objective is to increase revenue from advertising and improve audience retention over time.
As the level of sophistication around controlling how a channel should be presented to the viewer increases, so too does the level of complexity. This covers a range of scenarios for how and when we want regions to behave differently from each other.
Examples include:
Event blackouts for viewers in regions that are not entitled to see them;
Content replacement using streams from other feeds;
Content insertion made specifically for a region that includes local native languages;
Dynamic ad insertion; and,
Handling cases where viewers travel outside of their local area but still want to watch their home channel on a mobile app.
Enabling Technology
As an industry, we can enable greater regionalization and have control of the process by using SCTE 224, an Event Scheduling Notification Interface (ESNI) standard that lets us describe what the audience for a channel can see over time. In other words, it creates the rules we need to control content access and therefore lets us address the complexities of managing regional content.
For a system to be practical, particularly at scale, it should be controlled centrally. This way, rules we want regional participants to follow can be pushed out to each region, so they know in advance what to do for each channel. Since SCTE 224 gives us a common format, the provider can automate its responses to these rules.
SCTE 224 delivers trigger events in advance of the live stream for each channel. Each region is provided with its own specific set of trigger events it will use to implement the prescribed operations. These trigger events may be shared with other regions or may be unique to a region. SCTE 224 gives us the power to make these determinations. It also lets us make changes as needed and post updates to affected regions immediately.
For the content originators, the result is that all content access rights are controlled centrally and made available to participants in a common form. This makes execution less prone to error since a common format is used by all participant regions.
While SCTE 224 defines how we want to manage the regions, we also need to associate this with video in the channels themselves. This is achieved using SCTE 35 in-band triggers — trigger data that is embedded in the stream carrying the channel. Triggers define exactly when an event needs to take place. When an SCTE 35 trigger arrives, it can be matched with the rule provided by SCTE 224 to ensure the system takes the correct action at that time. The benefit of this method is it accurately aligns the rules we want to enforce with the video and creates a clean viewing experience for consumers.
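As a purely illustrative sketch: real SCTE 35 messages are binary splice information sections and real SCTE 224 documents are XML, so the dictionary structures and signal IDs below are invented. The matching step can be thought of like this, in Python:

```python
# Rules delivered to this region in advance via SCTE 224, keyed by the
# signal id the policy told the region to watch for (hypothetical values).
rules_for_region = {
    "event-4711": {"action": "SWITCH_TO_ALTERNATE", "alternate_source": "feed-west"},
    "event-4712": {"action": "RESTORE_NETWORK"},
}

def on_scte35_trigger(signal_id: str, rules: dict) -> dict:
    """When an in-band SCTE 35 trigger arrives, look up the pre-delivered SCTE 224 rule."""
    rule = rules.get(signal_id)
    if rule is None:
        return {"action": "PASS_THROUGH"}  # no matching rule: leave the channel untouched
    return rule

print(on_scte35_trigger("event-4711", rules_for_region))
# {'action': 'SWITCH_TO_ALTERNATE', 'alternate_source': 'feed-west'}
```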
Enabling Workflows with Comcast Technology Solutions
Comcast Technology Solutions is uniquely positioned to enable practical and powerful solutions for regionalization in support of hundreds of channels and all the regions in your network. This includes the ecosystem used to enable the system at each regional participant so that IRDs switch to the correct sources, blackouts are applied, DAI is triggered, and users watch the right variant of a channel for their current location.
Our solution is provided as a SaaS (Software as a Service) offering, running in the public cloud, minimizing the need for physical hardware, and making it quick to start implementing and using. By normalizing all sources that drive your system to a standard SCTE 224 format, we make it easy to set up and fine-tune each region. This includes defining regions by zip code, setting the rules for channel access, and managing interaction with dynamic ad insertion. As your needs change over time, it’s simple to make changes and roll them out immediately across your whole network.
With Comcast Technology Solutions, you have a powerful and flexible way to implement regionalization that can be managed from anywhere with an internet connection.
Read More
Video Quality: Table Stakes for Audience Loyalty
“When consumers are paying for something or receiving video from a big-brand media company, they expect a premium high-quality experience. It has to be as good as TV, on every platform and device – and that’s our main priority in the years ahead.”
The above quote comes from one of the executives who participated in our newest research project with MTM (which is available at the bottom of this page). It’s not exactly a revelatory statement, but I’d argue that in my years with this industry I haven’t seen any evidence that consumer expectations for premium quality end with premium services. It’s quite literally the cornerstone of every video-centric service: if the video quality isn’t up to par, the best marketing in the world won’t matter, and frankly neither will the best content.
In Hindsight, Just How Clear Will 2020 Be?
This decade has started off with events that have upended a lot of trend lines – or sent them skyrocketing, as the case may be. Deloitte Insights recently published the fourteenth edition of their Media Insights survey, and it provides some fascinating data around how the COVID-19 pandemic has impacted streaming services. We already know that usage in every category shot through the roof – that’s been one of the biggest media stories of the year, as giant bandwidth consumers like Netflix and YouTube even agreed to temporarily reduce their usage by double-digit percentages by throttling back on 4K and hi-definition programming.
From a quality standpoint, it’s all a stark reminder to media companies that they’ve got to be ready for anything. For many consumers who purchase subscriptions and on-demand content, or who are supposed to be influenced by advertising, the pandemic has had an impact on both spending habits and budgets. Set against the backdrop of increased usage and cancel-at-any-time subscriptions, it’s clear that media destinations have to demonstrate more value on a daily basis in order to maintain screen time with consumers who are looking to cut costs.
Strong initial offers, incentives, or tentpole “big-event” content can do wonders for short-term gains, but the actual viewing experience has to be consistently on point in order to cultivate short-term watchers into a growing community and to build audience loyalty. It really is the sword that media companies need to keep sharp on both edges:
On one side is content – simply put, providing the things that people want to keep watching; not just to attract viewers, but potentially advertisers as well.
On the other side is the fact that on a global scale, you’ve got to serve that content to an ever-changing planet full of screens, most of which are mobile.
A content and distribution plan that’s built for quality is the first step for being competitive in the coming decade. From there, successful audience building needs a continued dedication to relationship building and performance optimization in order to get the most value out of every creative and operational investment. We reached out to a cross-section of our peers to see what that looks like.
New Paper: The TV 2025 Initiative
Comcast Technology Solutions has partnered with international research and strategy consulting firm MTM to present the TV 2025 Initiative: a collection of industry perspectives on the continued transformation of the video industry, and thoughts about what the future may hold for streaming services as we embark on a new decade.
Senior executives from over two dozen large media and entertainment companies participated in the research, sharing their current experiences and future expectations over a wide range of topics, tackling questions such as:
Now that streaming has finally come into its own, what are others in the industry expecting, from a quality and scale standpoint, for streaming’s near-term (and long-term) future?
What best practice trends are developing? How are media companies building a mix of internal teams and partner-driven expertise? How will your growth impact your technology’s ability to deliver broadcast-quality experiences?
At the rate of change in just the first part of this year, 2025 already feels like it’s practically around the corner; what do today’s executives think will be driving industry transformation? What can be learned in order to chart a strong course from here to there?
Get the paper right now, right here.
Read More
What’s Your Five Year Plan?
It’s been a long quarter this week.
At least around our virtual office, that’s been a general consensus pretty much all year long. It’s a feeling that gets stronger as we look back to the beginning of the year. Remember, way back in January? Before the concept of “social distancing” was in our daily lexicon? Or “virtual office” became a default setting? Remember when 2020 just looked like the start of another busy decade? Today, it feels different. It is different. I don’t think it matters where your skills or services sit in the overall scheme of media services and technologies. Even a pure entertainment destination feels more like an essential service than ever before.
But even as every week has a make-or-break feel to it, from content performance to network load to playback quality, all it takes is a little five-year perspective to recognize an inescapable truth: 2015 was nothing like 2020. And 2025 is likely going to be a whole new ballgame in and of itself. We’re all working hard to pivot with our audiences to stay relevant; but those long-term roadmaps are still as important as ever.
More Devices, More Power, More Viewing
At the end of 2019, Deloitte published its first Connectivity and Mobile Trends survey, which provides some powerful insights for advertisers, agencies, and media companies as they define their technology roadmaps and partnerships for the coming decade. It’s no secret that the upward trajectory of video advertising is placing more pressure on every mechanism and process that brings an ad from production to multi-platform consumption.
Deloitte’s data shows that within an average U.S. household, there are currently around eleven connected devices, at least seven of which have screens for video. This is a huge leap from just a few years ago: in 2014, a similar survey by Ericsson showed an average of around five connected devices. The focus of the survey wasn’t just about what’s going on today, but also to see how connectivity impacts consumption, and how behaviors might change as data gets faster.
Think about the basic paradigm shifts we’re dealing with in our workplaces and homes, and how these shifts impact content consumption. In our own research (more on that below), industry professionals we contacted made particular note of the fact that a lot of the changes in consumption habits, spurred on by quarantines and social distancing, aren’t going to go away. Coupled with the faster, more capable networks of the not-too-distant future, more viewers have first-hand experience of what streaming has to offer. But what happens when “connected” becomes a foregone conclusion for even more devices? Certainly, the media market is going to get even more diverse, driving a real need for more automation, better data, and more emphasis on personalized experiences that win screens.
Streaming’s Growing Competitive Landscape
In just the past year, premium streaming content has blown up, with major content owners like Disney, HBO, and NBC Universal (also a Comcast company) bringing their wares to bear with an in-house offering. Streaming as a whole has really come into its own, with the experience itself commanding more respect (and investment) than the “bolted-on” feel of older iterations that just barely satisfied the goal of providing “something in the OTT space.”
With so many big media names launching new offerings of their own, it’s important not to overlook the increasing potential for smaller, niche channels to thrive. In fact, as we all spend more time focusing on customer journeys and personalization of content, there’s never been a better time for new audience-specific destinations to partner up with the platforms and technology providers needed to stand up a brand that’s new, differentiated, and advertising-supported.
New Paper: The TV 2025 Initiative
What’s your five year plan? Comcast Technology Solutions has again partnered with international research and strategy consulting firm MTM to present the TV 2025 Initiative: a collection of industry perspectives on the transformation of TV and thoughts about what the future may hold for streaming services as we embark on a new decade.
Senior executives from over two dozen companies participated in the research, sharing their current experiences and future expectations over a wide range of topics, tackling questions such as:
Now that streaming has finally come into its own, how are you managing your services? Are there differences between approaches based on complexity, or based on a service’s relationship with a broadcast service?
What best practice trends are developing? How are media companies building a mix of internal teams and partner-driven expertise?
At the rate of change in just the first part of this year, 2025 already feels like it’s practically around the corner; what do today’s executives think will be driving industry transformation? What can be learned in order to chart a strong course from here to there?
Get the paper right now, right here.
Read More
Get your head in the clouds
Make “me-time” more accessible to your audience, regardless of the situation.
No matter how your business sees the future, keeping pace with evolving viewing habits is critical to growing your audience. Consumers now expect the ability to curate their own viewing experiences, including where, when, and how they watch.
According to Deloitte, on average more than 60% of Millennials and Generation Z consume streaming video daily. By comparison, less than 30% of Baby Boomers stream video daily.
Therefore, it’s imperative to make the user experience seamless across all screens to reduce user confusion and friction when using your service. This means linking to on-demand content and functionality from the broadcast experience with services like instant restart, and making all devices behave the same. Now is the time to streamline the viewing experience by converging broadcast and digital to reduce costs and overheads.
Innovative business models can help make your offering accessible to ever-more niche groups of users. For example, if the data says your millennial viewers are only interested in your reality TV shows, then make a reality TV show bundle. If Baby Boomers want classic movies or news, do likewise.
While people adapt their viewing habits, and adjust to new services, at different paces, technology keeps evolving. Historically, technology has required linear and digital businesses to be run as two separate units. But by converging broadcast and OTT systems, broadcasters and operators can significantly reduce the doubling-up of effort, drive savings, and increase operational efficiencies.
With a Cloud TV platform, you can put an end to consumer confusion, and create engaging user and brand experiences, while providing the tools needed for business model innovation.
Maximize reach and revenue with a complete cloud-based solution that works on any device, with any business model, and launch unified user experiences across all devices.
Read More
Automobile Advertising: Keeping Dealers In Alignment
Automobile manufacturers advertise the experience of driving and using a car. Automobile dealers do too, but more importantly they advertise the experience of buying and owning that car from their dealership. Together, both manufacturer and dealer are working to court new car buyers in a concerted effort to win loyalty on multiple levels – and if everything stays on track, over multiple car purchases. In fact, according to a 2019 white paper by AutoPoint*, the post-purchase experience means 52% more to brand loyalty than in any other industry. So, when it comes to advertising, dealers need all the love they can get from the carmakers they represent.
As advertising gets more complicated, and ad destinations get more diverse, the need for manufacturers and dealers to be in alignment has never been greater. At the top, brand continuity and marketing quality benefit immeasurably by helping the ad operations at the dealer level to operate smoothly. And, it must be stated that the sophistication of those individual dealers varies from shop to shop. An individual outlet needs the same media as a high-powered dealer network with multiple brands and locations, and needs to get their ads completed and in-market just as quickly, but likely with a smaller budget. The easier it is for each constituent in the advertising workflow to find, share, and manipulate ad creative, the better.
An “Internal Collaboration Engine” for Advertisers
For one of our clients, a major global automotive brand, improving the speed and accuracy of media management with dealers is a two-way benefit: for the brand, it’s easier to keep campaigns on course, and for the individual dealers it’s improving their ability to get their work done and delivered on time.
Big-budget creative is filmed and created by the manufacturer for a wide variety of vehicles, from sports cars to sport-utility vehicles. Dealers need this footage as the foundation for their own spots by adding their own messaging: promotions, incentives, sales floor footage, service center specials, etc. Like every advertising effort, every commercial must be transcoded into a multitude of formats. Multiplied across products, and with multiple campaigns active simultaneously at any given time, it gets really complicated for dealers to find and access the media files they need.
“As we explored how to improve this internal relationship, we recognized this fantastic synergy between a problem our partner needed solving, and a similar problem we had already solved for internally in a completely different area,” explains Simon Morris, Senior Director of Product Sales, Advertiser Solutions. “We had built a tool that made it easy for our broadcast partners to find our ad creative easily through a secure cloud-based tool, and then grab the right version with the correct aspect ratio and coding. So, to create something for automotive dealers, the foundation was already there, and that meant it was easy to organize and display content in a way that worked for our automotive partners. We continue to focus on building tools that solve wider industry needs, and in particular ones that are focused on driving operational efficiencies, speed and transparency.”
The portal isn’t just a static “box in the sky,” but also a vehicle to improve communication within the internal ad ecosystem. No one wants to use creative that might be outdated, and all content needs to be in accordance with talent and other contractual rights associated with it. Integrated with the Ad Management Platform, the new portal accomplishes this, giving dealers a straightforward way to get automatic updates and notifications of newly available creative.
New Article: The Great Car Chase
The manufacturer – dealer relationship is not unique to the automobile industry. Many industries have similar advertising relationships with franchisees that stretch globally: consumer-packaged goods companies, restaurant chains, or other complex product creators whose ongoing consumer relationships will be nurtured and maintained at a local level.
In our new paper, The Great Car Chase, we use the automotive industry as our vehicle to examine advertising complexity at the highest level, and how to gain control of it. Global companies can – and must – shift to a centralized architecture that improves both transparency and control across all of their global ad traffic. In doing so, they can also position themselves to incorporate new technologies that will deliver tailored content in real time; elevating the performance of every campaign. Read it to learn more about how to:
Maximize ROI by optimizing through a unified solution that promotes collaboration and informs better decisions across the entire workflow to elicit stronger, positive buyer responses
Optimize on the fly to meet tight deadlines, handle versioning needs automatically, and respond quickly to changing opportunities
Simplify access and management of creative assets for local dealerships.
Deliver campaigns worldwide using one centralized platform.
Take back control and become the conductor of your brand across any market, any screen.
*Customer Loyalty Drivers – Current and Future, AutoPoint, April 2020
Read More
The High-Performance Content Catalog: INSIGHTS TO GET THERE
Media destinations are largely defined by what’s in their catalog of content, but what really matters is what audiences are actually watching. It’s an obvious statement that premium content commands top dollar because it drives more screen time. But what’s going on with the entirety of your offering? Title-specific analytics are crucial for valuating a program, but programmers and providers alike can really benefit from a more holistic view of how larger parts of their catalog are working for them (or not working, as the case may be).
Smarter decisions around content aren’t just about knowing what to buy and what to create. Understanding what audiences aren’t watching is as valuable as knowing what’s doing well. We’ve all experienced going to a service we use regularly and scrolling through suggested programs similar to ones we’ve previously viewed (or whatever content needs an ROI boost). After that it can seem like an exercise in diminishing returns, but most of us have dug deep into a programming guide only to find something fantastic that we didn’t even know was offered. With some better-informed merchandising or marketing moves, the entire catalog’s ability to create stickier audiences improves.
To provide these macro views across a wide spectrum of content engagement, Comcast Technology Solutions is introducing Insights – a new service to give companies a holistic view into the overall health of their content catalog.
INSIGHTS: SOME NEEDED CONTEXT FOR YOUR CONTENT
“Insights is a new service that is focused specifically on business impact,” explains Peter Gibson, Executive Director of Product Management for Comcast Technology Solutions. “So, it’s less about the number of times a viewer streamed a piece of content, or how quality is impacting playback, and more about how your content is tracking with audiences as a whole.” Insights provides answers to questions like:
Are audiences watching what you think they’re watching?
What’s getting binge-watched? And for how long per session?
Have you become a viewing habit for your audience?
By combining a historical and a daily view of your audience’s actual content consumption, you can make practical decisions based on real-world context, with less conjecture. For example, perhaps you’ve been purchasing a lot of dramatic programming, but your audience is watching a lot of comedy. Insights can help provide clarity where parts of your catalog might be underutilized due to lack of discovery, which might spur some additional advertising or new merchandising decisions. Or, it might show that content creation and acquisition dollars could be reassessed based on the holistic impact they’re having on audience engagement and retention.
HOW INSIGHTS WORKS
Insights is a new feature within Comcast Technology Solutions’ Cloud Video Platform. Information is collected on the server side, which translates into minimal client-side implementation work to get Insights up and running. Once it’s up and running, Insights combines historical data with current performance measurements to provide context over time. The Insights dashboards include a wealth of data not normally found within analytics solutions. Among the multitude of benefits:
Binge Tracking
Understand how much audiences are watching in one session, and for what period of time. Clearly we ALL want to encourage binge-watching. Insights provides some needed clarity around not just which shows are finding their audience, but also how those audiences are watching with tools like Device and Content Insights.
Measuring Engagement
How much do audiences really like your service? Are audiences coming back actively and consistently (a sticky, engaged viewer), or is it just something that they’re getting billed for (a high churn risk)? Engagement measurement can lead to opportunities to re-jigger marketing strategy or direct outreach / offers to viewers.
Attention Indexing
Attention Indexing looks at your entire catalog to see how audiences are really watching. For instance, a new show might get a lot of play, but is it getting played in its entirety? Or are folks tuning in to a longer movie and only playing part of it? Catalog insights might determine that folks are clicking on content because it’s merchandised, but they don’t stick with it to completion. Lots of starts with low completion rates: that’s a problem that most out-of-the-box analytics tools aren’t designed to illuminate. A simplified sketch of this kind of completion-rate math follows.
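As a back-of-the-envelope illustration only (the titles and numbers are invented, and this is not how Insights itself computes its metrics), a completion-rate summary can be as simple as:

```python
# Each playback record: (title, seconds_watched, title_runtime_seconds).
plays = [
    ("New Show E1", 2700, 2700), ("New Show E1", 300, 2700),
    ("Long Movie", 1200, 7200), ("Long Movie", 900, 7200),
]

def completion_report(plays):
    """Starts and average completion rate per title; many starts with low completion
    flags content that gets discovered but not finished."""
    stats = {}
    for title, watched, runtime in plays:
        starts, total = stats.get(title, (0, 0.0))
        stats[title] = (starts + 1, total + min(watched / runtime, 1.0))
    return {t: {"starts": s, "avg_completion": round(c / s, 2)}
            for t, (s, c) in stats.items()}

print(completion_report(plays))
# {'New Show E1': {'starts': 2, 'avg_completion': 0.56},
#  'Long Movie': {'starts': 2, 'avg_completion': 0.15}}
```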
MACRO-LEVEL UNDERSTANDING FROM DAY ONE
“Insights can immediately help you make content catalog merchandising, marketing, and policy decisions with data based on your historical video activity,” explains Gibson. “Plus, you can easily add it to your existing analytics solutions if needed.” In the space between how you want your content to be used and the reality of your audience’s behaviors, Insights gives you an added layer of business intelligence that brings you to a deeper understanding.
Learn more about Insights, our CTSuite for Content and Streaming Providers, and our CTSuite for MVPDs and Operators.
Read More
Ad Management for Political Campaigns
As the calendar slips into the start of a new decade, the 2020s are poised to roar for advertisers ahead of one of the most anticipated U.S. election seasons in memory.
Need proof? Check out what the industry experts are predicting. Kantar Media forecasts political campaigns for U.S. federal office will spend $6 billion on paid media placements this year. Broadcast is projected to take in $3.2 billion and cable $1.2 billion, according to the Kantar study. The study did not include ads sponsored by Political Action Committees (PACs).
Of the $6 billion in political campaign spending this cycle, Kantar expects 20%, or $1.2 billion, to go to digital.
According to Cross Screen Media, there could be eight million broadcast airings of political ads in 2020, an increase from 5.5 million in 2018. A lot of the political ads will air on local newscasts. According to a report from Kantar CMAG, political advertising took up 43% of available slots on local news in battleground markets, compared to just 10% at the start of the season.
eMarketer estimates US advertisers will spend $42.58 billion on digital video placements overall, amounting to 28.1% of digital ad spending. Most of those placements will be bought programmatically, with 82% of US digital video ad spending forecast to be transacted in automated channels next year.
Why are experts predicting such large advertising spends this season? In addition to a banner ad year, experts are also expecting a tremendous voter turnout.
Turnout in 2018 was the highest it had been for a midterm election since 1914, according to University of Florida political science professor Michael McDonald, a nationally recognized elections expert who maintains the voter data site United States Election Project. Next year, he predicts, 65%-66% of eligible voters will turn out, the highest since 1908, when turnout was 65.7%.
For political advertisers to capitalize on their advertising opportunities and reach voters at the right time and right place, they need a campaign built on speed, mobility, quality, reliability and scale. To stay ahead of the polls and make an impact fast, you need to get your message to voters faster, at higher quality, without worries and hassles.
“Video advertising is essential to modern political campaigns, helping to build awareness of both candidates and issues. Fast, efficient placement of broadcast-quality advertisements is essential for a high-functioning campaign,” said Richard Nunn, Vice President and General Manager of the Ad Platform at Comcast Technology Solutions.
The CTSuite offers advertisers a new solution for political clients designed to get their message to potential voters quickly, and across more touchpoints than ever before.
Manage campaigns, ad spots, payments, closed-captioning, versioning and distribution all from a mobile platform, “on the go.” Fully transparent order entry and management provide views into the lifecycle of spots from upload to final delivery.
Cast your vote for efficiency...
and learn more about ad delivery with the mobility and scope to keep pace with your passion.
Read More
Why mobile should drive your CDN strategy
A strong relationship between content creator and consumer is built on reliable, consistent, playback experiences. Don’t leave your customers stranded without the connection they need to the content they love. A global world demands consumers have universal access where, and when, they want it.
According to eMarketer, 75% of all video plays are now on mobile devices. Mobile video traffic accounts for two-thirds (63%) of all mobile data traffic, and will grow to account for nearly four-fifths (79%) of all mobile data traffic by 2022 – representing a 9X increase since 2017, according to a 2019 report by Cisco. By 2022, Cisco projects that mobile will represent 20% of total IP traffic, the number of mobile-connected devices per capita will reach 1.5, the average global smartphone connection speed will surpass 40 Mbps, and smartphones will account for more than 90% of mobile data traffic.
In fact, consumers in the United States spend nearly as much time watching videos as they do working, nearly 40 hours per week, according to Deloitte.
As viewing experiences become increasingly mobile, delivering content seamlessly across vast geographic landscapes is paramount. Content creators must ask whether their CDN is smart enough to always chart the shortest, cleanest, most efficient, and highest-quality pathway to every screen.
Meet your audience where they are with a multi-CDN strategy
Depending on the content delivery strategy utilized by the creator, consumers may, or may not, be able to enjoy the highest-quality viewing experiences.
Today, more video content is being watched than ever before, with 85% of all internet users in the U.S. watching online video content monthly on their device of choice (Statista, 2018). The global video streaming market is estimated to be worth $124.57 billion by 2025, according to Market Watch.
Speed alone will not ensure your content gets where it needs to go as quickly as possible. The edge-tier of every CDN provides points of presence (PoPs), where your audience and your content interact. By employing more than one CDN, you get more PoPs for your programming. Additional CDNs can widen your delivery footprint and allow you to serve customers from many more locations and over different network routes – this is especially true when attempting to deliver content to a global audience.
The variability of performance is another reason to utilize a multi-CDN strategy. Individual CDNs vary in their performance across different locations, networks, and times of day. No single CDN can perform well in every area of the world. Quality of experience will differ depending on a viewer’s proximity to points of presence, as well as on capacity and utilization.
A multi-CDN media delivery strategy ensures downloads are seamless and downtime is not a concern, no matter your location or device.
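To make that idea concrete, here is a minimal sketch (in Python) of how a multi-CDN router might pick a provider for a given region from recent performance measurements. The provider names, metrics, and scoring formula are illustrative assumptions, not a description of any particular CDN-switching product.

from dataclasses import dataclass
from typing import List

@dataclass
class CdnSample:
    """One recent performance measurement for a CDN in a region (hypothetical fields)."""
    cdn: str
    region: str
    throughput_mbps: float  # higher is better
    error_rate: float       # fraction of failed requests, lower is better

def pick_cdn(samples: List[CdnSample], region: str) -> str:
    """Choose the CDN with the best recent score for the viewer's region.

    The score is a simple illustration: reward throughput, penalize errors.
    A production router would also weigh cost, capacity, and time of day.
    """
    candidates = [s for s in samples if s.region == region]
    if not candidates:
        raise ValueError(f"no measurements for region {region!r}")
    best = max(candidates, key=lambda s: s.throughput_mbps * (1.0 - s.error_rate))
    return best.cdn

# Example: two hypothetical providers serving the same region.
samples = [
    CdnSample("cdn-a", "eu-west", throughput_mbps=42.0, error_rate=0.01),
    CdnSample("cdn-b", "eu-west", throughput_mbps=55.0, error_rate=0.30),
]
print(pick_cdn(samples, "eu-west"))  # "cdn-a": its low error rate outweighs cdn-b's raw speed

The scoring formula is beside the point; what matters is that the decision is made per region and per moment, which is exactly what a single-CDN setup cannot do.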
Preparing for tomorrow today
Today, content is created for global audiences. According to YouTube, 80% of its users come from outside the U.S., and the service operates in 88 countries and 76 languages, reaching 95% of all internet users.
Ericsson’s November 2018 Mobility Report predicts video traffic will grow 35% annually through 2024, increasing from 27 exabytes per month in 2018 to 136 EB in 2024. This means video’s share of global mobile traffic will rise to 74% from 60% today.
As content creators look toward the future, how and where consumers view their creative will occupy increasing levels of mindshare when devising delivery network strategies.
Avoid the additional resources required to manage multiple partners independently, and leverage Comcast’s multi-CDN fabric to deliver your broadcast quality content to any device. Take advantage of the Comcast CDN and other pure-play CDNs using DLVR multi-CDN routing technology with direct interconnects to more than 250 global networks.
Learn more about our CDN Suite of services
Harness the full capabilities of the marketplace with advertising automation
Today, brands debate whether to prioritize broadcast or digital video channels in their advertising campaigns. But why choose? A blended, multichannel strategy can increase ROI by as much as 35%, according to the Advertising Research Foundation.
However, according to a survey by 4C and Advertiser Perceptions, a lack of full-funnel visibility and control led half of the marketers interviewed to name the inability to plan media seamlessly across platforms as a top challenge.
A typical Fortune 500 advertiser currently cobbles together solutions from dozens, or even hundreds, of media sources, technology partners and data providers. As a result, it’s nearly impossible to get a full picture of the media supply chain from ad creation to delivery.
While ads may still reach the end user, inefficiencies built into these workflows create costly redundancies and time-consuming manual processes, hindering an advertiser’s ability to take full advantage of the sophisticated targeting capabilities now available.
“To the outside world, yes, ads still get to their end user,” said Richard Nunn, VP and General Manager of Advertising at Comcast Technology Solutions. “It isn’t broken in its entirety. It works, and it has worked for 70 years. But the reality is that it doesn’t work well enough. The process should be faster, more seamless, more cost effective and free of errors.”
Create at the speed of automation
In the past it could take weeks and millions of dollars to manually create hundreds of video ad versions - matching creative with audience-level targeting. Today, advertisers can create hundreds of ad versions in under 60 seconds. Why is this necessary?
Technology is evolving to provide advertisers with ways to better understand viewers contextually and respond not just by serving up ads for more relevant products, but also to tailor the ads themselves to elicit more positive buyer behavior and support a better viewer experience.
“Being able to match targeting to creative execution on the fly is really the holy grail of marketing,” Nunn said. “It gives consumers the personalized experiences they’ve shown they want to engage with, and it reduces the complexity of execution by removing many of the manual interventions that were slowing advertisers down.”
Technologies like dynamic creative optimization (“DCO”) hold the promise to a new generation of more personalized ad service – a win-win for audiences and advertisers alike. But in order to fully realize this dream, advertisers need more than a DCO solution.
They also need to adopt a more unified approach to enable faster, more seamless management of creative, media buy and distribution. Many advertisers and their agency partners still rely on emails, spreadsheets and individual vendor platforms to communicate media and creative availability, delivery and reporting. A centralized, automated ad-management tool where media buys are identified, filled and tracked is an essential component of sophisticated ad targeting and versioning, allowing personalized spots to be delivered in minutes, not days.
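As a rough illustration of why automation matters here, the sketch below enumerates the creative versions implied by even a modest targeting plan; the segment names, durations, and endpoints are invented for the example and do not reflect any particular platform.

from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class AdVersion:
    base_creative: str  # identifier of the master spot
    segment: str        # audience segment the version targets
    duration_s: int     # cut-down length in seconds
    endpoint: str       # delivery destination (broadcast, CTV, mobile, ...)

def enumerate_versions(base_creative, segments, durations, endpoints):
    """List every version implied by the plan; the count grows multiplicatively."""
    return [AdVersion(base_creative, s, d, e)
            for s, d, e in product(segments, durations, endpoints)]

versions = enumerate_versions(
    "spring_sale_master",
    segments=["new-parents", "sports-fans", "recent-movers"],
    durations=[15, 30],
    endpoints=["broadcast", "ctv", "mobile"],
)
print(len(versions))  # 3 segments x 2 durations x 3 endpoints = 18 versions to traffic

Multiply those dimensions by real campaign counts and the case for automated versioning and a single system of record becomes obvious.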
“This isn’t the silver bullet, but it solves one of the big underlying problems – automation and trying to connect the dots in this world of converged channels and formats,” Nunn said.
Bringing together many of these “dots” – including distribution, creative customization and reporting – in one management tool helps restore advertiser control over an increasingly fragmented process. It empowers advertisers and brands to deliver near-real time customization to satisfy consumer expectations and drives measurable results.
Comcast Technology Solutions is reimagining the ad management and ad versioning processes by introducing automation – helping advertisers generate customized ad creative capable of driving a measurable increase in ROI.
Unique new ways to maximize classic VOD content
What’s old could be new again.
From music and fashion to television and movies, culture and content have a way of coming back around and finding new audiences — especially in an age where your new favorite (and long ago canceled) show is only a click away.
“Your 13-year-old may discover a TV series from the 80s like Knight Rider, or Miami Vice, and fall in love with it,” Comcast Technology Solutions’ Matt Smith said. “There are countless shows sitting out there that could provide value.”
Across the globe, there are huge back catalogs of videos and shows without an opportunity to shine. With the right technology, your favorite old show could find a new audience and help drive new revenue.
Today, 78% of people watch online videos every week, and 55% view online videos every day, HubSpot reports. Video is expected to make up 82% of internet traffic by 2021, according to Cisco, and content providers are searching for the next classic show to crossover to a new generation.
In the most eye-catching example, WarnerMedia paid a reported $425 million over five years to get Warner Bros. Television’s “Friends” away from Netflix, and now plans to offer it on its streaming service HBO Max, set to launch in spring 2020.
However, using library content to build a new platform isn’t a recent phenomenon. When cable channels first began to take off in large numbers during the 1980s and ’90s, retired broadcast network series served as their lifeblood. “The Andy Griffith Show,” the biggest hit sitcom of the 1960s, was among the most popular shows on TBS, and reruns of NBC’s procedural crime drama “Law & Order” turned A&E into a viewer destination, according to the Los Angeles Times.
Grow and curate your audience
By 2020 there will be close to 1 million minutes of video crossing the internet per second, and by 2022, online videos will make up more than 82% of all consumer internet traffic — 15 times higher than it was in 2017, according to Cisco.
With the Virtual Channels platform from Comcast Technology Solutions, you can grow an existing fan base — or create one from scratch — by pulling complementary content together into seamless experiences that like-minded viewers will keep watching. Launch a brand that is truly fan-centric. Or, aim for a particular demographic with crowd-pleasing programming.
Discover a new way to turn existing on-demand and live programming into linear, “always-on,” over-the-top digital experiences. Stitch together curated programming, then schedule and serve it to audiences seamlessly.
So, break out your pastel T-shirts and linen suits. Get your black Ferrari convertibles revved up. And, put on your favorite pair of Ray-Bans. There’s enough neon, and potential syndication gold, out there to bring in additional ROI like it’s 1984.
Learn more about how Comcast Technology Solutions makes it easier, and more affordable than ever, to add choice, value, and revenue to your video offerings.
CDN: Media Delivery for Gaming and Virtual Reality
Developers spend years crafting the newest games. When the next big release date comes, the last thing creators and consumers want is slow speeds and long download times.
Worldwide online gaming traffic reached 915 petabytes per month in 2016 and is expected to increase by 79% in 2019, according to Cisco Systems. As the scale of online traffic grows, so does the need for delivery networks to carry the content. The Cisco Visual Networking Index (VNI) forecasts a 36% compound annual growth rate in Content Delivery Network (CDN) traffic per month from 2017 to 2022. As file sizes grow exponentially, the need for CDNs to provide continuous, reliable and fast access becomes ever more critical. The number of end-users downloading content is also expanding quickly, making localized delivery that much more important as geographical demographics expand.
In addition, the appetite for gaming is expanding to eSports, virtual reality, and other new forms of media and entertainment, changing the way many think about these offerings. According to a 2018 survey by the Entertainment Software Association, 79% of gamers report video games provide mental stimulation, as well as relaxation and stress relief (78%). The role of video games in the American family is also changing, with 74% of parents believing they can be educational for children, and 57% indicating they enjoy playing with their child weekly.
The economics of eSports and VR
Esports is already big business. Thanks to advertising, sponsorship and media rights, eSports revenue is expected to reach $1.49 billion by 2020, according to Newzoo, a gaming industry analytics firm. North America will generate $409 million in 2019, the most of any region, the report found. China will generate 19% and South Korea 6%, with the rest of the world comprising the remaining 38%.
In July 2019, 16-year-old Kyle 'Bugha' Giersdorf, from Pennsylvania, was crowned king of Fortnite and took home the $3 million grand prize. The tournament, held in New York's Arthur Ashe tennis stadium, had a total payout of $30 million across all levels and ranks.
Earlier this year, Comcast announced it would build a $50 million eSports stadium in Philadelphia and partner with Korean telecommunications company SK Telecom to create a new eSports organization called T1 Entertainment & Sports, which will field teams in a number of different video games, including League of Legends, Fortnite, and Super Smash Brothers.
The global VR gaming market is expected to be worth $22.9 billion by the end of 2020, according to Statista. VR is expected to generate $150 billion in revenue by 2020, according to Fortune, and 500 million VR headsets are expected to be sold by 2025, according to Piper Jaffray.
When the true VR tipping point will arrive is the subject of much speculation, but one way the technology could penetrate the consumer market more deeply is by starting with public venues that let users get comfortable with it. A recent article in Variety discussed how Las Vegas is transforming into a kind of virtual reality hub, with VR arcades and location-based VR entertainment centers opening up in casinos and shopping centers. One of the first casinos to embrace VR was the MGM Grand, which opened a free-roam VR arena in cooperation with VR startup Zero Latency in September of 2017.
A 4K video can consume up to 10 GB of data per hour. A 4K VR experience will require several times that bandwidth. While transporting vast amounts of data, VR content providers, like game developers, need to solve the challenges of latency, quality and speed.
CDN, interactivity, and media delivery
The streaming of interactive content like games and VR differs from video streaming in one major way. While video streaming is a one-way delivery from servers to your device, games and VR are a two-way street. Every time a button is pressed, the signal must travel to the server where the experience is being run. Then, the signal from the server has to go back to represent the result of pressing the button.
A multi-CDN media delivery strategy ensures this process is seamless and downtime is not a concern. If one server goes down, the next is ready to compensate, serving content cached nearest to the user. This improves performance and reduces errors by offloading bandwidth from the origin.
The more servers, the better the odds a consumer is going to get the highest-quality speeds without lag.
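Here is a hedged sketch of that failover behavior: walk an ordered list of CDNs and fall back to the next provider when a request fails. The fetch function and provider names are placeholders rather than a real client API.

import random
from typing import List, Optional

def fetch_from_edge(cdn: str, asset: str) -> bytes:
    """Placeholder edge request; fails randomly to simulate an outage."""
    if random.random() < 0.2:
        raise ConnectionError(f"{cdn} edge unavailable")
    return f"{asset} served by {cdn}".encode()

def fetch_with_failover(asset: str, cdns: List[str]) -> bytes:
    """Try CDNs in order of preference until one responds.

    In a real multi-CDN setup the ordering would come from live performance
    data; here it is just a static preference list.
    """
    last_error: Optional[Exception] = None
    for cdn in cdns:
        try:
            return fetch_from_edge(cdn, asset)
        except ConnectionError as err:
            last_error = err  # note the failure and try the next provider
    raise RuntimeError(f"all CDNs failed for {asset}") from last_error

print(fetch_with_failover("game_patch_1.2.bin", ["cdn-a", "cdn-b", "cdn-c"]))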
Thanks to this two-way street, viewers and gamers can provide feedback in real time, join interactive events that coincide with live media, and unlock rewards within content.
While interactivity might have started in gaming, the tactic will likely carry over into nearly every kind of media audience. The interactivity of advanced advertising, where viewers can feed back in real time, or possibly join an interactive event, becomes increasingly attractive.
A 2017 report from Magna found interactive video ads drive 47% more time viewing a message compared to a non-interactive ad. Even if viewers don’t click, simply having the option to interact makes the ad 32% more memorable than non-interactive ads, the report found.
Learn more about how Comcast Technology Solutions’ global media delivery innovations can help reimagine your strategy, and prepare your business for what comes next.
Q&A with Allison Olien: How today’s opportunities can be tomorrow’s foundation for MVPDs
Allison Olien, Vice President and General Manager for Comcast Technology Solutions, collaborates with MVPDs and Content Providers to introduce the newest, most innovative offerings. She took some time to chat with us about how she sees the industry evolving for small and medium-sized MVPDs, and about new opportunities for growth.
Self-Service Advertising: Reimagining advertising in the U.S.
Self-service advertising models are bringing transparency, control and efficiency to the forefront of the international market. Here in the United States, by contrast, we typically opt for managed solutions – a distinction that is changing fast due to the rise of self-serve digital platforms.
Running campaigns through fully-managed platforms puts the responsibility and control in the hands of a third party. Someone else uploads creative, makes changes or optimizations, and sends back insights. For those with limited time or resources, this can be the right option.
However, for those with the appetite, a self-service advertising platform capable of navigating an entire campaign lifecycle, from planning to execution and reporting, presents major benefits.
We’ve seen the use of self-service demand- and supply-side platforms increase as knowledge and expertise have grown. Agencies and brands now use these platforms directly to deploy and manage advertising spend and activity.
Why Self-Service?
The self-service model empowers advertisers to create and change assets quickly, deliver ads in near-real time, and take control of campaign workflows from start to finish. By limiting the number of hands touching any given campaign, self-service offers increased brand safety, as well as deeper insight into each part of the workflow.
For agencies, manually executing campaigns and working with each advertiser individually can be costly. Automating the process cuts down on time and resources. In addition, self-service can help build links between media agencies, post houses, creative agencies and clients as they work together in a shared platform.
In 2017, IAB commented on the rising trend of self-service models in their annual Internet Advertising Revenue Report saying, “of the roughly 9 million small and medium-sized businesses in the U.S., 75% or more have spent money on advertising. Of these, 80% have used self-service platforms, and 15% have leveraged programmatic advertising. These findings point to significant ad spend going to publishers with these capabilities, and presents a growth opportunity for those that can add self-service and programmatic geared to these businesses into the mix.”
Barriers to Change
I hosted a February 2019 IAB panel discussing how self-service can help solve the complexity of disparate workflows. Peach, an advertising partner of Comcast Technology Solutions, delivers millions of ads every year to more than 100 countries internationally, where the adoption rates for self-service are the opposite of what we see in the U.S.
“We’re up at 98% self-service. So, pretty much everything is self-service internationally,” Peach Head of Enterprise Sales Alex Abrams said during the panel discussion. “In the States, it’s probably 98% managed-service. It creates inefficiencies and barriers to move into 24/7 automation.”
Peach transitioned from managed-service to self-service about four years ago, when automated workflows and technology made it possible at scale. The U.S. market has been more reluctant.
Owning the workflow
Justin Morgan, Senior Director of Marketing Automation at Comcast, described the impetus for a self-service approach, expressing the market’s desire to create efficiencies by having control over each portion of the workflow.
“I think there is a wave of companies wanting to own their data. Own some production elements. Own really anything they can,” Morgan said during the panel discussion. “Put simply, I can either give direction to a company to put something in implicitly, or I can just put it into the system myself. Obviously, the latter is more efficient and faster, and probably cheaper.”
“We should have all of that at our fingertips. So, from that perspective, everything we are doing now is really to move toward a self-service model,” Morgan said.
The evolution toward self-service may be daunting to some, but there has never been a better time to rethink your strategy.
According to the most recent IAB Internet Advertising Revenue Report, digital video advertising revenue reached $7 billion in the first half of 2018, up 35% from the previous year. Cisco estimates by 2021, 80% of all the world’s internet traffic will be video.
In addition, research from Digital Content NewFronts’ 2019 Video Ad Spend Report showed brands expected, on average, to increase spend by 25 percent to $18 million on digital video in 2019, with nearly three-fifths (59%) of ad buyers saying they planned to increase their advanced TV spend.
The U.S. market is poised to reap the benefits of a model designed to reduce complexity and cost while optimizing the revenue potential of each piece of content.
Global trends toward reducing inefficiencies, enabling automation, and taking control of each part of the workflow are indicators of the growing domestic opportunity for these services and for self-service advertising solutions.
Learn more about how Comcast Technology Solutions’ self-service innovations can reinvent your advertising workflow.
Three industry terms to refresh and evolve in 2019
We’re well into the new year. The months ahead will be filled with messages of innovation, disruption, and . . . well, monetization. It was actually the word “monetization” here in our office that sparked talk about some of the most overused terms in our industry. Monetization is certainly not a term that’s in danger of being phased out. In fact, it’s going to be even more prominent as business models and audiences evolve. That said, it did start a fun conversation about some of the terms and phrases getting pretty long in the tooth around here.
With Q1 almost in the rear-view mirror, we’re reflecting on all of the new innovations and opportunities surfacing across the video ecosystem. While we operate today with a host of relevant buzzterms like hybridity (more on that below), or converged workflows (linear, non-linear, digital, broadcast), here are some of our picks for terms in serious need of an upgrade:
THE OBVIOUSLY “FRAGMENTED AUDIENCE”
2008 called – it wants its market insights back. We all get it: today’s audiences are connecting with content in multiple ways, over countless device types, platforms, and services (oh my!). We’re far enough past the early days of streaming adoption that there are plenty of industry professionals out there who have not only spent their entire adult lives as multi-screen, multi-platform content consumers, but whose entire careers have taken place within a multi-platform delivery ecosystem. To have a conversation about modern video delivery is to accept as a given that every audience can be broken out by how they watch, where they watch, and when they watch. The real conversation isn’t about a new phenomenon of audience fragmentation to be mindful of, but about serving media accurately and more effectively under any circumstance. It’s about keeping the audience engaged through a great experience, no matter the device – whether that experience hinges on quality, discoverability, share-ability, or, let’s be honest, price.
ARE WE OVER “OVER THE TOP” YET?
Can we let this one go yet? Of course not. It’s now the de facto way to describe digital delivery as opposed to broadcast. That doesn’t change the fact that today’s market is no longer as cut-and-dried as subscriber services vs. over-the-top of subscriber services. The term made more sense when serving content “over the top” of other subscriber services sounded revolutionary – because it was. Now that delivering media over the internet is standard operating procedure, it raises the question “over the top of what?” Digital delivery isn’t going over, under, or around anything – and in reality, digital delivery is often going with something else. The obstacles now between you and your audience (other than catching attention to begin with) are technical challenges like latency, metadata management, file size, formatting, hi-def quality demands, etc. So, if we’re stuck with OTT as a term that now means more than it used to, let’s at least agree we’re well into OTT 2.0: an age of mature digital delivery that enhances discoverability, personalizes commerce strategies, and merges broadcast and digital workflows to intelligently evolve along with consumer devices and habits.
ABOUT THAT “MONETIZATION” WORD AGAIN. . .
It’s the word that started the conversation, so it seems apt to wrap with it as well. Looking back at my list, I’m seeing some commonalities: terms we can’t escape, that we repeat a thousand times a day, and whose true meanings continue to evolve. According to Merriam-Webster, monetization as a term was “coined” (I know, bad pun) circa 1879, at which point it literally meant “to establish as legal tender.” It has of course expanded to mean “extract revenue from,” but the deeper meaning in our industry is much more than the static act of putting a price tag on a program. Hybridity in your monetization strategy is more than just blending subscriber and ad-based approaches. Extracting the most value out of content requires context – not just within the relationship that exists with each viewer, but also in the relationship of each piece of content with what’s going on in the world, and the ability to pivot accordingly and intelligently.
I should probably add a disclaimer: none of these terms are going away any time soon. We’re sure to be applying each for the foreseeable future. For those of us who eat, sleep, talk, write, and work with these words on a daily basis, we’re always looking for the next great term to repeat into ubiquity. Got any suggestions?
The State of Digital Transformation in 2019
In 2017, we started off the year talking about the convergence of digital and broadcast workflows. It was inevitable: by bringing together the consistent quality and process maturity of broadcast and the agility, reach, and monetization capabilities of digital, companies are creating, as we said at the time, a new ecosystem of win-win scenarios. In truth, it’s really a win-win-win, as consumer choice – and power – have only increased the benefits at the individual viewer level.
2018 saw the acceleration of change in pretty much every measurable way. The capability of viewing devices, from folding mobile screens to gargantuan “wallpaper” in-home displays, continues to drive consumer quality expectations. At the same time, advances in video management and delivery continue to find creative ways to exceed those expectations by wrapping broadcast-quality reliability into first-rate experiences and personalized, hybrid business models that remain relevant to consumers daily.
Our initial partnership with research and strategy company MTM resulted in our first TV Futures Initiative – an international effort to identify the trends that are shaping the relationships between content, consumer, creators, and providers. For 2019, we’ve again partnered with MTM, along with FreeWheel (also a Comcast company) to produce the new research paper, 2019 TV Futures Initiative: Digital Transformation – A Framework for Success.
A FRAMEWORK FOR MEASURING AND TRACKING INDUSTRY EVOLUTION
One of our primary goals in creating this paper was to help media companies with a structured way to strategize and self-analyze in a rapidly changing market. The digital transformation framework we identify consists of six key areas to consider and prioritize:
Content: It’s not exactly a secret that job one of every media destination is to offer content that’s relevant and valuable to consumers. The proliferation of standalone and subscriber services creating original top-quality content has blown the competitive landscape wide open, but it’s more than just the popularity of a program – it’s also about the quality and variety of presentation. Broadcasters and providers are adding value through new streaming services and content partnerships that build audience engagement. UHD programming continues its rise, and while emerging technologies like 360 video and VR/AR might not earn a place on your roadmap today, they are increasingly worth exploring.
Products: How are you bringing your content to market? It’s important to take a hard look at how content is bundled and packaged, and how those decisions play out across platforms, devices, and regions. “Know thy audience” must be a mantra for providers to be heard above the noise of a crowded market, so any learnings about consumer behavior and response need to be turned into actionable and agile planning.
Delivery: The consumer-facing demand for right-now, any-device excellence puts tremendous pressure on infrastructure and processes, requiring new ways to eliminate outdated and disparate workflows in favor of a more unified delivery model that can keep up. Content, especially high-demand content, needs to be delivered seamlessly to multiple linear and non-linear destinations, with a close eye on security. Protection of content is a serious concern for live programming (like premiere sporting events) as well as on-demand services, and in today’s environment it’s incumbent on each provider to ensure that the value of their assets isn’t compromised.
Customer Management: The importance of cultivating a healthy transactional relationship with consumers cannot be overstated. Direct-to-consumer business models provide unprecedented control, but like the saying goes, “with great power comes great responsibility.” Consumers expect easy, intuitive support as part of their relationship, and companies are building audience relationships that are longer and stronger by meeting folks on their terms – with tailored payment solutions and incentives for customers who bring more people to the brand. Technically, even an ad-supported service is transactional in nature, because consumers are being asked to watch messaging that paid for some eye-share on a program or stream. Informed focus on the customer experience is the only way to find and follow the north star for each brand.
Sales and Monetization: The word that rules our world here is Hybridity. The market continues to shift and grow. Monetization takes on an even deeper sense of urgency when content is costly to produce. Incorporating an advertising component into a subscriber service, offering specific content as a standalone purchase, tailoring offers based on individual consumer habits – they’re all dynamic ways to maximize the revenue potential of content, so long as the underlying technology and management processes can support them.
Technology and Operations: The common thread that runs across every point in the transformation framework is how companies are leveraging new technologies and techniques to close the gap between content and consumer. Whether new approaches will supplant or augment existing components of your workflow, how technology changes should be prioritized, and how these advancements will affect the skillsets your team needs are all considerations (side note: we published a white paper that introduced a different framework for this point alone, called Media Technology and Lifecycle Management).
GET THE RESEARCH
The 2019 TV Futures Initiative research paper illustrates the above points in greater detail, and provides a handful of case studies from companies that are applying these tenets to transform their operations and succeed with consumers globally. You are invited to download the paper for yourself as a tool to:
Gain insights into key digital trends from each of the above points
Better understand the challenges and opportunities ahead
Learn from firsthand experience of other companies
Apply this format to your own internal assessments
The “Retro Look” For All The Wrong Reasons
The sheer quantity of quality video out there makes it a fantastic time to be an advertiser or agency. So many new devices, new audiences… the possibilities are becoming endless. Here’s the thing, though: be it live streaming, linear, on-demand, broadcast, you name it – consumers are increasingly savvy about what premium video should look and sound like. Audiences are being trained every day to expect “the good stuff” from every device, and ad quality doesn’t get a free pass. In fact, it’s even more important: seen amidst a crisp, clear, high-definition program, a poor-quality ad looks even worse. When an ad break shifts from an HD or UHD program over to a commercial with quality that feels like a ’70s sitcom, it can directly impact consumer perception and, in turn, campaign performance.
YOUR ADS ARE BEING PLAYED ON BETTER SCREENS
Let’s focus for a moment on in-home screens and living-room entertainment to see how fast consumers are jumping on the high-def train: Do you remember way back when (oh, a couple of years or so ago) 4K screens averaged thousands of dollars and comprised just a small piece of the market?
According to Digital Tech Consulting, 4K TVs cost around $900 on average today, down from an average of $4,000 just five years ago.
The Consumer Technology Association predicts that 4K Ultra-High Def (UHD) screens will be in 41 percent of U.S. homes by the end of 2019. That’s up from 16 percent just two years ago.
Throw in the ubiquity of powerful mobile devices with fantastic display technology and you’ve got a global audience that notices more than ever when an ad is pixelated, when the sound level is out of whack with the rest of the program, or when any other technical issue comes between their screen time and your intended brand message.
AD QUALITY IS FOUNDATIONAL TO BRAND SAFETY
We sat down with Richard Nunn, Comcast Technology Solutions’ new Vice President and General Manager, Ad Suite, for a short Q&A session about where the advertising industry was headed. The quality question was certainly part of the discussion.
“The quality issue for both broadcasters and advertisers is a problem, and it’s a challenge across the industry,” Nunn said. “If you break it down, I think the first point is the fragmentation of channels that are out there, different (file) sizes and formats is a major problem for the buy side. They have to create multiple assets of varying sizes that they’ve never had to do before, ten times what they’ve ever done before.”
The advertising industry has always understood the end-user experience as crucial to the ad-supported programming model. As an industry, we spend a lot of time and energy working to improve brand safety for the advertiser, especially in a more automated ad buying and selling environment. It’s a major concern – ensuring that ads are delivered only to brand-safe environments is certainly a foundation of a successful ad strategy. But even in a situation where an ad suite has built “destination confidence” with advertisers, protection for the user experience – in the form of ad quality that’s commensurate with programming quality – is as important as ever.
HOW TO WIN IN 2019 AND BEYOND
Live streaming is an increasingly important and attractive market for a campaign to tap into, but users are more quality-sensitive than ever. As reported by Broadcasting and Cable, a new report from Conviva provides advertisers and agencies with some neon signposts about where live streaming is headed, and what success looks like:
Not only were global viewing hours of streaming video up by 89% last year, but in the fourth quarter alone the increase was a whopping 165% over 2017.
Likewise, video plays were up 74% for the year, and 143% for the fourth quarter.
At the same time, consumers are leaving poor experiences in greater numbers:
There was an increase in the abandon rate in 2018: Conviva reports that 15.6% of potential consumers quit a video due to problems with the experience. In the previous year, the abandon rate was more like 8-9%.
For live-streamed content, 16.7% of viewers quit poor experiences, which is up from 11.2%.
Other video destinations, such as vMVPDs or connected TVs, also experienced increases in abandon rates, albeit smaller ones (up about 2%).
The takeaways? Dodgy spot quality just won’t cut it anymore. It’s important to note that the consumer’s perception of quality is formed from an aggregation of everything that happens prior to the actual viewing experience. With that in mind, brand safety is really a shared goal between an advertisement and the program that hosts it. Poor quality from either side can – and does – have a detrimental effect on the other.
Video consumption habits continue to grow, evolve, and change. Consumers have more ways than ever to enjoy video, they’re smarter about what their screens can deliver, and they’re increasingly less inclined to sit through a sub-par experience. It’s incumbent on every partner in the ad delivery workflow to focus on the end result, so that ads complement the programming being offered to customers.
Video Monetization Models: To Maximize the Value of Content, Hybridity is the New Normal
From the research that comprises our recent paper Survive and Thrive: The Changing Environment for Content Services and Video Services, one of the surprising takeaways was that “being discovered” surpassed operational costs as the number one business challenge facing survey respondents. Along those same lines, “maximizing customer lifetime values” was in a virtual tie with operational costs for second place. The clear takeaway here is that success in video means driving engagement up while driving costs down. Today’s video monetization models are geared toward accomplishing these fundamental goals while building a fantastic content-consumer relationship – one screen at a time.
BUILDING ENGAGEMENT, FROM AD SPOTS TO PREMIUM CONTENT
It’s that “relationship” thing that’s led us to our current industry conversations around maximizing customer lifecycle values, hybridity and flexibility in commerce and video monetization models, and so on. Advertisers are in the same boat, quickly becoming savvy to the unique audiences at different destinations. It’s a constant learning experience for every video-based business, and not only about consumer behaviors. It’s also about “unlearning” outdated processes and manual procedures in favor of radical new ways to consistently: a) get to a screen quickly, b) merchandise effectively, and c) serve a crisp, clear audio/visual experience.
The advertising industry, for example, understands that there are different audience cultures at different video destinations, and that campaigns are more successful if they’re tailored to be more aligned with the content that’s carrying them. The trick is to be able to participate in whatever folks are watching. The increased shift to on-demand viewing means that advertisers need a much faster response time from their ad delivery workflow, while gaining cost efficiencies that free up needed resources for targeted ads with shorter shelf-lives (we talk more about that in this blog). Industry adoption of a centralized cloud-based delivery model is an important first step towards modernizing the process of buying, selling, and delivering video products.
Within the video industry, relationship building is a complex balancing act between how consumer behaviors change at the macro audience level, and how an individual’s preferences change over time. There’s a direct tie back to advertisers as well: brands that need a deeper relationship with consumers – automotive brands, for example – are going to gravitate towards destinations and delivery methods that can bring them into a closer orbit with actual buyers.
ORGANICALLY GROWN AUDIENCES ARE HEALTHIER
There’s never been a better time to build an audience than today. That said, it’s also an increasingly global audience, which means that commercial strategies need the technology horsepower and know-how to pivot based on regional changes in addition to larger lifecycle management concerns. Some merchandising requirements might seem like no-brainers, like the ability to describe video assets in multiple languages as well as in images; but nurturing relationships at that scale means solving for the entire end-to-end – from app/storefront to multi-currency transactions, in different ways, with different customers, in different regions.
From a technology standpoint it’s imperative to have the ability to change up the way your content is merchandised, and the way promotions and offers are displayed, so that the user experience can be tailored to support all these factors easily. There are a ton of factors to consider:
Content might be bundled for specific audiences by category: such as genres, series, sequels, and trilogies. Or, dynamically bundle content targeted for specific devices and access rights (e.g. one movie for the iPad, two movies for all devices, or a collection of media that dynamically changes).
Pre-set access rules and pricing templates might be applied for different viewing rules like pay-per-view events, season passes, single transactions, movie bundles, or subscription access.
Product tags need to be easily applied at the video and bundle levels to make it easier for folks to search for and select content.
For a subscriber-based model, pricing and special offers can get complicated quickly. Maybe a code-based promotion is the way to go (i.e. “enter FREEWEEK to start your trial”), or maybe a discount is applied based on a percentage, or at a fixed rate, or based on a minimum time commitment, or . . . you get the idea. Ultimately, a true hybrid video monetization model is something that needs a lot of technology behind it, to enable companies to turn their consumer data into actionable intelligence – and then into stronger, longer relationships.
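To show how such merchandising rules might be expressed in practice, here is a small sketch that models a bundle, an access offer, and the code-based promotion mentioned above as plain data; every field name is invented for this example.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Bundle:
    name: str
    titles: List[str]
    devices: List[str]                    # e.g. ["ipad"] or ["all"]
    tags: List[str] = field(default_factory=list)

@dataclass
class Offer:
    access: str                           # "pay-per-view", "season-pass", "subscription", ...
    list_price: float
    promo_code: Optional[str] = None
    promo_discount_pct: float = 0.0
    min_commitment_months: int = 0

    def price_for(self, entered_code: Optional[str]) -> float:
        """Apply the promotion only when the viewer enters the matching code."""
        if entered_code and self.promo_code and entered_code.upper() == self.promo_code:
            return round(self.list_price * (1 - self.promo_discount_pct / 100), 2)
        return self.list_price

# A genre/trilogy bundle available on all devices, plus a subscription offer with a trial promotion.
trilogy = Bundle("space-saga-trilogy", ["ep1", "ep2", "ep3"], devices=["all"], tags=["sci-fi", "trilogy"])
monthly = Offer(access="subscription", list_price=9.99,
                promo_code="FREEWEEK", promo_discount_pct=25, min_commitment_months=1)
print(monthly.price_for("freeweek"))  # 7.49 -- promotion applied
print(monthly.price_for(None))        # 9.99 -- no code, list price

The point of keeping these rules as data rather than hard-coded logic is that offers, bundles, and promotions can then be changed per region or per audience without touching the storefront itself.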
THE BUSINESS OF “PLAY”
Demographics and buyer personas exist to categorize the constituents of a market, so that businesses can understand them and do a better job catering to them. Video brands looking for a deeper one-on-one with customers need the flexibility to shift and pivot on a more personal level. For more information we invite you to read our new article The Business of Play: Monetization in a Video Economy. A truly savvy business model begins with creative out-of-the-box thinking; but implementing it is a technology challenge that’s evolving right along with delivery quality and screen resolution.
VOD MONETIZATION: NEW TERRITORY FOR LOCAL ADVERTISERS
Are you watching more video-on-demand (VOD) programming? Of course you are. We’re living in a binge-watching economy where entire seasons of a program are delivered all at once and VOD consumption is a growing component of the average viewer’s video diet. The market opportunity for advertisers is nothing less than explosive: according to a 2018 report by the Video Advertising Bureau (VAB), ad impressions for VOD have experienced a dramatic increase from 6.3 billion in 2014 to 23.3 billion last year. So how do you capitalize on this 4x growth?
DISPARATE TIMEFRAMES VS. TIME-SENSITIVE ADS
For businesses and agencies at the local and regional levels, VOD advertising can be a tough nut to crack, primarily because of the convoluted process of placing and managing VOD ad spots. It can take anywhere from 24 to 72 hours to make spots available for play through multichannel video programming distributors (MVPDs). A 24-hour turnaround would already be too long for VOD to be viable for short-term promotions or late-breaking opportunities. If it takes even longer, it means that the viability of a campaign with a very short shelf life would be inconsistent across destinations. There are a lot of reasons why VOD processes (and timeframes) are all over the map:
Dynamic Ad Insertion (DAI) for set-top-box (STB) VOD requires additional metadata in a specific format (CableLabs 1.1) in order to work properly with the intended program. The metadata requirement lies on the MVPD side, but content providers need the software and people to write it. Many commercial spots are delivered either without it, or with metadata that’s written in an incompatible format.
DAI for digital destinations requires a variety of media formats. Content providers need the correct software to solve for these destinations, as well as video experts who can ensure that all transcoding / transformation is performed properly. The post-transcoding playback experience must be at its best, prior to being subject to any of the other factors (such as geography and network traffic) that may impact the consumer’s playback.
Every MVPD also has to account for its own internal delivery mechanism – or multiple mechanisms. They may outsource their work, maintain their own technology stack, or both. Larger distributors may have additional traffic coming from internal clients, such as advertising for news, sports, or other programs that the MVPD offers.
Adding to the overall complexity of VOD ad delivery is volume: to say there are a lot of ads to process would be a gross understatement. For national spots alone, just one major provider is moving hundreds of thousands of spots annually; and when it comes to VOD delivery, spots are still being relegated to a manual queue, mainly due to the challenges above.
Everything adds up to non-linear inventory that’s a real challenge to monetize effectively, especially during critical Nielsen C3/D4 periods that impact the value of advertising. Local ads or short term, time-bound promotions aren’t a good match for VOD playback unless they can be swapped out in near real-time once they’re no longer viable.
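To make the readiness problem concrete, here is a hypothetical pre-flight check that flags spots not yet fit for VOD dynamic ad insertion. The field names are illustrative; they do not reproduce the actual CableLabs 1.1 schema or any MVPD's intake rules.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Spot:
    spot_id: str
    metadata_format: str = ""                             # e.g. "cablelabs-1.1" when authored correctly
    renditions: List[str] = field(default_factory=list)   # media formats produced in transcoding

REQUIRED_STB_METADATA = "cablelabs-1.1"                   # requirement named above; value here is illustrative
REQUIRED_DIGITAL_RENDITIONS = {"hls", "dash"}             # hypothetical minimum set for digital DAI

def vod_blockers(spot: Spot) -> List[str]:
    """Return the problems blocking this spot from set-top-box and digital DAI."""
    problems = []
    if spot.metadata_format != REQUIRED_STB_METADATA:
        problems.append("missing or incompatible STB metadata")
    missing = REQUIRED_DIGITAL_RENDITIONS - set(spot.renditions)
    if missing:
        problems.append(f"missing digital renditions: {sorted(missing)}")
    return problems

print(vod_blockers(Spot("promo-123", metadata_format="", renditions=["hls"])))
# ['missing or incompatible STB metadata', "missing digital renditions: ['dash']"]

Catching these gaps before a spot enters a manual queue is exactly the kind of check that shaves hours, or days, off the turnaround times described above.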
EXPANDING VOD MONETIZATION WITH THE CLOUD
With a centrally managed ad library and management architecture that resides in the cloud, the disparate VOD workflows across delivery endpoints are brought together into one consistent, simplified, and accelerated process. This is huge news for both sides of VOD monetization: not only does it open up new revenue opportunities for ad purveyors, but it also improves business for content providers on the sell side of the equation looking for more ways to sell ad space. The centralized location within the overall ad delivery pathway is key, providing both advertiser/agency and content provider/broadcaster with visibility and ease of use. Where the previous workflow takes up to three days to complete, the new process can be completed in hours.
Ads and promos are now aggregated into one common location.
Digital/ad operations teams can locate assets (both external ads and internal promotional spots) via a web-based UI.
Assets can be viewed, rules applied, and distribution initiated to preset destinations.
Information needed for ad decisioning services (ADS) is immediately generated.
Ops teams can copy and paste the information into ADS, and associate the creative assets with the correct campaign immediately.
The across-the-board benefits translate into more than just better performance. They imbue the entire operation with the needed agility to respond to changes in short order, and make last-minute revenue opportunities possible.
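For illustration only, the sketch below compresses that workflow into a single function: one asset record drives distribution to preset destinations and produces the information an ad decisioning service (ADS) would need. The destination names and payload shape are invented.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class AdAsset:
    asset_id: str
    destinations: List[str]  # preset endpoints, e.g. MVPD VOD or digital DAI

def distribute(asset: AdAsset) -> Dict[str, object]:
    """Queue the asset for every preset destination and emit the ADS hand-off data."""
    delivery_status = {dest: "queued" for dest in asset.destinations}
    ads_payload = {"asset_id": asset.asset_id, "destinations": asset.destinations}
    return {"deliveries": delivery_status, "ads_info": ads_payload}

print(distribute(AdAsset("summer-promo-30s", ["mvpd-vod", "digital-dai"])))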
Comcast Technology Solutions Certified Partners Program
Video standards and customer expectations are constantly evolving, and new technology solutions are required to stay competitive. There are more than a dozen platforms on which consumers watch video, each with support for varying video streaming, caption, and digital rights management (DRM) formats, not to mention application development APIs and platform capabilities. Artificial Intelligence and Machine Learning are profoundly changing our understanding of not only user behavior and the health of your business, but the content itself. Monetizing your video content requires ever greater sophistication, not only by providing different business models (AVOD, TVOD, SVOD, XVOD), but also by optimizing customer acquisition and retention (CRM, paid and organic search, social). Distribution must be scaled and measured to ensure timely delivery, no buffering, and a fast time to first frame.
Video is hard to do right, and the consistent quality of every video experience is foundational to a healthy consumer lifetime value. Choosing the right mix of vendors and partners to launch a video solution is critical to commercial success. Comcast Technology Solutions is fortunate to partner with leading technology providers across the industry, and around the globe, to deliver best-in-class experiences to our customers, and our customers’ customers. To that end, we’re excited to announce the launch of the Comcast Technology Solutions Certified Partner Program to address these complexities.
CERTIFIED PARTNER BENEFITS
We work closely with our Certified Partners across a broad ecosystem of specialties to ensure the most consistently reliable, trusted, and cost-effective integrations with Comcast Technology Solutions’ portfolio of products and services. Together with our Certified Partners, we continually assess, improve, and evolve our joint offerings to ensure our customers benefit from rapid time-to-market, and our combined, emerging roadmap updates.
When our customers select a Certified Partner, they are assured that the partner’s product integration into Comcast Technology Solutions’ technology stack is well-scoped, follows our best practices, and has passed rigorous technical validation.
Certified partners receive training and support on our technologies, as well as guidance on each project from initial product integration all the way through to customer delivery.
Partner integrations are evaluated and certified for technical compliance, as well as for transparency and alignment across our sales, marketing, product, and delivery teams.
All Certified Partners build their integrations against the same set of Comcast Technology Solutions specifications, which ensures a level playing field where partner product solutions can predictably interoperate, differentiate and shine.
INITIAL LAUNCH
Our initial launch of Certified Partners includes industry-leading partners in the Audience Insights & Analytics and UX & Application spaces.
Jump TVS, Streamhub, and Wicket Labs each offer unique solutions for audience measurement and rich insights into the overall health of the business.
Accedo and Diagnal are experts in video application development, offering a variety of products and solutions that will grow with every business.
We are proud to welcome them to the program. Comcast Technology Solutions and our Certified Partners across the ecosystem enable customers to put more focus on their business, and less on their technology stack. Because Certified Partners’ solutions are well scoped and vetted, our customers know what they are getting, and when they are going to get it. By deploying Certified Partners’ solutions, Comcast Technology Solutions customers realize faster time-to-market and thus time-to-value.
If you’re a current customer looking for a well-integrated partner solution, take a look at our current roster of Certified Partners here.
If you’re a partner or vendor who is interested in bringing your solution closer together with Comcast Technology Solutions’ products and services through our Certification program, contact us!
Broadcast Quality? It's Still a Thing.
What has a stronger impact on an audience’s perception – a great experience or a singularly awful one? In a world where every viewer is also a social media publisher in their own right, we could certainly point out the consequences of poor video performance (like we did in our March eBook). As consumers, we really don’t need to look much further than our own behavior. If something’s not loading quickly, or playing without interruption, it’s not getting played.
The hunt for consistent broadcast quality across device types is the foundation of any long-term viewer relationship. Today, content can be viewed not only across devices, but also on the same device via different platforms. For example, your living room smart TV can play the same piece of content through its own built-in platform, through your cable provider, through a game console, or through some other connected device. Where folks choose to gorge themselves on video is an experiential choice, and more often than not it’s an experience that’s inextricably tied to other experiences. No matter where these experiences occur, broadcast quality is increasingly the standard.
Technology leaders weigh in . . .
We asked over 200 technology managers in our industry to share their perspectives on where we are in the quest for broadcast quality and what keeps them up at night. We’ve published the results in a whitepaper called Survive and Thrive: The Changing Environment for Content and Video Providers. In this paper, the quality question offers some interesting online/broadcast comparison: over-the-air broadcast and pay TV linear channels remain an important part of the respondents’ distribution strategy (cited by 44 percent and 34 percent of respondents, respectively).
When asked “when will online video meet / exceed TV quality and reliability,” the results showed a pretty significant split between the respondents who think “we’re already there” and the ones who view parity as a next-decade thing.
Regardless of where respondents felt we were in the process, consumers are certainly happy with the across-the-board improvements to resolution, color, and overall playback quality: Ericsson ConsumerLab reports an 84% satisfaction rate with online video in its own study[1].
Exceeding (and shaping) tomorrow’s consumer expectations
With the accelerating commercial adoption of 4K / UHD / HDR screens, and the eventual arrival of consumer devices that can take advantage of the ATSC 3.0 broadcast standard, we are of course seeing a massive increase in available 4K programming across the entire provider ecosystem. While clearly a significant improvement in picture quality, the sheer size of ultra-high-definition video files brings new challenges of its own. As consumers acquire more of a taste for (and an ability to play) “the good stuff,” every touchpoint in the delivery workflow is more heavily taxed by having to store, transcode, and deliver files that are up to four times bigger than standard. It’s a lot of change that needs to be managed all at once, especially when the premium prices for premium content create premium expectations. It’s no wonder that 92% of our respondents said that meeting or exceeding broadcast quality and reliability is still important.
To learn more about how our respondents feel about a wide range of topics, including the use of cloud technology and artificial intelligence, download the whitepaper here.
The Changing Video Distribution Services Environment for Content Providers
We recently partnered with Broadcasting & Cable and TV Technology to sponsor a new research paper called “Survive and Thrive: The Changing Environment for Content and Video Providers.” For this paper, the survey targeted technology managers who are responsible for the implementation of video distribution services. Participants were asked to rank the importance of ten of their most pressing business challenges. Discoverability topped the survey as the number one challenge, with 71 percent of participants rating it higher than maximizing lifetime value or controlling operational costs. With over 200 premium video services in the market, it’s becoming increasingly difficult for brands to get noticed; but consumers have to find the front door before any other performance metric matters.
The attached infographic illuminates some more findings from the research:
[Embedded infographic: //comcast.postclickmarketing.com/NewBayResearchInfographic]
So, where are content and video providers placing their focus?
Personalized Experiences
The subscriber-based service model clearly works, but what’s the secret ingredient that keeps members paying – and actively using – a service? The answer is probably unique to each provider’s audience, but the end result is the same: a media experience that learns how to remain relevant to paying customers by showcasing the right content, using member insights to guide decisions, and delivering trusted, high-quality viewing.
Cost-Effective Technology
Multi-platform video delivery is complicated and expensive. It behooves companies to quickly capitalize on partnerships and efficiencies that can free up resources for content creation and curation. The results of this survey might indicate that the promise of the cloud is still viewed with caution, but with rising content costs and more direct-to-customer offerings on the horizon, time to market is a critical success factor that cloud solutions can greatly improve.
“Hybridized” Monetization
It’s increasingly common to see an ad when watching something that’s part of a subscription that you’ve already paid for. Providers are getting smarter about how to organize and monetize their experiences, not just to maximize the value of each asset, but also to make purchase decisions easier. Tiered memberships, targeted incentives, and promotions that incentivize new viewers are all part of a mature monetization strategy.
Want to know more? Download the research paper, “Survive and Thrive: The Changing Environment for Content and Video Providers,” here.
Your Player: How Fast to First Frame?
Did you “like” or “dislike” a video this week?
Watching video is an emotional experience. Sure, most programming is supposed to be – your favorite program, a riveting news piece, or even a lousy movie will all elicit a specific feeling. But even the most straightforward, not-very-interesting piece of content is still elevated when delivered by video as opposed to straight text. Why? Simply put, video is essentially an amalgam of every other communication medium. With your brain engaged by text, pictures, and sound, stories become more memorable and knowledge is transmitted and retained more completely. With so many of our senses working in concert, we’re not only more inclined to make purchase decisions, we’re also more likely to abandon a video that isn’t getting the job done:
Video is such an effective sales tool that according to recent research from Hubspot, 81% of polled businesses are using it – that’s up from 63% just last year. [1]
Over 85% of online viewers who stopped watching a video did so because the loading time was too long. [2]
67% of viewers say that quality is the most important factor when watching live-streamed programming. [3]
The long and short of it is this: whether someone is clicking “Play” or the video is set to start with a page load, every second between the intended start and the actual start brings you closer to viewer abandonment. No matter if your goals are to inform, inspire, elicit, or entertain, playback quality really, really matters. To add more complexity to the mix, our multi-screen consumption habits require that your video performs well no matter if it’s being viewed in a living room, at a desktop, on a phone, or anything else in your life that has a screen on it. Meeting consumers on their terms only works if their expectations are being met or exceeded.
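As a concrete (if simplified) illustration of the metric, here is a minimal TypeScript sketch that measures time to first frame in the browser using only standard HTMLVideoElement APIs. It is not part of any particular player SDK, and the 'playing' event is used only as an approximation of the first rendered frame.

// Minimal sketch: measuring time to first frame (TTFF) for an HTML5 video element
// using standard browser APIs. Illustrative only; 'playing' approximates the
// moment the first frame is rendered.
function measureTimeToFirstFrame(video: HTMLVideoElement): Promise<number> {
  return new Promise((resolve) => {
    const start = performance.now();                 // the intended start
    const onPlaying = () => {
      video.removeEventListener("playing", onPlaying);
      resolve(performance.now() - start);            // ms until playback actually begins
    };
    video.addEventListener("playing", onPlaying);
    void video.play();
  });
}

// Usage: report TTFF so it can be tracked alongside other quality metrics.
const player = document.querySelector("video");
if (player) {
  measureTimeToFirstFrame(player).then((ttffMs) =>
    console.log(`Time to first frame: ${ttffMs.toFixed(0)} ms`)
  );
}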
PDK6: OPTIMIZED PLAYBACK PERFORMANCE
Both speed and quality drove the innovations contained in the newly-released sixth version of our Player Development Kit (PDK6). Built from the ground up for the most up-to-date browser standards for playback, UI rendering, and autoplay requirements, PDK6 is optimized to be the fastest to first frame, with playback at lightning-fast speeds in most cases. PDK6 brings a host of improvements to the consumer experience, as well as to operational efficiency:
Better advertising: PDK6 loads faster, renders ads natively, supports both client- and server-side ad insertion out of the box, and integrates with ad-serving platforms.
Unmatched flexibility: Streamlined, mobile-friendly UI building makes PDK6 easy to implement and customize, while also providing a consistent player experience across browser platforms and device types.
Broad industry support: PDK6 is pre-integrated with third-party analytics solutions from providers such as comScore, Nielsen, Google, NPAW, and more.
To learn more about the capabilities of PDK6, download the technical one-sheet here.
[1] “The State of Video Marketing in 2018,” Hubspot, January 24, 2018.
[2] “2017 Video Streaming Perceptions Report,” Mux, April 13, 2017.
[3] “What Audiences Expect from Live Video,” New York Magazine and Livestream, infographic, 2017.
Digital Rights Management - a Multi-Platform Checklist
Digital Rights Management (DRM) is, at its core, a way to protect the long-term value of intellectual property in a content-driven economy (for those interested in the history of DRM, Vice.com’s Motherboard published a fantastic article last year on the subject). Content access has to be managed across a dizzying combination of device types and streaming formats, not to mention mobile devices that can travel in and out of different programming areas. No matter where someone is, what device he/she is using, or what platform is running, your technology stack must be capable of understanding and granting consumer requests instantaneously.
A primary driver of DRM complexity is simply that there’s no single “right answer” to enable DRM – every destination uses a rights management approach that meets its own needs. There are several components you should look for in a complete service so that you can:
Establish licensing
Negotiate handshakes between the consumer’s player and the right DRM provider for the viewer’s interaction
Streamline the entitlements process to make it a seamless and unobtrusive part of a consumer’s experience
JITP/E for Content Prep and Encryption
Storage is an issue that gets more challenging as consumers demand higher- and higher-quality video. Better quality means bigger files, and each one of those files needs an iteration that’s formatted for consumption by all the browsers, mobile apps, set-top boxes, and other TV-connected devices you’re working to reach. Just-In-Time Packaging and Encryption (JITP/E) creates format-specific video on request, reducing redundant file storage. JITP/E also applies the appropriate adaptive bitrate (ABR) packaging format and encryption scheme on the fly, so the process goes unnoticed by the consumer. Once a video is encrypted, it also needs to be decrypted for playback based on the end user’s device / platform requirements, such as the following (a simple selection sketch follows this list):
HLS / FairPlay for iOS and macOS devices
DASH / Widevine for Android and Chrome devices
DASH / PlayReady for Microsoft devices and set-top boxes (STBs)
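To make the mapping concrete, here is a minimal TypeScript sketch of the kind of per-device selection a just-in-time packager has to make. The user-agent checks and return shape are illustrative assumptions, not production logic.

// Minimal sketch: choosing a streaming format and DRM system per requesting device.
// The detection below is deliberately simplistic; a real packager keys off a much
// richer device profile.
type Packaging = { format: "HLS" | "DASH"; drm: "FairPlay" | "Widevine" | "PlayReady" };

function selectPackaging(userAgent: string): Packaging {
  if (/iPhone|iPad|Macintosh/i.test(userAgent)) {
    return { format: "HLS", drm: "FairPlay" };      // Apple devices
  }
  if (/Android|Chrome/i.test(userAgent)) {
    return { format: "DASH", drm: "Widevine" };     // Android and Chrome devices
  }
  return { format: "DASH", drm: "PlayReady" };      // Microsoft devices and STBs
}

In practice, device detection is far richer than a user-agent match, but the principle is the same: one mezzanine asset, packaged and encrypted on request for whatever combination the requesting device supports.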
Key Management Service
Key management processes are unique for each DRM method and are not something where you can just “flip a switch.” Whether it’s configuration and testing, encryption, user authentication, or playback, you’ll need to maintain relationships with every key management company required to reach a device-diverse audience. Here at Comcast Technology Solutions we let clients dictate just how much of that process they want to control, or we can manage the entire DRM relationship / process ecosystem.
License Management
License management, token authentication, and decryption key transmissions are all parts of your DRM process that need to provide a solid pathway from user request to content delivery. Once a video is packaged and DRM-protected for the platform and device of a specific playback request, it’s ready for the user to receive it from a server within your content delivery network (CDN) – provided that the user is authenticated and a license to unlock the content for playback is granted and delivered.
Entitlement Setup and Modification
Entitlements establish the conditions that a user needs to meet in order to be authorized to watch a program. This conditional access gets complicated in a multi-platform environment and must be assessed for each request. Here are just some of the factors that your entitlements process must solve for:
Users might have entitlements on multiple devices
There might be specific limitations, such as a cap on monthly usage, tiered access to specific files or higher-definition video, or limitations created by the device itself
Off-line viewing might have unique requirements of its own
It can sometimes be easy to confuse license delivery with entitlements, yet each brings a unique set of capabilities to the overall DRM solution.
Licensing: DRM and its associated license control the physical access to the video file, such as how many times it can be played on a specific set of hardware and for how long before an entitlement check needs to be performed again.
Entitlements: The entitlement system controls the consumer’s access to the asset based on the business or purchase model employed (e.g., is the user currently a paying subscriber, either as part of the native D2C experience or via a TV-V model; did the user just purchase rental access to an asset that only allows it to be viewed a certain number of times within the rental period).
Using these two access control systems in concert allows content providers to offer a wide array of flexible monetization and protection combinations. The ability to maintain dynamic control over user entitlements is more than just a crucial management function. It’s also a vital part of your monetization plan when consumers can easily add more variety into their experience. Upselling to a higher tier of access, or simply making it easy to buy individual content titles, are functions that have an impact on entitlements. Your DRM solution should give you a clear snapshot of all the entitlements granted to viewers, and the ability to easily modify those rights.
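As a simplified illustration of how the two systems work in concert, the following TypeScript sketch gates license issuance behind an entitlement check. The fields, tiers, and rules are hypothetical; real systems enforce far richer policies.

// Minimal sketch: an entitlement check gating DRM license issuance.
interface EntitlementRequest {
  userId: string;
  assetId: string;
  deviceId: string;
  tier: "basic" | "premium";
  rentalExpiresAt?: Date;        // present only for rental (TVOD) purchases
}

function isEntitled(req: EntitlementRequest, assetTier: "basic" | "premium"): boolean {
  if (req.rentalExpiresAt && req.rentalExpiresAt.getTime() < Date.now()) {
    return false;                // rental window has lapsed
  }
  if (assetTier === "premium" && req.tier !== "premium") {
    return false;                // subscription tier too low for this asset
  }
  return true;
}

function issueLicense(req: EntitlementRequest, assetTier: "basic" | "premium") {
  if (!isEntitled(req, assetTier)) {
    throw new Error("Not entitled: no license issued");
  }
  // The DRM license itself would be minted by the key management service; this
  // only sketches the shape of a short-lived license response.
  return { assetId: req.assetId, deviceId: req.deviceId, expiresInSeconds: 86_400 };
}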
Ultimately, your end-to-end DRM solution needs to be tailored to your needs, your technology stack, and your business model. We developed our DRM approach for flexibility so that once a client’s DRM solution is customized and deployed, additions or changes to the methods used by different devices can be easily integrated to keep the entire workflow updated.
You can learn more about Comcast Technology Solutions’ Digital Rights Management capabilities here.
Attaining Long-Term Subscriber Love
The attraction of a subscription VOD business model is a reliable revenue stream, powered by a personalized experience that evolves to remain relevant and valuable to viewers. Monetization choices are a big part of that, and have an indelible impact on the way subscribers value their services.
Planting Seeds for Organic Growth
It’s easy to see how effectively a popular show can bring a competitive destination to the attention of a larger audience. Binge-watching is a relatively new phenomenon, and the increasing number of industry awards for over-the-top (OTT) movies, series, and other programming is testimony to the drive for great content that really resonates with audiences. A great show can lead to explosive growth in new subscriber sign-ups, especially when paired with the launch of new add-on options that build more value, or creative introductory offers that entice and invite deeper participation at a more attractive price. Once the allure of introductory offers and big-ticket content segues into daily use, however, media brands are learning to engage and even incentivize their members to help grow the brand organically. But what does organic growth really mean?
Simply put, it’s an emphasis on growing from the inside out. Not only through ads, search optimization, or other marketing, but also by improving the value of each relationship and cultivating a base of happy customers who will share your content with others. Just like a garden you’d tend in your back yard, organic growth starts with consistently delivering the things that keep your customers healthy – which for video includes things like a great playback experience, rich and unique content offerings, and an enduring sense of value. From there, you can provide tools for your fans to use, and incentives like gift vouchers or refer-a-friend programs that make participation rewarding and easy.
New white paper: Advanced Commerce: SVOD
Comcast Technology Solutions has just published a new paper that speaks specifically to the advanced commerce tools and techniques that SVOD destinations can employ to lengthen and strengthen subscriber relationships. Sections include:
The “Perfect Subscription VOD Model” — Your technology choices support your ability to meet each viewer on his / her terms, no matter what that means today – or what that might mean tomorrow. Subscription terms may shift based on business objectives, and offerings may evolve into a hybrid of subscription, ad-based, or transactional components to appeal to a wider range of consumers.
Making it Easy to Locate and Buy Content — Your user experience (UX) and user interface (UI) design can build experiences that elevate the lifetime value of subscribers through seamless monetization opportunities and simple, effective quality of life enhancements.
Enabling Tailored Promotions and Custom Offers — Just like any other relationship, the successful SVOD business is constantly evolving along with the habits and preferences of consumers.
The Quickening Pace of Video Delivery Workflow Convergence
Last year we kicked off 2017 by referring to it as “The Year of Converged Workflows.” And in many ways, it certainly was, with both broadcast and over-the-top (OTT) video companies learning from – and working more closely with – each other than ever before. At this year’s National Association of Broadcasters (NAB) convention in Las Vegas last month, it was clear that no matter what the video delivery vehicle is, conversations and innovations focused on merging the quality and reliable performance of broadcast with the efficiencies and multi-platform power of IP delivery. With the mid-point of the year on the horizon, here are just a couple of ways in which the world of converged workflows will continue to accelerate:
Improving communication between workflows
The landscape for 2018 and beyond is uncharted territory. In the short term, 4K programming and premium on-demand content will continue to increase the size of the average file, and the quality expectations of the average viewer. In the longer term, new forms of consumer video (VR and AR, for example), will need even more complex supporting information than a “standard” video playback. This will require a robust, intelligent video delivery superhighway that can quickly process and communicate a mountain of data. Today, the industry is laying a foundation of standards and protocols that will provide more flexibility and better machine-to-machine communication. Two highlights:
SMPTE 2110: The Society of Motion Picture and Television Engineers announced this suite of transport protocols last year, and as adoption increases it promises to be a big step in the right direction for a converged broadcast / IP workflow. SMPTE 2110 time-stamps the video, audio, and ancillary data of a video signal and then sends them in separate IP streams, which enables the receiving end to ensure better playback – or to receive only specific parts (audio only, for example) without having to accept the entire program.
SCTE 224: Each video relies on a massive amount of accompanying metadata in order to be delivered accurately, to honor all policies, restrictions, and rights, and to enable robust search discoverability. The SCTE 224 standard, which we’ve written about extensively (here’s our March blog on the subject), is not ubiquitous yet; however, it’s gaining traction as an effective way to bring QAM and IP delivery into alignment. On the vMVPD front, FuboTV announced their adoption of the standard in March.
Closer collaboration between OTT and pay TV
Audiences don’t see broadcast vs. digital as an either-or proposition. Over half of US homes have both pay TV and OTT services, according to a new study by Parks and Associates. Furthermore, the same study shows the value of pay TV’s presence in the home as an important platform by which digital media brands can connect with families through a premium living room viewing experience. OTT users report watching their services on their television screens at least 50% more than on other platforms such as PCs, smartphones, or tablets. Consumers are empowered by choice, but the sheer amount of choice can make it challenging for them to build a personalized “library of content access” that makes it easy to manage and discover content. Every destination thrives based on its ability to deliver content that resonates with audiences, but it’s not just “the next big show” that builds lasting relationships with viewers. Local programming, live sports and events, niche programming, and monetization strategies that simplify the management of subscriptions and transactions are all part of a long-lasting consumer-content relationship.
It's worth keeping in mind that collaboration is not a matter of traditional broadcast workflows being supplanted by IP delivery in its entirety. Instead, think of it as an ongoing evolution of the hybrid models that are in operation today. After all, the points of complexity within the video delivery workflow have definitely shifted. As technology innovations simplify the production and delivery of quality content, they also create a diverse landscape of possibilities as to how video can be used and sold. The “best of both worlds” approach solves for today’s opportunities while maintaining the adaptability needed for tomorrow’s video market.
You’re Only Live Once
Any major live event, like this year’s college basketball playoffs (aside from being responsible for countless hours of distracted coworkers), is a great time for our industry to witness the improvements to live streaming in action on a massive scale. Digital destinations are providing a better core experience to consumers, with improved framerates and cool new personalization features. When it comes to straight-up video quality and reliability, however, satellite broadcast is a great way to spearhead a big-ticket, multi-platform live event, sports or otherwise. By combining both broadcast and digital into a well-planned live event, consumers get the best of both worlds.
“Best Effort” does not an SLA make
IP-based delivery is light-years ahead of where it was just a few years ago, but as more and more consumers develop an appetite for “higher- and higher-quality” video, the widening data stream continues to push the performance limits of every platform. Advertisers and investors understand the value of reaching a gargantuan audience, but they also – rightly – seek assurances that viewers are getting an experience that supports their business objectives. A best-effort IP-based delivery means that there’s no guarantee that the end result will meet any quality metrics, or that the stream will even be successful.
Dan Rayburn, streaming video analyst at Frost & Sullivan, pointed out a common problem with live streaming in a recent interview with Fast Company: there are so many potential failure points that solving for them all is an expensive proposition most services don’t account for. “When a backup doesn’t kick in, or you don’t have it, or you haven’t planned for it, or you don’t want to spend the money for it, well, your live event’s going to go down,” said Rayburn. Accountability for streaming quality is still a challenge. It’s one thing to talk about how many people tuned in to a streaming event, and another thing entirely to talk about lag, buffering, or streams that were lost entirely.
Broadcast, on the other hand, provides the availability and reliability that consumers have relied upon since the first days of live television. It’s that straightforward, low-latency, consistent quality that digital delivery is benchmarked against. When millions of dollars (or more) are on the line, it can be an exceptionally costly corner to cut.
The satellite-forward, service-first model
The best delivery architecture is one that leverages the strengths of broadcast and digital into a seamless multi-platform workflow – piloted by expert event engineers who plan every transmission and quarterback the communication from start to finish. Look at it this way:
IP-only: lower cost, complex failsafes required, “best effort” performance
Broadcast: more expensive, rock-solid best practices, high availability and reliability
Strategic use of both: trusted, cost-effective performance that supports both linear television and cloud experiences
It’s impossible to overstate the value that an experienced team brings to a top-tier live event; not just in overall quality, but in cost savings as well. A complex live-streaming event, such as a high-profile awards show, is a great example. There are cameras everywhere: the red carpet and interview areas out front, around the audience, the main stage, backstage, etc.
A well-planned and executed event will map out all redundancy paths, but also will prioritize the use of broadcast signal. For example, the main stage cameras would be captured via broadcast to ensure quality, but ancillary feeds could be digital-only, saving costs.
Satellite and digital captures are then converged into a master control so that the complete program can be compiled, encoded, and distributed across platforms.
“Simulated” live events open up more possibilities
The hybridized broadcast / digital model is really a best-of-both-worlds architecture that not only bakes in the video quality and process redundancy expected of a major live event, but also opens up opportunities to appeal to viewers in new ways. A program can be created that captures the immediacy and advertising allure of a live event, while also incorporating existing video assets into one seamless program. A live studio audience portion, for instance, could be woven into a playlist with sections of non-live material, encoded and then distributed just like a linear television feed. It’s a cost-effective way of producing a special “one night only” event, or a way for a large organization – a coast-to-coast church broadcast, for instance – to maintain a live “feel” with the dependable quality that keeps viewers engaged.
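To illustrate the idea, here is a minimal TypeScript sketch that represents a simulated live program as an ordered playlist of live and pre-recorded segments laid out on a single linear timeline. The structure, fields, and example sources are assumptions for illustration only.

// Minimal sketch: a "simulated live" program as a playlist of live and VOD segments.
interface Segment {
  title: string;
  source: string;                 // live ingest URL or VOD asset URL (hypothetical)
  kind: "live" | "vod";
  durationSeconds: number;        // planned duration on the linear timeline
}

// Lay the segments out on one linear timeline starting at the air time, so the
// compiled program can be encoded and distributed like any linear feed.
function buildSchedule(airTime: Date, playlist: Segment[]) {
  let offset = 0;
  return playlist.map((segment) => {
    const startsAt = new Date(airTime.getTime() + offset * 1000);
    offset += segment.durationSeconds;
    return { ...segment, startsAt };
  });
}

const schedule = buildSchedule(new Date("2018-06-01T19:00:00-04:00"), [
  { title: "Opening from the studio", source: "srt://ingest.example/live1", kind: "live", durationSeconds: 600 },
  { title: "Pre-recorded feature", source: "https://assets.example/feature.mp4", kind: "vod", durationSeconds: 1800 },
  { title: "Live closing segment", source: "srt://ingest.example/live1", kind: "live", durationSeconds: 300 },
]);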
Live programming is going through the roof, with media brands of all stripes working to realize – and monetize – experiences that bring viewers closer to a true “front row experience” than ever before. That said, there are no do-overs in live video; so before the world tunes in, it’s worth the extra planning to ensure that viewers on any platform get the kind of quality that keeps them in their seats.
Multi-CDN Optimization: The Importance of Measuring Each and Every Stream
One way a CDN competes for your video delivery business is by showing you data that proves that it is faster than other CDNs. Every CDN has a handful of industry reports, third-party tests, and/or head-to-head customer bake-offs that prove that at least sometimes, under some conditions, they are fastest.
But none of that data answers the questions that really count: how well does each CDN perform for you, delivering video to your end users? And how can you combine two or three CDNs to get the best video experience for every user?
Any CDN’s overall performance is an average of how that CDN performs on millions of deliveries; across many regions; to dozens of different devices; connected to hundreds of access networks; at different times of the day and days of the week; and averaged over hundreds or thousands of customers – all under constantly changing conditions. Reducing all of those variables to a single, average statistic ignores all the complexity and volatility inherent in the real-world conditions under which end users watch video.
For example, each customer has its own line-up of CDNs, and is mapped to a particular subset of each CDN’s resources; each customer’s users are distributed differently – geographically, across access networks, and across devices; and the demand profile against each customer’s content catalog varies and is unique.
The most effective way to manage an environment this complex – and ensure the best possible results – is to measure the performance of each CDN as it delivers each and every video stream, and then use those specific, detailed measurements of your audience’s actual video experiences to assign CDNs stream-by-stream, in real time.
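As a simplified illustration of stream-by-stream decisioning, the following TypeScript sketch scores each CDN from recent per-stream measurements and picks one for the next playback request. The metrics, weights, and logic are illustrative assumptions and not DLVR’s actual algorithm.

// Minimal sketch: choosing a CDN for a new stream from recent per-stream measurements.
interface StreamSample {
  cdn: string;
  rebufferRatio: number;   // fraction of watch time spent rebuffering
  startupMs: number;       // time to first frame, in milliseconds
}

function chooseCdn(recentSamples: StreamSample[], cdns: string[]): string {
  let best = cdns[0];
  let bestScore = Number.POSITIVE_INFINITY;
  for (const cdn of cdns) {
    const samples = recentSamples.filter((s) => s.cdn === cdn);
    if (samples.length === 0) continue;            // no recent data for this CDN
    const avg = (pick: (s: StreamSample) => number) =>
      samples.reduce((sum, s) => sum + pick(s), 0) / samples.length;
    // Lower is better: weight rebuffering heavily and startup time lightly.
    const score = avg((s) => s.rebufferRatio) * 1000 + avg((s) => s.startupMs) * 0.1;
    if (score < bestScore) {
      best = cdn;
      bestScore = score;
    }
  }
  return best;
}

Business rules, such as treating a particular CDN as a last resort, could be layered on top of a scheme like this as constraints applied before the scoring step.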
And the actual numbers prove it.
We looked at two customers – let’s call them Publisher 1 and Publisher 2 – each of which uses three CDNs (two of which are the same). Over a 30-day period, DLVR, a Comcast Technology Solutions partner for the Comcast CDN, made over 500 million CDN decisions for these two publishers. Both publishers rely on DLVR to make CDN performance the priority, but each has very different business rules for CDN decisioning.
A comparison of the decisions DLVR made for these two customers reveals a few key data points:
CDN performance varies significantly from one publisher to the next. These two customers showed very different levels of hazard conditions, even among the same CDNs.
It’s important to measure every stream to optimize video delivery for performance in a multi-CDN workflow. CDN performance varies from moment to moment, delivery to delivery, and from customer to customer.
Predictive, performance-based multi-CDN switching is the only path to getting the most impact out of a multi-CDN workflow.
DLVR data demonstrates the difference in CDN decisioning for each publisher. Publisher 2’s business rules are set up to switch to CDN 1 as a last resort to maintain performance, whereas Publisher 1 relies on CDN 1 more often.
There’s clear value in having multiple CDNs. But a metric that averages publisher results into a blended measurement tells you very little that is useful or actionable about your specific CDN performance. Measuring every stream as it plays, and then acting on those measurements stream-by-stream to select the best CDN for each user, delivers the best possible experiences across your audience.
Check out more information on how Comcast CDN and DLVR work together to provide robust multi-CDN strategies.