Videos - Striim

Activating Microsoft Fabric with Real-time Data for Analytics & AI (Mon, 20 Nov 2023)

Striim, Microsoft’s strategic partner in data integration, introduces its new Microsoft Fabric adapters, enabling data engineering, data science, analytics, and AI user groups with modern real-time data streaming and integration to the Microsoft Fabric Data Warehouse and Lakehouse.

Speaker: Alok Pareek, Cofounder and Executive Vice President of Engineering and Products at Striim

Everett Berry on Microsoft Fabric vs Databricks: Should Databricks be worried? (Thu, 09 Nov 2023)

Ever ask yourself how to choose between Microsoft Fabric and Databricks for your enterprise data workloads on Azure? Join this discussion with cloud pricing and cost optimization expert Everett Berry from Vantage.sh as he illuminates the differences between these two powerful data lake technologies. We delve into the depths of their unique features, pricing models, and deep integration with Azure.

Our conversation ventures into the world of AI and its transformative impact on the modern data stack. Everett offers brilliant insights into how data teams are redefining their strategies to prioritize AI in their roadmaps.

About Everett:

Everett is Head of Growth at Vantage.sh. He is known for creating one of the most widely used indexes of cloud infrastructure costs at Vantage Instances.

Follow Everett Berry on X (formerly known as Twitter)

Everett’s original article on this topic: Microsoft Fabric: Should Databricks be Worried?

What’s New In Data is a data thought leadership series hosted by John Kutay, who leads data and products at Striim. The show hosts industry practitioners to discuss the latest trends, common patterns for real-world data architectures, and analytics success stories.

John Kutay, Head of Products at Striim, joins theCUBE hosts Lisa Martin and Dustin Kirkland at Google Cloud Next 2023 (Wed, 30 Aug 2023)

Transcript

Lisa Martin 0:06
Good morning, everyone, and welcome to theCUBE’s day one coverage of Google Cloud Next, live at Moscone South in San Francisco. I’m Lisa Martin. Dustin Kirkland is my CUBE analyst co-host. We’re here with about 20,000 people; you can hear the din of the buzz behind us. There was a tremendous amount of announcements this morning from a lot of great Google Cloud execs, customers, and partners. We’re here with Striim; joining us next is John Kutay, the head of product. John, great to have you. Thank you so much for joining us on theCUBE.

John Kutay 0:36
Thanks so much for having me. Super excited for this discussion.

Lisa Martin 0:39
Yeah, I would love to share with the audience more about Striim. What do you guys do? Mission, vision? Help us understand that.

John Kutay 0:46
Striim is unified data streaming for generative AI, analytics, and operations. We love helping our customers infuse real-time data into their decisions and their operations, and now generative AI, which is becoming a top priority for many of the enterprise data teams that we’re working with.

Lisa Martin 1:04
It is. Gen AI is probably the hottest topic on the planet, or one of them. You talked about real time, and I think one of the things we’ve learned in the last few years is that access to real-time data is no longer a nice-to-have for companies; it’s an imperative. And for every industry, it’s really hard to do. But I’m curious what some of the gaps in the market were when Striim was launched that you guys saw and thought, we can solve this?

John Kutay 1:27
So the company’s CEO and CTO came from GoldenGate Software, which at the time of its acquisition by Oracle was the number one database replication product in the market. But it was very pigeonholed into just copying data between databases, and there was this obvious demand in the market to not only move data in real time, but to analyze it. And now with this big wave of generative AI, it’s not about data going into some warehouse where you wait for someone to pull up a report. Now you want data automatically making decisions for you. You want your customers to talk to a smart, AI-driven bot that knows everything about them and can answer questions for them. And this all requires real-time data.

Lisa Martin 2:07
Absolutely. And every company, whether it’s the grocery store, the gas station, or Starbucks, has my data, and I expect that they not only use it responsibly and securely, but also use it to give me the real-time, relevant, personalized experience that I want. Every company has to be, I’ve heard people say, data driven. And I heard someone last week say no, not data driven, but insight driven. There’s a difference there. Talk to us about how Striim is working with enterprise data teams to really help them extract the value of data, especially working with Gen AI.

John Kutay 2:40
Macys.com, who we presented with previously at a Google Cloud Next session. Remember, they’re not in the business of doing data, right? They’re trying to sell clothes. They had a digital-first initiative, and Striim helped them go from their existing investments in their on-premise, you could call it legacy, infrastructure, and made sure that that data is in Google Cloud within seconds. Because if they’re building new digital applications, that data has to be there. So we’re really proud to have customers like that. And then we have other examples. Airlines, for instance: they want to run their operations on time, they need good customer experiences, and they need to make sure the aircraft are safe. We help American Airlines do exactly that. We were presenting with them at the Data and AI Summit, and with Striim, Databricks, and MongoDB, they were able to take their aircraft telemetry and action it for the operational teams that are there to maintain the aircraft, to make sure that everything’s safe, everything’s ready to go, and best of all, everything’s on time.

Lisa Martin 3:45
Yeah, that’s what it’s all about, right? Being on time these days.

Dustin Kirkland 3:48
And along those lines, talk to me a little bit about the velocity in terms of, you know, how teams integrate this, how fast it is, and how long until we see results from integrating Striim?

John Kutay 3:55
Absolutely. We’re really proud of being able to get our customers into production in a matter of weeks, even when it’s complex: breaking down long-standing data silos within the enterprise, a lot of technical complexity. For instance, at our presentation with American Airlines at the Data and AI Summit, they were really proud of the fact that they went to production at global scale within 12 weeks of Striim. And that’s because Striim is a unified data streaming platform, meaning the connectors, the data movement, the modeling, the processing, and the streaming into your target systems, whether that’s Google Cloud infrastructure, Databricks, or Snowflake. All that data has to be there with quality and uptime SLAs that the business can trust.

Lisa Martin 4:39
Where are your customer conversations these days? Are you talking with Chief Data Officers, CIOs? Is it all of the above? I imagine it can vary depending on the organization. But every company is so data rich; they have to be able to figure out, how do we get access to this now?

John Kutay 4:53
It’s really important to be a catalyst for internal collaboration, meaning you have to work with the CIOs and the Chief Data Officers all the way down to the people who are in the trenches building the pipelines, and build alignment there. And that’s something that we’re also really proud of. Because at the end of the day, yes, you’re solving technical problems, but you’re delivering on business use cases and initiatives, and that’s the most critical thing.

Lisa Martin 5:16
What are some of the key use cases that you see that maybe have more horizontal play across industries that Striim is involved in?

John Kutay 5:24
Yeah, that’s an amazing question. So right now, data teams already had a year’s worth of initiatives on their plate. And now with generative AI, and all the innovation that’s happening here at Google Cloud Next and across the various platforms, there is a very high-priority mandate for data teams to adopt generative AI, to bring their data into generative AI, and then to do the reverse, which is to bring generative AI to where their data is today. So those are some of the use cases they’re looking at in terms of making sure that data is making decisions on its own. Yeah.

Lisa Martin 5:59
Can you share a little bit about the partnership with Google and what you guys are doing together? How are you helping customers really unlock the value of AI and Gen AI?

John Kutay 6:06
Absolutely, we’re really proud of our partnership with Google. If you’re a BigQuery user and you go into the Add Data button, Striim’s right there; you can launch it from your console. Striim is a Google Cloud native product. Our CTO, Alok Pareek, has presented here at Google Cloud Next since the beginning, when they first started doing these shows. And we’re really proud of helping enterprises quickly realize the value of Google Cloud by complementing their existing enterprise investments, getting that data into Google Cloud, and making sure that it’s reliable and the business can build on top of it, using the modern infrastructure that Google is providing.

Lisa Martin 6:44
Yeah, that modern infrastructure: they talked a lot about that this morning. And for providers, it was probably like music to their ears.

John Kutay 6:50
Yes. Striim’s clearly an important piece of that, for sure.

Dustin Kirkland
How do your customers think about the return on investment, you know, with Striim and Google Cloud? Making their investment in you and seeing a return?

John Kutay
Look, when I work with a data team, I try to work with them on their goals and OKRs and things along those lines. If their goal is to move data from A to B, that’s not good enough; we have to talk about what your actual business initiatives are and how this data project, or tactic, is going to help you there. So with the example I brought up, Macy’s can tie it to customer experiences, where having fresh, reliable data is critical. For American Airlines, it’s their aircraft moving, making sure those are operating as efficiently as possible, taking off on time and well maintained. And that’s really where you see the ROI: how is data helping your business meet its mission statements?

Lisa Martin 7:51
When you’re in customer or prospect conversations, John, and they say, why Striim? What do you say? What are those key differentiators that really shine a light on the value prop?

John Kutay 8:02
Yeah, absolutely. The fact that it’s simply a unified platform: in just a couple of clicks, we’re spinning up a lot of complex infrastructure that you don’t have to know about as an end user, making sure that it’s very reliable and very fast. Instead of companies pulling in six or seven vendors to do the same thing, now you get the whole thing in one single pane of glass. You get your connectors, your data processing, your data delivery, and monitoring for data quality and freshness, so that the data stakeholders know there’s ultimately trust in that data.

Lisa Martin 8:37
And that trust is currency these days, right? It absolutely has to be there. It sounds like what Striim is doing is really helping companies kick out those six to seven other vendors. So from what I’m hearing, workforce productivity and cost efficiencies, the “why” Dustin was talking about, seem like some of the big outcomes that organizations in general can achieve with Striim.

John Kutay 8:59
The way I always think about it is very purpose driven. You have a specific business problem you’re trying to solve, and rather than it taking years of development and expensive investments, you can get your initiatives off the ground and into production very quickly. And that’s just with the power of the platform, and the way that we partner with data teams to make sure they’re tying it to their business initiatives and getting that value out of it.

Lisa Martin 9:22
Yeah, it’s all about getting trust, making sure the data is trustworthy, responsible, and secure, and extracting that value. Last question, John, before we wrap: anything new and exciting coming up that we should be looking for? Any events, any webinars, things that you want to plug?

John Kutay 9:35
Yeah, in fact, tonight we’re doing a What’s New In Data live. This is a thought leadership session that I run. Really excited to have Bruno Aziza, formerly head of data analytics at Google Cloud; now he’s at CapitalG, Alphabet’s CapitalG. And we have Sanjeev Mohan, who was previously at Gartner, and Ridhima Khan, VP at Dapper Labs, who is going to talk about modern digital consumer experiences. So that’s tonight at Salesforce Tower, and we’re going to record it, so it’ll be made available to everyone. And we’re carrying on with What’s New In Data, bringing all the data practitioners and data leaders together to really talk about how they’re innovating with data and meeting all these business goals they’re trying to deliver.

Lisa Martin 10:16
Awesome. Lots of stuff going on. Best of luck tonight. Sanjeev is a CUBE analyst from time to time, and we know Bruno; he’s been on the show. So lots of great folks. It’s what we were talking about before we went live: in tech, it’s just like two degrees of separation. John, it’s great to have you. You’re now officially a CUBE alumni; I can probably get you a sticker. So appreciate you sharing with us what’s going on at Striim with Google, and how you’re really enabling those data teams to maximize value and use Gen AI. Thank you so much.

John Kutay 10:41
Thank you for having me.

Lisa Martin 10:42
Our pleasure. For John Kutay and Dustin Kirkland, I’m Lisa Martin, and you’re watching theCUBE live, day one of our three days of coverage of Google Cloud Next. Dustin and I are going to be right back with our next guest, so don’t go anywhere.

 

Oracle to Snowflake Initial Load (Thu, 29 Jun 2023)

 

1. In this video tutorial, we will show you how to complete the initial load from Oracle to Snowflake using Striim’s Flow Designer.

2. To get started, we will go to the Create App page in Striim and click Start from Scratch using the Flow Designer. Next, you will name your new application and begin to design your initial load application.

3. In the components panel, search for Database as a source and drag it over to the right. With this component, we will connect to Oracle as our source database.

4. Setting up Oracle as a source requires your connection URL, username, and password. Under Advanced settings, you will enter which tables you are going to move and any other necessary details. Select New Output and enter a name for the data stream.

5. Now that you have your source configured, you will drag a Database as a target component over to configure your Snowflake connection. The same information is required when setting up your target. Under Advanced settings, be sure to enter your batch and commit policy.

6. Once you have Oracle and Snowflake configured, we will deploy your application.

7. Before running your application, you can preview the data stream by clicking on the blue eye icon or by going directly into Oracle and Snowflake. Let’s take a look at what is happening in our source and target.

8. In Oracle we can see that we have read 1,000 events, and Snowflake has not received any data at this point since the application is not yet running.

9. Now we will start the application and can watch as the initial load is completed in real time. Our Striim Application Progress screen shows that 1,000 events have been moved from Oracle to Snowflake.

10. In Snowflake, we will double-check that all 1,000 events were written by running a query. You can view additional details about your initial load by reviewing the Monitoring page in Striim.

11. In this video, you have seen an initial load application from Oracle to Snowflake created and running in just a few minutes. Thanks for watching!
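Step 10 verifies the load by running a query on the target. A simple way to extend that check is to compare per-table row counts between source and target. The sketch below is a hypothetical helper, not part of Striim; the table names and count callables stand in for real `SELECT COUNT(*)` queries run through your database drivers.

```python
def validate_initial_load(tables, source_count, target_count):
    """Return {table: (source_rows, target_rows)} for tables whose counts differ.

    source_count / target_count are callables that take a table name and
    return its row count, e.g. wrappers around SELECT COUNT(*) queries.
    """
    mismatches = {}
    for table in tables:
        src = source_count(table)
        tgt = target_count(table)
        if src != tgt:
            mismatches[table] = (src, tgt)
    return mismatches
```

With in-memory stand-ins for the two databases, a mismatched table surfaces immediately:

```python
source = {"ORDERS": 1000, "CUSTOMERS": 250}
target = {"ORDERS": 1000, "CUSTOMERS": 249}
validate_initial_load(sorted(source), source.get, target.get)
# {"CUSTOMERS": (250, 249)}
```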

Snowflake to Oracle Initial Load (Thu, 29 Jun 2023)

1. In this video tutorial, we will show you how to complete the initial load from Snowflake to Oracle using Striim’s Flow Designer.

2. To get started, we will go to the Create App page in Striim and click Start from Scratch using the Flow Designer. Next, you will name your new application and begin to design your initial load application.

3. In the components panel, search for Database as a source and drag it over to the right. With this component, we will connect to Snowflake as our source database.

4. Setting up Snowflake as a source requires your connection URL, username, and password. Under Advanced settings, you will enter which tables you are going to move and any other necessary details. Select New Output and enter a name for the data stream.

5. Now that you have your source configured, you will drag a Database as a target component over to configure your Oracle connection. The same information is required when setting up your target. Under Advanced settings, be sure to enter your batch and commit policy.

6. Once you have Snowflake and Oracle configured, we will deploy your application.

7. Before running your application, you can preview the data stream by clicking on the blue eye icon or by going directly into Snowflake and Oracle. Let’s take a look at what is happening in our source and target.

8. In Snowflake we can see that we have read 1,000 events, and Oracle has not received any data at this point since the application is not yet running.

9. Now we will start the application and can watch as the initial load is completed in real time. Our Striim Application Progress screen shows that 1,000 events have been moved from Snowflake to Oracle.

10. In Oracle, we will double-check that all 1,000 events were written by running a query. You can view additional details about your initial load by reviewing the Monitoring page in Striim.

11. In this video, you have seen an initial load application from Snowflake to Oracle created and running in just a few minutes. Thanks for watching!

Introducing Striim for Databricks (Thu, 29 Jun 2023)

Striim is excited to introduce our fully managed and purpose-built service for Databricks. In this demo, you will see how simple overall data pipeline configuration is from Oracle to Databricks. You will be able to set up a pipeline in under 5 minutes and watch the data arrive in Databricks in real time.

Striim for Databricks is the first fully managed, purpose-built streaming service for Databricks in the industry. Designed for everyone: you do not need any prior ETL expertise, and the simplified user experience requires little to no coding. The service also offers reduced total cost of ownership with consumption-based metering and billing.

In this demo, we will create an Oracle to Databricks pipeline and move your data in a few simple steps. In addition to this video, we have added inline documentation to help you understand the on-screen information.

When you launch the service, you will be brought to the Create a Pipeline screen. On this screen, you will enter a Data Pipeline Name.

Then, you will connect to Databricks. Connection details to Databricks are saved so you can reuse them for future pipeline configurations. In this demo, we will create a new connection to Databricks, which requires the account keys to be entered. The service automatically validates the connection and checks for all the necessary prerequisites.


Next, you will connect your Oracle source. Similar to the target, the service will save the source connection details for future use. Prerequisite checks are run against the source as well, and the report will be shown to you.

If the connection is valid, the service identifies the schema on the Oracle source and presents the list for you to select the correct one.

The service then checks the compatibility of the source schema with Databricks and presents the table list for your selection. While selecting your tables, you can also choose per-table transformations that will be applied as the data flows through the pipeline in real time. For this demo, let’s choose to mask this specific column’s data.
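To make the masking step concrete, here is a toy illustration of the kind of per-column, in-flight masking a transformation like this applies. The function name, row shape, and mask format are illustrative assumptions, not Striim’s actual implementation, which is configured in the UI.

```python
def mask_column(row, column, keep_last=4, mask_char="*"):
    """Return a copy of `row` with all but the last `keep_last` characters
    of one field replaced by `mask_char`, e.g. for card numbers or IDs."""
    value = str(row[column])
    masked = mask_char * max(len(value) - keep_last, 0) + value[-keep_last:]
    return {**row, column: masked}
```

Applied to a sample record as it streams through the pipeline:

```python
row = {"customer": "Jane", "card_number": "4111111111111111"}
mask_column(row, "card_number")
# {"customer": "Jane", "card_number": "************1111"}
```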

Striim for Databricks also offers intelligent performance optimization with parallel data processing by grouping the tables.

A summary is shown in case you choose to make modifications before running the pipeline. In this demo, we reviewed it and started the first pipeline. As you can see, within a few seconds the pipeline was created and started moving the initial load automatically.

Striim for Databricks also has intuitive overview dashboards and monitoring screens. The source and target statuses are displayed on the Overview screen. In this case, Oracle is online and green, and Databricks is Paused, which means data is not yet flowing between the source and target. We will review our Oracle data and Databricks to ensure the data flow will move smoothly.

Let’s check what’s happening in our source and target. First, we will go into Oracle to check the number of records that have been moved through change data capture (CDC) for each table. Then we will check in Databricks that the same number of records has been written for each table. We can also review the tables and columns that we have masked to ensure they processed correctly.

We will also use the Manage Tables in Pipeline feature to remove any tables that we no longer want to stream.

Next, we will use the Optimize Pipeline Performance screen, which will show us which tables in the pipeline may be causing issues in the data stream. We can then pause the pipeline to optimize performance by creating table groupings and reducing the time spent between batches being sent to Databricks.
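One common way to form table groupings like these is greedy load balancing: assign each table, largest first, to whichever group currently holds the fewest rows, so parallel workers finish at roughly the same time. The sketch below is an illustrative approximation of that idea; the function, group count, and row-count weights are assumptions, not Striim’s actual optimizer.

```python
def group_tables(table_rows, num_groups):
    """Greedily assign tables (largest first) to the currently lightest group.

    table_rows: dict of table name -> approximate row count.
    Returns a list of `num_groups` lists of table names.
    """
    groups = [{"tables": [], "rows": 0} for _ in range(num_groups)]
    for table, rows in sorted(table_rows.items(), key=lambda kv: -kv[1]):
        lightest = min(groups, key=lambda g: g["rows"])
        lightest["tables"].append(table)
        lightest["rows"] += rows
    return [g["tables"] for g in groups]
```

For example, four tables of uneven size split into two reasonably balanced groups:

```python
group_tables({"A": 100, "B": 90, "C": 10, "D": 5}, 2)
# [["A", "D"], ["B", "C"]]
```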

Now we will go back to the Oracle database and insert values into the source table. As you can see, the pipeline immediately recognizes the changes on Oracle and starts capturing them in real time. If we run a query on Databricks now to check the changes made to that table, we will see the CDC events are already available.
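Mechanically, what makes those changes appear on the target is that each captured event carries an operation type and is applied to the target in order. The event shape below (`op`, `key`, `row`) is a simplified assumption for illustration, not Striim’s wire format.

```python
def apply_cdc_event(table, event):
    """Apply one captured change to an in-memory stand-in for a target table.

    table: dict keyed by primary key.
    event: {"op": "insert"|"update"|"delete", "key": ..., "row": {...}}.
    """
    op, key = event["op"], event["key"]
    if op in ("insert", "update"):
        table[key] = event["row"]
    elif op == "delete":
        table.pop(key, None)
    else:
        raise ValueError(f"unknown operation: {op}")
    return table
```

Replaying an insert, an update, and a delete in order leaves the target consistent with the source:

```python
t = {}
apply_cdc_event(t, {"op": "insert", "key": 1, "row": {"qty": 2}})
apply_cdc_event(t, {"op": "update", "key": 1, "row": {"qty": 5}})
apply_cdc_event(t, {"op": "delete", "key": 1})
# t is now {} again
```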

Thanks for watching! You have now seen seamless, automated, real-time data capture using the Striim for Databricks service.

Microsoft Fabric and Striim (Fri, 23 Jun 2023)


Using Microsoft Fabric, we created real time dashboards in minutes using CDC from MongoDB. See how simple it is!

This data pipeline continuously takes sales and inventory data from MongoDB using Striim, integrates with Azure Event Hubs, and visualizes it in OneLake using Power BI. It’s a cool solution put together by Striim developers to combine operations, events, and analytics, highlighting our partnership with the newly announced, unified Microsoft Fabric.
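The glue in a pipeline like this is converting each MongoDB change-stream event into the flat payload the downstream dashboard expects before it is published to Event Hubs. The sketch below shows one plausible shape for that conversion; the field names and output format are assumptions for illustration, not Striim’s actual wire format.

```python
import json

def change_event_to_payload(event):
    """Flatten a MongoDB change-stream event into a JSON string.

    Keeps the operation type, collection name, and document key, and
    merges in the full document (minus the raw _id) when present.
    """
    payload = {
        "operation": event["operationType"],   # insert / update / delete
        "collection": event["ns"]["coll"],
        "key": str(event["documentKey"]["_id"]),
    }
    payload.update(event.get("fullDocument") or {})
    payload.pop("_id", None)
    return json.dumps(payload, sort_keys=True)
```

Given a change event in the standard change-stream shape, the result is a flat record ready for an event hub:

```python
event = {
    "operationType": "insert",
    "ns": {"db": "retail", "coll": "sales"},
    "documentKey": {"_id": 7},
    "fullDocument": {"_id": 7, "sku": "A1", "qty": 2},
}
change_event_to_payload(event)
# '{"collection": "sales", "key": "7", "operation": "insert", "qty": 2, "sku": "A1"}'
```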

 

 

How to Migrate Transactional Databases to AlloyDB (Tue, 11 Oct 2022)

Striim offers a fully managed, unified data movement platform that streams data to AlloyDB with unprecedented speed and simplicity. Migrate and replicate data to AlloyDB with zero downtime and near real-time SLAs for business applications on AlloyDB.

See it in action: Schema Evolution (Wed, 07 Sep 2022)

With Striim’s schema evolution capabilities, you have full control whenever data drifts. Capture schema changes, configure how each consumer propagates the change, or simply halt and alert when a manual resolution is needed.
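The per-consumer choices described above (propagate, ignore, or halt for manual resolution) amount to a small policy dispatch on each captured schema change. This is an illustrative sketch of that idea; the policy names, event shape, and generated DDL are assumptions, not Striim's configuration model.

```python
def handle_schema_change(change, policy):
    """Dispatch one captured schema change according to a consumer's policy.

    change: {"table": ..., "column": ..., "type": ...} for an added column.
    Returns DDL to apply downstream, None to skip, or raises to halt.
    """
    if policy == "propagate":
        return f"ALTER TABLE {change['table']} ADD COLUMN {change['column']} {change['type']}"
    if policy == "ignore":
        return None
    if policy == "halt":
        raise RuntimeError(f"manual resolution needed: {change['table']}.{change['column']}")
    raise ValueError(f"unknown policy: {policy}")
```

The same change can then be propagated to one consumer while another halts and alerts:

```python
change = {"table": "orders", "column": "coupon_code", "type": "VARCHAR(32)"}
handle_schema_change(change, "propagate")
# "ALTER TABLE orders ADD COLUMN coupon_code VARCHAR(32)"
```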

Building Real-Time Data Products with Data Streaming (Mon, 29 Aug 2022)

John Kutay, Product Manager, Striim

Data leaders are evaluating methods to meet the various needs of cross-functional stakeholders. Some business users and customers need data in real time, others need materialized views in a business format, and some want all of the above! Learn how data streaming and change data capture can decentralize your data operations. This talk includes specifics on sourcing data from collaborating operational systems (OLTP databases, sensors, API data) and transforming it into data products in the format of actionable business data, with strong SLAs and SLOs for uptime and delivery speeds.
