Interview with Oded Breiner, Brinker's CTO
- Brinker Editorial
- 6 days ago
- 25 min read
Trident Talks with Oded
In this episode of Trident Talks, Rosie Larter sits down with Oded Breiner, Co-Founder and CTO of Brinker, to explore how AI is changing the game in disinformation, narrative intelligence and deepfake analysis. Oded has been building for 20+ years - from enterprise tech to AI startups pre-LLMs - and now he's applying that experience to one of the biggest modern challenges: large-scale influence operations.

In this episode, you'll hear:
• What "narrative intelligence" looks like in practice - tracking how stories spread across platforms even without obvious links
• The real take on deepfakes: focus less on pixels and more on intent and narrative risk
• Why Brinker is taking an agent-first approach
Transcript:
0:09 | Rosie Larter: Hello and welcome to this episode of Trident Talks. I'm Rosie, a senior consultant over at Trident, and I'm delighted to be joined by Oded, one of the founders and CTO of Brinker. Now, before I ramble on, because I can definitely do that, would you mind just giving us a quick introduction of who you are and what really got you to the position you're in today?
0:32 | Oded: Sure. So I'm Oded Breiner. I've been doing tech for around 20 years. The first 10 years of my career were in bigger companies like SAP and a big Israeli telecom, where I did a bunch of work around the tech: QA, knowledge management, everything from management to development. In the last 10 years I focused more on startups. The first one was focused primarily on AI. It was basically a Siri competitor before Google had an answer to Siri, before they had Google Now. Siri was really the hot news back then and Android users didn't have their own version, so we built it. It was called Robin and it was really popular: millions of users, and some were using it for hours a day. We had truck drivers that would use it for their entire trip.
1:38 | Rosie: Wow.
1:40 | Oded: So we did a lot of AI, and this was much, much before we had LLMs; we basically had basic NLP back then. That's where I started learning about AI and machine learning, and it was just the beginning of deep learning. I remember Nvidia chips were free for us, because Nvidia wanted to see if their chips were good for AI. So we got them for free back then.
2:09 | Rosie: Yeah.
2:11 | Oded: And now it's the most expensive thing, but back then they would give it to you for free so you could try it out for AI, because it was just the beginning. Then I built a couple more startups, one in the ad tech space, where we did a lot of stuff with mobile gaming and mobile ads. The last one before Brinker was Worknet, where we built a customer success management platform over Slack, which was also connected to AI and chats and things like that. Then I joined Brinker, alongside Daniel and Benny, my co-founders. It's been around two years since we started building Brinker, and the rest is history.
2:56 | Rosie: Well, fantastic. I think it's always good just to kind of understand you specifically and where you've really come from before getting into kind of the weeds about the product and everything like that. And before we do so, I think it would be really interesting to discuss kind of the founding story and how I guess you and your co-founders really came together. But what specifically kind of inspired you to found Brinker? Was it like a gap in the market or was there a specific incident that happened?
3:29 | Oded: Yeah, it's a great question. Benny, one of my co-founders, had the original idea after the 2016 election manipulation, when Russia, supposedly or not supposedly, interfered with the US elections. That problem and that risk stuck with him, and he started donating to and working with NGOs that were trying to protect democracies from manipulation. Then he realized he needed to do something about it with more tech and more startups, because that's his background. He approached Daniel, my second co-founder, who had a lot of experience with influence: he had a startup around influence where he tried to decrease polarization between groups in the US. Then they searched for the tech guy, the CTO, and they saw that I had a lot of experience with AI, and I knew one of the people Benny worked with. They said, okay, he has a lot of AI experience, a lot of experience with chats and text, so that's a great way to build the technology. This was just at the beginning of generative AI. So when generative AI came along and LLMs started, it was the perfect match: get someone with a lot of AI experience to try to fix influence from other countries, or other companies or enterprises.
5:18 | Rosie: Mhm. Well, you all seem like quite serial entrepreneurs. I'll say that. Um all have quite a long history of either being very early in these companies or part of founding teams...
5:30 | Oded: Yeah, I make a joke about this which is funny and sad at the same time. I was around for the dot-com bubble; I was just starting to do tech. I did a lot of interesting stuff and built a lot of interesting products, but I can't say that I got rich in the dot-com bubble; I was pretty young. Then there was the 2008 bubble, and now we're in the AI bubble, or the AI opportunity, depending on how you look at it. It can be a bubble, but it can also be a wave of technology. I was also in the mobile bubble, or mobile wave, and built some products for mobile. So I've been through all these waves, and now it's the AI wave, and I think now I'm going to try to do the most influential thing I have ever done, and hopefully actually make a meaningful change in the world, or society, or technology.
6:37 | Rosie: I love that you've seen success through, to use your word, each bubble; you and your co-founders have grown through each one. For people listening who maybe haven't been through that kind of growth, maybe first-time founders, it's probably quite inspiring to watch three founders come together to build something like this. We'll get into more about the product, but keeping on track with this founding story: through the research I've done and the conversations we've had, it seems you're really positioning yourselves for larger enterprises and the public sector. How did you validate that, or how did you see a gap in the market for that specific industry?
7:36 | Oded: That's a great question. It came from a lot of meetings and talking with customers. When we just started, we met a couple of people who introduced us to NGOs fighting racism and things like that, and they had an immediate need for software that can find misinformation, disinformation and racism online, analyze it, and mitigate it. Those were some of our earliest customers. But then we started getting approaches from intel bodies in all sorts of governments around the world. They had started to hear about our narrative intelligence capability, and they wanted to learn about it and try our product, and then we saw that there's an actual big and important need from governments.
8:33 | Oded: And that's kind of what started the whole idea, right? The 2016 election in the US, where Russia intervened, so the US needs to protect itself from intervention. That was kind of obvious at the beginning. But when we actually got approached by government bodies and they told us about features they needed that we didn't have yet, that's how we did the validation. We had one intel body in Israel telling us they would like to see a link analysis of how things are moving through the web and through social media. We had just started working with LLMs, and we thought, okay, we don't want to do conventional metadata analysis. We don't want to show a map of who liked whom and who's following whom, because that's old-school, basic link analysis. So we said, maybe we can show a link analysis of the story: how the story is moving from person to person, even if they didn't like each other's posts or share each other's content; they just take the same story and repost it without referring to the original post. And we got an amazing, interesting link analysis map that shows how the story is moving from Telegram to Facebook to X and back, and how people are reusing it without necessarily even knowing each other. Maybe they do know each other, but there's no actual proof: they're not liking each other's posts and they're not retweeting. They're just using the same stories, the same narrative that they want to spread.
10:21 | Rosie: That's insane though, kind of going back to what you said at the beginning there about how governments were almost approaching you as well to kind of help fix their issues or or like you said there, kind of create um like products that maybe weren't on the market. I can't say I've spoken to many founders where government organizations have actually reached out to them. I feel like it's normally kind of on the flip side of we can help you.
10:47 | Oded: We did both, especially once we started getting approached by government bodies. We said, okay, let's approach some more of them. But yeah, Daniel did an amazing job with content marketing. We started writing a blog, writing on LinkedIn, recording some podcasts, and the words disinformation, narrative analysis, narrative intelligence got spread around and got connected with the name Brinker. That's how some governments found us, through all those blog posts and content.
11:25 | Rosie: Wow. So it's all been kind of I guess like founder-led so far as well.
11:30 | Oded: Yes, or the majority of it. We're still at a pretty early stage, so we don't invest a lot of money in anything that is not developing the technology. We're focused on developing the best technology and doing it the fastest, so we'll have a technological edge, because that can be our real advantage.
11:51 | Rosie: Well, and it's obviously been really successful so far. We speak to candidates every single day, and a big thing for them is whether the process so far has been founder-led; that's when a lot of people want to jump on board as well. So it's interesting for listeners to hear that you're doing that. I think that's so important.
12:22 | Oded: Yeah. With anything that is not founder-led, where I'm not part of every support call that one of our intel people or analysts or even the developers are on, I feel like I miss a lot of stuff, even if I watch the recording afterwards. Say it's a support call where a customer said this feature and that feature aren't working well, or need to be changed. When I'm part of that call, I feel that I learn a hundred times more than if I just get an email summary, or even watch the video, because I can ask questions. It's live, isn't it?
12:57 | Rosie: Yeah yeah live.
13:03 | Oded: And I can have an actual discussion, even suggest some solutions on the fly, and I learn a lot from that. It's also a little bit of centralized thinking, which usually is not so good, you know, micromanagement and all that. But the more time I can spend with customers and with the people who are using the software, the better. You always have to have the greatest people working with you, and I think we have some of the best in the world on our team, as developers and intel people, and they give the best support and can build great features. But to really steer the technology and the product, and really understand the priorities, the more I am part of calls, the better I can do that.
13:56 | Rosie: Yeah, and like you were saying earlier, customer insight, especially early on, is one of the most important kinds of feedback you can get, and you've obviously taken that on board. You've had people reach out to you, and you've been able to do that outreach as well. So, like I said, you've obviously been very successful putting all that time into the product and creating the best possible product you can. I can't say that many founders we talk to outwardly show that kind of passion for the technology side of what they're doing. Now, obviously we've spoken before and I've done my own research, but it would be good for the listeners: could you run us through, and please get as technical as you want here, the technology behind Brinker, and what makes it different from a traditional threat intelligence platform on the market today?
15:02 | Oded: By the way, the passion for the product and listening to the customers came from my experience with consumer apps. When we built Robin, the Siri competitor, we would just go out to the mall and ask people to try our app and get feedback. I remember the first time we did that, I was like, "Oh my god, I can't believe they used it totally differently than what we thought."
15:30 | Rosie: Really?
15:32 | Oded: Yeah. And this was back before you could record what the user is doing with the app, before you could do analytics on what they're clicking; that was really early. Not a lot of people had that in their apps; we actually built it ourselves to measure what people were using. But being with an actual person using our app was amazing. I remember, for example, that we worked a lot on a wizard, a walkthrough at the beginning of the app. We said this is the most important thing, so users learn how to use the app. It was really popular back then: every app you installed would teach you how to use it. And I remember we gave our app to someone at the mall, I think he was 16 or even younger. He took the app, looked at the walkthrough, and didn't give it a second of thought; he just went next, next, next, without even blinking. And I was like, "Oh my god, I can't believe that." We thought this was so important, but he didn't even look at it. Then we put some statistics on it and found out that most users would do that. So we said, okay, that's not enough; we need to make the buttons and everything really self-explanatory. Sorry, that was a bit of a tangent.
16:50 | Rosie: No, it's good to have the context behind the story.
16:55 | Oded: So, the technology, Brinker's technology. In one word, our biggest differentiator from other companies in our area, whether competitors or more conventional companies, is LLMs. When we started building Brinker, LLMs had just started, and we built everything using LLMs. Every part of our software is either based on an LLM or uses one in some way. Most of the companies in this field started back in 2016, 2018, 2019, way before LLMs, so that's our key differentiator: we were brought up in the generative AI age, the LLM age. That's when we started.
17:47 | Oded: Technically, one of the first things we implemented was basically narrative completion. Let's call it narrative completion; at the beginning we called it the AI filter, but now you could call it narrative completion or narrative discovery. Before we had LLMs, you would take a piece of content and turn it into some sort of vector, an array of numbers. If you wanted to understand the narrative there, or whether it matches something you're looking for, you would compare that array of numbers to other arrays of numbers to see how similar they are, and based on that you would say, okay, I have this narrative or I don't.
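The vector comparison Oded describes can be sketched in a few lines. This is a toy illustration, not Brinker's implementation: a real system would use a learned embedding model, while here a simple bag-of-words count vector stands in, compared by cosine similarity against a reference narrative.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for a real embedding model: a bag-of-words count vector.
    return Counter(text.lower().split())

def cosine_similarity(a: Counter, b: Counter) -> float:
    # Dot product over shared terms, normalized by the vector magnitudes.
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def matches_narrative(post: str, reference: str, threshold: float = 0.5) -> bool:
    # A post "has the narrative" when its vector is close enough to the reference.
    return cosine_similarity(embed(post), embed(reference)) >= threshold

reference = "the election was stolen by fraud"
print(matches_narrative("the election was stolen by massive fraud", reference))  # True
print(matches_narrative("local bakery wins award for best croissant", reference))  # False
```

The threshold of 0.5 is arbitrary here; in practice it would be tuned per narrative, and the limits of this surface-level matching (paraphrases, other languages) are exactly what the LLM-based approach described next addresses.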
18:41 | Oded: One of the things we started with early, something a lot of new products built in the last couple of years have added as well, is this AI filtering process, this narrative analysis. You have some email software today, like Superhuman and some other advanced email clients, that does something similar: every email that comes into your inbox gets asked a question about it using an LLM. Is this email trying to sell me something? Is this email an ad? Is it spam? Is it solicitation? Is it a recruiter?
19:21 | Rosie: Recruiter. Yeah.
19:28 | Oded: Yeah. So this AI filtering is something we added to our technology. We take every post we find online that has the keyword we're looking for, and then we ask a question about it using an LLM. The proof of concept for that was pretty simple: we just took an API from OpenAI and asked the question about anything that came into the system, and it worked pretty well. It already gave some value to the customers. But once we started working with real customers on that, we saw that the charges were off the charts, because you're basically asking about everything on social media; you're piping all of it through ChatGPT. That was a lot of cost, and it was really slow and very intensive, and that's when we started really deep-diving into how we do this technology better.
20:17 | Oded: How do we filter out the results before we send them to the LLM? How do we get rid of benign stuff that is not interesting for the LLM, like "hello", "hi", "what's up", "interesting", things like that? How do we mix and match multiple LLMs? Some LLMs are cheaper but are good at weeding out false positives or false negatives; how do we filter those out first and then send only the rest to the more expensive LLM? That's one of our key technologies: how we do that whole process with multiple LLMs, including some custom LLMs that are language-agnostic, so two posts can be in two different languages and we can show that they are exactly the same.
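The cascade Oded outlines, cheap filters first and the expensive LLM last, might look roughly like this. All three stages here are stubs of my own invention (a real deployment would call actual models); the point is the shape of the pipeline: each stage only pays for posts that survived the previous, cheaper one.

```python
BENIGN = {"hello", "hi", "what's up", "interesting", "thanks"}

def keyword_prefilter(post: str) -> bool:
    # Stage 0: drop trivially benign chatter before spending any model budget.
    return post.strip().lower().rstrip("!?.") not in BENIGN

def cheap_model(post: str) -> bool:
    # Stage 1 (stub): a small, inexpensive classifier tuned for high recall.
    # It may let false positives through, but should rarely drop a true hit.
    return "vaccine" in post.lower() or "election" in post.lower()

def expensive_model(post: str) -> bool:
    # Stage 2 (stub): the costly LLM call, asked a precise yes/no question
    # such as "Does this post push narrative X?". Only survivors reach it.
    return "microchip" in post.lower()

def matches_narrative(post: str) -> bool:
    # Short-circuiting `and` means each stage runs only if the prior one passed.
    return keyword_prefilter(post) and cheap_model(post) and expensive_model(post)

posts = [
    "hello",
    "The election results were surprising this year.",
    "Vaccines secretly contain tracking microchips!",
]
flagged = [p for p in posts if matches_narrative(p)]
print(flagged)  # ['Vaccines secretly contain tracking microchips!']
```

In this sketch only one of the three posts ever reaches the expensive stage, which is the whole economic argument for the cascade.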
21:05 | Oded: So that was the first part of our AI filtering, or narrative analysis, the basis of it. Then we built that link analysis map I talked about, because one of the intel bodies told us it would be interesting to them. That also took a lot of research, because we want to show a connection between two posts even if they're in two different languages but are pushing the same narrative, and we want to emphasize a connection more strongly if it's a narrative that is jumping from one social network to another. In our first version, we just showed a map with links. We told them: look, you can search here for X and you can search for Facebook on the other side, and if you see a connection between them, the narrative jumped from X to Facebook.
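A minimal sketch of such a story-level link map, assuming the narrative matching happens upstream so each post already carries a narrative ID. The data model here is hypothetical, not Brinker's: chain posts that reuse the same narrative in time order, and color an edge whenever the narrative hops to a different platform.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    platform: str
    narrative_id: str  # assigned upstream by the narrative-matching step
    timestamp: int

def narrative_edges(posts: list[Post]) -> list[tuple[Post, Post, str]]:
    # Link each post to the next reuse of the same narrative, in time order.
    # Edges are 'red' when the narrative jumps between platforms, 'gray' otherwise.
    edges = []
    by_narrative: dict[str, list[Post]] = {}
    for post in sorted(posts, key=lambda p: p.timestamp):
        chain = by_narrative.setdefault(post.narrative_id, [])
        if chain:
            prev = chain[-1]
            color = "red" if prev.platform != post.platform else "gray"
            edges.append((prev, post, color))
        chain.append(post)
    return edges

posts = [
    Post("a", "Telegram", "n1", 1),
    Post("b", "Telegram", "n1", 2),
    Post("c", "Facebook", "n1", 3),  # narrative jumps platform: red edge
    Post("d", "X", "n1", 4),         # jumps again: red edge
]
for prev, nxt, color in narrative_edges(posts):
    print(f"{prev.author}->{nxt.author}: {color}")
# a->b: gray
# b->c: red
# c->d: red
```

Note that no edge here depends on likes, follows, or retweets; the only link is shared narrative reuse, which is the distinction Oded draws against conventional metadata link analysis.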
22:04 | Oded: But then they were like, okay, so now I'm supposed to go over all these nodes, look for X and Facebook, and see where there was a jump. So we said, let's make that clear to the user: let's make it a red line if it's a jump between two different social networks or two different languages. One more technological edge, something we're really focusing on, is deepfake analysis. There is this saying on one popular podcast that I listen to, The Verge's podcast; they have a saying, "What is a photo?"
22:44 | Oded: Because back in 2014 or so, or a little later, phone manufacturers started manipulating photos. It started with Chinese phones: at the beginning, when you would take a photo with some Chinese phones, it would photoshop your face, make you look younger, without you even asking.
23:10 | Rosie: Oh my god.
23:12 | Oded: Yeah, without you even asking. Actually, I think most conferencing software today, Zoom and Google Meet and the like, also does that. It started, I think, with some Chinese phones or with Samsung; they added the feature on by default and people really liked it. "Oh, I look great in this photo." They shared more and took more photos. At the beginning Apple didn't do that, but then Apple also joined the trend, and now almost every photo you take is manipulated. Basically, a really quick photoshop is applied to almost every photo we take.
23:41 | Rosie: That's crazy. I had no idea.
23:49 | Oded: It goes much deeper than that. For example, there are some phones where, if you take a photo of the moon, it will actually superimpose an image of the moon so you can see all the features of the moon.
24:01 | Rosie: My mind's being blown right now.
24:08 | Oded: It's crazy. I saw some examples and it's crazy. Google took it to a whole other level when they added it to their Pixel, because they had the best AI for image manipulation, and they called it computational photography: whenever you take a photo, a lot of computation happens. Apple at the beginning did some really simple stuff: decrease the lighting from the sun so it doesn't blow out your face, increase the contrast on your face so it doesn't fade away, things like that. But as time, features and technology evolved, they do more and more manipulation. So almost all photos are manipulated today, and some say that by the end of 2026 the manipulation will be so pervasive that you can basically call the photos fake, like 90% of photos being fake, not just manipulated.
25:07 | Oded: So we thought about that and said, okay, there's no reason to check pixels or check for manipulation when you try to find deepfakes. There are all these companies trying to detect image and video manipulation, but it's a dead end, because all photos will be manipulated. That's why we said, let's use LLMs and our narrative intelligence capabilities to find what is actually being said in the photo or the video and see if there's a problem there. For example, take a photo that was generated using AI, but it's the cover of Time Magazine. It is manipulated, right? It's generated using AI; it's actually really apparent that it's manipulated, because that's the point of the Time Magazine cover. But it's real: it's an actual Time Magazine cover, so why would you say it's not real? So we use the LLM to check the intent. This is a Time Magazine cover; we check whether it's a real image of that Time Magazine issue. Then we say, okay, it is 100% manipulated, but there is nothing malicious about it. And that's our deepfake strategy for how we identify and analyze deepfake content.
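The intent-over-pixels idea can be sketched as a tiny decision model. Everything below is illustrative, not Brinker's implementation: the `assess` function stands in for an LLM judgment about what the image claims and whether its source checks out, and the key design point is that risk follows deceptive intent, not whether the pixels were manipulated.

```python
from dataclasses import dataclass

@dataclass
class ImageVerdict:
    ai_generated: bool
    deceptive_intent: bool

    @property
    def risk(self) -> str:
        # Manipulation alone is not the signal: nearly every modern photo is
        # processed. Risk comes from deceptive intent in the narrative.
        return "high" if self.deceptive_intent else "low"

def assess(caption: str, ai_generated: bool, verified_source: bool) -> ImageVerdict:
    # Stub for the LLM intent check. Here, an image that claims authority
    # ("official", "cover", "breaking") without a verified source is treated
    # as deceptive; a real system would reason over the full context.
    claims_authority = any(w in caption.lower() for w in ("breaking", "official", "cover"))
    return ImageVerdict(ai_generated, claims_authority and not verified_source)

# An AI-generated Time cover that really ran: manipulated, but low risk.
print(assess("Time Magazine cover", ai_generated=True, verified_source=True).risk)   # low
# A fake "official" cover from an unverified source: high risk.
print(assess("Official Time cover reveals scandal", ai_generated=True, verified_source=False).risk)  # high
```

Both example images are 100% AI-generated; only the second is flagged, which mirrors the Time Magazine example in the transcript.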
26:36 | Rosie: It's so interesting, because when thinking about deepfakes, I'm sure everyone listening, and most people scrolling on social media or LinkedIn, feels like it's popping up constantly. It can be as basic as celebrities being put in not the best situations, and there are a few springing to mind right now. But I've also heard stories of conversations like the one we're having now where people are deepfaking government authorities, or people really high up in banks, and it's resulting in transfers of money that are just off-the-scale huge. So it can range from, I won't say low level, but mentally harming someone, all the way up to physically losing huge amounts of money.
27:31 | Oded: Yeah.
27:31 | Rosie: And it's I think it's just crazy to be honest this world we're living in. But it seems like you're doing something very different because I think obviously the deep fake market is or the identity space I should say more generally is kind of growing um and quite saturated. But you're right, it seems like every image is fake. So you need to—
27:49 | Oded: Yeah.
27:50 | Rosie: So you need to kind of be doing something a little bit different. Um yeah, it's really really interesting. It makes you like scared for for the world, doesn't it?
28:01 | Oded: Yes. But also, you know, they used to manipulate images before there were computers. There's a famous photo of Stalin with two or three of his colleagues, other people high up in the Communist Party. Then he suspected one of them was a traitor, so he took an image manipulation expert, who didn't have computers back then, who would use scissors or whatever, and removed the suspected traitor from the photo. Then another person in that photo was suspected of being a traitor; Stalin had a paranoia that everyone was a traitor. He killed him, or put him in jail, and they removed him from the photo as well.
28:52 | Oded: So photoshopping images, manipulating images, has existed since the first day of images. But back then only really rich people or really powerful governments could manipulate photos, and now it's democratized. Everyone can manipulate a photo. You can go on Gemini or ChatGPT and change a photo. Some let you do it more easily; with some you can just change the color of hair, or put yourself in a different scenario, in the woods instead of on the street. If you think about it, it's something that was only for very powerful, maybe evil, big bodies, and now everyone can do it. And in essence, it might be better this way. It sounds scary that everyone can fake a photo, but now that everyone can, everyone will know that a photo can be fake, or is most likely fake. So people will not believe propaganda, will not believe harmful stuff, as easily as they did before, when only very secretive agencies could do things like that.
30:00 | Rosie: Yeah. That kind of awareness is there now. Um whereas I guess before it was—is it real? Is it real? Or you probably didn't even have that question to be honest.
30:13 | Oded: You didn't have that question. If it was on on the newspaper, you thought it was real.
30:19 | Rosie: Especially if it's being like published in like a place that is so maybe globally believed as well. Um I guess to kind of continue on this as well, I think we've obviously spoken about kind of AI quite generally, but I think any conversation we have to really dive deep into like Agentic AI as well, especially in the security space. I think obviously it's only been around for the last kind of year to 18 months and very much growing and a lot of different companies organizations are adopting it and implementing it into their security kind of systems. How are you seeing kind of I guess Agentic AI play into like disinformation?
30:56 | Oded: Yeah, that's a great question. It's everywhere, in every part of our software and every part of the workflow our customers and even our employees have. They use agents all the time: when they develop things, when they want to check our system, they use other agents. Sometimes they use ChatGPT, sometimes they mix it with Claude; it's part of our workflow. So, agents: we said, okay, we already have some agents in the system that are doing the analysis and passing it to other agents that do image analysis or OCR and things like that.
31:38 | Oded: We said, okay, this can be even more "agentic" if we just give the tools to our main agent, to the main system, and let it work with any tool that we let our client side work with. So we've developed an approach we call "Agent-First Design". Back when the iPhone started, people thought the desktop was dead and every web page would become an app, and then there was Mobile-First design. I think today Agent-First design is a good way to approach things, because if you build something for an agent, agents can talk to your API. So if you build an API, which you have to build anyway, you can give it to the agent first, before you put it in the client side, in the graphical user interface. We can build features in the API before our customers can use them via buttons in the system. We give them to our agent inside the app and tell it, "This is an API endpoint; analyze this post and check the risk," and the agent will use that tool even before we implement it in the graphical user interface. That actually saves you time.
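One way to read "Agent-First Design" is: register every API handler in a tool registry the agent can call by name, so a feature is usable the moment the endpoint exists, before any UI is built. The registry, decorator, and `analyze_post_risk` endpoint below are all hypothetical, a sketch of the pattern rather than Brinker's actual API.

```python
from typing import Callable

# Name-to-handler registry the agent consults. A UI button can later be wired
# to the same handler; the agent simply gets access first.
TOOLS: dict[str, Callable[..., dict]] = {}

def tool(name: str):
    # Decorator that registers an API handler as an agent-callable tool.
    def register(fn: Callable[..., dict]) -> Callable[..., dict]:
        TOOLS[name] = fn
        return fn
    return register

@tool("analyze_post_risk")
def analyze_post_risk(post: str) -> dict:
    # Hypothetical endpoint: score a post's narrative risk with a toy rule.
    risky_terms = {"stolen", "hoax", "cover-up"}
    hits = [t for t in risky_terms if t in post.lower()]
    return {"post": post, "risk": "high" if hits else "low", "matched": sorted(hits)}

def agent_call(tool_name: str, **kwargs) -> dict:
    # The agent resolves a tool by name and invokes it: no GUI required.
    if tool_name not in TOOLS:
        raise KeyError(f"unknown tool: {tool_name}")
    return TOOLS[tool_name](**kwargs)

result = agent_call("analyze_post_risk", post="The vote was stolen, it's a cover-up!")
print(result["risk"])  # high
```

The design choice mirrors the transcript: the endpoint is the product surface, and the graphical interface becomes a second, optional consumer of the same contract.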
34:15 | Rosie: This agentic AI space, like you said, you use it every single day. Even away from security, I don't work for a cyber security vendor, but we use these processes so we can talk to more people and free up some time, so we're not just sending out loads of emails all day. Taking it into the security setting, it's completely game-changing. I guess the question a lot of people have is: is it replacing humans? How does Brinker segregate what is AI-led from what still needs that human touch?
35:20 | Oded: That's a great subject for a full podcast, or 20 podcasts. Will AI replace people? You can even take a step back and ask: will technology replace people? Because even when they started making clothes with machines, people were scared and started burning them. They were called Luddites, and they burned those machines because they said the machines would take their jobs. History proved that technology only creates more jobs. Back then, only very rich people had a closet full of clothes. The reduction of cost only increases the demand.
36:38 | Oded: As a "Technoptimist," I think technology only increases the GDP. Let's say Microsoft has 100,000 developers and Google has 100,000 developers. They both have amazing LLMs. If Microsoft says, "Okay, now I can fire half of my workforce because of these LLMs," and Google stays with 100,000 developers—who will win? A company that has more people with that tool will always have a bigger advantage. If you're not using AI, you're definitely going to fall behind.
39:21 | Oded: There's also this principle in economics, Jevons Paradox. Say that as a security researcher, before AI, I could find a hacker in 24 hours, and now I can do the same in 20 minutes. It doesn't mean people will need only 20 minutes of my time; it means the demand for my service will increase exponentially, because now hundreds of companies that couldn't afford a security researcher can suddenly afford one.
41:05 | Rosie: You're right. When people think of AI taking jobs, they think of the big scary robots, but actually, it's just creating a different line of jobs as well. People my age and under are coming out of universities with PhDs focused on AI. It's not replacing jobs, it's adding to them.
41:56 | Rosie: Fantastic. I guess final question—what's kind of next for growth? Product-wise? Expanding to the US and EMEA?
42:29 | Oded: Growth to more countries, more government bodies, and more enterprises. We're very customer-led. I can tell you that in the Mitigation part of our software, we have a lot of technology around how to answer an influence campaign, how to create a counter-narrative. We see a lot more demand there to create complex scenarios of prediction: How will my counter-narrative be liked by this kind of person? How would this part of the world react? Prediction, counter-narratives, and everything around deep fakes—that's a strong focus for the future.
44:12 | Rosie: Awesome. We will put loads of links as we post. If anyone wants to reach out, I'm sure they will. Really appreciate you jumping on.
44:30 | Oded: Thank you Rosie. I really appreciate it and thank you for the podcast.
