
Interview with Daniel Ravner, Brinker CEO

  • Brinker Editorial
  • Oct 2, 2025
  • 33 min read

Zachary Manning's discussion with Daniel Ravner explored the evolving landscape of disinformation and the innovative approaches being developed to combat it. Ravner drew a crucial distinction between disinformation—intentional, malicious deception designed to manipulate perception—and misinformation, which involves unknowingly sharing false information.


Transcript


0:00 Intro Clip Montage: Welcome to the Disinformation Tech podcast, where we explore the role that technology plays in both spreading and combating disinformation. It's become a tool of war. The conversation around AI overlooks the issue of misinformation and disinformation. It's one of my areas of greatest concern. Facebook and Google advertising linked to disinformation. Misinformation and disinformation. Fake news. Fake news. Fake news. Well, a new venture aims to help people navigate through fake news on social media. So, one North Carolina startup is working to help people sort through all the clutter and get to the facts. Fighting fake news and misinformation, his app is called Logically. Can artificial intelligence fix the fake news problem? Well, a new online news startup called Knowhere is betting on it. Authenticate works such that it fingerprints the video as it's being recorded and then writes those fingerprints to a blockchain record. Factiverse: our product instantly fact-checks any claims made. The only thing that can filter AI is AI. Introducing Vigilant, the first real-time AI truth checker. When it's moving at a scale that humans can't manage on their own, we need to bring intelligent machines into the picture.

1:18 Zachary: Hey everyone. Today we're with Daniel Ravner. I don't want to spend too much time on the introduction; I think the main reason people listen to podcasts is so they can get to know who the guest is. What we're trying to do here, real quick, is talk to people who are working on technology in the disinformation space in particular: people who are trying to combat that problem and put tools in place that maybe lead to better journalism, or help companies or governments stop malicious narratives. Having said all of that, without further ado, I think the best person to explain what they're doing over at Brinker is Daniel himself. So, Daniel, how are we doing?

2:01 Daniel Ravner: Very good, thank you. Thank you very much for inviting me. I appreciate you reaching out, and I appreciate the time and the stage that you're giving us.

2:14 Zachary: Absolutely. So first, Daniel, I'd like to ask something that you would probably agree is a good way to start: what is disinformation? What is misinformation? How do you frame that?

2:26 Daniel Ravner: Well, the major difference is that disinformation is when somebody knowingly, maliciously lies and proactively comes up with a story in order to influence a state of mind or public perception. So that would be intentional. Misinformation is when somebody is sharing a lie, or fake news, or whatnot, but they don't know it's a lie. As far as they're concerned, it's the truth. That's the main difference.

2:56 Zachary: Okay. All right. We see the headlines, we see the news. Everyone's probably familiar with the disinformation buzzword, the fake news buzzword: misinformation, malinformation. We hear about biases, and different news outlets point fingers at one another. Everyone has covered this in some way, whether far from the left, far from the right, or in the center. But this is something that obviously has a long history; it's not anything new. So if you had to explain how you see the history of disinformation, misinformation, and malinformation, or the origin of it, how far back does this go? Is this a new problem?

3:52 Daniel Ravner: It's as old as time itself. The Trojan horse is probably one of the earliest known cases of disinformation, where something that was meant to look like a prize or a present was actually a way for soldiers to infiltrate the enemy. Down the line you have Marie Antoinette and the French Revolution, where she supposedly said "let them eat cake," which is itself probably fake. In that case, the revolutionaries wanted to create negative sentiment against the elite, and they turned Marie Antoinette into the symbol of everything that was wrong with the country. That was not the case; I think now in France there is an attempt to reframe her story, but that would have been disinformation. People were paid to sit in bars and talk loudly about how horrible she was, how corrupt she was. Then you have the case of the witch hunts: anywhere between 50,000 and 200,000 women in Europe died in horrible ways because of a book that came out that was completely misogynistic. It went from Europe all the way to Salem. There are countless other examples, definitely in our time, definitely in the Cold War. Disinformation is part of geopolitical relationships, where countries use disinformation and misinformation to influence a war, what would be considered soft power. That has been around forever.

5:37 Daniel Ravner: What has changed recently is the fact that you're getting everything in your hand, all the time, every day. In the past, disinformation was part of the discussion, part of the information you would have received. But right now, you can live your whole life with 100% disinformation that you would probably agree with more, that feeds into your psyche much better. So the technology allows disinformation to be much faster and much more effective, and I think it's imploding societies. I mean, disinformation is the biggest problem in the world right now. Definitely according to me, because along with my partners I've started a startup around it. But according to the World Economic Forum, according to the UN, it's the number one problem. Wherever you put your hands on the globe, you will see polarization. You will see extremists getting center stage. You will see democracies in danger. And because of disinformation, all of the other important issues cannot advance either, because even something like climate change becomes a politicized issue. Currently we're at the point where, if we don't curb it, we will continue on the downward spiral that we are on. For me it's unmistakable. I don't think it's my personal interpretation of the world; I think this is how it feels to many people.

7:20 Zachary: Yeah, I couldn't agree more. That's a good summary and a good start. Whenever I look at history personally, you see historical figures like Adolf Hitler, and the thing that made them so dangerous, to me anyway, is that they didn't just have the ability to convince bad people to do bad things. They had the ability to convince good people to do bad things, a lot of good people. Part of that was obviously fear and force, but some aspect of it is that by altering people's perceptions of reality and finding a scapegoat in society, they were able to affect people's morality. You lead people to believe that there's this group of people who are the cause of all of your economic suffering, and now those people who are otherwise good believe there's almost something conspiratorial happening that needs to be resolved, and they do bad things even though they might otherwise be good people.

8:40 Daniel Ravner: I think what Hitler and Goebbels did well, I mean horribly and well, is that they allowed good people to keep on seeing themselves as good people while participating in, or not stopping, horrible things from happening. The way they did it is that they took huge populations of people, be it Jews, gays, gypsies, whatnot, and allowed people to see them as subhuman or nonhuman. So a person was able to hurt them without feeling he was being inhumane, because they were subhuman. And again, I would add that a lot of the lessons learned in Nazi Germany are being deployed currently, as we speak, in the most advanced liberal Western countries around the world.

9:32 Zachary: It goes to show that if you can alter someone's perception of reality, you can change their behavior in a negative way. And I think we would all be naive if we said that any one of us was invincible to disinformation. I'm sure I've found something online, thought it was interesting, and shared it with a loved one or a friend without putting much thought into it. But I think it's important that we try to minimize the number of times we do that, and at least try to be more critical.

10:02 Daniel Ravner: And it's because disinformation works well with human nature. If you're having a fight with somebody about an issue that you care about, and now you care about everything because everything is polarized, right? It could be about Trump, it could be about Netanyahu, it could be about Brexit; choose your hot topic. You are constantly being exposed to the same point of view, because that's what social media does to you. So if in the beginning you're saying, listen, I think something leans slightly more toward that side on the moral scale, then by the end of the day you've seen 20 people saying, listen, this is horrible, and to think otherwise you have to be an idiot and immoral. So you become polarized in the span of a few hours. And then it becomes part of your identity. If somebody presents other evidence to you, something that is not part of your identity, it becomes that much harder, to the point of impossible, to move away from it, because it's not a political discussion we're having; you're asking me to change who I am. Climate change would be an example. Where you stand on vaccines would be an example. Where you stand on Russia and Ukraine, or Israel and Gaza. So the ability to allow complexity into a discussion about things that are nothing but complex is eliminated.

11:37 Daniel Ravner: Again, in part it really speaks to human frailty: we don't want to change our identity, we don't want to change our group. Say that everybody in my group is not a group by blood, and not necessarily a group by city. In a good scenario, all of us are part of the same group because we're huge fans of a specific sports team, right? Our ideas, our views, are the glue that holds us together. So if our glue is a polarized, extreme opinion about something political, then for me to change my mind, or even open my mind to another point of view against the disinformation, threatens my sense of security, because I will no longer be part of the group. Couple that with social media and the online digital economy, and all of those mechanisms are hyper-exacerbated in a major way. This is why there's a new subject very fast, who's the good guy, who's the bad guy according to your point of view, and why disinformation and influence campaigns work so well now. An influence campaign is rarely about something that is completely fake. Fake news is not that big of an issue, because most malicious actors will find what is already in discussion, what is already sensitive to a society, and they will double down on that issue.

13:22 Zachary: It's definitely a problem, and I would agree with you. That's one reason why I wanted to start talking to people in the space and begin a conversation with like-minded people, and with people who are trying to put forth solutions. It's one thing to just talk about these things and know it's a problem; it's another thing entirely to do something about it. And I think there's a space for discourse and conversation; conversation, after all, is all that we have in many cases. But your team is taking a practical approach. I've had a chance to deeply research what you and your team are doing, and I think it's extremely impressive, and I want to share that. You've put together quite an impressive list of defenses that can help brands and help governments. So, and I know it's a tough thing to describe in a short amount of time, we only have so much time, but if you had to give us just a quick explanation: what is Brinker?

14:35 Daniel Ravner: So, Brinker is a holistic platform that allows you to fight influence and disinformation. We're doing three main things. The first is collection: we can bring information from all across the web, the open web, social media, news sources, into one place, and we have an unusually broad collection. The second part is automated investigation. Most fighting against misinformation and disinformation today is still mostly manual, so if nothing else, the ability to scale it up and automate much of it allows you to fight at the scale at which the problem is coming at you. We are a narrative intelligence company, which means that before we look at the very boolean, specific, distinct area of metadata (what time a post was launched, who's connected to it, and so on), we look at what people are saying, and we try to find who else is telling the same story. And the same story, when you have a disinformation attack, would not necessarily appear in the same way: different people will voice the opinion in very different ways, whether they are fake people, bots, avatars, or authentic people swept up in the influence campaign. So we're able to automatically understand what's in discussion, and which discussions might be problematic for the community, your brand, your organization, and so on.

16:13 Daniel Ravner: We then break that down into specific stories, and for any one of those stories, at the press of a button, you get a huge amount of information, and you get it the moment it's uploaded; you don't have to wait, because the AI does most of the investigation for you. So you'll be able to know: where did it come from? Did it start in another language? What platforms did it cross along the way? What's the chronology, who was first, who was second? So that's the investigation. We had collection, we had investigation. The third part is mitigation. We look at the findings of the automated investigation and come up with various mitigation options. One might be a pre-legal approach, where our default is to create a cease-and-desist letter. Especially in the world of racism, where you're dealing with bullies who are not afraid to show themselves, many times if they get a legal document saying that if they don't stop, we're going to take legal measures, in most cases that will stop them. Another would be approaching the media and creating awareness, because creating awareness of the fact that you might be being manipulated is a very good way to stop the manipulation. We also perform takedowns. And we have a counter-narrative feature, based on a proprietary methodology that uses behavioral psychology to create counter-narratives, because if you come at an angry mob with facts, it won't help; if you come with an emotionally intelligent response, it might lower the temperature, and that might help. So that's another mitigation, and we're constantly working on more. Everything is in constant movement; we're a startup. And I would say the story for Brinker is that we were born after the GenAI revolution, so we were built toward these AI capabilities from the get-go. AI didn't happen to us. That allows us to develop some things fairly fast, and we believe we're ready for the agentic AI revolution, which is the second wave of the AI change in the industry. Anyway, that's very broadly what we do.
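[Editor's note] To make the "who else is telling the same story" idea concrete, here is a minimal, purely illustrative sketch of narrative clustering: grouping posts by textual similarity so differently worded versions of one story land in the same cluster. This is not Brinker's implementation (which is proprietary and presumably uses multilingual embeddings); the bag-of-words vectors, greedy clustering, and `0.5` threshold below are assumptions chosen to keep the example self-contained.

```python
from collections import Counter
import math
import re

def vectorize(text: str) -> Counter:
    # Naive bag-of-words vector; a production system would likely use
    # multilingual sentence embeddings instead of word counts.
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def cluster_narratives(posts: list[str], threshold: float = 0.5) -> list[list[str]]:
    """Greedy single-pass clustering: each post joins the first cluster
    whose seed post it resembles, otherwise it starts a new cluster."""
    clusters: list[list[str]] = []
    seeds: list[Counter] = []  # one seed vector per cluster
    for post in posts:
        v = vectorize(post)
        for i, seed in enumerate(seeds):
            if cosine(v, seed) >= threshold:
                clusters[i].append(post)
                break
        else:
            clusters.append([post])
            seeds.append(v)
    return clusters

posts = [
    "Brand X cars catch fire, avoid them",
    "avoid Brand X, their cars catch fire constantly",
    "lovely weather in Oslo today",
]
for group in cluster_narratives(posts):
    print(len(group), "post(s):", group[0])
```

With this toy data, the two differently worded "Brand X" posts fall into one cluster and the unrelated post stands alone, which is the behavior the transcript describes: the cluster, not any single post, is the unit of investigation.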

Transcript Part 2: [18:00 - 36:00]

18:00 Zachary: Yeah, super fascinating. The thing that stood out to me from the first conversation we had was how much you can actually do if you find yourself the subject of a disinformation campaign. I didn't realize how many options were available, how many tools were on the table to navigate through that crisis, if you will. So I think it's good for companies and governments to know that there are options, right? You don't have to be completely helpless. You don't have to just watch it happen; there are people like yourself, and companies like Brinker, that can help you navigate around that problem. I've definitely seen it. I've worked for a company that had a disinformation campaign, so to speak, launched against it. I won't go into it, but essentially it was a situation where the company bought and sold a given type of vehicle, and there were manufacturer defects with the vehicle. Even though they were more in the dealer space, people were blaming the dealer instead of directing their attention to the real problem, which was actually the manufacturer of the vehicle. The dealer was at no fault; there was no way for the dealer to know these vehicles were bad. And so you could go up against that with facts, like you said, but really what you're dealing with is an angry buyer, someone who's not happy with what they bought. You have to counter that with an understanding of psychology; in many cases you can't just counter it with the information. You also have to have some sympathy for where they're coming from.

20:49 Daniel Ravner: I can tell you about research that we performed with the CRC, the Cyber Influence Research Center in Berlin. We found, I think the number was 180, videos of what looked like regular auto-review videos, and within those videos there were trends of criticism against the US economy, against Trump, again very cleverly said. Over time there was also an effort to drive people away, to say that there are other cars from other countries where, for the same amount of money, you get much, much more. But the sheer number of those videos, the fact that they all shared a similar narrative, the fact that there was something about the production that felt very similar from one to the next: seen individually they would seem disparate, but when you saw them together you would realize it was the same kind of production language. In the end, this was a disinformation campaign created by a state actor trying to hurt Western automobile brands. And without being able to track the narrative across the entire web, across the chaos of the web, you wouldn't be able to connect the dots and say, "All right, there is something coordinated here." Again, in this case, as in many of the use cases we're dealing with, it's not just about an angry customer. This is somebody deliberately trying to create a state of mind about an industry.

22:31 Zachary: It's interesting. So your team is up to a lot. I did notice, and I don't know if this is still a component of what you do, but I noticed there's a Splunk app, or a Splunk component, to at least some of what you've done in the past.

22:48 Daniel Ravner: We have various integrations, because we weren't born into a vacuum; the world existed before us, and we want clients to be able to use us easily. Also, just from the standpoint of a for-profit company, we want to make it easier to buy us. So we allow different integrations, pretty much according to the demands coming from customers. Customers come in and say, listen, we love what you do, but we have our own SOC, we have our own SIEM application, we just need an API for this and that. So, as startups do, what we lack in size we make up in flexibility, and we have built various integrations to allow customers to use those insights in a way that makes sense within their existing workflow.

23:46 Zachary: Nice. And I think that's needed, so it's good to hear that you're adaptive and willing to work with clients on a one-on-one basis. I have to say, I found a lot of insight in the logo, and in your explanation of it on our last call. I had to ask: there's this logo of a hand that's pointing, and it's a cool logo, kind of catchy, but you can't help but look at it and wonder if there's something more behind it. So could you tell us a little more about the logo and how you came to it for Brinker?

24:18 Daniel Ravner: Yeah, sure. So there is a famous Dutch fable, and for some reason it was famous where I grew up as well, about the little boy with his finger in the dam. The story is about a little boy who is walking at night and sees a hole leaking water, and his village is downhill. In the hope that putting his finger in the hole will stop the leakage, so that it won't become a huge break and flood the whole village, he puts his finger in. He's a little boy and he's brave, and he stays there all night, and he saves the village. That's the story. There's a statue of it; it's fairly well known, not everywhere, but fairly known. And this boy's name was Hans Brinker. So we took the name Brinker from him. We took the logo from that idea of trying to stop the flood. We recently released, still in beta, an AI agent, and we decided to call him Hans. And this is how it all came together: that's the logo, and that's our name as well.

25:34 Zachary: So it's based on this idea that if you see a problem, you should do something.

25:40 Daniel Ravner: You should do something. And also the idea that there is a flood, an online poison that is kind of overflowing, which I think everybody feels, and the idea that you can do something about it. I think we're still at a point in the world where the most advanced governmental entities understand that they can know much more, but this idea of being able to mitigate it is still developing; the world of mitigation is still developing. Our approach says there's no silver bullet, no single "here is how you stop disinformation," because the people you're playing against have been doing this for many, many years. They're playing the long game, and you just showed up. So we think the way to deal with it is by creating a variety of tools, which is a lot like cyber, really: building a variety of tools, making them accessible at the press of a button, within the context of cyber, within the context of defense, and allowing whoever needs them to play with those different levers. Again, the pre-legal solution is great against racism in the NGO use case, but it doesn't carry a lot of value if you're being attacked by a state, because who are you going to sue? There, the takedown option makes a lot of sense, and there's everything in between. And it depends on when you become aware of the influence campaign, because if it's early enough, which is part of what we're offering, you can also educate the public: let them know that something is coming, be aware of it. So you should have different mitigation options, and this is a major focus for us.

27:42 Zachary: Yeah, I think it's interesting to ponder that you could have a narrative brewing and not even know that it's out there. There are so many different platforms now; I don't need to rattle off all of them, I think everybody knows the big names. But outside of X, outside of TikTok, outside of Facebook, outside of all these big household names, at least over here in the US, I still encounter new platforms, or I'll hear someone talk about a new social media platform, a new group, or a website online, and you cannot possibly monitor what's being said about you as a person, or about a brand. You can't do it all on your own. Really, the only way to be alerted to some kind of narrative spreading about you, or to track it and get a handle on it, is with technology. So, we see bots, we see bot farms come up, we're seeing deepfakes, and obviously AI-generated content now. There was a point in time when all of this had to be human-generated; that's no longer the case, and that can exacerbate the problem. If you had to name the thing you're most concerned about, or the technology you're most concerned about, would it be deepfakes? Bots? The emergence of bot farms and bot accounts? I know it's a hard question, and they're all pretty bad, but what might you be most concerned about if you had to pinpoint something?

29:40 Daniel Ravner: I'm most concerned about people, really, the people behind it. I'm most concerned about politicians making use of disinformation tools. And I'm most concerned about humans being really eager to adopt some "truth" because it caters to their sense of wholeness, of morality. I would say that in the end, it's people. Actually, I heard something interesting from Neil deGrasse Tyson. I saw something of his which, again, I don't know quite how to address, but it's interesting enough that it has stayed with me for a few days now. He said that we're getting to a point where there will be so many fake images and fake bots on social media that people will stop believing it altogether. People will know that whatever they're looking at is probably not real, so the whole level of trust around social media is going to implode. I don't know if that's going to be the case, but I think it's a valid, interesting point, and it would kind of take us back to more traditional media, which I think is still much more important than people seem to give it credit for. But the malicious actors, if they've been anything, they've always been a step ahead in their ability to figure out what's coming next and make use of it. So again, as far as what I'm concerned about, it has more to do with people, with what humanity is able to do, and how far it will go before it realizes, listen, we've gone too far.

31:33 Zachary: Yeah. It's definitely true that, to me, it's harder to trust things. I see AI content in my feeds. A lot of times you can tell; sometimes you can't. I've found myself frequently now having to go and fact-check a video: did this natural disaster actually happen? Whereas previously, if you saw a natural-disaster video online, you could say with pretty good certainty that it was probably real. Now there seems to be an emergence of people creating these extreme videos just to try to get clicks, because they know that if people see a bridge collapsing or something like that online, they're going to click on it, watch, and see if it's real news.

32:27 Daniel Ravner: Going back to that, I would say two things. For us as a technology company, we have the ability, and this is something we will keep improving and optimizing as things change, to track content as early as possible, find it wherever it is, the same video, and then take it down wherever it is, or help the client come up with a counter-narrative as fast as possible. That's very much the tactical level. But if you go down to the individual's point of view, in many cases, even with AI, it's a very simple question. You have to ask yourself: what is the intent? Somebody created content, somebody put content in front of me, somebody shared content. What is their intent? Is it because they want me to believe them, because they want to rile me up? When you go down to intent, I think it becomes very clear, especially with something that is preposterous, something you can't believe is true. And if you can't believe it's true, stop: it might not be true, before you share it or do more with it. So I think intent will always be the key. Even an intelligence analyst, and we have quite a few highly experienced ones in the company, when looking online wouldn't first ask whether something is real or not real. Their initial understanding of whether something is real would be almost instinctive: why is somebody putting this online? Why is somebody saying this? Who are they connected to? A lot of logical questions will let you know fairly fast whether something is disinformation or influence.

34:12 Zachary: It's interesting. Would you say that the first 48 hours of a post, the first 48 hours of disinformation coming online, is an important time period? I know in the US we have, or used to have, I don't know if it still airs, a TV show called The First 48, where they would follow around law enforcement, follow around detectives, and the theme there was that if you can't get a grasp on what the criminal is doing in the first 48 hours after the crime, then you lose some ground, you lose some of the ability to track them down and catch them. So, how important is that window?

35:02 Daniel Ravner: That first initial post, those first few followers: that's all the difference between your ability to mitigate something and not mitigate it. Now, by the way, remember that the first 48 hours could often be a year before the influence campaign starts. Most influence campaigns, especially the good ones (good on a professional level, not necessarily on the moral level), don't start when they need to exert influence. In most influence campaigns, the malicious actor would join groups, would create groups about baking, about nationalism, about a hundred different things that are not related to the subject they would actually end up promoting. So the first 48 hours could be a full year before you're seeing anything. And when a story goes live and becomes huge and viral, it's much harder; you can still tackle it, you're never completely helpless, but the first 48 hours could be a long time before anybody else notices it. But again, with the help of AI technology, you can look at historical data, look at how people have done other things, look at the metadata, the connections, who's in a group with whom. You can definitely see who's telling a story, find the similarities, start creating clusters, and start following them. So yeah, I would even say it's the first negative 48 hours, the time before step one.
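The "find the similarities, start creating clusters" idea Daniel describes can be sketched very roughly in code. This is an illustrative toy, not Brinker's actual pipeline: it groups near-duplicate posts by token-set overlap (Jaccard similarity), so a coordinated narrative pushed by many accounts surfaces as one cluster. The threshold value and the greedy single-pass approach are arbitrary choices for the sketch.

```python
# Toy sketch of narrative clustering (illustrative only, not Brinker's method):
# posts that share most of their vocabulary get grouped together.

def jaccard(a: set, b: set) -> float:
    """Similarity between two token sets: |intersection| / |union|."""
    return len(a & b) / len(a | b)

def cluster_posts(posts, threshold=0.5):
    """Greedy single-pass clustering: join the first cluster whose
    representative post is similar enough, else start a new cluster."""
    clusters = []  # each cluster is a list of (index, token_set)
    for i, text in enumerate(posts):
        tokens = set(text.lower().split())
        for cluster in clusters:
            if jaccard(tokens, cluster[0][1]) >= threshold:
                cluster.append((i, tokens))
                break
        else:
            clusters.append([(i, tokens)])
    return [[i for i, _ in c] for c in clusters]

posts = [
    "the bridge collapse video is real wake up",
    "wake up the bridge collapse video is real",
    "great recipe for sourdough bread",
]
print(cluster_posts(posts))  # → [[0, 1], [2]]
```

A production system would use embeddings or TF-IDF vectors rather than raw token sets, but the principle is the same: near-identical wording across unrelated accounts is a strong coordination signal.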

Transcript Part 3: [36:00 - 51:24]

36:41 Zachary: That's interesting. So, if I understand correctly, it sounds like a lot of the more advanced influence campaigns start by building what they call in the cybersecurity world sock puppet accounts, fake accounts like the sock puppet you might have played with as a kid. They create these fake personas online, in a lot of cases create groups, build trust with that group and with the followers of the account, and they may do the legwork for that, say, two years before starting to try to influence those groups. Is that correct?

37:23 Daniel Ravner: That is correct. This is something we've seen around the Russia-Ukraine war; we've seen it around the Romanian election. This is common practice. It's the idea that you first mobilize a demographic and then you weaponize that demographic. During the mobilization, you might talk about sharing recipes, you might talk about cats, about sports, about a hundred different things, but you would build the credibility of the avatar. You would find a group, create a group, infiltrate the group and base yourself in it, and only on the day would you start creating the messages that the campaign was created to promote.
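One crude way to illustrate this "mobilize, then weaponize" pattern is to look for accounts whose recent posts share almost no vocabulary with their history. This is a hypothetical sketch, not a description of Brinker's detection logic; the scoring function and example posts are invented for illustration.

```python
# Hypothetical topic-pivot signal: an account that posted about baking and
# cats for a year, then suddenly pushes political narratives, scores high.

def vocab(posts):
    """All lowercase tokens appearing across a list of posts."""
    return {w for p in posts for w in p.lower().split()}

def topic_pivot_score(history, recent):
    """1.0 means the recent posts share no vocabulary with the history;
    0.0 means the recent posts stay entirely on familiar topics."""
    old, new = vocab(history), vocab(recent)
    if not new:
        return 0.0
    return 1.0 - len(old & new) / len(new)

history = ["sharing my cat photos", "best sourdough recipe", "cats and baking"]
recent = ["the election was stolen", "do not trust the official results"]
print(topic_pivot_score(history, recent))  # → 1.0
```

Real systems would compare topic embeddings over time windows rather than raw word overlap, but the intuition matches what Daniel describes: the pivot itself is the tell.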

38:14 Zachary: It's interesting. Okay. Yeah. So, I would definitely have to agree with you that ultimately the most concerning piece of this is the people behind it. Your website describes this tsunami, this flood of disinformation, and you say that whether it's an individual, an organization, or a government, you can help them. If I'm an individual and I feel like I'm being bullied online, or I have a kid, a son or daughter, in school who I feel is maybe being bullied online, or I see there's some narrative about a loved one or myself: is Brinker a company that could potentially help an individual, or are you guys mostly focused on NGOs, governments, and larger companies?

39:06 Daniel Ravner: So, we are mostly focused on NGOs and governments. And again, we work with banks, we work with some law agencies, and so on; we have that flexibility. We work with what are called high-profile individuals as well. We have taken on cases pro bono: people finding their images circulating around social media, people who found out that they are starring in a campaign for a crypto app. We've done that, mostly pro bono. So I would say that at large, the system is very much shaped around the use cases that come through the door, which are the ones I've listed, but we want to be able to help individuals in whatever capacity we can. So yes, we can; it's not the core nature of the corporation, but we have, and we definitely will.

40:02 Zachary: I think that says a lot about you and your team. So, obviously open source intelligence, open source investigation, is something you guys are using, tapping into what's been made publicly available on the web and trying to automate the process as much as possible. But it sounds like you have some actual analysts on hand. If you pay for Brinker services, you're not just paying for a tool. Is that correct? You also get—

40:32 Daniel Ravner: No, no, you're paying just for a tool. The analysts we have on board lean very much toward the product. Their job is to listen to the client, and the client is the analyst: our clients are usually analysts themselves. Social media analysts, intelligence analysts, cyber analysts, threat intelligence analysts, those would be our clients, the people on the ground doing the work. Our analysts' job is to listen to what their use cases are, understand how they use the system, and most importantly, understand whether something they need is missing from the system, and then translate that into product.

41:15 Daniel Ravner: Okay. So they would say, for example: as an analyst working manually, how would I approach the problem the user is facing? Then they would take that methodology to the developers, and the developers would come up with a way to automate the task, and that happens every day, a few times a day. Some tasks, some small features, take an hour to complete; some take months. It really depends. So I think we have amazing support, because we're on top of it, and I think every startup has to have amazing support, because we would only understand what needs to be built if we listen to our clients every day. It's not just about being gracious, and it's not just about customer support as a way to create stickiness. If I had 500 developers and two years in an empty lab, I would have stuff to do; so as a startup, you always ask: what's the priority, what's the roadmap, what are we doing tomorrow, and what are we not doing tomorrow? Those are the more important questions, and we try to find the answers from the client, not from anything we think ourselves. We don't want to come up with an answer and then try to figure out who asked the question. That link that turns clients' demands into features, because they understand the manual process of the investigation: those are the analysts we have.

43:02 Zachary: I have to say that's an interesting proposition. So, before we wrap up, I want to first say thank you. I love what you guys are doing, and I'm continually impressed; the more I talk to you, the more impressed I am. Just to close things up, I'll say to anyone who's watching: what we're trying to do here is highlight companies that are putting forth solutions, rather than just talking about the problems. We want to proactively support the companies that are trying to be a light against this disinformation and misinformation. But to finish with a question: you've started on this path, and obviously it's something you're passionate about; I think the passion shines through, it's easy to see that you're passionate about your work. Moving forward, what do you think the future holds for this space? What technologies do you think we might see enabled that weren't previously possible? Obviously technology is moving very fast these days, especially with artificial intelligence, and since I've been in this space I have seen artificial intelligence become better at finding logical fallacies in text, better at pinpointing discrepancies of reasoning. So looking forward, on a positive note, what are you most excited about? What are you hopeful for going into the future?

45:04 Daniel Ravner: So, as far as AI: although it's a buzzword, I think this buzzword has clout. It's really substantial, and we are at the onset of that revolution. So whatever the future might bring, I think AI is going to be a major part of it. And the point is not to use AI off the shelf, because AI off the shelf is, by nature, about generalization; the whole point of AI companies is to train AI specifically for specific tasks. That's on the insight level. As for the future: the future is agents that don't only provide better insight and allow you to digest data, but are also able to execute tasks based on it. So you could say: listen, I have the head of another country coming to my country next month. What do you think are the risks? What narratives might come up? A narrative might be that somebody's planning a protest, or somebody's planning to hurt him. Can you please automatically create the relevant intelligence environment, find the dominant narrative, create a report, send the report to the relevant people, send the physical threat report to the police, send the other narrative report to threat intelligence? I think we're heading in a direction where, well, I don't think AI will replace analysts and the creative thought that goes into the work, especially as malicious actors also get better all the time. But I think we will be able to automate as much as possible of the menial work, which I think makes up 80% of an analyst's time, and allow analysts to use their creativity. Because right now the information war is one-sided: it's like a football match where only one team showed up and they're playing on an empty field. That's where we still are today.
So even if we just level the playing field, that would be a huge jump forward, and I think the technology is there.

47:27 Daniel Ravner: I think that, again, the NGOs have already been there, and I'm seeing more and more companies entering the field, a lot of them with their own specific niche. And I think the power of the corporation, the power of the entrepreneurial spirit, is something that really needs to be added to the NGO spirit in order to execute real change, because they're fairly different, and we need more of that, period. Among those companies, I'm seeing already that VCs are leaning more toward this area, which is important, because if it's easier for me to get an investment, then it's easier for me to get five more R&D people, and I can create better solutions faster, and so on. So the entire ecosystem is building up around the problem, and it's still nascent. Still nascent, even though the problem is burning up the world; we are surrounded by unbelievable risks every day when you look at the news. The industry is still nascent, but I think we're seeing fairly rapid movement forward, even within the two years that Brinker has been alive. I can feel the market buzzing around us; it's much different now than it was a year ago, and much different than it was two years ago. So I feel that the way humanity, in the end, finds a way to save itself from this risk is through companies like us, and there are other wonderful companies in the space; honestly, I think there is plenty of work for all of us. So I hope that this entire industry will grow, and I hope that regulation will make our lives easier and drive societies away from danger. I think regulation has a huge part in what we do, and it all fits together: what we're doing right now is another link in that chain, bringing more awareness to the problem, to the industry, to the solutions, and then to regulation.
I mean, it moves together. So, on the optimistic side, I can't help but hear the roar, and I think that humanity will step up.

50:08 Zachary: I love it. Excellent closing. Daniel, I am excited to watch where Brinker goes and see the good work you guys are doing. Thank you again; this was an awesome conversation to have. Truly honored. You have a good team, a good product, a good service. And whenever I say the more I talk to you, the more impressed I am, I really mean it. Coming into our conversations, I didn't realize how many tools were available and how much there is that someone can do, and that's a sign for hope. So, if anyone's out there watching or listening to this and you've got a disinformation campaign that you want to get a handle on, understand on a deeper level, and maybe take action against, Brinker might be a good phone call to make. We'll put some links in the description below for anyone who is interested in learning more. And hopefully we can have you on again at some point in the future. Again, I can't thank you enough, Daniel.

51:11 Daniel Ravner: Zachary, thank you very much for the platform, for the stage that you've given us, and for your passion for what we're doing and what the industry is doing. I think it's important; I think it's an important part of this whole ecosystem. So, thank you.



