
Large Language Models and How To Use Them

IoT For All

- Last Updated: June 15, 2023


https://youtu.be/4PPx8-ub0vE

What are LLMs? Scott Sandland, Founder and CEO of Cyrano.ai, joins Ryan Chacon on the IoT For All Podcast to discuss large language models and how companies can use LLMs. They cover why LLMs are a big deal, the value of large language models, ethical considerations, foreseeing ethical challenges, why moving fast and breaking things doesn't work for AI, prompt engineering, and AI hype.

About Scott

Scott Sandland is the former world's youngest hypnotherapist and the former CEO of a mental health clinic helping at-risk teens and drug-addicted adolescents. Because of his vision to help more people at scale, Scott shifted his focus and has become a multi-patent inventor in Artificial Intelligence. Now, as the CEO of a company focusing on strategic empathy and linguistic analysis, Scott uses his AI system to empower people having high-value conversations. He has been published in numerous peer-reviewed medical journals, and his work at Cyrano has been mentioned in the Harvard Business Review, Psychology Today, Forbes, Entrepreneur Magazine, and more. Many tens of thousands of people have used his AI software to date.

Interested in connecting with Scott? Reach out on LinkedIn!

About Cyrano.ai

Cyrano.ai uses proprietary language models to understand a person's values, priorities, motivations, and commitment levels in real time. From there, it gives actionable insights to increase rapport and understanding as well as strategic advice to increase conversion or follow through. While the commercial applications of Cyrano are obvious in sales, the primary goal is to empower the conversations around healthcare and mental health.

Key Questions and Topics from this Episode:

(00:53) Introduction to Scott and Cyrano.ai

(01:48) What are LLMs?

(03:03) Why are LLMs a big deal?

(04:01) How can companies use LLMs?

(05:52) The value of large language models

(09:03) Ethical considerations for AI

(13:17) How can companies foresee ethical challenges?

(14:45) Move fast and break things doesn't work for AI

(16:14) Prompt engineering

(17:16) AI hype and what to focus on

(19:37) Learn more and follow up


Transcript:

- [Ryan] Hello everyone and welcome to another episode of the IoT For All Podcast. I'm Ryan Chacon, and on today's episode, we have Scott Sandland, the CEO and co-founder of Cyrano.ai. They are a company that uses proprietary language models to help understand a person's values, priorities, motivations, and commitment levels in real time.

We're gonna spend a lot of today talking about LLMs: what LLMs are, the ethical roadmap versus the technical roadmap when it comes to AI implementation and how companies can handle and be thinking about that, and why that common phrase of move fast and break things may not be best for the AI world. Prior to getting into this, if you're watching this on YouTube, give this video a thumbs up, subscribe to the channel, and hit the bell icon. If you're listening to us on a podcast directory, please subscribe so you get the latest episodes as soon as they are out. Other than that, let's get on to the episode.

Welcome, Scott, to the IoT For All Podcast. Thanks for being here this week.

- [Scott] Thanks for having me, Ryan.

- [Ryan] Absolutely. Exciting conversation we have planned. Wanted to have you kick this off by giving a quick introduction about yourself and your company for our audience.

- [Scott] Sure. So my name is Scott Sandland. I'm the CEO of a company called Cyrano.ai. The best way to think of us is we are an API that does linguistic analysis for new stuff that most people haven't been looking at. Most people have been looking at things like sentiment or keywords. What we look at is more empathy, soft skills, how to relate to people better, and what we think of as the lifetime value of the relationship.

So we built out a tool that does that, analyzes a person, and then provides insights to a human user on how to have a better relationship with that person in the given context, whether that be sales, technical support, or mental health support.

- [Ryan] So I know we wanna talk about a lot of different things today. One of them is around LLMs. This is a topic, or at least an acronym and phrase, that's been going around a lot lately in the AI space. For our audience, since we are new to covering more AI related topics, can you just explain what LLMs are, what's exciting about them, and why people should pay attention?

- [Scott] Sure. So LLM stands for large language model. That's easy. The most famous example of this is what's now called ChatGPT. The idea of a large language model is they've got a ton of language, they've put it into a system, and they put it up against, let's call it a billion parameters, and it's autocomplete on steroids. So the same way that your email, like Gmail, has those auto complete, auto finish the sentence suggestions, and text messaging apps are doing it now. It's that, but just way better. And that's really all it is. So when you're using ChatGPT or any LLM, it's just predicting what, statistically, the best next word is.

And probability. So usually if you say, will you go to my birthday, it like fills in that sentence for you. That's what an LLM does. But as it does that, it means it's really good at understanding what you are trying to say and understanding what it can say in response.
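To make the "autocomplete on steroids" idea concrete, here is a minimal sketch of next-word prediction, assuming the Hugging Face transformers library and the small open source gpt2 checkpoint. The model choice is illustrative, not what ChatGPT runs, but the mechanism is the same at far larger scale.

```python
# Minimal sketch of next-word prediction, assuming the Hugging Face
# transformers library and the small open source "gpt2" checkpoint.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Will you go to my"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (1, sequence_length, vocab_size)

# Turn the scores at the final position into a probability distribution
# over every possible next token, then show the five most likely.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id)!r}: {prob:.3f}")
```

Running this prints the model's most likely continuations with their probabilities, which is all an LLM is ever doing, one token at a time.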

- [Ryan] It's interesting you bring this up because I wanted to ask about this: we spend a lot of time, at least developers spend a lot of time, learning computer languages, but this sounds more like computers are learning our language. Talk about that and why that's a big deal.

- [Scott] It's a huge deal because the reason we learned computer languages was so that we could have a greater influence on what the computer does. So we could code them, so we could program them, so we could order them around and get predictable outcomes. The most important computer language, the most important programming language, is now English, and that also means that the computers can now get better outcomes out of us.

And so our ability to go back and forth with the computers is no longer bottlenecked by who knows code. It's just, do you know English? You can interact with a computer better.

- [Ryan] And how should organizations be thinking about LLMs in their, I guess, daily life? How can they be utilizing them in their business operations, in the products and services that they launch, in different elements of their business? Is it utilizing things like ChatGPT, or can they build their own LLMs? How should we be thinking about how this fits?

- [Scott] Yeah, so the trick of it is you can put it into a lot of places because LLMs can write code. It's not perfect, you still need human supervision and all that, but it means getting an MVP or a pilot up can be much, much faster, where a person in the marketing department could get something stood up without having to go develop a budget.

So as you're talking about SMBs and up, it means that you can have internal pilots be championed much more easily. Can they build their own? Yes. And this really gets into some interesting ethical considerations. There's the ChatGPTs and Google Bard, but there are other LLMs that are more open source. At the time of this recording, by the way, the Google document was just released on what's happening with Facebook or Meta's LLM, which is called LLaMA, and the open source variations on that, now called Vicuna.

Vicuna-13B is the one I think most people are talking about, but the idea of that is it's open source: build it yourself, go have fun, do whatever you need to do. Hugging Face also has a great one with a bunch of plugins available. So the ability to build off those open source tools and have your own language model, it's crazy that something no one could have six months ago, you can have your own today for free.
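As a rough sketch of how low that barrier has gotten, standing up an open model locally can look like the snippet below, assuming the Hugging Face transformers library is installed. The Vicuna checkpoint name is illustrative, and a 13B model needs substantial memory to actually run.

```python
# Rough sketch of running an open source LLM locally with Hugging Face
# transformers. The checkpoint name is illustrative; a 13B model needs
# tens of gigabytes of memory, so swap in a smaller model to experiment.
from transformers import pipeline

generator = pipeline("text-generation", model="lmsys/vicuna-13b-v1.5")
result = generator(
    "List three ways a small business could use a language model.",
    max_new_tokens=200,
)
print(result[0]["generated_text"])
```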

- [Ryan] Fantastic. Okay. And let me ask, companies obviously do continuous analysis throughout the evolution of their business. I imagine LLMs will be able to come in and help organizations make better decisions and provide better types of output, more connected to humans, as opposed to maybe the product side of things. How do you think about the output from that perspective?

- [Scott] It's a great brainstorming tool where you can say, Hey, what are the possible repercussions of this? Or what are the possible applications of that? And say, give me 20, and it will just spit out 20 and maybe 17 of them are useless and bad. But three of them you might not have thought of and it costs you nothing to get that input.

And it just, that sounding board and that rapid iteration is really valuable.
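A minimal sketch of that "give me 20" brainstorming pattern, assuming the OpenAI Python client (pip install openai); the model name and prompt wording are illustrative, not from the episode.

```python
# Sketch of the "give me 20" brainstorming pattern using the OpenAI
# Python client. Assumes OPENAI_API_KEY is set in the environment;
# the model name and prompt wording are illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": (
            "Give me 20 possible unintended consequences of deploying "
            "an AI assistant in customer support."
        ),
    }],
)
print(response.choices[0].message.content)
```

Even if most of the 20 are throwaways, the two or three you had not considered cost essentially nothing to surface.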

- [Ryan] I've seen the value. It helps not just with copywriting, but with being able to provide prompts and get output that gets you very close to all the way there when it comes to writing descriptions about products, or building things aimed at a certain type of customer, with some kind of emotion that you're trying to convey.

It's very interesting how it can relate to or create things that are more human-like than just code that does X and then outputs Y kind of thing.

- [Scott] Yeah, so you can say things like, write it with a fifth grade reading level. Write it with a college reading level. Write it very formally, write it informally, and it does a good job of that. It's still, to your point, a little bit robotic. Yeah. It's amazing, but it's not quite human yet, but it's a fantastic draft and I use it when I'm writing stuff.

I'll say, here are the things I want to put into an article. Help me organize this into an outline. It gives me an outline, gives me a couple of examples, and then I say, great, write a draft, make it like this, make it in my voice. And it knows my voice, so it writes it fairly similarly, and then I'm editing that instead of having my fingers on the keyboard for 20 minutes.
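A sketch of steering tone and reading level the way Scott describes, again assuming the OpenAI Python client; the system message and outline text are placeholders, not his actual prompts.

```python
# Sketch of controlling reading level and voice with a system message.
# The voice description and outline below are placeholders.
from openai import OpenAI

client = OpenAI()

draft = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": (
                "Write informally at a fifth-grade reading level, in a "
                "warm, direct voice with short sentences."
            ),
        },
        {
            "role": "user",
            "content": "Turn this outline into a first draft: "
                       "1. What an LLM is. 2. Why it matters. 3. One example.",
        },
    ],
)
print(draft.choices[0].message.content)
```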

- [Ryan] Yeah, it's funny because my sister, she's a teacher, and she just started making her own candles. So she wants to try selling them and see what kind of feedback she can get. And the other day she was like, how can I write a description for my Instagram page? What should I be doing to help promote it?

And I said, go sign up for ChatGPT. She had never used it before. And within 10 minutes, she had a very well-written description in the number of characters that she needed to fit into certain areas to promote her new candle brand. So that was neat to see someone who is not a technical person utilize these LLMs for the benefit of what they're trying to do with their business.

- [Scott] That barrier to entry has dropped dramatically over the last six months, and the kinds of things that we can all now produce at scale, quickly, are totally different than they were in 2022.

- [Ryan] Yeah, so let me ask you, you mentioned how fast things have been moving in this space, and there's obviously been ongoing discussion about how fast they should continue to move, which ties into the ethical conversation that is being had at times. A lot of big-name people were attached to a letter that was asking the AI world to slow down.

When you're talking to an organization, or you're working with an organization, or somebody listening to this is thinking about building something that's connected to AI, how important is the ethical discussion, the ethical roadmap, as compared to the technical roadmap, which we're all very familiar with, whether you're building software, hardware, you name it?

I feel like the ethical piece is more top of mind, or should be more top of mind, in the AI world than it necessarily is for other types of software solutions.

- [Scott] Yeah, it absolutely is. And there's a few things here. The overarching idea is you can't get the toothpaste back in the tube. So before we're executing things, we really need to think about unforeseen consequences. The laziest example is when OpenAI and DeepMind from Google were both training AIs to get good at Atari video games, simple two-dimensional things.

And it was a great learning playground. But when they said, hey, play Pac-Man and get good at it, the AI said, what does good mean? And they said, don't get a game over. So the AI learned how to press pause and said, hey, nailed it. So it's very literal, right? And so the unintended consequences of your requests are really important.

And the permutations of what might possibly go wrong with AI are greater than with other types of code because it's going to potentially experiment.

- [Ryan] Thinks for itself in a sense, right. It's figuring things out. It's evolving as you're- as it's working.

- [Scott] Yeah. And so as it's optimizing for the desired outcome, there are collateral damages that it would see as okay, because you didn't address them. And so you really need to think about some of those. Easy examples are racial bias, sexist bias, things like that, which have been well documented. If you wanna check out a quick TED Talk, look up Weapons of Math Destruction.

It's a perfect example of people trying to use algorithms to make things more efficient and fair and objective, and it just completely backfiring left and right.

- [Ryan] Yeah, I've seen a lot of people on social media posting about Snapchat's AI tools and things like that, and even ChatGPT. They'll ask it questions, and it very clearly has a bias in a lot of those areas that you mentioned. Even the political side too. They'll ask questions about current presidents, past presidents, different parties, and it will give very different answers to the exact same question when you just change out a name or an affiliation.

Which is super interesting. And I actually had two guests on a couple weeks ago, and we were talking about this. It was a big deal, talking about biases and how you work to try to remove them, because they are in a lot of these tools and are something that people need to be aware of.

- [Scott] Or just identify them. Sometimes there are some biases that when you use the tool right, the bias can become a feature if you can account for it. So an easy example is there's a ton of healthcare training data that's optimized for white guys in their forties like me, which is really good news for me.

But it's not very good news for people in other ethnic and demographic groups. But if we say, okay, now we've built a model that does great healthcare for Scott. We don't need to break that. We just need to only apply it as a tool to this demographic group. Now let's build one for the other groups.

And so it's not about vilifying that algorithm, it's about targeting that algorithm and making sure that we have parity in those other groups as well. So we don't need one model to do all of it. We just need to know what those biases are so that we can account for it in other resources.
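One way to picture that targeting idea in code: a dispatcher that routes each case to the model validated for that cohort and falls back to a safe default otherwise. This is a hypothetical sketch; the group labels and placeholder predictors are invented for illustration.

```python
# Hypothetical sketch of routing each case to the model validated for
# that demographic group, rather than forcing one biased model to serve
# everyone. Group labels and predictors are invented for illustration.
from typing import Callable, Dict

Predictor = Callable[[dict], float]

def make_router(models: Dict[str, Predictor], fallback: Predictor) -> Predictor:
    """Dispatch a case to the model validated for its group."""
    def route(case: dict) -> float:
        model = models.get(case.get("group"), fallback)
        return model(case)
    return route

# Each lambda stands in for a model validated only on its own cohort.
models = {
    "cohort_a": lambda case: 0.9,  # placeholder risk score
    "cohort_b": lambda case: 0.7,
}
router = make_router(models, fallback=lambda case: 0.5)

print(router({"group": "cohort_b", "age": 44}))  # uses cohort_b's model
```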

- [Ryan] One of the things I wanted to ask you about the ethical discussion: how can a company thoroughly prepare for things that may be unforeseen? How do they know what they don't know could happen? There's only so much, I feel like, you can do to protect yourself from things that could happen in a lot of these situations.

But how do you think about that or how should people be thinking about that and approaching it?

- [Scott] Yeah. So I think, and going back to the thing you said, there was that letter. Let's pause everything for six months, and that's because this arms race idea makes all these organizations think about moving fast, which means they will be sacrificing quality ideation time that is required to do this right.

So the simple answer is if a company is going to be working on any sort of AI initiatives, they need to have a tech roadmap and companies have a tech roadmap. They need to have an ethics roadmap that is in parallel to that and say, we can't start deploying things until we have answers to these questions, these frameworks, these guidelines.

There's a great book on AI ethics called Ethical Machines for people who are looking in that space. It's a great jumping off point, and it's a relatively easy read that is full of value.

- [Ryan] And you mentioned something about moving fast. The common adage in tech has always been move fast and break things. But it sounds like for companies, especially startups in AI, that may not be what's advised. If that's fair to say, why has that changed?

- [Scott] Yeah, move fast and break things doesn't really work well with AI. I just wrote a blog post, I put it on my LinkedIn, about AI and Jurassic Park being really parallel, scene for scene. There's these moments, you're like, oh my God, we can make dinosaurs.

That's amazing. These are giant creations more powerful than ourselves. Oh wait, we didn't think of this piece or that piece or that piece. And there's these known unknowns and there's these unknown unknowns, and you need to go slowly through those and discover them one at a time so that they don't cascade on themselves.

Because as those unforeseen consequences stack, that multiplier becomes really hard to unwind. But if you think about it deliberately in advance, you save yourself millions of dollars just by doing it the right way the first time.

- [Ryan] Absolutely. Yeah, it's definitely an interesting discussion to have, and a different way to think about things than I think a lot of us are accustomed to in the tech space that we played in before today.

- [Scott] It's worth saying that because we get such a multiplier on the execution of this, we can still save time, net of everything, if we're more deliberate now. And prompt engineering is something that a lot of people are playing with, which is basically just talking to ChatGPT in a way that gets it to give you what you really want, right?

And so you're going back and forth with it, and you finally find the right combination of prompts that gets you that result. That's the idea, you're still way faster. But putting in that work, in the ideation, in the boardroom, just chatting and brainstorming and working on it, that needs to be where we move slowly, because the execution, the writing of the code, the rolling out, all of that is so dramatically accelerated. Instead of just building, finding what breaks, and iterating, the thought process needs to come earlier in the process than it used to.

- [Ryan] One last thing I wanted to ask you before I let you go: there's obviously a lot of AI hype right now, a lot of conversations around AI. From your perspective, of the popular things being discussed in the mainstream, how much of the current AI conversation is hype versus not hype?

I think there are definitely different perspectives to take, but from your side of things, what should people really be paying attention to at a high level? And are there things people think they should focus on that maybe are not as important right now?

- [Scott] Sure. So I would say don't worry about jobs being taken away right now. Your job will be replaced by a person who knows how to use AI, but it won't be replaced by the AI. And you can learn the AI so that you don't get replaced. So that one's, for now, pretty straightforward. I would say, remember when chatbots became a thing and everyone had a chatbot and everyone was talking about that, and there was just this hype thing?

A lot of people are taking an LLM off the shelf, ChatGPT or something like that, putting their own colors on it, and then calling it their own AI. And there is going to be a ton of hype of people saying they've built something special when they've done nothing. And having to wade through that is going to be really confusing for a lot of people.

Obviously there's gonna be stuff that needs to be created in terms of frameworks for deepfakes and misinformation and all that, especially for the election cycles coming up. That is something that smart people are addressing and that increasingly needs to be resolved. But for the immediate fears and concerns that people have, the robots aren't coming for our jobs right now.

They just aren't. What we now have is much better tools, but what we don't have is a done-for-you solution. So we have new tools that we all have to learn. I'd say that's where we are in the hype cycle.

- [Ryan] Well, Scott, thank you so much for taking the time. Great conversation. I think our audience is gonna get a ton of value out of this. There's a lot of topics we covered today that I think people have a lot of questions about and curiosities about, so it's great to hear from somebody who's in the space and really knows their stuff.

So I appreciate you taking the time and excited to get this out to our audience.

- [Scott] Awesome. Thanks for having me.

- [Ryan] Yeah. And last thing I want to have you do is just for our audience who wants to learn more about your company, follow up on any discussion points or topics, what's the best way they can do that?

- [Scott] You can go to cyrano.ai, that's c-y-r-a-n-o dot ai, or just find me on LinkedIn. My name's Scott Sandland, and if you find me, I'm happy to have conversations with any of you who are interested.

- [Ryan] Fantastic. Well Scott, thanks again so much for your time and hopefully we'll talk again soon.

- [Scott] All right. Looking forward to it.
