(Trumpet Music) Hello and welcome to the Thinking Elixir podcast, where we cover the news of the community and learn from each other. My name is Mark Ericksen. And I'm David Bernheisel. Let's jump into the news. All right, hey, first up, José Valim created a video that shows a lot more about how to deploy a Livebook, you know, like out in the world. Yes. How it works and the different options that you have with it. So we've got a link to the YouTube video. This is mostly in line with the Livebook Teams feature that they're working on. It is a free beta, but you do need to sign up for it, so we've got a link to the Google form they're using for tracking that. It is free to join the beta, by the way. And I love some of the examples it gives for how this is helpful. So it's not just a how-to video, it's a why-to video. (Both Laughing) Love it. A lot of them can be internal tools for a company, for example, migrating data from one database to another. Of course you can do analytics. You can create some little dev tools in there. Lots of cool things you can do with a Livebook that's deployed for your team. Again, it's just a YouTube video. They're pumping up their Livebook Teams feature, and it's a great reason why you might be interested in this. Yeah, we've talked before about some of these Livebook examples where you're able to host a little user interface and deploy it, that's been shown several times. And this is like, how do I actually do that? Where I can create the Livebook, wrap a user interface around it, and present that to my accounting department, and let them run a report off of our database and put it out in a nice little Vega-Lite graph or something like that. That's a pretty cool use case. And this is where he's digging into how you actually do that and deploy it.
So cool that you can do that with Livebook. I need, I need, I want, I need. Well, continuing on with that, Chris McCord shared a really cool demo of a project that combines a number of cool topics together. He created a demo project called Paws-itively. So P-A-W-S, like paws, for little pets, right? It's around taking the idea of sentiment analysis for content moderation of user-created content. And he's running it through a Mistral Large LLM. It's self-hosted, it's 123 billion parameters, so really big. And he's hosting that on Fly.io. Then when a user tries to submit something, we can first check the content to see, does this meet our guidelines and whatever requirements we have? Maybe it's spammy, maybe it's aggressive or threatening or just insensitive, things like that. Or maybe they're trying to throw in their ETH wallet in what otherwise looks like a valid message. So he goes through all of that. But then one of the really cool bits was he was showing how you can take that Livebook demo project. So the whole thing he's doing with this sentiment analysis, it's all written in the Livebook. We've got a link to the gist. It's just a single file, and it's like 50 lines of code for the LiveView portion that builds this, runs the check against the LLM, and displays whether it's approved or not and why. The other cool bit is showing, in Livebook, this option called manual Docker deployment. So there's also the option of deploying to Livebook Teams, but he goes through and shows the manual Docker deployment, which generates a Dockerfile locally, which you can then deploy. He deploys it to Fly and uses a fly.toml file to show how to expose the server running inside the Livebook. And we've got a link to where you can actually interact with this hosted Livebook and play with it.
Anyway, I thought the idea of making it so easy to say, I wanna take whatever this Livebook user interface is, wrap that up in a Dockerfile, and deploy it wherever I need to within my own organization, that was really neat. So that was just continuing on with what José was talking about, what you can do with these deployments of Livebooks. So anyway, got links to both of those, very cool stuff, exciting stuff about Livebook. Yeah, I am super pumped. All right, moving away from Livebook and liveness and LLMs and such. We've got another topic with Zigler. Zigler 0.13.1 was released. This is a pretty awesome project by Isaac Yonemoto. Seems like he's working to tie the version of Zigler to the released versions of Zig. So for example, Zigler 0.13 corresponds with Zig 0.13. Not so sure about the patch versions, those might be independent. But in any case, if you don't know what Zigler is, Zigler is an Elixir library that makes it easy to create Zig NIFs in Elixir. Zig is, I don't know if new age is the right word, but it's this newish, modern way, I'll put it that way, of dealing with C-language scenarios. Zig is like a good replacement-ish for C and it has great cross-compilation tools. And Zigler is the Elixir library that allows you to basically embed Zig code inside of your Elixir application. So there is a sigil Z, you put your Zig code in there, and it will compile it and link it correctly. It'll do all the memory things for you. It'll do all the Rustler-type things for you, right? Like Rustler is to Rust. So that's Zigler. Zig is described as "a general-purpose programming language and toolchain for maintaining robust, optimal, and reusable software." That's their quote. Zig also comes with its own formatter, and Zigler works with it, so Zigler includes a mix format plugin.
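To make the sigil Z idea concrete, here's a minimal sketch of what embedding Zig looks like, based on Zigler's documented usage. The module name, app name, and function are made up for the example, and you'd need the zigler dependency and the Zig toolchain for it to actually compile:

```elixir
defmodule MyApp.Native do
  # Zigler compiles and links the Zig code below into a NIF at build time.
  use Zig, otp_app: :my_app

  ~Z"""
  pub fn add(a: i64, b: i64) i64 {
      return a + b;
  }
  """
end

# The Zig function is then callable like any Elixir function:
# MyApp.Native.add(40, 2)
```

The sigil body is plain Zig, which is why the mix format plugin matters: it hands those embedded chunks to Zig's own formatter.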
So all of your sigil Zs in there can also be formatted by Zig. It's pretty great tooling here. More about Zig: it was started by Andrew Kelley back in 2015, so we are approaching 10 years old at this point. Doesn't feel that old to me, but timelines don't lie, I guess. The project's ambition was to become the heir to the C programming language, and it compiles to native binaries. It's easier to write, and thanks to Zigler, easy to embed small chunks of that highly performant native code into Elixir. So in Elixir, the Zig code is accessed through a NIF, the BEAM's feature for executing native implemented functions, right? That's what a NIF is. Back quite a while ago, maybe we should have him back on, back in episode 83, we talked to Isaac Yonemoto about Zig and Zigler in greater depth. So if you're interested in this, you can go check out that old podcast episode we did with him, or you can read up on Zig at ziglang.org. To recap all that, Zigler 0.13.1 is out, and if you need native stuff, that is a pretty compelling option. And next up, German Velasco, following up from one of his previous tips talking about macros and getting some understanding of quote and unquote. This one continues and talks about macros. Sometimes if you're looking at macro code in IEx, what it's actually represented as is Elixir data structures, with lists and lists of tuples and things like that. And that's actually what the AST is: a set of instructions, and a data structure for expressing them. But sometimes when you're looking at this big blob of AST, it can be quite hard to just look at it and know what it's doing. So German was just sharing that he likes to use Macro.to_string, where you can give it that blob of AST and it will generate back the Elixir code that would actually produce this AST.
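As a quick illustration of the tip, here's what that looks like in IEx; the expression is just an arbitrary example:

```elixir
# The kind of AST blob you see when inspecting macro input:
# nested tuples and lists.
ast = quote do: (1 + 2) * length([:a, :b])

IO.inspect(ast)
# e.g. {:*, [...], [{:+, [...], [1, 2]}, {:length, [...], [[:a, :b]]}]}

# Macro.to_string/1 renders the AST back into readable Elixir source:
IO.puts(Macro.to_string(ast))
# => (1 + 2) * length([:a, :b])
```

Dropping a `Macro.to_string/1` call (or piping through it before `IO.puts`) at the point where a macro builds its AST is a cheap way to see the code your macro will actually emit.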
A whole lot easier when you're trying to debug, mess with, and write your own macros, to understand what is this gonna look like when I have this AST structured this way. So, nice tip. ASTs, macros, I love and hate them. (Both Laughing) It's almost like I'm writing another language, you know? I don't know. There's definitely a mode shift. It's difficult to grasp the first time around. This is one of those topics for me, I gotta go through it five times before it really settles. Yeah. All right, next up, a short item. ErrorTracker 0.2 was released. We've mentioned what ErrorTracker is in a previous episode, but the TL;DR is that ErrorTracker is a Sentry-like, Honeybadger-like library that you can host on your own. In your Phoenix application, for example, you mount ErrorTracker and then you have your own internal error reporting application inside of your application. Instead of these exceptions being reported to a third-party platform like Sentry, you report them to yourself. So that data never really leaves, and it comes with its own UI. What are the changes in 0.2? Well, they now support SQLite3, so that's great. Some UI improvements, because that's always necessary. And then lastly, maybe the bigger part here, telemetry events are also included now, which is pretty interesting. So if you're curious about basic metrics of what errors are happening, versus doing database queries against what ErrorTracker has stored, now you can emit that to metrics platforms. So we got a link to the tagged release on GitHub, but if you're interested in ErrorTracker, it looks like there is active development, which is the whole reason why we put it here. So that's good news. And next up, a free consulting tip was shared by José Valim on Twitter slash X. He was sharing this blog post from the Evil Martians blog titled Hard and Soft Deletion, a Brief Intro and Comparison.
But really, this is a neat Postgres-specific technique that makes it easy to add soft deletes to an existing code base without having to go around and find all the places in the code where you would have to filter out and say, here's where I need to filter out the things that were deleted, I don't want those. And you're inevitably going to forget some, or add new code and forget to filter out deleted, and all the problems that can cause. This is a solution that doesn't do any of that; it uses features that are already built into Postgres. Here's how it can work. I'm on both sides of this, by the way, I love it and I don't love it, but here's how it works. So Postgres has this thing called a rule. You can create a rule, which is essentially like a function or a macro applied by the query executor to modify your query. So that rule can be, instead of deleting orders, do this other thing instead. So it's kind of like overriding an operation, depending on what that rule says. And that's how this is actually working. So you create a rule called soft_deletion or whatever you want, and you tell it, on delete to this table orders, do instead this other thing. And this other thing is: update orders to set deleted_at, or set deleted to true, or whatever you need, where the old ID is not already deleted. So it's just logic inside of SQL. I like to think of it as a callback. In a way, it's kind of like the macros we were talking about. It's like, you said you want to delete, but I'm going to turn that into an update statement. Right. Yeah. So rules are interesting. For what it's worth, I think the PostgreSQL wiki says don't use rules. So that's why I'm like, on one hand, don't do this because they said don't do it, and they might have good reasons not to that I don't understand. We'll have a link to that as well.
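As a sketch of the setup being described, an Ecto migration could install such a rule. This is illustrative only: the table, column names, and module names are made up here, and the blog posts' actual code may differ in the details:

```elixir
defmodule MyApp.Repo.Migrations.AddSoftDeleteRule do
  use Ecto.Migration

  def up do
    # Column that marks a row as soft-deleted.
    alter table(:orders) do
      add :deleted_at, :utc_datetime_usec
    end

    # Rewrite DELETEs against orders into UPDATEs that stamp deleted_at.
    execute """
    CREATE RULE soft_deletion AS
      ON DELETE TO orders
      DO INSTEAD
        UPDATE orders
        SET deleted_at = now()
        WHERE id = OLD.id AND deleted_at IS NULL
    """
  end

  def down do
    execute "DROP RULE soft_deletion ON orders"

    alter table(:orders) do
      remove :deleted_at
    end
  end
end
```

After this runs, a plain `DELETE FROM orders WHERE id = 1` quietly becomes an update, which is exactly the "callback in SQL" behavior being discussed.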
So you can read that for yourself. Maybe I'm misinterpreting, but it's do instead: create rule this thing, on delete, do this other thing instead. You could also say do also, so you can attach things to the operation instead of replacing it. Right. So anyway, that's the basics of how this works. The rule lets you issue a normal delete, but it rewrites that into an update instead. The article goes on to show how hard deletes can still happen. You can temporarily disable the rule in a transaction, so you can actually, really delete if you have, say, a GDPR kind of request that you have to take care of. There's more details in the article about how you do the migrations and the setup of this whole thing. The iffy part, I just can't recommend this myself. And it feels weird to say that, because why shouldn't I recommend something that José Valim recommends, you know? But in a big team setting, people aren't gonna go look in SQL for business logic, you know? That's a couple of layers of abstraction away that I think will be easily forgotten. Now, if it's straightforward, cool. Here's the other thing: cascading deletes. Because as soon as you have a soft delete in one place and you've got a thousand relationships, a tree of rows that all depend on each other, now that soft delete kind of bleeds into all of the other relationships. Now you have to handle all that, because cascading doesn't work anymore; the rows aren't actually deleted, you know? Anyway, you have to commit to it if that's what you're gonna do. Yeah, so the blog post does go into how to do cascading deletes and how to handle the migrations part of it too. Yeah, and the TL;DR of that is a bunch more rules. (Both Laughing) As soon as you have to do a cascade, it turns into a bunch more rules to replace that cascade, you know?
Which isn't bad, just again, where's your logic at, and where's the first place your team's gonna look? All right, so José explains why he shared it this way. First off, he just tweeted about it, saying that since he keeps telling clients to do this thing and referencing the Evil Martians Ruby-oriented blog post, he went ahead and blasted it out publicly on Twitter, but then they also published a blog post on the Dashbit blog that goes into it in more detail. It references the Evil Martians blog post a lot, but re-orients it to Ecto and how to do this in Ecto. It's a good consolidated place. If you don't wanna go find this on Twitter, I don't blame you, the Dashbit blog has it all in one place. So, good resource there, and if you need more detail, the Evil Martians blog post has even more. Anyway, I thought that was very interesting. I love SQL, I love the Postgres tricks and stuff like that, but man, this is the first time I'm like, I don't know about this one. That's really cool though. One of the really cool things José shares is that this works really well with Ecto, because Ecto allows you to swap out the table that a schema is referencing. Normally when you query through the schema, the rule and the filtered view would exclude all of the soft-deleted things. But with this little tweak where you say, I want all orders from the underlying orders table, you're still using the same Ecto schema, just pointed at a different source, and you're able to get all the records, the soft-deleted ones too. Yeah, there's a today-I-learned from that blog post: there's an option in Ecto, allow_stale, I didn't know about that option. The idea is that if you're referencing a view, you can't always really delete from it, right? And allow_stale says, it's okay, don't worry about that error, it's okay.
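A rough sketch of those two Ecto pieces; `Order`, `Repo`, and the table names are hypothetical here, and the exact setup in the Dashbit post may differ:

```elixir
import Ecto.Query

# Normal reads go through the schema's default source (the filtered view),
# so soft-deleted rows never show up:
Repo.all(Order)

# Swap the source with a {table, schema} tuple to read the raw table,
# soft-deleted rows included, while reusing the same schema and fields:
Repo.all(from o in {"orders_raw", Order})

# Deleting through the view can look "stale" to Ecto, because the rewritten
# DELETE doesn't report a deleted row; allow_stale: true suppresses that error:
Repo.delete(order, allow_stale: true)
```

Without `allow_stale: true`, Ecto would raise `Ecto.StaleEntryError` when the database reports that zero rows were deleted.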
And then the other one, I learned this a couple months ago, is the concept of updatable database views; I didn't know this was a thing. As long as it's a simple view, not really computing anything or joining anything, and the soft-deletion filter fits that scenario, you can create a database view that filters on the deletion criteria and still use it like a normal table, passing updates through it and stuff like that. So that's the part that makes this interesting. And I actually used this technique in a big schema migration where I had to change the type of a column from string to decimal. I used a view to wash over the difference for a while while the deployment went out. I've got more details in the Safe Ecto Migrations repo on that. But anyway, I love this topic, I could talk about it all day, but this is definitely an interesting and cool deep dive. And next up, Sean Moriarty and Andrés Alejos released a new AI-centric library called Honeycomb. It is described as fast LLM inference with Elixir and Bumblebee. It looks like a nice, maybe like a middleware or a little helper for accessing Hugging Face models, to be able to download those using your credentials and things like that. You can actually make the request directly through Honeycomb to Bumblebee, or you can just use it to help set up the Bumblebee side. It's a pretty new library, not a whole lot of documentation around it just yet about maybe the best way to use it, but there's a mix task where you can have Honeycomb serve up an LLM, which makes getting started a lot smoother, just taking out a lot of the boilerplate. Since Andrés said it was important to note, I'm just gonna copy him: it's important to note that he came up with the name, so congrats Andrés. (Both Laughing) Yeah, and he said, "Last week while trying to work with Bumblebee, I realized three things.
Bumblebee isn't exactly a drop-in replacement for other providers. It requires a lot of boilerplate." This is absolutely true. And we're still missing a lot of popular features like guidance and quants, et cetera. So yeah, we can't currently use quantized models, and maybe this will be something that can make that a little easier or happen faster, I don't know. It sounds like Honeycomb has some good promise, and I look forward to seeing what's gonna happen there in the future. Maybe it's already got everything it needs and we just need a little bit more documentation and understanding about how to leverage this tool. Yep, all right. Moving on, we got a follow-up from the last episode. We talked about Bob, good old Bob. Bob is the Elixir and Hex release builder, and we had mentioned that Wojtek Mach had merged in a PR to start adding macOS OTP builds to it. So a little update on that. Wojtek reached out and explained more about the direction here. In fact, his PR to add OTP builds for macOS may end up getting reverted. The direction is that Hex may not be the best place for that, and instead there's a proposal opened up with the Erlang Ecosystem Foundation for the Build and Packaging working group to take over some of the builds from Hex, which is Bob at this point. Elixir itself is using GitHub workflows to do builds now, which is nice. And the builds we're talking about here are the ones referenced by version managers like asdf; this is what asdf would be pulling down. Bob will continue to focus on the Docker images, like what gets published to Docker Hub under the hexpm organization, the Elixir images across Alpine Linux and Ubuntu and all the combinations up there. So Bob will hang around, but Bob will take less responsibility for compiling the native binaries for our local dev computers, right? The direction here is clarifying, and this is a proposal.
So we'll see what ends up happening, but nothing really changes for anybody at this moment; it's just interesting to know where things will go. Thanks for the follow-up, Wojtek. We're looking forward to that proposal and seeing where it goes. Very well written proposal, by the way. You should go check it out just for that. (Both Laughing) And next up, Apple, the company, has an open job post for an Elixir developer in both Cupertino and Seattle. It is working with their environmental systems, using Elixir, Phoenix, and LiveView. It's not that we normally post job positions on the podcast, that's not what we do. We just thought this one was notable because it's Apple. It's just fun to see that, hey, there are big companies using Elixir where you might not expect to see it. Yeah, oh, I wonder if they should submit themselves to builtwithphoenix.com. Woo hoo. Somebody's gotta do that. Very interesting. All right, last up, we've got ElixirConf. We're gonna round it up a little bit. From the time of this release, I think we're like a week away. So, ooh, we're getting pretty close. If you don't have your ticket, you better go get it now. The big news for ElixirConf today is that the schedule has been posted. So it's time to go to 2024.elixirconf.com. You'll see the schedule there. You can see all the talks and speakers, which talks are competing for the same slot, and try to pick out the ones that are most interesting to you. And remember that you will have access to the recordings, so you won't have to miss any of it, even if there are two talks at the same time that you wanna see. So no worries about that. Just pick the one, maybe the most entertaining one, you think, right? If it comes to a tie. So there's that. The schedule is posted, and also a reminder that ElixirConf is having weekly hangouts on Twitter at 11 a.m.
Central time with speakers and trainers. So if you are hovering and hanging out on Twitter, you might pay attention to the ElixirConf Twitter folks there and see if they're doing the hangout. And if they are, go join in. It's fun to listen to them. Speaking of Twitter, there is a Twitter list of ElixirConf speakers. So if you wanna go follow them all at once, Thomas Millar created a Twitter list and you can follow them all at once, very easy. So last up, I should've talked about this first, I guess, but ElixirConf, when is it? It is close. It is August 28th through 30th, a Wednesday through Friday, and it is two and a half days jam-packed full of Elixir info. Very well balanced this year across disciplines, I think, so I'm very excited about that. All the keynotes have been posted. All of the talks have been posted. So the schedule is up there. Everything is known at this point, until somebody has to bow out for last-minute stuff. We'll see. So that's it about ElixirConf. I hope to see you there. I will be there in person, and I think everyone is going to see me. So I'll be happy to chat in the hallway track with anyone and everyone, but I will also be glued to the main stage, I think, for a bit, as I have some MCing responsibilities this year. So I am very pumped about that. Yeah. That'd be fun. I'll be sure to talk about all the things I shouldn't talk about and make Jim real nervous. Just kidding, just kidding, won't do that. Well, that's all the time we have for today. Thank you for listening. We hope you'll join us next time on Thinking Elixir.