Daniel (00:01.196) It is amazing how this show has already started, Dave, hi! Dave (00:05.966) Hello. We are live indeed. And you, man. I'm in a slightly different position today. My studio's moved 90 degrees. Not that you can really tell when it's blurred out, but. Daniel (00:07.758) It's fantastic to see you again. We are indeed live. I... Daniel (00:20.452) But I did recognize the background was slightly different. So yeah, I also have, I have new lamps behind me. So it's slightly brighter. They're in the same spot. They're just like different technology and better quality, which is good. I'm very bright, at least light-wise. Hey, welcome to Waiting for Review, a show about the majestic indie developer lifestyle. Dave (00:24.802) Yeah. Dave (00:29.174) Mm-hmm. Dave (00:36.001) Well, all the better to see you. Daniel (00:48.666) Join your scintillating hosts to hear about a tiny slice of their thrilling lives. I'm Daniel, really good platonic kisser, and I'm here with Dave, chief enabler and encouragement artist at the Developer Self-Help Group. Join us while waiting for review. You asked for that title. Dave (01:10.752) I know I did, I know I did. Specifically as to that title because you were nerding out in our pre-show call about an absolute rabbit hole of an idea, which was cool. And I was doing nothing to save you from it. That was definitely like. Daniel (01:27.226) Right, I was saying like, I'm soothe-programming. Like, I'm stressed out by a lot of things. So I'm like, I should really work. So I sit down at my computer, start programming, but I'm not programming the hard stuff that has like constraints and boundaries and needs to be communicated. I'm programming the stuff that is like fun to do. And so that just makes me more stressed at the end of the day because I haven't done any of the things Dave (01:33.39) Mm-hmm. Daniel (01:53.217) I should do. Well, I'm doing one thing that I should do, which is talking to you, so that's awesome. And I did one other thing that I should do. I made a TikTok. Dave (02:04.0) Ooh, what happened in the TikTok? Daniel (02:05.631) I will totally link it in the show notes, of course, but it is an idea that I had with me, that I was carrying with me for a while now. And I finally got the mojo together to actually do it and write a tiny little script at the beginning and then go into freeform, which is, I want to do a weekly check-in where I just tell people in 60 seconds what I've done that week, especially on TelemetryDeck, Dave (02:28.216) Mm-hmm. Daniel (02:34.797) so that I don't have to cram everything into this podcast. And also that like TikTok's huge audience of tens of people will also see what I did. And also it's kind of cool. And I finally settled on a format for it. I don't, like, I mean, it might change of course, but the format that I started doing is I want it to feel like a cliche, agile, standup thing, you know? Dave (03:02.836) Mm-hmm. No blockers. Daniel (03:03.971) But I hate standup because, and exactly that's the hashtag. That's exactly the hashtag that I gave it. Hashtag no blockers. Because I was like, I don't think standup meetings, like as they are done right now and in most of the environments that I've worked in, are very effective, because people are just telling them, telling them what they did. And then at the end they say the ritualistic phrase no blockers, like regardless if they have any. Dave (03:11.086) Brilliant. Brilliant. Dave (03:24.046) Mm-hmm. No, they're not.
Daniel (03:33.049) think that's a blocker or not. I have my text here. My intro text is, in the olden days, software developers used to do a thing called Agile, where every morning they'd stand around and share what they did in the previous day. They would always end their statement with the ritualistic phrase, no blockers, because that's what the Agile manifesto demanded. Dave (03:35.074) Yes, love it. Daniel (03:57.056) And I think I kind of like that level of insanity. Ludicrosity. Dave (04:02.838) It's also because we've all got PTSD from the last time we raised a blocker. Like that's the other thing. Daniel (04:10.062) Also, the first time that I did that, multiple people complained in the comments that, wait, what do you mean in the olden days? Which raises engagement. Dave (04:18.84) Mm-hmm. Yeah, but I would actually take that and run with it. Like, yeah, how many places are really doing agile as it was in a lot of ways now, but things are shifting, I think. Daniel (04:37.228) Yeah, at TelemetryDeck we're not doing agile and we're not doing standup meetings, but every Monday we have a meeting where everybody's telling the others how they are feeling. You can, after you've told everyone how you are feeling, you can also talk about what you did, but it's kind of not necessary. Dave (04:54.094) Kinda like that, because you'll also potentially catch bits that are to do with the work as well. You know, like I'm feeling really glad that we got that release out last week because of XYZ, other thing that can now happen this week. Like you might get bits of that pop out of it, but yeah, I love that. Daniel (05:13.229) Yeah. Also, you don't have to go into detail. You don't have to open up your deepest soul to your coworkers, but it's just like, what energy are you bringing today? And so far, it's been pretty good. It's been pretty good. People seem to like it. It seems to increase cohesion. So yeah, I think I want to continue doing that. Dave (05:23.146) I'm feeling effing awesome, thank you! Dave (05:31.886) Mm-hmm. Mm-hmm. I love it. I love the vulnerability of it. How, how, coming back to your TikTok, how has the engagement gone so far with your hashtag no blockers? Daniel (05:49.901) Not huge. I got 800 views, I think, which is about in line for what I usually get from TikTok. Like 800 is more than this thing called the 200 jail, where basically if you post to TikTok, you'll get exactly 200 views, because that's what the algorithm does. It will show your video to 200 random people to figure out what its target audience is. And if no one is really interacting with it or watching it really, then it will just like not Dave (05:54.446) Mm-hmm. Dave (05:59.138) Okay. Dave (06:17.774) Mm-hmm. Daniel (06:19.617) continue showing it. So the fact that it escaped the 200 jail means that it got something. But I think I want to, I just want to continue doing that. I think I want to continue doing that and see if it sticks. And if it doesn't, after a few weeks, I can just stop. But so far, I'm having fun with it, especially with the olden days bit. Dave (06:24.45) Yeah. Dave (06:30.51) Are you now in the 800 jail, Daniel? Dave (06:37.901) Yeah. Dave (06:44.908) Nice, that's really cool. Daniel (06:45.994) And also, I can just look at a list of things I did. I just like rattled them off really quickly. Like at first I was like thinking, should I capture B-roll? And then, you know, let's like have some video with the audio and stuff like that.
it turned out that, even with... I thought that would be easier with CapCut or TikTok. But like, I can only do, like I can talk over existing video, but if I want to talk first and then add B-roll, like cut to what I'm saying, that is actually, or maybe I'm like, maybe I'm holding it wrong, you know, like maybe I haven't found a way to do that yet, but that doesn't seem to be possible easily with the TikTok app or the CapCut app. So I don't want to do it because like I have enough on my plate. Dave (07:35.692) It's effort. It's enough. Yeah. I don't know if you remember as I was getting into the... Sorry, dude. I was just going to say, if you remember when I was getting into the Instagram stuff a while back, which I've kind of slowed down on recently, but I was posting videos where I was talking about what I was up to with GoVJ as I stood in the river that's near my house, which was a thing. And I need to get back to some of that, but yeah, my whole... Daniel (07:38.909) I'm okay with that. Daniel (07:59.62) yeah, I remember. Dave (08:04.97) ethos for that was like run and gun, like just record the thing. If it looks good enough, just ship it straight out. Otherwise it loses whatever that moment was when I recorded it. And you know, I've done that before and then I've looked back and I've gone, no, I don't like it. And then I've wasted that time, right? You know, it's sort of, so there's also that element, I think, with some of these very short pieces of content, of just not getting too hung up on it being perfect, right? So anyway, I look forward to seeing your TikTok. Daniel (08:38.143) Yeah, feels right. I can send it to you. Dave (08:45.183) I uninstalled everything last week, so I need to go back and have a look, but that's a whole other conversation. Daniel (08:49.419) I mean, I can also send you the video of it if you want to stay off TikTok. Dave (08:55.022) No, I'm good. I'm good. Send me the link. I'll add another one so you can have 801. Daniel (09:02.496) Fantastic. Lisa said the same thing. Like, I was telling her about it and she was like, I forgot to look at your TikTok. So she would have been the 801st. So, like, depending on who watches it first, you might be 802. So, yeah, but like next week I want to do that again. And yeah, I mean TikTok is hard because I'm so used to just writing short texts, like Dave (09:13.166) 802. Yeah. You Dave (09:23.662) Mm-hmm. Daniel (09:31.947) Twitter or Mastodon style. Twitter, if you don't know, in the olden days, there used to be a social network where you could post text and it would be non-fascist, which was pretty cool. I don't think it exists anymore. Utopia, I tell you. So yeah, that's just the kind of social media that I kind of grew up on. It's hard to switch to a video where I feel like I have to control Dave (09:38.51) Mm-hmm. Dave (09:45.24) What's that like? Yeah. Dave (09:57.336) Yes. Daniel (10:02.057) Like because with text I can control everything basically, right? But with video there's so much that's uncontrollable, or I have to create specific conditions. So. Dave (10:12.364) Yes. Yeah. I hear you on that. I mean, you know, there's something to be said when we record video for this podcast, right? I am always in this room. I'm always at my setup. Yeah. Various bits of my setup fight me every so often. Like my mic stops working or whatever, but like, you know, by and large it's a controlled environment versus, yeah.
You know, I mean, going out and recording content wherever is just so much more variable. So. Daniel (10:21.258) Mm-hmm. Daniel (10:39.915) That's the other thing. I'm basically either in the office or in the TelemetryDeck office. There's not too much variety right now because I'm not traveling so much. Dave (10:43.81) Mm-hmm. Dave (10:51.054) You're not walking down the street going, shut up, I'm an influencer, let me record, and yelling at people. Daniel (10:55.626) I hate those. I hate those so much. But anyway, I want to tell you about one thing that I did that I've been thinking a lot about, which is the telemetry query language. Like everybody, like every single person in the world, this is not hyperbole or anything, has told me that the telemetry query language is too complicated and they are completely right. But the thing is, it is kind of heavily intertwined with how everything works. And it's also incredibly complex because it's incredibly powerful. So, because like a few people have been like, could we, like, can we just replace it with this simpler instruction set? And I'm like, yes, but then like 90 % of the things that this thing can actually do are suddenly no longer possible, which is kind of like, no, like that's not the point, right? So, but I've been... talking to various people. And I think the way to go forward is to continue doing something that I've already started doing years and years ago, which is, so, TelemetryDeck queries, they have a compilation stage on the server where the server will just basically look at the whole query and see if it needs to do any modification. Like one of the things in compilation is actually converting... What are they called? Relative dates? That's what you call it. Relative dates into absolute dates, for example, because queries on the server need to be run on absolute dates. And so I think I want to push this forward a bit more forcefully and think about this a bit more in the future. And so I want to have various levels of abstraction in that language. Dave (12:32.429) Yes. Daniel (12:46.867) just so that it's easier to use. And I want to start where it's easiest. For example, the telemetry query language has this concept called aggregators. And that is basically a sort of instruction to get something from the database and aggregate it in some sort of way. Like give me the sum of all the things, give me the average, give me the first of them. That's an aggregator, but they are... very inconveniently named. Like for example, what does the theta sketch aggregator do? Right. They're kind of badly named. They're also kind of complicated. Like for example, the theta sketch aggregator takes various parameters and properties that you don't really want to deal with if you are not fine-tuning Apache Druid for high performance. Dave (13:22.126) Yeah, I could not tell you that. Daniel (13:45.221) And also, sometimes, for you to reach your goal, you need to stack various aggregators. For example, I was making a histogram the other day. For a histogram, I would need a quantiles data sketch aggregator. Then I need to put the output of the quantiles data sketch aggregator into a quantiles data sketch to histogram post aggregator, but also at the same time run a min aggregator and a max aggregator to get the min and max value for the histogram. That is actually Dave (14:07.221) Mm-hmm. Daniel (14:15.09) Kinda complicated and... Dave (14:16.652) It is, it is.
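For readers who want to picture the stack Daniel is describing, here is a rough sketch of what such a histogram query can look like at the raw Apache Druid level that TQL sits on top of. The aggregator and post-aggregator types (thetaSketch, quantilesDoublesSketch, quantilesDoublesSketchToHistogram, doubleMin, doubleMax) are standard Druid DataSketches ones; the field names like duration and clientUser, and the Kotlin wrapping, are purely illustrative assumptions and not TelemetryDeck's actual schema.

```kotlin
// Illustrative only: a Druid-style aggregation chain of the kind Daniel describes.
// "duration" and "clientUser" are made-up field names; real TQL queries may differ.
val histogramQuerySketch = """
{
  "aggregations": [
    { "type": "quantilesDoublesSketch", "name": "durationSketch", "fieldName": "duration", "k": 128 },
    { "type": "doubleMin", "name": "durationMin", "fieldName": "duration" },
    { "type": "doubleMax", "name": "durationMax", "fieldName": "duration" },
    { "type": "thetaSketch", "name": "userCount", "fieldName": "clientUser" }
  ],
  "postAggregations": [
    {
      "type": "quantilesDoublesSketchToHistogram",
      "name": "durationHistogram",
      "field": { "type": "fieldAccess", "fieldName": "durationSketch" },
      "numBins": 10
    }
  ]
}
""".trimIndent()

fun main() = println(histogramQuerySketch)
```

Four aggregators plus a post-aggregator just to draw one histogram is exactly the "kinda complicated" Daniel means; the convenience layer he talks about next is meant to hide this.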
And it's one of those things where like, I'm thinking as a user, honestly, Daniel, A, I don't really know what I'm doing there when you're telling me. And B, like all of that stuff, I'm like, no, that's just completely throws me. Daniel (14:39.88) Yes, and that's been the experience for many people. The thing is, like, for a long time, it didn't matter so much because most of the interaction you had with Telemetry Deck was the simple query, the simple inside editor, which just asks, like, hey, what do you want to count? Do you want to count users? Do you want to count signals? How do you want to filter them? And that's kind of the extent, right? So that would abstract it away lot. But as people keep exploring, they want to have the complicated queries. Dave (15:01.56) Mm. Daniel (15:09.554) They wanna have like the, all the power. And so they stumble into the documentation for TQL and they're like, what the heck is a ThetaSketch aggregator? So I can't really remove the ThetaSketch aggregator, which is by the way, ThetaSketching is an algorithm. It's based on the HLL algorithm because, God, I'm so deep into these things, you that will give you the approximate number of different things that this, different values that this variable can be. So in other words, you can use it to count the distinct number of users, but without having to look at every data row. Because if you have 700,000 million rows of data, you can't look at everything. You look at x percent, and then the algorithm will give you a close enough answer that is accurate to one or two micro percent. Dave (16:06.016) Mm-hmm. You need to care about this stuff. Your users kind of just want to see the output. Like, yeah. Daniel (16:10.199) Right. Right. I'm starting to add, and I'm going to do that more in the future, but I'm starting to add what I internally call convenience aggregators that will get compiled like so. I've added an aggregator that is called user count, and that will compile to a theta sketch that will combine on the client user property and give you the... But on the front end, you just say, give me the user count. and then it will do the thing. I have also created another one that's called Event Count that will give it a number of signals. This is because over the next few months, I wanna slowly move away from the word signals and move towards events because that's more of an industry standard. Dave (16:56.908) I mean, signal made sense, right, for telemetry and that sort of paradigm. Within all of this, Daniel, I'm really sorry I have to suggest it. OK, this is going to feel a bit, maybe just a little bit out there, but I'm wondering, is this a use for an LLM in the middle? Right, you know all this stuff. Yeah. Daniel (17:01.468) Right. Daniel (17:09.639) You Daniel (17:19.176) That's another thing that I'm considering. Hang on, I will actually talk about this because that's another thing that I've been rather seriously considering and researching a bit into it. But let me finish my thought about, or my third convenience aggregator is actually a histogram aggregator that will create the whole change of aggregators and post aggregators for you. And as a parameter optionally takes the number of buckets. If you don't supply it, it's just 10 buckets. And I hope that will already push the language slightly, a bit towards a slight ease of use. I also want to go, like I'm thinking, can I rename aggregators? Like is there a better word to use for aggregators? At least maybe not aggregator, but aggregation, you know? 
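A minimal sketch of the kind of server-side compile step Daniel is describing, where a friendly convenience aggregation gets expanded into the underlying Druid-style aggregators before the query runs. All of the type and property names here (ConvenienceAggregation, DruidAggregation, clientUser) are hypothetical, invented for illustration; TelemetryDeck's real compiler will look different.

```kotlin
// Hypothetical sketch of a convenience-aggregator expansion pass.
// Names (ConvenienceAggregation, DruidAggregation, "clientUser") are illustrative only.
sealed interface ConvenienceAggregation {
    data class UserCount(val name: String = "userCount") : ConvenienceAggregation
    data class EventCount(val name: String = "eventCount") : ConvenienceAggregation
    data class Histogram(val fieldName: String, val numBuckets: Int = 10) : ConvenienceAggregation
}

// The low-level shape we expand into: a bag of Druid-style aggregator specs.
data class DruidAggregation(val type: String, val name: String, val fieldName: String)

fun compile(agg: ConvenienceAggregation): List<DruidAggregation> = when (agg) {
    // "user count" becomes an approximate distinct count over the user property
    is ConvenienceAggregation.UserCount ->
        listOf(DruidAggregation("thetaSketch", agg.name, "clientUser"))
    // "event count" is a plain row count of signals/events
    // (Druid's count aggregator takes no fieldName; kept here only to fit the sketch)
    is ConvenienceAggregation.EventCount ->
        listOf(DruidAggregation("count", agg.name, fieldName = ""))
    // "histogram" expands into the sketch + min + max stack shown earlier;
    // the ToHistogram post-aggregation (using numBuckets) would be emitted in a second pass
    is ConvenienceAggregation.Histogram -> listOf(
        DruidAggregation("quantilesDoublesSketch", "${agg.fieldName}Sketch", agg.fieldName),
        DruidAggregation("doubleMin", "${agg.fieldName}Min", agg.fieldName),
        DruidAggregation("doubleMax", "${agg.fieldName}Max", agg.fieldName),
    )
}

fun main() {
    println(compile(ConvenienceAggregation.Histogram("duration", numBuckets = 10)))
}
```

The point of the layering is that the friendly names are just sugar: anyone who still needs the raw theta sketch parameters can write them by hand, which is exactly the "abstract without obscuring" idea Daniel describes next.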
Dave (17:50.263) Nice. Dave (18:01.262) Mm-hmm. Dave (18:13.9) Yes. Yeah. Daniel (18:16.198) Because then I wouldn't have to rename anything really, just like this is a user interface thing. if I like, because the concept is an aggregation, right? Anyway, so these are things that I want to be working on a bit more. Like slowly trying to abstract away the complexities of the language without obscuring access to them. Like for example, if I really want to use the theta sketch algorithm because I can make a more efficient query with it if I access the same aggregation multiple times from different post aggregations, which is huge performance improvement. But if I don't care about that, I just do user account. But if I do care about it, I can still use the original things that are in the background. And I think that's the way forward. We're doing a similar thing with our funnel queries. We have a whole query type called funnel. That is actually not a Dave (19:06.146) Okay. Daniel (19:13.945) not a native query type, like this just get compiled into like a lot of group buys and stuff. And I think, I think, yes, I think that's something that I really want to do more of. And I've been playing around with these a lot in a feature that I recently kind of accidentally built, which, you know Jupyter Notebooks? Dave (19:19.054) Yes. Dave (19:37.314) Not very well. I think I'd appreciate a refresher actually. I know the name. Daniel (19:38.726) Right. So it's a thing, it's a tool for data scientists mostly. But what you do is like you have a, I think usually a browser window and in that you have a, excuse me, you have a markdown editor or a WYSIWYG. Like you have a document that you can like input text in here. But then you can also input Python code and specifically like, Python code that has access to various data science libraries like pandas. with that, can have like a, with that you can have like a, right, right. Dave (20:11.064) further pandas. No, seriously. Yeah, that's a Python thing, Daniel (20:20.357) Yes, it is like a number crunching library that can be like, that's a very efficient, like very large amounts of data. And so yeah, you can have like a document, like a mathematical treatise of some sort, where you don't have formulas in there, you have like, actually runnable code that will run live in real time for you. So you can like play around with the data. And so because I really wanted to play around with the data and because I had all the kind of pieces laying around, I made a thing that I just pushed onto the telemetry deck front end, which is called TQL notebooks, where you can just like, like you type in markdown. And then once like, once you like start a code block that is like that, like if it's like, if it's, if the language is something else, which I render as code, but if you have as language is if you add the language tag TQL, it will actually convert that query into a live executed chart that will show you. And that's kind of neat. that helps me a lot exploring that whole thing. And that was kind of one of the soothing programming things because no one asked for this. No one asked for this. But I was stressed and I needed an outlet. It is kind of cool. Dave (21:24.302) That's cool. That's really cool. Dave (21:35.054) He he. No, but it's still cool. It's still damn cool. You're making me think of a whole bunch of things here, Daniel, and I'm wondering, all right, okay, so the notebooks idea I quite like as a sort of playground, right? 
To me, it's a sketch pad for trying these things out. I feel like all of your default TelemetryDeck insights and everything that you can add Daniel (21:48.888) Mm-hmm. Dave (22:08.682) right, that should exist in TQL notebooks. Like if I was to go and have a look in that as a user, I would be like, well, okay, where can I see the defaults that I know and love already and use already expressed here? Because that would let me look under the hood a little bit. You're sharing your screen, so maybe I'm... Yeah, yeah, yeah, yeah. Daniel (22:32.389) Yeah, I'm just showing you what it looks like. So this is a notebook. And yeah, this has all the different... This is all the different queries. So hang on, let me just go to... So I'm looking at, like, basically at an example notebook that you get when you just create an untitled new notebook. You can edit the stuff. This is live, like this is on the server, you know? So yeah. Dave (22:36.824) Excuse me, I'm moving you to my other monitor, that's so, there we go. Dave (22:54.42) Mm-hmm. Have you shipped this already, Daniel? Is that literally there and available? Yeah. Awesome. Daniel (23:02.52) I can just edit this. I get a markdown text editor and I get like live charts next to it. Dave (23:13.26) No, I love this. Dude, your soothing coding activity is to build a new feature that looks like that. I'm blown away. Daniel (23:13.985) And I have like a thing that's called daily active users, for example. Yeah. And then hang on. Let me just add a base filter that's called this app. I don't know. Does this work? So yeah, I'm getting a, the chart is kind of updated. Or let me say like, oh yeah, I want to group this by Wiki. So I edit the chart and it recalculates. Dave (23:30.232) Yeah. Dave (23:43.554) Nice. Daniel (23:43.972) And also, I can, because that's not a new feature: for any charts that are generated kind of on the fly, not from directly using the editor, I can also copy this. I can take this and copy it to a dashboard of my choosing. Daniel (24:04.1) So yeah, that is. Dave (24:04.792) Nice. So you can literally link it straight back through into your dashboard. That's, yeah. This is awesome, right? I know you were like, I've been going off, off road. Yeah. But that's not that far off road. That's useful. See, my title for the show, the title that you gave me in the introduction, is clearly bearing out. Daniel (24:14.148) Cheers, mate. Tangents. Daniel (24:25.291) I think so too with the exploration. Daniel (24:31.907) You're just a chief enabler in chief. Dave (24:37.262) No, I love it. I'm looking at that though, and like I've got one gnarly little question in the middle of all of this, which you may not want to share the answer to, or may not even have the answer to, but I'm wondering how many people go off-road themselves today with TelemetryDeck? Do you know, do you have any events and signals that would indicate... people using things that are not default? Like anything they've edited to some degree. And you might not, because that might be too far against your privacy and ethics and all of that. But I'm wondering, is there a flag anywhere in it? OK. Daniel (25:19.393) I could give you an aggregation. Daniel (25:24.611) So no, the indication I have for how many people are just using the default stuff and how many people are actually using, like, advanced queries and that kind of stuff is twofold.
Like one of the strongest indications is actually that lots of people are sending support requests and telling us, hey, I want to do that. And so I looked at the documentation Dave (25:41.784) Mm-hmm. Daniel (25:53.493) and the documentation had lots of this query language and I don't fully understand it. So, should I do this or should I do that? And so that is already a huge indication that people are very frustrated by this language, but they still really, really want to use it. And so I think it's a good indication to make this easier, because of course we can give you a lot of like pre-built queries and charts and Dave (26:01.571) Yep. Dave (26:15.427) Yeah. Daniel (26:20.994) We're doing more of that as well. Actually, just the other day, we've released another... oh yeah, we've improved the acquisition query thing. If you update to the newest SDK, you will get a comparison, new users versus existing users, which is kind of cool. And so yeah, we're trying to give more pre-built charts, but at the same time, people are, they kind of graduate from the pre-built charts to the, Dave (26:40.654) awesome. Daniel (26:49.597) easy chart editor, which is very limited because it tries to be very easy. They very quickly graduate towards, okay, I want something a bit more powerful, but the learning curve just becomes incredibly steep suddenly. And so yeah, so that's the one thing, like the emails and other questions that we get. But the other thing is, like I have analytics for the TelemetryDeck frontend, obviously, and I see navigation paths. And so, Dave (27:15.0) Yes. Daniel (27:16.865) Navigation paths are like, there's a significant number of people just going into the easy editor, of course, but there's also a reasonably high number, like 30 % or so of users, who actually go away from the tiny editor and go into the big-person editor, but I feel like, yeah, yeah Dave (27:38.094) That's a fairly strong signal. That feels like a good percentage. Yeah. Daniel (27:44.118) Right. And I completely get that. This is very hard to do right now. So yeah, I'm trying to build a better programming language by slowly putting in different layers of abstraction, so that you can start at a lower level or a higher level of abstraction and hopefully only dive down when you really need it. And also, what this enables way more is to have like a kind of in-between editor, you know. Like we have an editor that is able to edit the complicated queries directly. But of course, because the queries are so complicated, the editor is very complicated as well. And if the queries get easier, then like it doesn't really matter if you type in count users or if you select the count user aggregation. Dave (28:30.196) Yes. Daniel (28:41.365) Like both are reasonably easy, right? And way easier to understand than going to the docs and then kind of like struggling around with the theta sketch and then finally realizing, theta sketch means counting the users, I guess. Dave (28:41.826) Yes. Dave (28:56.078) Yeah. Like, my first reaction to theta sketches: oh, well, that sounds mathy and hard. Whereas like if you say, uh, select count or, you know, it's count users or whatever, that's like, well, cool. That's meeting the need or the purpose I was just trying to go and do. Um, you've reminded me, I've got an effect in my video mixing app that's called... I've literally just lifted it straight from the Core Image filter name.
And it's called something like canny edge. and has all these parameters that are like really quite involved for sort of edge detection. And it's been on my mental backlog for a while that I think nobody really engages with it because like, even when I look at it, I'm like, yeah, all right, that's kind of cool, but I've just got to mess around with these settings to figure out how it works because they're not intuitive, Yeah. Daniel (29:50.25) Yeah, I get that. So, how's it going with that anyway? Dave (29:55.878) great question. I have had, I've got beyond some of the feature and support requests that were coming through. So the bump of an update to the, iOS existing iOS app at some point. But otherwise I spent last weekend like down a bit of a, bit of a rabbit hole with Kotlin Multiplatform that we spoke about on the last show that I was going to be looking at this. am kind of like. inching towards it. Last weekend, I set up the Git repo, installed everything that I needed to get going with Kotlin Multiplatform development, created a brand new project, which is kind of my hello world. And then I started mocking up my existing app in Compose. So I have a project that compiles for Android, iOS and desktop now. Daniel (30:53.61) Nice. Daniel (31:00.138) Fantastic. How far along is it? Like is it just a click dummy or does it do effects or? Dave (31:04.11) It doesn't do anything yet. It's literally a layout dummy right now with like red, blue, yellow panels so I can see which bit is what and yeah. It's very early days, but I hit a few edges actually, sort of straight up. So one of things I wanted to do is the app has got a grid where you load the video content from it. Daniel (31:07.398) Nahahaha Daniel (31:25.322) Mm-hmm. Dave (31:33.206) It's got a grid of four by four tiles. And I have my grid already mocked up. And my next step was to go, well, I want to put some media in there just to see what it takes to actually have, typically, a lot of what goes on behind the scenes with my app is things like you've got a directory full of videos. I'll have a some sort of manager object service objects in the background that will be linked to that folder and will then serve the media from it back to the UI or any view models, et cetera, in the way. So, okay, I'm going to build my first little folder service worker that is multi-platform and I run head first into file system stuff works differently. Daniel (32:28.287) Ugh. Dave (32:28.362) Yeah, there we go. I've just got a thumb up on the video there thanks to reactions. But I ran headfirst into this works differently on every platform to some degree, like at least between iOS and Android. And I wasn't fully expecting to run into that limitation of the language and the tooling at that stage. Daniel (32:31.583) Hmm hmm hmm. Dave (32:58.56) I was expecting that to kick in a little bit more when I started actually trying to work with the video content rather than just enumerating the files. long story short, I now have my first interface with platform specific implementations underneath it. And a very bright, hello world view of my app where everything is in funny colors. Daniel (33:03.87) Yeah. Daniel (33:25.853) Very nice. Dave (33:28.226) Yeah, and I'm enjoying it, right? So I'm learning the language and learning how these things work. And some of it feels like a little bit like when I move from UI kit to SwiftUI, right? There's very much that vibe of like, bits are similar, but there's edges here and I don't know what they are yet. 
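For listeners following the Kotlin Multiplatform part: the "interface with platform-specific implementations underneath it" Dave mentions is KMP's expect/actual mechanism, and the grid mock-up is straightforward in Compose. Below is a rough sketch under stated assumptions: the function and file names are invented, the iOS binding may differ slightly between Kotlin versions, and none of this is Dave's actual GoVJ code.

```kotlin
// commonMain/kotlin/MediaFolder.kt
// Shared declaration: common code sees one function,
// and each platform supplies its own "actual" implementation.
expect fun listVideoFiles(folderPath: String): List<String>

// androidMain/kotlin/MediaFolder.android.kt
import java.io.File

actual fun listVideoFiles(folderPath: String): List<String> =
    File(folderPath).listFiles()
        ?.filter { it.extension.lowercase() in setOf("mp4", "mov") }
        ?.map { it.name }
        .orEmpty()

// iosMain/kotlin/MediaFolder.ios.kt
import platform.Foundation.NSFileManager

// Kotlin/Native binding of NSFileManager; treat the exact call shape as a sketch.
actual fun listVideoFiles(folderPath: String): List<String> =
    NSFileManager.defaultManager.contentsOfDirectoryAtPath(folderPath, null)
        ?.map { it.toString() }
        .orEmpty()

// commonMain UI: a 4x4 grid of placeholder tiles, roughly the mock-up Dave describes.
import androidx.compose.foundation.background
import androidx.compose.foundation.layout.Box
import androidx.compose.foundation.layout.aspectRatio
import androidx.compose.foundation.layout.padding
import androidx.compose.foundation.lazy.grid.GridCells
import androidx.compose.foundation.lazy.grid.LazyVerticalGrid
import androidx.compose.foundation.lazy.grid.items
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.ui.Alignment
import androidx.compose.ui.Modifier
import androidx.compose.ui.graphics.Color
import androidx.compose.ui.unit.dp

@Composable
fun TileGrid(tiles: List<String>) {
    LazyVerticalGrid(columns = GridCells.Fixed(4)) {
        items(tiles) { title ->
            Box(
                modifier = Modifier.aspectRatio(1f).padding(4.dp).background(Color.Yellow),
                contentAlignment = Alignment.Center
            ) {
                Text(title)
            }
        }
    }
}
```

Common code only ever calls listVideoFiles; which body runs is decided by the target being compiled, which is why the file-system differences Dave hit get pushed to the edges of the project.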
So I'm sort of building that mental map. I'm using... Daniel (33:42.995) Mm-hmm. Dave (33:56.546) ChatGPT sort of sparingly. So I've not integrated Copilot or any of those sort of tools into the IDE yet. I may in the future. I don't know what that looks like for me. But I've been going to ChatGPT and going, hey, I've got a grid of four by four buttons. They are colored yellow and say the text "test" in the middle of them. You know, give me the Compose code to go and do that. And then pulling it back into the IDE, editing it, fixing bits that don't look how I want them to. And that's sort of like learning by proxy how some of the setup works. That works for Compose. That works well for the UI bit, I've found, for me. Like I'm getting results that I'm happy with, and then I'm checking things out on documentation and everything else. And it's like, yep, cool, the tool's helping me. And I'm going off track every now and again. I don't just keep going back to ChatGPT every time I want to tweak something. I'm editing it myself. Like it's a starting point. It falls straight over with one or two of these edges that I talked about before, like interfaces and actual implementations and stuff. It will give you a version, but not necessarily the best version right now for the tools. Yeah. Daniel (35:21.286) Yeah, obviously. Is there a reason why you are using ChatGPT specifically? Or would you also check out other models, like Claude Sonnet especially? Because... all right. Dave (35:30.956) I'll check out all the models. Yeah, I've just not bothered so far. So it's like, okay, I've got that. I can use that easily in another window and just keep moving. But yeah, there's no reason not to look at other models and those sort of things. I think for me, it was more a case of, I just wanted to get going. This is a starting point. I don't want to rely on it for everything at all. And like I said before, I'm kind of thinking down the line, maybe I start looking at Copilot-type tools and linking to something. But for me, it probably looks like trying to see if there's a model I can run on my Mac that I can link into, for example. I don't really know enough yet to know whether that's practical, but I quite like that. So it's not online all the time. Daniel (36:13.469) Yeah, that makes a lot of sense. Daniel (36:18.299) I mean, do you have like a lot of RAM and video card power? Dave (36:25.166) I've got an M2 Studio; it's not got shedloads of RAM, it's got 32 gigs though, so it's not completely without. So yeah, that leads me to other ideas I've got cooking at the moment, Daniel. Daniel (36:43.901) Do you want to talk about it? You asked me before about LLMs. Why do I not just build an LLM that will answer our users' questions? That's a question that I've been asking myself for a year or so. I haven't really pushed on it a lot, but every time I talk to someone who seems to be knowledgeable about generative AI specifically, I kind of... Dave (36:50.83) Mm. Daniel (37:12.572) ask around. And like, I have also built a few prototypes by now, running also locally on this Mac, which has less RAM than yours, but enough to run like a small Llama model or whatever. Usually, what you want to do is, like, if you want to have an AI specific for your knowledge base, and or programming language, for programming languages, what you can in theory do is, like, Dave (37:20.033) Okay. Mm-hmm. Daniel (37:42.31) take an existing AI model that is open source and then train on top of it.
Just continue the training with lots and lots of code samples exactly for your query language or programming language. The thing is, there's not even remotely close to enough TQL code to even consider that. If I had 10,000 different pieces of TQL code, then I could start, but I would still have to find an algorithm to kind of like make a million out of them, just like by varying them in a syntactically correct manner, just to, like, approximate them. And even then you would introduce a lot of bias because you would kind of algorithmically inflate the number. I don't have 10,000. I have maybe 1,000, or maybe closer to 500, pieces of like code that I could reasonably use. Dave (38:18.222) Mm-hmm. Dave (38:31.031) Yeah, you would. Daniel (38:40.124) So this is out of the question. The other way you can make an AI understand new concepts that it didn't specifically understand when it was trained is retrieval augmented generation, RAG. Sometimes also called RAT, retrieval augmented transformation, I think. Anyway, so what you do is the following. Dave (39:01.368) Yep. Daniel (39:07.428) Like, LLMs, they work specifically with a vector space, right? So every word, or every token rather, is like a huge n-dimensional vector. So the word, I don't know, king, and the word queen, they're probably somewhere, somewhere close in this multi-dimensional vector space. And the difference of direction between them is probably similar to the one between the word actor and the word actress, because like, and so you have like one direction that kind of means maleness or femaleness or whatever, like gender is probably a horrible example, but that's the one that a lot of the example videos kind of use. So you have like this n-dimensional space that a lot of these concepts live in. And what you can do is, you can, like, these LLMs, they usually have an API where you can just give the thing a piece of text. And instead of, like, asking the thing, give me an answer, you're asking: give me a set of vectors that describes the text I just gave you. Like how does this text fit into your vector space? And so you do that for each, let's say, article in the TelemetryDeck documentation set. And then you have a set... I have like 50 articles maybe, so I would have a set of 50 vectors and I would put those into a database, like just a Postgres database with like a vector add-on. Dave (40:08.568) Okay, yep, yep. Mm-hmm. Daniel (40:34.049) and have like the vector and then also the URL of the article and maybe the title or something like that. Or maybe even the full text, it doesn't really matter. And so next, what I'm doing is, a user asks me a question like, how do I find out how many users I have per week or whatever? So I take that and I put it, I give it to the LLM, but I don't like ask it to answer it just yet. I just ask it for vectors again. And now I have like a set of vectors that hopefully lives in the vicinity of some of the articles that I also already put into my database. So I just have a regular database query that uses these vectors to calculate, like, distance. And like, maybe I then, I say like, okay, limit three, give me the three closest articles to this point in the dimensional space. And so now I have three articles that hopefully have something to do with Dave (41:18.296) Yes. Daniel (41:31.45) the problem that the user has.
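A minimal sketch of the retrieval half of what Daniel just described, assuming a locally running Ollama for the embedding vectors and a Postgres table with the pgvector extension. The table name, model name, connection string, and helper names are all assumptions made up for illustration, and the JSON handling is deliberately crude; it is a sketch of the idea, not TelemetryDeck code.

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse
import java.sql.DriverManager

// Ask a locally running Ollama for an embedding vector.
// POST /api/embeddings with {"model": ..., "prompt": ...} returns {"embedding": [...]}.
// JSON parsing is kept deliberately crude to avoid pulling in a library.
fun embed(text: String, model: String = "nomic-embed-text"): List<Double> {
    val body = """{"model": "$model", "prompt": ${jsonString(text)}}"""
    val request = HttpRequest.newBuilder(URI("http://localhost:11434/api/embeddings"))
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(body))
        .build()
    val response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString()).body()
    return response.substringAfter("[").substringBefore("]")
        .split(",").filter { it.isNotBlank() }.map { it.trim().toDouble() }
}

fun jsonString(s: String) = "\"" + s.replace("\\", "\\\\").replace("\"", "\\\"") + "\""

// Find the three documentation articles whose stored embeddings sit closest to the question.
// Assumes a hypothetical Postgres table docs(url text, title text, embedding vector(768))
// with the pgvector extension installed, and the JDBC driver on the classpath.
fun closestArticles(question: String): List<String> {
    val vector = embed(question).joinToString(separator = ",", prefix = "[", postfix = "]")
    DriverManager.getConnection("jdbc:postgresql://localhost/telemetry_docs").use { conn ->
        conn.prepareStatement(
            "SELECT url FROM docs ORDER BY embedding <-> ?::vector LIMIT 3"
        ).use { stmt ->
            stmt.setString(1, vector)
            val rs = stmt.executeQuery()
            return buildList { while (rs.next()) add(rs.getString("url")) }
        }
    }
}
```

The rows that come back are only candidates; everything Daniel says next about context windows and slicing is about what you then do with their contents.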
And now what I can do is like, can put these three articles, if they're not too large, because it's context windows, I can like construct a query to the LLM that says, all right, the user asked this, also, here is the full content of these three articles. Now please try to answer it using this combined information. The problem with that is, like you can't make the articles too long. So there might be and the other thing is like most of the actual solution is actually like a search engine. You just build a search engine. Like it just like kind of clothes the output into like human-ish language. So what you then like and how you improve that is slicing. So you try not, you don't feed in whole articles which probably wouldn't work because of Dave (42:09.954) Yes, that's what I was thinking there. is a search engine. Yeah. Daniel (42:27.758) Context window size anyway, what is a context window? Context window means you can ask a LLM, you can ask it a certain number of letters or text or whatever, maybe four kilobytes, let's say. And the LLM will forget everything that is bigger than that. So this is the maximum number of data you can put into the LLM at one point and get an answer out. Dave (42:53.794) Yes. Yeah. Daniel (42:58.542) What you want to do is you want to slice up these articles. And that is also very complicated because of course you don't want to just slice along each paragraph because then they might become disjointed and the answer that or the pieces of information that you then offering to the LLM to like piece together an answer might not be the correct one. And also what you want to do is like if you have a lot of like code examples in your documentation, which I have, you probably don't want to slice Dave (43:06.616) Where do you slice? Yeah, yeah. Dave (43:19.96) Yes. Daniel (43:28.356) horizontally, what you instead you want to do is you want to slice by having a you want to have a slicer that is aware of the syntax tree of this language of JSON in this case. So you want to have a slice that is like that is that somehow says this is a query and the query has the property filters and the property aggregations. Dave (43:40.952) Yes. Dave (43:54.925) Yes. Daniel (43:56.312) And then you have a different slice that says filters look like this. And then you have a different slice that says an aggregation can be any of these, for example, a user count aggregation. And then you have a different slice that says user count aggregations count users. And so now you have slices that can kind of help the LLM maybe hopefully produce a meaningful output that might even be close to actual working code. Dave (44:21.784) But those, so those slices are very context specific, Yeah, yeah. Daniel (44:24.388) But by then you've put a lot of work into, right, you have put a lot of work into actually generating those slices. And at the same time, your vector search might not actually pull out all these slices. Dave (44:39.118) I think I've just uncovered the reason why you're not doing this right now. Daniel (44:43.674) So my plan is to ask the people who work on telemetry deck, including me, but also our fantastic documentation writer Marina, and also our like our coworkers who work on the SDKs and the query language and everything, like to continue working a lot, like a lot on the actual documentation, because the more just sheer volume we have of documentation, and the better the documentation is also like Dave (45:09.358) Mm-hmm. Yep. 
Daniel (45:12.033) the easier it becomes at some point to actually build something out of that. But right now, I don't have the hands. Like I have, I need more, I would need at least like three, four more hands, like one, two more people who actually do this project, because they would sit down and start working and then six to eight months later, it would be a super cool project. But I don't have time to Dave (45:27.596) Yeah. Daniel (45:40.266) step away from the actual code project for six to eight months right now. Dave (45:44.494) It's actually easier to do what you are doing, which is to simplify the... write a simplified, higher-level kind of language or interface to the TQL. Yeah. Okay. Okay. So that, A, you've just clued me in on a whole bit of nuts and bolts as to how LLMs work there, Daniel. So thank you. Daniel (45:56.473) Right, exactly. Dave (46:10.702) My knowledge of this sort of stuff is limited at the moment. I've had quite an aversion to the technology just because of how it's been implemented and used. It's only recently that I'm very much leaning into, like I say, using ChatGPT as a starting point. And even then I sort of feel a little icky still because of OpenAI. I'm not particularly a big fan of OpenAI, so I probably should look for different things to use there. Yeah, this has been... it's helped me kind of bridge some of my mental model there. So hopefully that's helped some listeners there, Daniel (46:48.258) Fantastic. And yeah, I completely understand your unease about AI technology, especially with all of these technologies being in one hand, and the hallucination problems and everything. I'm not terribly deep into the thing, but what you can do is you can have a tiny LLM on your own machine. If you just look at Ollama, which is, like, Facebook's Llama Dave (47:01.23) Mm-hmm. Daniel (47:17.942) is a model which is open source. Like, like, no love for Facebook, but this is one of the things that is open source. And it's also a repository of different models. So you can go to Ollama. I'm posting a link to the show notes later. And then like install this command line thing on your computer and then also download different models to your computer and run them locally. And they will run mediocre, because they don't have enough like space really to like expand into your RAM, Dave (47:31.662) Mm-hmm. Daniel (47:47.672) but they will run. And so you can play around with them. They won't have the guardrails. You can ask the self-hosted version of DeepSeek about Tiananmen Square. You can ask the self-hosted version of Llama, you can ask that about things that Facebook doesn't want you asking about, that kind of thing. But also you will have less of a horrible environmental impact. But the answer quality might be decreased just because your computer doesn't have 256 gigs of RAM or Dave (47:58.862) You Yeah. Dave (48:12.142) Mm-hmm. Dave (48:18.318) Yeah, I can live with that. That feels like, if I'm getting the actual, if I'm getting anything productive out of the other end of it, which for, you know, mocking up bits of Compose views to learn how Compose works, for example, that probably fits that quite easily. And yeah, this links, this all links to part of how I'm thinking through how I'm coding and developing over the rest of this year, Daniel, to be honest. Daniel (48:47.893) Right. You're not very Mac-focused these days. Dave (48:48.288) actually. Nah, not at all. Not at all. So I've been using Asahi Linux on my MacBook Air. So I've got an M1 MacBook Air.
It's actually got 16 gigs of RAM and it's still a pretty good machine. However, Android Studio does not run on it very easily because of the way Android Studio is distributed and compiled and everything else. Getting it to then run on Asahi Linux is just hard. And I'm sure there's people out there that have done it, but it is fiddly. I've got the UI loading. I've put a chip-specific version of the OpenJDK in and it runs, so I can get the interface. But getting an SDK installed is not really easy, if possible at all. Daniel (49:29.164) Okay. Dave (49:51.17) somebody please correct me, but I've hit walls with it and I can't seem to find an answer. So I've got this machine that is running Linux really well, which is cool. And it's a lovely build of a machine. You know, the MacBook Air is still a lovely, lovely laptop to work on, but I can't use it for this multi-platform thing that I'm trying on right now. It's incredibly frustrating to me. So. Yeah, I don't know. I'm looking and I'm going, maybe I need to, you know, trade it in, get an x86 laptop that can run Linux well, and just kind of move on in life. Like the thing about this is that I am using my, I have been using my laptop more and more because I'm not always in this room, right? So I'm actually kind of starting to shift away from desktop only for this and into wanting to be in different rooms of the house. It's the middle of summer here. So I've been like, oh, it'd be really lovely to sit outside and code, actually. I've got a nice spot in the garden, right? So yeah, all of this adds up to, I'm sort of going, maybe, maybe trade in hardware that is not useful to me right now. Maybe then go and buy hardware that is. And then I'll have... I've got my Mac Studio here and I don't plan on trading that. And, right, really, it's a lovely machine. And regardless of how I feel about Apple and this, that and the other, like, I'm going to need a machine to build for iOS. I'm still using iOS. I want to support my iOS users. So I'm looking and going, maybe the Mac Studio is more of a build server. Maybe I move most of my day-to-day dev life, if you like, to a Linux laptop. And I guess all of that to say, then, if it is sitting there serving Ollama stuff for me as well, and I'm using that locally, that's bloody good use of a Mac Studio sitting there ticking over, right? So yeah, and there's other ideas here as well. I kind of plan to have a bit of a build farm going on in this room. I've got a Windows Daniel (51:57.141) Totally, yeah. Daniel (52:08.778) Mm-hmm. Dave (52:11.128) desktop set behind me that's not particularly great, but it will compile stuff and eventually get there. So if I do start doing a Windows version of my app, it could do that. I've got various Raspberry Pis, and I actually want to test how things run on those. So the vision, if you like, if this actually bears fruit over the next few months and I've got anything usable, is to then go, well, OK, let's set up my stack, let's make sure things are building. Let's actually have tests running and that end of stuff, and have that going on every time I commit code up. So, you know, the farm springs into action and kicks the tires. Yeah. And it's a lot of effort and I'm not really sure where I'm going with this, other than it feels possible and it feels kind of... So. Daniel (53:01.855) Very nice. Dave (53:01.934) Right now, that's the motivation. If somebody wants to say, you're wasting all your time, you've got an existing iOS app, just lean into that. I don't care.
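Since Dave describes one project that compiles for Android, iOS, and desktop, and a build farm kicking the tyres on every commit, this is roughly what the target declaration looks like in a Kotlin Multiplatform Gradle build script. It is a generic sketch along the lines of the standard KMP template, not Dave's actual build file, and it assumes the multiplatform and Android plugins are applied above.

```kotlin
// build.gradle.kts (shared module), generic sketch; assumes the
// kotlin("multiplatform") and Android library plugins are applied.
kotlin {
    androidTarget()   // phone/tablet build
    jvm("desktop")    // desktop build that a CI box or build farm could compile headlessly

    // iOS targets produce a framework the Xcode project links against.
    listOf(iosX64(), iosArm64(), iosSimulatorArm64()).forEach { target ->
        target.binaries.framework {
            baseName = "Shared"
        }
    }
}
```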
Right now, I'm having fun. And that's the point. Yeah. But I really wish I could just get Android Studio working on the MacBook Air, because I'd rather just keep using hardware I've got. Daniel (53:20.895) Yeah, I get that. Have you considered using other IDEs like just Visual Studio or something? Like VS Code I mean. Dave (53:29.23) keep putting. yes, a good shout. Maybe I should be looking at something like that because I'm not tending to get any real use out of, out of previews and, and, and that sort of thing. The main, yeah. Yeah. Daniel (53:43.733) I mean, it might just work just as badly because it will still execute code in the JVM, right? And if that's the problem that you're running into, then think it might. Dave (53:52.366) well, this is the thing. think the main thing is, is I won't be able to, the main issue I've got is I can't run any emulators. and actually I probably can get to a stage of using Android Studio to compile the code. But the next stage is, is then actually linking to a device to test on. I can't just keep coding headless and assuming my views are working. I've got to, got a link to a real device and I could potentially code, not headless and have it run locally on the machine in desktop mode, but that's not the primary use of my app right now. So yeah, all of this to say, I guess I've hit some of these blocks with it and I'm looking and going, should I be making my life easier? And that's, that's the, yeah, well, well. But trading in first, can't just buy a new laptop just because. yeah, these are all things I'm thinking of. I need to find the right hardware. dude. Daniel (54:59.645) You're not the only enabler on this podcast. So come on, go buy the new hardware. Like Tuxedo Computers, Tuxedo Computers, please sponsor this podcast. You're from my hometown of Augsburg, Germany. Like you should totally sponsor us. Like next time I see the people of, the very awesome people of Tuxedo Computers, I will totally ask them to sponsor this podcast. Dave (55:15.991) I'd love her. Dave (55:24.096) I mean, you know, we've worked this one out. Tuxedo don't ship to New Zealand, so I'd need a favour there, Daniel. You'd need to be a literal enabler of this. Daniel (55:33.811) Anytime. Dave (55:36.302) Yeah, but let's see, right? I mean, at the moment, I'm trying to figure out how this is all coming together. What I've defaulted back to is switching my MacBook Air back to being a Mac. Daniel (55:51.155) And then it suddenly runs Android Studio, assume. Because that works. Dave (55:54.614) Yep, yep, yep. And that is the real right now what I should be doing. Because then that lets me actually carry on with with the projects of exploring KMP. So that's what I've done for now. But yeah, I feel like it's on borrowed time. I kind of really want to be playing with Linux and actually using it in anger a lot more. yeah. Yeah, and Mac Studio doesn't go anywhere, right? That's non-negotiable for me right now. So, still a Mac boy here and there. Daniel (56:32.743) Me too, but I mean, we've talked about this before and I think Apple is more and more like just a company. Like Apple used to be like kind of special and they're not just like, they're like Sony. They make good stuff, I guess, but like I'm not like, like fanboy anymore. Probably not as much as this. I was thinking of one more thing where that I wanted to recommend to you like in the whole LLM spacing. Dave (56:40.962) yeah. Yep. Dave (56:46.467) Mm-hmm. Dave (56:49.95) No, not as much. Okay. 
Daniel (57:01.171) No, okay, she just, she kind of wanted to unplug my laptop, which is rude. Like, it's actually their feeding time, but like they can survive for a few more minutes. But no, she's like super into the braided cables and sometimes she will gently nibble on them and I want to discourage that, because no, do not nibble on the, like, on the lightning, the lightning string. Dave (57:07.455) It's a sign, you need to feed her. Dave (57:19.565) Mm-hmm. Dave (57:29.956) Don't fight the lightning, you may get Thunderbolt. Daniel (57:32.787) Anyway, there's this editor called Cursor, and it's basically a fork of VS Code. So it's like, if you use VS Code, it's exactly the same UI, but it has bolted in a very tiny locally running LLM that is slightly better than Copilot, I want to say. I think it runs locally. I'm reasonably sure it runs locally. But the thing that makes it better than Copilot is Dave (57:50.094) Mm-hmm. Daniel (58:02.451) it is multiline. So if you change something on one line, it will suddenly suggest changing, like, I don't know, if you change a variable name, it will suggest changing the same variable name like three lines down. And it makes it surprisingly efficient at the small tedious stuff. Like I'm still very cautious. Dave (58:14.819) Nice. Dave (58:21.998) Thanks to your description earlier, Daniel. Thanks to your description earlier. My mind's eye is now going, because it's actually pulling together vertical slices of the code in front of it and turning those into context that it's then supplying to the LLM to try and get meaningful answers back out, right? Daniel (58:25.415) Yeah. Daniel (58:31.908) Hahaha Daniel (58:39.442) I don't think it uses RAG. I think it's just an LLM trained on lots of... Dave (58:42.008) Okay. Dave (58:46.872) Sure. Daniel (58:47.73) Um, but yeah. Um, so, like, I'm very skeptical about relying too much on code generation, but sometimes it's really helpful, either if you already know exactly what you want to do and like want to skip a bit of boilerplate, or if you are very new and just want to see what is the consensus way of doing this, and then you can edit it later. For both of these things, Cursor is kind of nice. Dave (58:57.058) Mm-hmm. Daniel (59:16.562) Like, ah, you hear my hesitation. It is kind of nice. You should probably try it out. It also has cloud integration. So if you have something bigger, you can open a window and actually describe, hey, look at my whole project, like the whole project, and make this. And it will crunch numbers for a while and actually call out to a server. And then, ah, I want to say 50 % of the time, it will give you a really, really good answer. That is like all of the code that you need to actually build the feature. Dave (59:17.102) Mm-hmm. Dave (59:21.358) Yep. Daniel (59:45.914) And then the other half of the time, it will give you pretty much garbage and you can salvage like a few lines here and there, but other than that, you kind of have to resign yourself. But still, if you're using ChatGPT and Copilot anyway, this might be a good one to look at. I'm not a sponsor. And stay skeptical. Take everything they say with a grain of salt. Dave (59:56.43) 50 % of the time it works 100 % of the time. Dave (01:00:03.342) Mm-hmm. Daniel (01:00:12.741) That being said, I do pay them 20 bucks a month and I don't regret it, especially for JavaScript. Dave (01:00:13.102) Mm-hmm. Daniel (01:00:20.753) That's my glowing endorsement.
Dave (01:00:22.242) I need to... I need to give it a look. I really do. I'll just give that a bit of a search now. We'll link that in the show notes and I will have a bit of a play. Wow, you are way ahead of me. Daniel (01:00:40.753) I have linked it in the show notes. While we're talking, I snuck in there. I snuck in there and like sneaked in a few links. Dave (01:00:57.029) Brilliant. Daniel, I am going to have to wrap the show with you and start my day, because for listeners who don't know, I'm in New Zealand. Daniel is in Augsburg, Germany today, right? Yes, and time zone wise, it is the start of my day and the finish of yours. So... Daniel (01:01:01.925) Yes. Daniel (01:01:16.049) It is correct. Daniel (01:01:23.013) Yeah, by now my cats will be writing you messages like, like, finish the podcast, Dave, finish the podcast, like our, our, our feeder has to get off the air. Dave (01:01:27.95) Yeah. No, they, they, they go through an intermediary. My cat has had words with me and apparently you are being terrible right now and we need to just sort the feeding situation out. Uh, so for, uh, yeah, for listeners of the show, I recommend you go and take a look at the show and look at the, um, guests on the show, because you may have a furry surprise. But Daniel, read us out. Daniel (01:01:57.604) Yes, fantastic. I will. Thanks for listening. Please rate us on iTunes and YouTube, send us emails at contact at waitingforreview.com and also join our Discord. The link is in the show notes. And Dave, where can people send you fan mail? Dave (01:02:13.11) Oof, fan mail. Maybe come bother me on Mastodon. So I am at dave at social.lightbeamapps.com. And it won't be a bother. Say hello. But Daniel, how about yourself? Daniel (01:02:29.176) Say hi to me at daniel at social.telemetrydeck.com Dave (01:02:35.884) Nice. Daniel (01:02:36.368) I liked how I said that, telemetrydeck.com. I do have like a big Patreon for a Formula One podcast and I pay them, I want to say, 40 bucks a month for them to read out their main Patreon payers at the beginning of each episode. And this guy just says, like, telemetrydeck.com in such a fantastic voice. I love it so much. Dave (01:02:45.709) Mm-hmm. Dave (01:02:50.819) Wow. Dave (01:03:03.79) Ha ha ha ha ha ha ha ha Daniel (01:03:05.328) It's the Shift F1 podcast, I'll also add a link to that, but I don't want to, I don't want to keep you waiting too long. Dave (01:03:15.278) Daniel is showing me the fact that he has a Sondrine, TelemetryDeck's mascot, printed hoodie and it's shiny. It's so cool. Daniel (01:03:19.76) And it's glittery! Daniel (01:03:24.378) Shiny AF! I wanted to talk about it on the show, but now it's too late. Now we're gonna go. Bye! Dave (01:03:29.048) But now you have. Bye. Dave (01:03:35.19) I'm actually still here. Daniel (01:03:35.44) There was a dog barking in the background at like exactly the correct time.