
Software Sessions

Author: Jeremy Jung


Description

Practical conversations about software development
56 Episodes
Mayra Navarro is an organizer of WNB.rb and Ruby Perú. Mayra shares how the Ruby community helped her get to RubyConf, going from project manager to developer, and the different ways people learn and communicate. This is the final interview recorded at RubyConf 2023 in San Diego. -- Mayra's Github Peruvian Digital Platform Codeable bootcamp Groups Ruby Perú WNB.rb Atlanta Ruby People Cody Norman Stefanni Brasil of hexdevs Dave Kimura of Drifting Ruby Conferences RubyConf RailsConf -- Transcript You can help correct transcripts on GitHub. [00:00:00] Jeremy: I hope you've been enjoying the conversations from RubyConf. Before we get started. I just want to say thank you to everyone. I met at the conference, all the guests who were so generous with their time. And to Irene from RubyConf for arranging a space and helping me connect with guests. [00:00:16] This final interview is with Mayra Navarro. She's an organizer of the Ruby community in Lima, Peru and a member of the women and non-binary Ruby online community. She's going to tell us how the community pulled together. Both friends of hers and strangers she had never met. To get her to RubyConf this year, we start the story in April where she's just finished attending the Ruby on rails conference, RailsConf in Atlanta. Getting to RubyConf [00:00:44] Mayra: So the thing is, in the last RailsConf, the last day that I was in the US, um, the day that I was returning to Peru, I got fired. (laughs) Yes. So I was with all the stress. [00:01:01] All the luggages that I had to pick or maybe overweight or something like that. And then I received that [00:01:08] Jeremy: Oh my gosh. [00:01:09] Mayra: And I, oh my gosh, what I can do? [00:01:13] Jeremy: That's terrible. [00:01:15] Mayra: It was awful. Since that, I think it was May 1st or something like that. And I was looking for job like everybody else who were fired all these times It was a difficult time for me. my plan was just before September I get a job so I have enough money and not using my, my savings for, for going to the RailsConf. Sorry, the RubyConf. but, uh, eventually at the end of September, October, I didn't get anything. [00:01:44] I'm Christian. So I, well, God doesn't want me to come here. [00:01:50] it there must be a reason, but there was something inside me that. I just to have to do something else. and I thank to my mom because she's someone that is always fighting for what she wants. [00:02:02] So I say, okay, I went to sleep that day that I say, okay, maybe I don't want to go. So next day I have the idea. Maybe you didn't use your last card. There is something else. That is something that I have from my mom. I can feel that. I say, what if you ask for money? Well, like a fundraising, I learned about that word later, and I say what, what else could you lose you don't have anything to lose right now. [00:02:30] So I say, okay, I'm going to write something. I asked Cody Norman, that is someone that I really appreciate right now. I asked him about suggestions, if that's a good idea or no, maybe not. He said, yes, you can do it. Uh, and I asked him if he could help me with the speech because I tried to write something and also I'm not good at writing things on Twitter and especially asking for money because I had to be open myself and be vulnerable to do that. [00:03:02] And it was like, uh, the last break for myself [00:03:05] a... I sent the speech to Cody, he helped me to update some things that I have to just improve. And I did it. 
I, also, I didn't know how to open a GoFundMe campaign because that's only for the US and Mexico. I think it doesn't exist in Peru. [00:03:23] So he said, Oh, there is another page that you can go. [00:03:27] I did it. So I just published that. I didn't open that until three, four hours later, because I was like, no, I don't want to see. And then I, I open it and I started to contact with the people who. [00:03:44] Well, who knows me because I like to be connected with a lot of people. I'm part of the FL RB even being in Peru, I am part of the FL RB. I go attend to the Atlanta Ruby Group. I go, I, I know a lot of people because of the conference. I try to help to the woman and non binary community also. I am organizer of the Ruby Peru, but I didn't want to ask them money for them, but I have some close friends from the conference that I, that I go for all. [00:04:13] All these two years. So they helped me to share that. And in two days, I got the money. [00:04:20] It was like a, I can't believe it. It is what, and I'm not good open myself for things like that. I love helping people, but it's difficult when I, you have to help yourself. So. All these people who I could see their names because it's, it's transferred to PayPal. [00:04:39] So I could see their name is like, uh, I really appreciate the thing that they don't know me. Some people, they don't know me, but, but I know them. I know who you are, if you're listening to this and I thank you appreciate for doing that. I also had the opportunity because I need to talk about this. I got a ticket from HexDev, uh, from Stephanie. [00:05:01] Jeremy: Oh... Hexdevs. Yeah. [00:05:02] Mayra: Yes. And, also I applied for the volunteer positions, just in case But I got the volunteer position. So what I did is, besides all my expense, I mean, that trip and also the hotel, expenses, I don't know, does it work? I, I said, I'm going to give this ticket that I have left to one of the women in the wnb community. [00:05:25] So I did that and say, I have a ticket and also I can share the room. I don't want to say her name because She's trying not to be too connected to social media, but she, she accepted sharing the room with me. [00:05:40] Jeremy: Yeah. [00:05:43] Mayra: So she's already here with me and I feel so happy because People not only helping me, they helping me to help and it feel like, wow. [00:05:53] Yeah. And that is, that is my story. And I still, well, I accepted to come to talk about this today because I received a job offer in the morning that I accepted. [00:06:06] So I wanted to. Send a happy ending for all my story about this. Yeah. And especially because I know that in the next conference I'll be with my own money, I could say, expenses. Asking for help [00:06:20] Jeremy: So was this all this year, the RailsConf? That was this year where you, you went to the conference, you enjoyed the talks and you were employed. And so it was the day that it was over that you, you found out that you got laid off. Wow. it's this, you have this high, right, you've met all your friends and, you know, you're learning all this stuff and you're really excited. [00:06:42] And then you get this notice and it's like, what, what happened? [00:06:47] Mayra: Yeah. That is what's happening. [00:06:50] Jeremy: then you, you kind of, like you said, you opened yourself up but yeah, it takes courage to say like, Hey, I need, you know, I need some help. [00:06:57] Mayra: Yeah, it's, this is something that I learned about this is always asking for help. 
This is something that I have been I bring into my life is always asking for help. I know as a woman, uh, as a woman, I have the thing that Try to be strong sometimes. I can do it by myself. I don't need help. [00:07:16] I don't need help, but sometimes you need, as human, you can open yourself. It's not something related to [00:07:23] gender it's more like humanity. That's how I feel right now. That is the feeling that I have and that is what is going to keep with me. For the rest of my life, I know. (laughs) [00:07:33] Jeremy: Yeah. Cause I, I think when, when people don't know, they, they might assume because you're so involved, right. With your, your local community and then the community internationally where people just assume that, Hey, Mayra is doing great. Right? She's, she's got no problems, no issues, and, there's just no way for people to know unless, you know, you, you share, and then that way people can help you, [00:07:58] Mayra: Yes. [00:07:59] Jeremy: That's a great story. I'm, I'm, I'm glad that, like, getting laid off is never good, right? That's never fun. But at least... Uh, things positive came out of it in terms of people coming out to support you, but also like you said, being able to support another, you know, another member of our community, [00:08:19] Mayra: And I would do it, and I would do it again. [00:08:21] I know that. Attending conferences [00:08:23] Jeremy: You know, now that you've gotten to come out, how, how has the, the conference been for you? Like, [00:08:29] Mayra: It was really good. I feel less insecure than the last time that I was here. (laughs) Actually, my first RubyConf was RubyConf Mini in Rhode Island. So this is like a, the Ruby real not the real one. It's just my more it's different. [00:08:48] Mayra: But, uh, the same time is. It's closer. That's how I feel it. I mean, this is my fifth conference. My first one was in Colombia. RubyConf Colombia. And I got a ticket as a scholarship. but until now I can say that it's like a, the feeling of the Ruby community, not only Ruby on Rails. Ruby community is like a It's really positive. It has changed my life so much since the first time that I joined to community that it's, I'm so happy to be developer instead of what, you know, everybody switched jobs. [00:09:20] I did too so it's like, uh, I won't regret. From project manager to developer [00:09:24] Jeremy: Hmm. Tell, tell me a little bit about that. How did you get interested in Ruby or, or have been involved with the community? [00:09:31] Mayra: I wasn't, I it was because money. Because it is. [00:09:34] Jeremy: That's a good reason. That's a good reason. [00:09:36] Mayra: Yeah. The thing is, I am graduated, uh, of the university. Uh, in Peru, but I was project manager before, well, I've switched a lot of careers because I was looking what I wanted to do and eventually I was project manager also. but I hated me in that position wasn'
Mike Perham is the creator of Sidekiq, a background job processor for Ruby. He's also the creator of Faktory a similar product for multiple language environments. We talk about the RubyConf keynote and Ruby's limitations, supporting products as a solo developer, and some ideas for funding open source like a public utility. Recorded at RubyConf 2023 in San Diego. -- A few topics covered: Sidekiq (Ruby) vs Faktory (Polyglot) Why background job solutions are so common in Ruby Global Interpreter Lock (GIL) Ractors (Actor concurrency) Downsides of Multiprocess applications When to use other languages Getting people to pay for Sidekiq Keeping a solo business Being selective about customers Ways to keep support needs low Open source as a public utility Mike Mike's blog mastodon Sidekiq faktory From Employment to Independence Ruby Ractor The Practical Effects of the GVL on Scaling in Ruby Transcript You can help correct transcripts on GitHub. Introduction [00:00:00] Jeremy: I'm here at RubyConf San Diego with Mike Perham. He's the creator of Sidekiq and Faktory. [00:00:07] Mike: Thank you, Jeremy, for having me here. It's a pleasure. Sidekiq [00:00:11] Jeremy: So for people who aren't familiar with, I guess we'll start with Sidekiq because I think that's what you're most known for. If people don't know what it is, maybe you can give like a small little explanation. [00:00:22] Mike: Ruby apps generally have two major pieces of infrastructure powering them. You've got your app server, which serves your webpages and the browser. And then you generally have something off on the side that... It processes, you know, data for a million different reasons, and that's generally called a background job framework, and that's what Sidekiq is. [00:00:41] It, Rails is usually the thing that, that handles your web stuff, and then Sidekiq is the Sidekiq to Rails, so to speak. [00:00:50] Jeremy: And so this would fit the same role as, I think in Python, there's celery. and then in the Ruby world, I guess there is, uh, Resque is another kind of job. [00:01:02] Mike: Yeah, background job frameworks are quite prolific in Ruby. the Ruby community's kind of settled on that as the, the standard pattern for application development. So yeah, we've got, a half a dozen to a dozen different, different examples throughout history, but the major ones today are, Sidekiq, Resque, DelayedJob, GoodJob, and, and, and others down the line, yeah. Why background jobs are so common in Ruby [00:01:25] Jeremy: I think working in other languages, you mentioned how in Ruby, there's this very clear, preference to use these job scheduling systems, these job queuing systems, and I'm not. I'm not sure if that's as true in, say, if somebody's working in Java, or C sharp, or whatnot. And I wonder if there's something specific about Ruby that makes people kind of gravitate towards this as the default thing they would use. [00:01:52] Mike: That's a good question. What makes Ruby... The one that so needs a background job system. I think Ruby, has historically been very single threaded. And so, every Ruby process can only do so much work. And so Ruby oftentimes does, uh, spin up a lot of different processes, and so having processes that are more focused on one thing is, is, is more standard. [00:02:24] So you'll have your application server processes, which focus on just serving HTTP responses. And then you have some other sort of focused process and that just became background job processes. but yeah, I haven't really thought of it all that much. 
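To make the web/background split Mike describes concrete, here is a minimal Sidekiq worker sketch. The job class, its argument, and the work inside it are hypothetical, not something from the episode; Sidekiq itself provides perform_async and runs the job in a separate process.

```ruby
# Gemfile: gem "sidekiq"   (jobs are queued through Redis)
require "sidekiq"

# Hypothetical job for illustration: the web process enqueues it and
# returns immediately; a separate Sidekiq process later calls #perform.
class WelcomeEmailJob
  include Sidekiq::Job

  def perform(user_id)
    # Slow work we don't want inside an HTTP request cycle,
    # e.g. rendering and sending an email for user_id.
    puts "sending welcome email to user #{user_id}"
  end
end

# From a Rails controller or model (returns immediately):
# WelcomeEmailJob.perform_async(42)
```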
But, uh, you know, something like Java, for instance, heavily multi threaded. [00:02:45] And so, and extremely heavyweight in terms of memory and startup time. So it's much more frequent in Java that you just start up one process and that's it. Right, you just do everything in that one process. And so you may have dozens and dozens of threads, both serving HTTP and doing work on the side too. Um, whereas in Ruby that just kind of naturally, there was a natural split there. Global Interpreter Lock [00:03:10] Jeremy: So that's actually a really good insight, because... in the keynote at RubyConf, Mats, the creator of Ruby, you know, he mentioned the, how the fact that there is this global, interpreter lock, [00:03:23] or, or global VM lock in Ruby, and so you can't, really do multiple things in parallel and make use of all the different cores. And so it makes a lot of sense why you would say like, okay, I need to spin up separate processes so that I can actually take advantage of, of my, system. [00:03:43] Mike: Right. Yeah. And the, um, the GVL. is the acronym we use in the Ruby community, or GIL. Uh, that global lock really kind of is a forcing function for much of the application architecture in Ruby. Ruby, uh, applications because it does limit how much processing a single Ruby process can do. So, uh, even though Sidekiq is heavily multi threaded, you can only have so many threads executing. [00:04:14] Because they all have to share one core because of that global lock. So unfortunately, that's, that's been, um, one of the limiter, limiting factors to Sidekiq scalability is that, that lock and boy, I would pay a lot of money to just have that lock go away, but. You know, Python is going through a very long term experiment about trying to remove that lock and I'm very curious to see how well that goes because I would love to see Ruby do the same and we'll see what happens in the future, but, it's always frustrating when I come to another RubyConf and I hear another Matt's keynote where he's asked about the GIL and he continues to say, well, the GIL is going to be around, as long as I can tell. [00:04:57] so it's a little bit frustrating, but. It's, it's just what you have to deal with. Ractors [00:05:02] Jeremy: I'm not too familiar with them, but they, they did mention during the keynote I think there Ractors or something like that. There, there, there's some way of being able to get around the GIL but there are these constraints on them. And in the context of Sidekiq and, and maybe Ruby in general, how do you feel about those options or those solutions? [00:05:22] Mike: Yeah, so, I think it was Ruby 3. 2 that introduced this concept of what they call a Ractor, which is like a thread, except it does not have the global lock. It can run independent to the global lock. The problem is, is because it doesn't use the global lock, it has pretty severe constraints on what it can do. [00:05:47] And the, and more specifically, the data it can access. So, Ruby apps and Rails apps throughout history have traditionally accessed a lot of global data, a lot of class level data, and accessed all this data in a, in a read only fashion. so there's no race conditions because no one's changing any of it, but it's still, lots of threads all accessing the same variables. [00:06:19] Well, Ractors can't do that at all. The only data Ractors can access is data that they own. And so that is completely foreign to Ruby application, traditional Ruby applications. 
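A small sketch (assuming Ruby 3.x) of the data-ownership rule Mike just described: a Ractor runs outside the global lock, but it can only touch objects it owns or that are sent to it, so ordinary shared state is off limits.

```ruby
SETTINGS = { retries: 3 }   # ordinary mutable global state

r = Ractor.new do
  # SETTINGS[:retries]   # would raise Ractor::IsolationError --
  #                      # non-shareable objects can't be read from here
  value = Ractor.receive  # data must be explicitly sent in...
  value * 2               # ...and the result passed back out
end

r.send(21)
puts r.take               # => 42
```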
So essentially, Ractors aren't compatible with the vast majority of existing Ruby code. So I, I, I toyed with the idea of prototyping Sidekiq and Ractors, and within about a minute or two, I just ran into these, these, uh... [00:06:51] These very severe constraints, and so that's why you don't see a lot of people using Ractors, even still, even though they've been out for a year or two now, you just don't see a lot of people using them, because they're, they're really limited, limited in what they can do. But, on the other hand, they're unlimited in how well they can scale. [00:07:12] So, we'll see, we'll see. Hopefully in the future, they'll make a lot of improvements and, uh, maybe they'll become more usable over time. Downsides of multiprocess (Memory usage) [00:07:19] Jeremy: And with the existence of a job queue or job scheduler like Sidekiq, you're able to create additional processes to get around that global lock, I suppose. What are the... downsides of doing so versus another language like we mentioned Java earlier, which is capable of having true parallelism in the same process. [00:07:47] Mike: Yeah, so you can start up multiple Ruby processes to process things truly in parallel. The issue is that you do get some duplication in terms of memory. So your Ruby app maybe take a gigabyte per process. And, you can do copy on write forking. You can fork and get some memory sharing with copy on write semantics on Unix operating systems. [00:08:21] But you may only get, let's say, 30 percent memory savings. So, there's still a significant memory overhead to forking, you know, let's say, eight processes versus having eight threads. You know, you, you, you may have, uh, eight threads can operate in a gigabyte process, but if you want to have eight processes, that may take, let's say, four gigabytes of RAM. [00:08:48] So you, you still, it's not going to cost you eight gigabytes of RAM, you know, it's not like just one times eight, but, there's still a overhead of having those separate processes. [00:08:58] Jeremy: would you say it's more of a cost restriction, like it costs you more to run these applications, or are there actual problems that you can't solve because of this restriction. [00:09:13] Mike: Help me understand, what do you mean by restriction? Do you mean just the GVL in general, or the fact that forking processes still costs memory? [00:09:22] Jeremy: I think, well, it would be both, right? So you're, you have two restrictions right now. You have the, the GVL, which means you can't have parallelism within the same process. And then your other option is to spin up a bunch of processes, which you have said is the downside there is that you're using a lot more RAM. [00:09:43] I suppose my question is that Does that actually stop you from doing anything? Like, if you throw more money at the problem, you go like, we're going to have more instances, I'll pay for the RAM, it's
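The alternative Mike and Jeremy are weighing, multiple processes instead of threads, looks roughly like this Unix-only sketch: each forked child gets its own GVL and its own memory (shared copy-on-write at first, then gradually duplicated).

```ruby
WORKER_COUNT = 4

pids = WORKER_COUNT.times.map do |i|
  fork do
    # CPU-bound work: with separate processes these loops really do run
    # on separate cores, which threads under the GVL cannot.
    sum = 0
    5_000_000.times { |n| sum += n }
    puts "worker #{i} finished (sum=#{sum})"
  end
end

pids.each { |pid| Process.wait(pid) }
```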
Sara is a team lead at thoughtbot. She talks about her experience as a professor at Kanazawa Technical College, giant LAN parties in Rochester, transitioning from Java to Ruby, shining a light on maintainers, and her closing thoughts on RubyConf. Recorded at RubyConf 2023 in San Diego. -- A few topics covered: Being an Assistant Arofessor in Kanazawa Teaching naming, formatting, and style Differences between students in Japan vs US Technical terms and programming resources in Japanese LAN parties at Rochester Transitioning from Java to Ruby Consulting The forgotten maintainer RubyConf Other links Sara's mastodon thoughtbot This Week in Open Source testdouble Ruby Central Scholars and Guides Program City Museum Japan International College of Technology Kanazawa RubyKaigi Applying mruby to World-first Small SAR Satellite (Japanese lightning talk) (mruby in space) Rochester Rochester Institute of Technology Electronic Gaming Society Tora-con Strong National Museum of Play Transcript You can help correct transcripts on GitHub. [00:00:00] Jeremy: I'm here at RubyConf, San Diego, with Sara Jackson, thank you for joining me today. [00:00:05] Sara: Thank you for having me. Happy to be here. [00:00:07] Jeremy: Sara right now you're working at, ThoughtBot, as a, as a Ruby developer, is that right? [00:00:12] Sara: Yes, that is correct. Teaching in Japan [00:00:14] Jeremy: But I think before we kind of talk about that, I mean, we're at a Ruby conference, but something that I, I saw, on your LinkedIn that I thought was really interesting was that you were teaching, I think, programming in. Kanazawa, for a couple years. [00:00:26] Sara: Yeah, that's right. So for those that don't know, Kanazawa is a city on the west coast of Japan. If you draw kind of a horizontal line across Japan from Tokyo, it's, it's pretty much right there on the west coast. I was an associate professor in the Global Information and Management major, which is basically computer science or software development. (laughs) Yep. [00:00:55] Jeremy: Couldn't tell from the title. [00:00:56] Sara: You couldn't. No.. so there I was teaching classes for a bunch of different languages and concepts from Java to Python to Unix and Bash scripting, just kind of all over. [00:01:16] Jeremy: And did you plan the curriculum yourself, or did they have anything for you? [00:01:21] Sara: It depended on the class that I was teaching. So some of them, I was the head teacher. In that case, I would be planning the class myself, the... lectures the assignments and grading them, et cetera. if I was assisting on a class, then usually it would, I would be doing grading and then helping in the class. Most of the classes were, uh, started with a lecture and then. Followed up with a lab immediately after, in person. [00:01:54] Jeremy: And I think you went to, is it University of Rochester? [00:01:58] Sara: Uh, close. Uh, Rochester Institute of Technology. So, same city. Yeah. [00:02:03] Jeremy: And so, you were studying computer science there, is that right? [00:02:07] Sara: I, I studied computer science there, but I got a minor in Japanese language. and that's how, that's kind of my origin story of then teaching in Kanazawa. Because Rochester is actually the sister city with Kanazawa. And RIT has a study abroad program for Japanese learning students to go study at KIT, Kanazawa Institute of Technology, in Kanazawa, do a six week kind of immersive program. And KIT just so happens to be under the same board as the school that I went to teach at. 
[00:02:46] Jeremy: it's great that you can make that connection and get that opportunity, yeah. [00:02:49] Sara: Absolutely. Networking! [00:02:52] Jeremy: And so, like, as a student in Rochester, you got to see how, I suppose, computer science education was there. How did that compare when you went over to Kanazawa? [00:03:02] Sara: I had a lot of freedom with my curriculum, so I was able to actually lean on some of the things that I learned, some of the, the way that the courses were structured that I took, I remember as a freshman in 2006, one of the first courses that we took, involved, learning Unix, learning the command line, things like that. I was able to look up some of the assignments and some of the information from that course that I took to inform then my curriculum for my course, [00:03:36] Jeremy: That's awesome. Yeah. and I guess you probably also remember how you felt as a student, so you know like what worked and maybe what didn't. [00:03:43] Sara: Absolutely. And I was able to lean on that experience as well as knowing. What's important and what, as a student, I didn't think was important. Naming, formatting, and style [00:03:56] Jeremy: So what were some examples of things that were important and some that weren't? [00:04:01] Sara: Mm hmm. For Java in particular, you don't need any white space between any of your characters, but formatting and following the general Guidelines of style makes your code so much easier to read. It's one of those things that you kind of have to drill into your head through muscle memory. And I also tried to pass that on to my students, in their assignments that it's. It's not just to make it look pretty. It's not just because I'm a mean teacher. It is truly valuable for future developers that will end up reading your code. [00:04:39] Jeremy: Yeah, I remember when I went through school. The intro professor, they would actually, they would print out our code and they would mark it up with red pen, basically like a writing assignment and it would be like a bad variable name and like, white space shouldn't be here, stuff like that. And, it seems kind of funny now, but, it actually makes it makes a lot of sense. [00:04:59] Sara: I did that. [00:04:59] Jeremy: Oh, nice. [00:05:00] Sara: I did that for my students. They were not happy about it. (laughs) [00:05:04] Jeremy: Yeah, at that time they're like, why are you like being so picky, right? [00:05:08] Sara: Exactly. But I, I think back to my student, my experience as a student. in some of the classes I've taken, not even necessarily computer related, the teachers that were the sticklers, those lessons stuck the most for me. I hated it at the time. I learned a lot. [00:05:26] Jeremy: Yeah, yeah. so I guess that's an example of things that, that, that matter. The, the aesthetics or the visual part for understanding. What are some things that they were teaching that you thought like, Oh, maybe this isn't so important. [00:05:40] Sara: Hmm. Pause for effect. (laughs) So I think that there wasn't necessarily Any particular class or topic that I didn't feel was as valuable, but there was some things that I thought were valuable that weren't emphasized very well. One of the things that I feel very strongly about, and I'm sure those of you out there can agree. in RubyWorld, that naming is important. The naming of your variables is valuable. It's useful to have something that's understood. and there were some other teachers that I worked with that didn't care so much in their assignments. 
And maybe the labs that they assigned had less than useful names for things. And that was kind of a disappointment for me. [00:06:34] Jeremy: Yeah, because I think it's maybe hard to teach, a student because a lot of times you are writing these short term assignments and you have it pass the test or do the thing and then you never look at it again. [00:06:49] Sara: Exactly. [00:06:50] Jeremy: So you don't, you don't feel that pain. Yeah, [00:06:53] Sara: Mm hmm. But it's like when you're learning a new spoken language, getting the foundations correct is super valuable. [00:07:05] Jeremy: Absolutely. Yeah. And so I guess when you were teaching in Kanazawa, was there anything you did in particular to emphasize, you know, these names really matter because otherwise you or other people are not going to understand what you were trying to do here? [00:07:22] Sara: Mm hmm. When I would walk around class during labs, kind of peek over the shoulders of my students, look at what they're doing, it's... Easy to maybe point out at something and be like, well, what is this? I can't tell what this is doing. Can you tell me what this does? Well, maybe that's a better name because somebody else who was looking at this, they won't know, I don't know, you know, it's in your head, but you will not always be working solo. my school, a big portion of the students went on to get technical jobs from after right after graduating. it was when you graduated from the school that I was teaching at, KTC, it was the equivalent of an associate's degree. Maybe 50 percent went off to a tech job. Maybe 50 percent went on to a four year university. And, and so as students, it hadn't. Connected with them always yet that oh, this isn't just about the assignment. This is also about learning how to interact with my co workers in the future. Differences between students [00:08:38] Jeremy: Yeah, I mean, I think It's hard, but, group projects are kind of always, uh, that's kind of where you get to work with other people and, read other people's code, but there's always that potential imbalance of where one person is like, uh, I know how to do this. I'll just do it. Right? So I'm not really sure how to solve that problem. Yeah. [00:09:00] Sara: Mm hmm. That's something that I think probably happens to some degree everywhere, but man, Japan really has groups, group work down. They, that's a super generalization. For my students though, when you would put them in a group, they were, they were usually really organized about who was going to do what and, kept on each other about doing things maybe there were some students that were a little bit more slackers, but it was certainly not the kind of polarized dichotomy you would usually see in an American classroom. [00:09:39] Jeremy: Yeah. I've been on both sides. I've been the person who did the work and the slacker. [00:09:44] Sara: Same. [00:09:46] Jeremy:
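As a tiny, hypothetical illustration of the naming point Sara makes above (not code from her labs):

```ruby
# Hard to review: what are d and t supposed to be?
def calc(d, t)
  d / t
end

# Same logic, but the names carry the intent to the next reader.
def average_speed(distance_km, duration_hours)
  distance_km / duration_hours
end
```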
David was the chief software architect and director of engineering at Stitch Fix. He's also the author of a number of books including Sustainable Web Development with Ruby on Rails and most recently Ruby on Rails Background Jobs with Sidekiq. He talks about how he made decisions while working with a medium sized team (~200 developers) at Stitch Fix. The audio quality for the first 19 minutes is not great but the correct microphones turn on right after that. Recorded at RubyConf 2023 in San Diego. A few topics covered: Ruby's origins at Stitch Fix Thoughts on Go Choosing technology and cloud services Moving off heroku Building a platform team Where Ruby and Rails fit in today The role of books and how different people learn Large Language Model's effects on technical content Related Links David's Blog Mastodon Transcript You can help correct transcripts on GitHub. Intro [00:00:00] Jeremy: Today. I want to share another conversation from RubyConf San Diego. This time it's with David Copeland. He was a chief software architect and director of engineering at stitch fix. And at the start of the conversation, you're going to hear about why he decided to write the book, sustainable web development with Ruby on rails. Unfortunately, you're also going to notice the sound quality isn't too good. We had some technical difficulties. But once you hit the 20 minute mark of the recording, the mics are going to kick in. It's going to sound way better. So I hope you stick with it. Enjoy. Ruby at Stitch Fix [00:00:35] David: Stitch Fix was a Rails shop. I had done a lot of Rails and learned a lot of things that worked and didn't work, at least in that situation. And so I started writing them down and I was like, I should probably make this more than just a document that I keep, you know, privately on my computer. Uh, so that's, you know, kind of, kind of where the genesis of that came from and just tried to, write everything down that I thought what worked, what didn't work. Uh, if you're in a situation like me. Working on a product, with a medium sized, uh, team, then I think the lessons in there will be useful, at least some of them. Um, and I've been trying to keep it up over, over the years. I think the first version came out a couple years ago, so I've been trying to make sure it's always up to date with the latest stuff and, and Rails and based on my experience and all that. [00:01:20] Jeremy: So it's interesting that you mention, medium sized team because, during the, the keynote, just a few moments ago, Matz the creator of Ruby was talking about how like, Oh, Rails is really suitable for this, this one person team, right? Small, small team. And, uh, he was like, you're not Google. So like, don't worry about, right. Can you scale to that level? Yeah. Um, and, and I wonder like when you talk about medium size or medium scale, like what are, what are we talking? [00:01:49] David: I think probably under 200 developers, I would say. because when I left Stitch Fix, it was closing in on that number of developers. And so it becomes, you know, hard to... You can kind of know who everybody is, or at least the names sound familiar of everybody. But beyond that, it's just, it's just really hard. But a lot of it was like, I don't have experience at like a thousand developer company. I have no idea what that's like, but I definitely know that Rails can work for like... 200 ish people how you can make it work basically. yeah. 
[00:02:21] Jeremy: The decision to use Rails, I'm assuming that was made before you joined? [00:02:26] David: Yeah, the, um, the CTO of Stitch Fix, he had come in to clean up a mess made by contractors, as often happens. They had used Django, which is like the Python version of Rails. And he, the CTO, he was more familiar with Rails. So the first two developers he hired, also familiar with Rails. There wasn't a lot to maintain with the Django app, so they were like, let's just start fresh, fresh with Rails. yeah, but it's funny because a lot of the code in that Rails app was, like, transliterated from Python. So you could, it would, it looked like the strangest Ruby code in the world because it was basically, there was no test. So they were like, let's just write the Ruby version of this Python just so we know it works. but obviously that didn't, didn't last forever, so. [00:03:07] Jeremy: So, so what's an example of a, of a tell? Where you're looking at the code and you're like, oh, this is clearly, it came from Python. [00:03:15] David: You'd see like, very, very explicit, right? Like Python, there's a lot of like single line things. very like, this sounds like a dig, but it's very simple looking code. Like, like I don't know Python, but I was able to change this Django app. And I had to, I could look at it and you can figure out immediately how it works. Cause there's. Not much to it. There's nothing fancy. So, like, this, this Ruby code, there was nothing fancy. You'd be like, well, maybe they should have memoized that, or maybe they should have taken that into another class, or you could have done this with a hash or something like that. So there was, like, none of that. It was just, like, really basic, plain code like you would see in any beginning programming language kind of thing. Which is at least nice. You can understand it. but you probably wouldn't have written it that way at first in Ruby. Thoughts on Go [00:04:05] Jeremy: Yeah, that's, that's interesting because, uh, people sometimes talk about the Go programming language and how it looks, I don't know if simple is the right word, but it's something where you look at the code and even if you don't necessarily understand Go, it's relatively straightforward. Yeah. I wonder what your thoughts are on that being a strength versus that being, like, [00:04:25] David: Yeah, so at Stitch Fix at one point we had a pro, we were moving off of Heroku and we were going to, basically build a deployment platform using ECS on AWS. And so the deployment platform was a Rails app and we built a command line tool using Ruby. And it was fine, but it was a very complicated command line tool and it was very slow. And so one of the developers was like, I'm going to rewrite it in Go. I was like, ugh, you know, because I just was not a big fan. So he rewrote it in Go. It was a bazillion times faster. And then I was like, okay, I'm going to add, I'll add a feature to it. It was extremely easy. Like, it's just like what you said. I looked at it, like, I don't know anything about Go. I know what is happening here. I can copy and paste this and change things and make it work for what I want to do. And it did work. And it was, it was pretty easy. so there's that, I mean, aesthetically it's pretty ugly and it's, I, I. I can't really defend that as a real reason to not use it, but it is kind of gross. I did do Go, I did a small project in Go after Stitch Fix, and there's this vibe in Go about like, don't create abstractions. 
I don't know where I got that from, but every Go I look at, I'm like we should make an abstraction for this, but it's just not the vibe. They just don't like doing that. They like it all written out. And I see the value because you can look at the code and know what it does and you don't have to chase abstractions anywhere. But. I felt like I was copying and pasting a lot of, a lot of things. Um, so I don't know. I mean, the, the team at Stitch Fix that did this like command line app in go, they're the platform team. And so their job isn't to write like web apps all day, every day. There's kind of in and out of all kinds of things. They have to try to figure out something that they don't understand quickly to debug a problem. And so I can see the value of something like go if that's your job, right? You want to go in and see what the issue is. Figure it out and be done and you're not going to necessarily develop deep expertise and whatever that thing is that you're kind of jumping into. Day to day though, I don't know. I think it would make me kind of sad. (laughs) [00:06:18] Jeremy: So, so when you say it would make you kind of sad, I mean, what, what about it? Is it, I mean, you mentioned that there's a lot of copy and pasting, so maybe there's code duplication, but are there specific things where you're like, oh, I just don't? [00:06:31] David: Yeah, so I had done a lot of Java in my past life and it felt very much like that. Where like, like the Go library for making an HTTP call for like, I want to call some web service. It's got every feature you could ever want. Everything is tweakable. You can really, you can see why it's designed that way. To dial in some performance issue or solve some really esoteric thing. It's there. But the problem is if you just want to get an JSON, it's just like huge production. And I felt like that's all I really want to do and it's just not making it very easy. And it just felt very, very cumbersome. I think that having to declare types also is a little bit of a weird mindset because, I mean, I like to make types in Ruby, I like to make classes, but I also like to just use hashes and stuff to figure it out. And then maybe I'll make a class if I figure it out, but Go, you can't. You have to have a class, you have to have a type, you have to think all that ahead of time, and it just, I'm not used to working that way, so it felt, I mean, I guess I could get used to it, but I just didn't warm up to that sort of style of working, so it just felt like I was just kind of fighting with the vibe of the language, kind of. Yeah, [00:07:40] Jeremy: so it's more of the vibe or the feel where you're writing it and you're like this seems a little too... Explicit. I feel like I have to be too verbose. It just doesn't feel natural for me to write this. [00:07:53] David: Right, it's not optimized for what in my mind is the obvious case. And maybe that's not the obvious case for the people that write Go programs. But for me, like, I j
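For contrast with the Go verbosity David describes, the "I just want the JSON" case he has in mind is roughly this in Ruby's standard library (the URL is a placeholder):

```ruby
require "net/http"
require "json"

# Fetch and parse a JSON document in two library calls. Timeouts,
# retries, and error handling are omitted -- which is also why the
# terse version only covers the easy case.
data = JSON.parse(Net::HTTP.get(URI("https://api.example.com/widgets")))
p data
```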
Episode Notes Rachael Wright-Munn (ChaelCodes) talks about her love of programming games (games with programming elements in them, not how to make games!), starting her streaming career with regex crosswords, and how streaming games and open source every week led her to a voice acting role in one of her favorite programming games. Recorded at RubyConf 2023 in San Diego. mastodon twitch Personal website Programming Games mentioned: Regex Crossword SHENZHEN I/O EXAPUNKS 7 Billion Humans One Dreamer Code Rom@ntic Bitburner Transcript You can help edit this transcript on GitHub. Jeremy: I'm here at RubyConf San Diego with Rachel Wright-Munn, and she goes by Chaelcodes online. Thanks for joining me today. Rachael: Hi, everyone. Hi, Jeremy. Really excited to be here. Jeremy: So probably the first thing I'll ask about is on your web page, and I've noticed you have streams, you say you have an interest in not just regular games, but programming games, so. Rachael: Oh my gosh, I'm so glad you asked about this. Okay, so I absolutely love programming games. When I first started streaming, I did it with Regex Crossword. What I really like about it is the fact that you have this joyful environment where you can solve puzzles and work with programming, and it's really focused on the experience and the joy. Are you familiar with Zach Barth of Zachtronics? Jeremy: Yeah. So, I've tried, what was it? There's TIS-100. And then there's the, what was the other one? He had one that's... Rachael: Opus Magnum? Shenzhen I/O? Jeremy: Yeah, Shenzhen I/O. Rachael: Oh, my gosh. Shenzhen I/O is fantastic. I absolutely love that. The whole conceit of it, which is basically that you're this electronics engineer who's just moved to Shenzhen because you can't find a job in the States. And you're trying to like build different solutions for these like little puzzles and everything. It was literally one of the, I think that was the first programming game that really took off just because of the visuals and everything. And it's one of my absolute favorites. I really like what he says about it in terms of like testing environments and the developer experience. Cause it's built based on assembly, right? He's made a couple of modifications. Like he's talked about it before where it's like The memory allocation is different than what it would actually look like in assembly and the way the registers are handled I believe is different, I wouldn't think of assembly as something that's like fun to write, but somehow in this game it is. How far did you get in it? Jeremy: Uh, so I didn't get too far. So, because like, I really like the vibe and sort of the environment and the whole concept, right, of you being like, oh, you've been shipped off to China because that's the only place that these types of jobs are, and you're working on these problems with bad documentation and stuff like that. And I like the whole concept, but then the actual writing of the software, I was like, I don't know. Rachael: And it's so hard, one of the interesting things about that game is you have components that you drop on the board and you have to connect them together and wire them, but then each component only has a specific number of lines. So like half the time I would be like, oh, I have this solution, but I don't have enough lines to actually run it or I can't fit enough components, then you have to go in and refactor it and everything. And it's just such a, I don't know, it's so much fun for me. 
I managed to get through all of the bonus levels and actually finish it. Some of them are just real, interesting from both a story perspective and interesting from a puzzle perspective. I don't wanna spoil it too much. You end up outside Shenzhen, I'll just say that. Jeremy: OK. That's some good world building there. Rachael: Yeah. Jeremy: Because in your professional life, you do software development work. So I wonder, what is it about being in a game format where you're like, I'm in it. I can do it more. And this time, I'm not even being paid. I'm just doing it for fun. Rachael: I think for me, software development in general is a very joyful experience. I love it. It's a very human thing. If you think about it like math, language, all these things are human concepts and we built upon that in order to build software in our programs and then on top of that, like the entire purpose of everything that we're building is for humans, right? Like they don't have rats running programs, you know what I mean? So when I think about human expression and when I think about programming, these two concepts are really closely linked for me and I do see it as joyful, But there are a lot of things that don't spark joy in our development processes, right? Like lengthy test suites, or this exhausting back and forth, or sometimes the designs, and I just, I don't know how to describe it, but sometimes you're dealing with ugly code, sometimes you're dealing with code smells, and in your professional developer life, sometimes you have to put up with that in order to ship features. But when you're working in a programming game, It's just about the experience. And also there is a correct solution, not necessarily a correct solution, but like there's at least one correct solution. You know for a fact that there's, that it's a solvable problem. And for me, that's really fun. But also the environment and the story and the world building is fun as well, right? So one of my favorite ones, we mentioned Shenzhen, but Zachtronics also has Exapunks. And that one's really fun because you have been infected by a disease. And like a rogue AI is the only one that can provide you with the medicine you need to prevent it. And what this disease is doing is it is converting parts of your body into like mechanical components, like wires and everything. So what you have to do as an engineer is you have to write the code to keep your body running. Like at one point, you were literally programming your heart to beat. I don't have problems like that in my day job. In my day job, it's like, hey, can we like charge our customers more? Like, can we put some banners on these pages? Like, I'm not hacking anybody's hearts to keep them alive. Jeremy: The stakes are a little more interesting. Yeah, yeah. Rachael: Yeah, and in general, I'm a gamer. So like having the opportunity to mix two of my passions is really fun. Jeremy: That's awesome. Yeah, because that makes sense where you were saying that there's a lot of things in professional work where it's you do it because you have to do it. Whereas if it's in the context of a game, they can go like, OK, we can take the fun problem solving part. We can bring in the stories. And you don't have to worry about how we're going to wrangle up issue tickets. Rachael: Yeah, there are no Jira tickets in programming games. Jeremy: Yeah, yeah. Rachael: I love what you said there about the problem solving part of it, because I do think that that's an itch that a lot of us as engineers have. 
It's like we see a problem, and we want to solve it, and we want to play with it, and we want to try and find a way to fix it. And programming games are like this really small, compact way of getting that dopamine hit. Jeremy: For sure. Yeah, it's like. Sometimes when you're doing software for work or for an actual purpose, there may be a feeling where you want to optimize something or make it look really nice or perform really well. And sometimes it just doesn't matter, right? It's just like we need to just put it out and it's good enough. Whereas if it's in the context of a game, you can really focus on like, I want to make this thing look pretty. I want to feel good about this thing I'm making. Rachael: You can make it look good, or you can make it look ugly. You don't have to maintain it. After it runs, it's done. Right, right, right. There's this one game. It's 7 Billion Humans. And it's built by the creators of World of Goo. And it's like this drag and drop programming solution. And what you do is you program each worker. And they go solve a puzzle. And they pick up blocks and whatever. But they have these shredders, right? And the thing is, you need to give to the shredder if you have like a, they have these like little data blocks that you're handing them. If you're not holding a data block and you give to the shredder, the worker gives themself to the shredder. Now that's not ideal inside a typical corporate workplace, right? Like we don't want employees shredding themselves. We don't want our workers terminating early or like anything like that. But inside the context of a game, in order to get the most optimal solution, They have like a lines of code versus fastest execution and sometimes in order to win the end like Lines of code. You just kind of have to shred all your workers at the, When I'm on stream and I do that when I'm always like, okay everybody close your eyes That's pretty good it's Yeah, I mean cuz like in the context of the game. Jeremy: I think I've seen where they're like little They're like little gray people with big eyes Yes, yes, yes, yes. Yeah, so it's like, sorry, people. It's for the good of the company, right? Rachael: It's for my optimal lines of code solution. I always draw like a, I always write a humane solution before I shred them. Jeremy: Oh, OK. So it's, you know, I could save you all, but I don't have to. Rachael: I could save you all, but I would really like the trophy for it. There's like a dot that's going to show up in the elevator bay if I shred you. Jeremy: It's always good to know what's important. But so at the start, you mentioned there was a regular expression crossword or something like that. Is that how you got started with all this? Rachael: My first programming game was Regex Crossword. I absolutely loved it. That's how I learned Regex. Rachael: I love it a lot. I will say one thing that's been kind of interesting is I learned Regex through Regex Cros
Dr. Daniel Zingaro and Dr. Leo Porter are co-authors of the book Learn AI-Assisted Python Programming. Leo will teach an introductory computer science course this quarter at UCSD using this book. We discuss how tools like GitHub Copilot let people new to programming focus on breaking down problems instead of language syntax. Dr. Zingaro is an Associate Professor of Computer Science at University of Toronto Mississauga and Dr. Porter is an Associate Professor at University of California San Diego. This episode was originally posted on Software Engineering Radio. Topics covered: Making programming more accessible Teaching problem decomposition instead of language syntax The importance of reading and testing untrusted generated code The rise of throwaway or one-off code Concerns about relying on commercial tools Rethinking how to assess students Related Links Learn AI-Assisted Python Programming Leo Porter Daniel Zingaro GitHub Copilot Transcript You can help edit this transcript on GitHub. Note the timestamps and audio for this transcript will not completely match. Intro [00:00:00] Jeremy: Today I'm talking to Dr. Leo Porter. He's an associate teaching professor of computer science at the University of California San Diego, and he co-founded the computing education research laboratory there. I'm also joined by Dr. Daniel Zingaro who is an associate teaching professor of computer science at the University of Toronto. And he's also the author of the book, learn to Code by Solving Problems and the Book, Algorithmic Thinking. They are co-authors of the book, learn AI Assisted Python programming. Leo and Dan, welcome to Software Engineering Radio. [00:00:37] Leo: Thank you for having us, Jeremy. I really appreciate your podcast, so thanks. Great to be here. [00:00:41] Dan: Thanks Jeremy. Writing a book for Leo's CS1 class [00:00:43] Jeremy: The first thing we could start with is, is why this book? And, and why now? How did you decide on like, okay, this is the thing we need to do now. [00:00:51] Leo: So, uh, this is Dan. Uh, so Dan, um, like really early when LLMs first kind of were coming out and being seen on the scene for programming, uh, he started playing with them, uh, for programming projects. And I think Dan really quickly realized that they'd had this, a big impact on how we teach programming. so he reached out to me, uh, and said, I really need to give em a try. And, uh, after I played with them for a little while, I had the exact same realization that this is gonna change, uh, how we teach programming, uh, in a pretty dramatic way. So having realized that, having realized that we had to change our, uh, introductory CS1 courses, we knew we needed to do that, but in order to teach that class, we'd have to have a book that we could assign our students that that would go along with the class. And so we knew we had to change the class, but we also knew we had to have a book for it. And given the, the timeline to write books, we started in the book first. Um, and so that's how it got started. LLMs for Syntax, Humans for breaking down problems [00:01:45] Dan: I guess we figured out that our course had to change first, before we knew exactly, um, how it had to change. One thing we, um, learned early on was that the kinds of assignments we give in our introductory courses, they're just solved by, by these tools like ChatGPT and copilot. So, uh, we knew something had to change, and then it is just a matter of figuring out what. 
And so we spent, um, quite a bit of time with these tools and we started to realize that what's gonna change is the skills that our students need to learn, uh, to be effective using these tools. So like b before these tools, we would spend a lot of time teaching syntax. Um, and students struggle quite a bit with learning syntax, which I mean, it's very, it's, it's very frustrating, right? Cuz you can't even do anything until you get the syntax right? And you're getting all these errors like missing colons and, you know, mismatched braces and stuff like that. Uh, so it's actually good, that, the LLMs are doing the syntax for the students. But you know, just because that skill's, uh, not needed as much, uh, doesn't mean that there aren't still skills for students to learn. So instead of syntax, other things become more important. Uh, so for example, uh, Leo and I, realize that reading code is gonna be extremely important even more so than before. I think if, if that, if that's even possible. Uh, and that's because sometimes you're gonna get back code that just doesn't work. And so we realized that students are gonna need to be able to read, the response that they get to see if the code looks reasonable, or not, right? And then if the code, uh, I is unreasonable, then they need to read more code, uh, and look at other solutions, right, that they get from the, uh, LLM. Uh, there are other, uh, things they can do as well, like messing around with the prompt and so on. But they're gonna need to be able to read code, uh, throughout the process. And then, so we just kind of kept on using these tools and documenting the skills that students are gonna need. And we just kinda realized that all the skills students are gonna need are skills we would want to teach anyway. So like, uh, one more example is testing, right? So, students may now not have, uh, an understanding of every last detail of, you know, the Python language like they would before. And so then that makes testing even more important, right? Than it was they need to verify that the code they're getting is correct. And so they have to be very good at writing test cases. and, and, you know, similar, similar for debugging, we need our students to have strong debugging skills, again, even potentially stronger than before, right? Because if the code isn't working, they need to first determine what the code is doing to be able to fix it. And then I guess one more I'll mention is problem decomposition. And this is a big one. I think this is gonna come up a couple times probably in our talk today, but LLMs struggle when you give them tasks that are too large and students need to know how to break problems down into small components so that, that, LLM can solve each one and, you know, have a good chance of getting it right. [00:04:56] Leo: Yeah, I, I think, um, kind of to, to piggyback off of that, you, you may be hearing these skills and saying, oh, these are absolutely essential skills. Every software engineer should know, uh, these are being taught right now. Right? Um, and the answer is not really, like these aren't core topics in a lot of introductory CS classes because so much time is spent on syntax. And so fairly early on when we kind of realized these skills would be so essential, Uh, we got really excited because these are skills we want to teach in our classes, and the LLMs are now giving us the ability to do that more. [00:05:27] Dan: Mm-hmm. 
[00:05:28] Jeremy: I think that's interesting about the syntax comment because you were saying how reading is gonna be more important than ever because you have LLM generating the code. Um, and you need to understand that code that's being generated and understand that it does what it, uh, you think it does. And so I wonder if when you say you spend less time on syntax, is it because you feel like they're gonna generate this code and they're sort of organically gonna pick up syntax that way versus having to focus on it at the start? I'm just trying to picture what you see changing there. [00:06:05] Dan: Yeah, Jeremy. So, uh, I, I was, I guess speaking specifically about syntax errors, which don't generally happen when you're using LLMs, and I also agree with you, you need to know what the code is doing, but, um, you can do that without worrying about each specific piece of syntax. Like, um, you're gonna need to know what the keywords do for sure, but, missing, you know, brackets and colons and, uh, oh, there needs to be like a blank line here. indentation, uh, a lot of this kind of thing. Is done for the most part, correctly by the LLMs. So yeah, I agree with you. You need to be able to identify the structures. So in our, in our book actually, Leo and I have, um, a couple of chapters on reading code and, I don't think we ever break breakdown, a line of code into its individual tokens. We do talk about the main structures, like ifs and loops and functions and all that. but compared to other books, I, I think or other, uh, other ways of teaching where you would focus on the micro level, we try to focus on the line level now, cuz we want our students to be able to grasp what each line is doing, I guess more than each token. [00:07:27] Leo: Yeah, maybe to, to add to that a bit, it's almost, uh, if you think about the advent of block-based languages, it was to make sure that the, essentially the, the author can't make syntax mistakes, right? Is the whole purpose of kind of block-based languages. And they're, they're huge for introductory programming, especially in like K through 12. in a sense, LLMs do this because they'd never give you back wrong syntax, or they almost, almost never give you back wrong syntax. And so it takes away that kind of cognitive burden of making sure you handle the, the token level. as uh Dan was saying LLM generated code needs test cases to catch logical errors [00:08:00] Jeremy: I, I'm curious, so you said the syntax is correct, but what are the, the typical mistakes you see coming back from these LLMs? Is it a, a logical mistake or is it ever something that. Actually doesn't compile. I'm, I'm kind of curious what your experience has been. [00:08:19] Leo: I think the, uh, more common errors that we've been seeing are logical. So it misinterprets the prompt that you're giving it. It essentially tries to solve a problem that's different than what you're trying to solve. It may have bugs in it, so it is in fact trying to solve the right problem, but it, it's off by one, um,
systemd is a service manager for Linux. It is the first process that runs on many Linux distributions and manages all other user processes. It includes utilities for logging, process isolation, process dependencies, socket activation, and many other tasks. pystemd is a python library to communicate with systemd over dbus from python as an alternative to shelling out from an application to control services. Anita Zhang is an engineerd managerd at Meta and Alvaro Leiva is a production engineer at Meta. I attended their systemd workshop at the Southern California Linux Expo. Topics covered: What's systemd? Giving talks and workshops cgroups and namespaces systemd timers vs cron Migrating from CentOS 6 to 7 Production engineers need to go lower in the stack to debug applications Meta's Linux userspace team Use of public cloud at Meta Meta's bootcamp Pystemd Mastodon Anita Zhang Alvaro Leiva Workshop systemd workshop Conference talks Journey into the Heart of systemd - Scale 19x Systemd: why you should care as a Python developer - PyCon 2018 Move Fast without Breaking things - Scale 18x Solving All the Problems with systemd - LISA18 Using systemd to high level languages - All Systems Go! The Curious Case of Memory Growth - Scale 19x Related Links systemd pystemd systemd-run systemd-timers Transcript You can help edit this transcript on GitHub. Introductions [00:00:00] Jeremy: So today I'm talking to Alvaro Leiva and Anita Zhang. Alvaro is the author of the pystemd library and he's a production engineer at Meta. And Anita is an engineerd managerd at Meta, and I'll let her explain that further. [00:00:19] Jeremy: But thank you both for joining me today. [00:00:21] Anita: Yeah, thanks for having us. [00:00:24] Jeremy: I guess where we could start, Anita, maybe you could explain a little bit your, your title that I just gave you there. engineerd managerd [00:00:31] Anita: Yeah, so by default I, I should be a software engineering manager, but when I transitioned to management, I was not, Ready to go public with, um, my transition. So I kind of hid it by, changing the title. we have some weird systems in place that grep on like the word engineer. So I had to keep engineer in there somehow. and so I kind of polled my friends what I should change my title to, and they're like, oh, you're gonna support the systemd team, so you should change it to like managerd. So I was like, sounds good. engineerd, managerd. I didn't wanna get kicked out of any workplace groups, for example, that required me to be an engineer. [00:01:15] Jeremy: Oh, okay. [00:01:17] Anita: Or like engineering function, I guess. [00:01:19] Jeremy: Yeah. Yeah. And you just gotta title it yourself, so as long as you got engineer in it, you're good. [00:01:24] Anita: Yeah, pretty much. Some people have really fun titles like Chief Potato Officer and things like that. [00:01:32] Jeremy: So what groups does the, uh, the potato officer get to go in? [00:01:37] Anita: Yeah. Not the C level ones. (laughs) What's systemd? [00:01:42] Jeremy: I guess maybe to, to start, we should explain to people who aren't familiar, uh, what systemd is. So if either of you wanna wanna take that one. [00:01:52] Alvaro: so people who doesn't know, right? So systemd is today is your init system, right? Is the thing that manage your, your process. and the best way to understand this, it is like when your computer, it needs to execute something. And that's something is what we call pid one. 
And that pid one is the thing that is gonna manage everything from now from there on, right? Uh, in the most basic level, if you remember how to, how does program start, how does like an idea becomes a program? Uh, you need to fork exec, right? So that means that something has to be at the top of that tree and that is systemd. now that can be anything, right? So there was a time where that was like systemv and there was also like upstart, uh, today's systemd is the thing that, uh, it's shipped in most distributions. [00:02:37] Jeremy: Yeah, because I, I definitely remember when I first started working with Linux, uh, it was with CentOS 6, and when I would want to run a service, I would have to go and write a bash script and kind of have all these checks for, is this thing running? Does it have permission to these things, which user is it running as, and so there was a lot of stuff that I remember having to do before systemd came out. [00:03:08] Alvaro: The good old days as we call them, [00:03:11] Jeremy: Or the bad old days. [00:03:13] Anita: Yeah. Depending on who you ask. [00:03:15] Alvaro: Yeah. So, so that is super interesting because, um, During those time, like you said, you have to write a first script. That means that you were basically yourself, your own service manager, right? So ideas as simple as, is my program running? There was no real answer. You have to figure it out, right? So if you run a program, uh, you maybe would create a pid file which hold the p or the pid of the process, of the main process, right? And then something needs to check, oh, is this file exist? Does the file exist and does the content of this file actually match to a process? And then you grab the process. So it was all these ideas that you had to do, and then for, you have to do it for every single software that you would deploy on your machine, right? That also makes really hard to parallelize stuff, right? Because you have no concept of dependencies. So if your computer has to put, uh, I, I dunno if you remember like long time ago, like Linux machine would, takes like five minutes to boot like your desktop. I remember like openSUSE. I can't remember, like 2008, 2007. Uh, it would take like five minutes to boot and then Ubuntu came and, and it start like immediately. And it was because, you can parallelize things, but you cannot do that if all you're running are bash script. Why was systemd chosen to be included in Linux distributions? [00:04:26] Jeremy: I remember before the Linux distributions didn't include it. And I wonder if you have any insight into how systemd got chosen to be the thing to manage our processes and basically how we got to where we are today. [00:04:44] Anita: I mean, we can kind of speculate a little bit. at the time when Lennart started systemd, um, with. Kai Sievers probably messed up his name there. Um, they were all at Red Hat and Red Hat manages fedora these days and I believe fedoras kind of like the bleeding edge for a lot of the new software ideas. Um, and when they picked up systemd as the defaults, um, eventually it started to trickle down to the rest of their distributions through RHEL and to CentOS and at the same time, I think other distributions started to see how useful it was in terms of managing all the different processes and services. Um, I know Debian at one point had kind of a vote on like whether they should make systemd either default or like, make it easy to switch between both. 
And then they decided to just stick with systemd because it's, I mean, the public agrees that it's like easy to use and it's more useful. It abstracts away a lot of things that they had to manually do before. Who is interested in systemd? Who comes to your talks and workshops? [00:05:43] Jeremy: Something I've been kind of curious about. So just this year at SCaLE uh, you ran a, a workshop teaching people how to use systemd and, and sort of what it is about. I guess when, when you get people coming to these workshops, what are they typically, where are they typically coming from? Are they like system administrators or are they software developers? Like when you run these workshops, who are you looking for as your audience? [00:06:13] Alvaro: To be fair, this was the first time that we actually did a workshop for this. But we have like, talk about this in, in many like conferences. here's what happened, right? So every time that you put systemd on the title of, uh, of a talk, you are like baiting people into coming in, right? Because you do want to hear like some people who are still like reluctant from that war that happened like a few years ago. Between systemd and Upstart right? most of the people who we get are, I would say like, software engineers, people who do software, and at least the question that I always get a lot, it is like, why should I care about systemd um, if I run everything on my containers in my Docker containers, right? The other type of audience that you get, you do get system administrators. Uh, but in general those people only cares about starting and stopping services don't really care about like the, like the nice other features that systemd has to offer. And then you get people who just wanna start like flame wars and I'm here for them. Why give talks and workshops on systemd? [00:07:13] Jeremy: In previous years, you've given conference talks and, and things like that related to systemd. And I wonder for, for both of you where, where the, the interests came from, where this is something that you feel strongly enough about that you wanna give talks about it. Because it's like, a lot of times when people give a conference talk, it's about, like new front end technology or some, you know, new shiny thing. Whereas systemd is like, it's like very valuable, but it's something that I feel like a lot of people don't think about. And so I'm just kind of curious where the interest came for, for both of you. [00:07:52] Anita: I think I just like giving talks and teaching in general. So if I have work that I found really exciting or interesting, then I'd want to like tell people about it and like teach them and like show them something cool. I think systemd is kind of a really good topic in that case because a lot of people want to learn more about it. Today there's like lots of new developments going on in systemd. So there's like a lot of basic stuff that you can learn, but also a lot of new advanced topics that are changing every year as well. asi
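As a rough illustration of the pid-file dance Alvaro describes earlier versus simply asking the service manager, here is a small Python sketch. The pid-file path and the "myapp.service" unit name are made up for the example, and the second function assumes systemctl is on the PATH; this is not code from the episode or from pystemd.

```python
import os
import subprocess

PID_FILE = "/var/run/myapp.pid"  # hypothetical path, for illustration only

def is_running_pre_systemd():
    """The old hand-rolled check: read a pid file, then see if that pid still exists."""
    if not os.path.exists(PID_FILE):
        return False
    with open(PID_FILE) as f:
        pid = int(f.read().strip())
    try:
        os.kill(pid, 0)          # signal 0: only checks whether the process exists
    except ProcessLookupError:
        return False             # stale pid file, the process is gone
    except PermissionError:
        return True              # process exists but belongs to another user
    return True

def is_running_with_systemd(unit="myapp.service"):
    """Let the service manager answer instead: exit code 0 means the unit is active."""
    result = subprocess.run(["systemctl", "is-active", "--quiet", unit])
    return result.returncode == 0
```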
Sentry is an application monitoring tool that surfaces errors and performance problems. It minimizes the need to manually look at logs or dashboards by identifying common problems across applications and frameworks. David Cramer is the co-founder and CTO of Sentry. This episode originally aired on Software Engineering Radio. Topics covered: What's Sentry? Treating performance problems as errors Why you might not need logs Identifying common problems in applications and frameworks Issues with Open Telemetry data Why front-end applications are difficult to instrument The evolution of Sentry's architecture Switching from a permissive license to the Business Source License Related Links Sentry David's Blog Sentry 9.1 and Upcoming Changes Re-Licensing Sentry Transcript You can help edit this transcript on GitHub. [00:00:00] Jeremy: Today I'm talking to David Cramer. He's the founder and CTO of Sentry. David, welcome to Software Engineering Radio. [00:00:08] David: Thanks for having me. Excited for today's conversation. What's Sentry? [00:00:11] Jeremy: I think the first thing we could start with is defining what Sentry is. I know some people refer to it as an error tracker. Some people have referred to it as, an application performance monitoring tool. I wonder if you could kind of describe in, in your words what it is. [00:00:30] David: You know, as somebody who doesn't work in marketing, I just tell it how it is. So Sentry started out doing error monitoring, which. You know, dependent on who you talk to, you might just think of as logging, right? Like that's the honest truth. It is just logging just a different shape or form. these days it's hard to not classify us as just an APM tool that's like the industry that exists. It's like the tools people understand. So I would just say it's an APM tool, right? We do a bunch of things within that space, and maybe it's not, you know, item for item the same as say a product like New Relic. but a lot of the overlap's there, so it's like errors performance, which is like latency and sort of throughput. And then we have some stuff that just goes a little bit deeper within that. The, the one thing I would say that is different for us versus a lot of these tools is we actually only do application monitoring. So we don't do any since like systems or infrastructure monitoring. Meaning Sentry is not gonna tell you when you need to replace a hard drive or even that you need new hard, like more disk space or something like that because it's just, it's a domain that we don't think is relevant for sort of our customers and product. Application Performance Monitoring is about finding crashes and performance problems that users would associate with bugs [00:01:31] Jeremy: For people who aren't familiar with the term application performance monitoring, what is that compared to just error tracking? [00:01:41] David: The way I always reason about it, this is what I tell new hires and what I would tell, like my mother, if I had to explain what I do, is like, you load Uber and it crashes. We all know that's bad, right? That's error monitoring. We capture the crash report, we send it to developers. You load Uber and it's a 30 second spinner, like a loading indicator as a customer. Same outcome for me. I assume the app is broken, right? So we also know that's bad. Um, but that's different than a crash. Okay. Sentry captures that same thing and send it to developers. lastly the third example we use, which is a little bit more. 
I think, untraditional, but a non-traditional rather, uh, you load the Uber app and it's like a blank screen or there's no button to submit, like log in or something like this. So it's kind of like a, it's broken, but it maybe isn't erroring and it's not like a slow thing. Right. Same outcome. It's probably a bug of some sorts. Like it's what an end user would describe it as a bug. So for me, APM just translates to there are bugs, user perceived bugs in your application and we're able to monitor and, and help the software teams sort of prioritize and resolve those, those concerns. [00:02:42] Jeremy: Earlier you were talking about actual crashes, and then your second case is, may be more of if the app is running slowly, then that's not necessarily a crash, but it's still something that an APM would monitor. [00:02:57] David: Yeah. Yeah. And I, I think to be fair, APM, historically, it's not a very meaningful term. Like I as a, when I was more of just an individual contributor, I would associate APM to, like, there's a dashboard that will tell me what's slow in my application, which it does. And that is kind of core to APM, but it would also, none of the traditional tools, pre sentry would actually tell you why it's broken, like when there's an error, a crash. It was like most of those tools were kind of useless. And I don't know, I do actually know, but I'm gonna pretend I don't know about most people and just say for myself. But most of the time my problems are errors. They are not like it's fast or slow, you know? and so we just think of it as like it's a holistic thing to say, when I've changed the application and something's broken, or it's a bug, you know, what is that bug? How do we help people fix it? And that comes from a lot of different, like data signals and things like that. the end result is still the same. You either are gonna fix it or it's not important and you ignore it. I don't know. So it's a pretty straightforward, premise for us. But again, most companies in the space, like the traditional company is when you grow a big company, what happens is like you build one thing and then you build lots of check boxes to sell more things. And so I think a lot of the APM vendors, like they've created a lot of different products. Like RUM is a good example of another acronym that lives with an APM. And I would tell you RUM is completely meaningless. It, it stands for real user monitoring. And so I'm like, well, what's not real about monitoring the application? Well, nothing's not real, but like they created a new category because that's how marketing engines work. And that new category is more like analytics than it is like application telemetry. And it's only because they couldn't collect the app, the application telemetry at the time. And so there's just a lot of fluff, i would say. But at the end of the day too, like developers or engineering teams, it's like new version of the application. You broke something, let's tell you about it so you can fix it. You might not need logging or performance monitoring [00:04:40] Jeremy: And, and so earlier you were saying how this is a kind of logging, but there's also other companies, other products that are considered like logging infrastructure. Like I, I would think of companies like Paper Trail or Log Tail. So what space does Sentry fill that's that's different than that kind of logging? 
[00:05:03] David: Um, so the way I always think about it, and this is both personally true, and what I advise other folks is when you're building something new, when you start from zero, right, you can often take Sentry put it in, and that's good enough. You don't even need performance monitoring. You just need like errors, right? Like you're just causing bugs all the time. And you could do that with logging, but like the delta between error monitoring and logging is night and day. From a user experience, like error monitoring for us, or what we built at the very least, aggregates the errors. It, it helps you understand the frequency. It helps you when they're new versus old. it really gives you a lot of detail where logs don't, and so you don't need logging often. And I will tell you today at Sentry. Engineers do not use logs for the most part. Uh, I had a debate with one of our, our team members about it, like, why does he use logs recently? But you should not need them because logs serve a different purpose. Like if you have traces which tell you like, like fast and slow in a bunch of other network data and you have this sort of crash report collection or error monitoring thing, logs become like a compliance or an audit trail or like a security forensics, tool, and there's just not a lot of value that you would get out of them otherwise, like once in a while, maybe there's like some weird obscure use case, but generally speaking, you can just pretend that you don't need logs most days. Um, and to me that's like an evolution of the industry. And so when, when Sentry is getting started, most people were still logs. And if you go talk to SRE teams, they're like, oh, logging is what we know. Some of that's changed a little bit, but. But at the end of the day, they should only be needed for more complicated audit trails because they're just not a good solution to the problem. It's just free form data. Structured or not, doesn't really matter. It's not aggregated. It's not something that you can really use. And it's why whenever you see logging tools, um, not even the papertrails of the world, but the bigger ones like Splunk or Kibana, it's like this weird, what we describe as choose your own adventure. Like go have fun, build your dashboards and try to make the logs useful kind of story. Whereas like something like Sentry, it's just like, why would you waste any time trying to build dashboards when we can just tell you when something new is broken? Like that's the ideal situation. [00:06:59] Jeremy: So it sounds like maybe the distinction is with a more general logging tool, like you mentioned Splunk and Kibana it's a collection of all this information. of things happening, even though nothing's necessarily wrong, whereas Sentry is more Sentry is it's going to log things, but it's only going to log things if Sentry believes something is wrong, either because of a crash or because of some kind of performance issue. People don't want to dig through logs or dashboards, they want to be told when something is wrong and why Most software is bui
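For anyone who hasn't used it, the "just put Sentry in" step David mentions is roughly this small. The sketch below uses the Python SDK with a placeholder DSN, and the broken checkout function is invented purely so there is something to report; it is an illustration, not code from Sentry's docs or the episode.

```python
import sentry_sdk

# The DSN is a placeholder; a real one comes from your Sentry project settings.
sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",
    traces_sample_rate=0.1,  # optional: sample some transactions for performance data
)

def checkout(cart):
    # Deliberately broken example so an error actually gets captured.
    total = sum(item["price"] for item in cart)
    return total / len(cart)  # raises ZeroDivisionError on an empty cart

try:
    checkout([])
except Exception as exc:
    # In many frameworks unhandled exceptions are captured automatically;
    # here the exception is reported explicitly.
    sentry_sdk.capture_exception(exc)
```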
Luca Casonato on Deno

2023-03-02 01:20:27

Luca Casonato is the tech lead for Deno Deploy and a TC39 delegate. Deno is a JavaScript runtime from the original creator of NodeJS, Ryan Dahl. Topics covered: What's a JavaScript runtime How V8 is used Why Deno was created The W3C WinterCG for server-side JavaScript Why it's difficult to ship new features in Node The benefits of web standards Creating an all-inclusive toolset like Rust and Go Deno's node compatibility layer Use cases for WebAssembly Benefits and implementation of Deno Deploy Reasons to deploy on the edge What's coming next Luca Luca Casonato @lcasdev Deno Homepage Deploy Showcase Subhosting Fresh web framework The anatomy of an Isolate Cloud Deno Users Netlify Edge Functions Deno at Slack GitHub Flat Data Shopify Oxygen Other related links Cache Web API V8 (JavaScript and WebAssembly engine) TC39 (JavaScript specification group) Web-interoperable Runtimes Community Group (WinterCG) Cloudflare Workers (Deno Deploy competitor) How Cloudflare KV works CockroachDB (Distributed database) XKCD Standards Comic Transcript You can help edit this transcript on GitHub. [00:00:07] Jeremy: Today I'm talking to Luca Casonato. He's a member of the Deno Core team and a TC 39 Delegate. [00:00:06] Luca: Hey, thanks for having me. What's a runtime? [00:00:07] Jeremy: So today we're gonna talk about Deno, and on the website it says, Deno is a runtime for JavaScript and TypeScript. So I thought we could start with defining what a runtime is. [00:00:21] Luca: Yeah, that's a great question. I think this question actually comes up a lot. It's, it's like sometimes we also define Deno as a headless browser, or I don't know, a, a JavaScript script execution tool. what actually defines runtime? I, I think what makes a runtime a runtime is that it is a, it's implemented in native code. It cannot be self-hosted. Like you cannot self-host a JavaScript runtime. and it executes JavaScript or TypeScript or some other scripting language, without relying on, well, yeah, I guess it's the self-hosting thing. Like it's, it's essentially a, a JavaScript execution engine, which is not self-hosted. So yeah, it, it maybe has IO bindings, but it doesn't necessarily need to like, it. Maybe it allows you to read the, from the file system or, or make network calls. Um, but it doesn't necessarily have to. It's, I think the, the primary definition is something which can execute JavaScript without already being written in JavaScript. How V8 and JavaScript runtimes are related [00:01:20] Jeremy: And when we hear about JavaScript run times, whether it's Deno or Node or Bun, or anything else, we also hear about it in the context of v8. Could you explain the relationship between V8 and a JavaScript run time? [00:01:36] Luca: Yeah. So V8 and, and JavaScript core and Spider Monkey, these are all JavaScript engines. So these are the low level virtual machines that can execute or that can parse your JavaScript code. turn it into byte code, maybe turn it into, compiled machine code, and then execute that code. But these engines, Do not implement any IO functions. They do not. They implement the JavaScript spec as is written. and then they provide extension hooks for, they call these host environments, um, like environments that embed these engines to provide custom functionalities to essentially poke out of the sandbox, out of the, out of the virtual machine. Um, and this is used in browsers. Like browsers have, have these engines built in. This is where they originated from. 
Um, and then they poke holes into this, um, sandbox virtual machine to do things like, I don't know, writing to the dom or, or console logging or making fetch calls and all these kinds of things. And what a runtime essentially does, a JavaScript runtime is it takes one of these engines and. It then provides its own set of host APIs, like essentially its own set of holes it pokes into the sandbox. and depending on what the runtime is trying to do, um, the way it will do this is gonna be different and, and the sort of API that is ultimately exposed to the end user is going to be different. For example, if you compare Deno and node, like node is very loosey goosey, about how it pokes holes into the sandbox, it sort of just pokes them everywhere. And this makes it difficult to enforce things like, runtime permissions for example. Whereas Deno is much more strict about how it, um, pokes holes into its sandbox. Like everything is either a web API or it's behind in this Deno name space, which means that it's, it's really easy to find, um, places where, where you're poking out of the sandbox. and really you can also compare these to browsers. Like browsers are also JavaScript run times. Um, they're just not headless. JavaScript run times, but JavaScript run times that also have a ui. and. Yeah. Like there, there's, there's a whole bunch of different kinds of JavaScript run times, and I think we're also seeing a lot more like embedded JavaScript run times. Like for example, if you've used React Native before, you, you may be using Hermes as a, um, JavaScript engine in your Android app, which is like a custom JavaScript engine written just for, for, for React native. Um, and this also is embedded within a, like react native run time, which is specific to React native. so it's also possible to have run times, for example, that are, that can be where the, where the back backing engine can be exchanged, which is kind of cool. [00:04:08] Jeremy: So it sounds like V8's role, one way to look at it is it can execute JavaScript code, but only pure functions. I suppose you [00:04:19] Luca: Pretty much. Yep. [00:04:21] Jeremy: Do anything that doesn't interact with IO so you think about browsers, you were mentioning you need to interact with a DOM or if you're writing a server side application, you probably need to receive or make HTTP requests, that sort of thing. And all of that is not handled by v8. That has to be handled by an external runtime. [00:04:43] Luca: Exactly Like, like one, one. There's, there's like some exceptions to this. For example, JavaScript technically has some IO built in with, within its standard library, like math, random. It's like random number. Generation is technically an IO operation, so, Technically V8 has some IO built in, right? And like getting the current date from the user, that's also technically IO So, like there, there's some very limited edge cases. It's, it's not that it's purely pure, but V8 for example, has a flag to turn it completely deterministic. which means that it really is completely pure. And this is not something which run times usually have. This is something like the feature of an engine because the engine is like so low level that it can essentially, there's so little IO that it's very easy to make deterministic where a runtime higher level, um, has, has io, um, much more difficult to make deterministic. [00:05:39] Jeremy: And, and for things like when you're working with JavaScript, there's, uh, asynchronous programming [00:05:46] Luca: mm-hmm. 
Concurrent JavaScript execution [00:05:47] Jeremy: So you have concurrency and things like that. Is that a part of V8 or is that the responsibility of the run time? [00:05:54] Luca: That's a great question. So there's multiple parts to this. There's the part, um, there, there's JavaScript promises, um, and sort of concurrent Java or well, yes, concurrent JavaScript execution, which is sort of handled by v8, like v8. You can in, in pure v8, you can create a promise, and you can execute some code within that promise. But without IO there's actually no way to defer time, uh, which means that in with pure v8, you can either, you can create a promise. Which executes right now. Or you can create a promise that never executes, but you can't create a promise that executes in 10 seconds because there's no way to measure 10 seconds asynchronously. What run times do is they add something called an event loop on top of this, um, on top of the base engine and that event loop, for example, like a very simple event loop, for example, might have a timer in it, which every second looks at if there's a timer schedule to run within that second. And if it does, if, if that timer exists, it'll go call out to V8 and say, you can now execute that promise. but V8 is still the one that's keeping track of, of like which promises exist, and the code that is meant to be invoked when they resolve all that kind of thing. Um, but the underlying infrastructure that actually invokes which promises get resolved at what point in time, like the asynchronous, asynchronous IO is what this is called. This is driven by the event loop, um, which is implemented by a runtime. So Deno, for example, it uses, Tokio for its event loop. This is a, um, an event loop written in Rust. it's very popular in the Rust ecosystem. Um, node uses libuv. This is a relatively popular runtime or, or event loop, um, implementation for c uh, plus plus. And, uh, libuv was written for Node. Tokio was not written for Deno. But um, yeah, Chrome has its own event loop implementation. Bun has its own event loop implementation. [00:07:50] Jeremy: So we, we might go a little bit more into that later, but I think what we should probably go into now is why make Deno, because you have Node that's, uh, currently very popular. The co-creator of Deno, to my understanding, actually created Node. So maybe you could explain to our audience what was missing or what was wrong with Node, where they decided I need to create, a new runtime. Why create a new runtime? (standards compliance) [00:08:20] Luca: Yeah. So the, the primary point of concern here was that node was slowly diverging from browser standards with no real path to, to, to, re converging. Um, like there was nothing that was pushing node in the direction of standards compliance and there was nothing, that was like sort of forcing node to innovate. and we really saw this because in the time between, I don
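Luca's "very simple event loop" description maps closely to a toy scheduler like the one below. This is a language-agnostic sketch written in Python rather than anything from V8, Tokio, or libuv; it only shows the idea of a timer queue that the loop checks and then fires callbacks for.

```python
import heapq
import itertools
import time

class ToyEventLoop:
    """A toy event loop: keep a queue of (due_time, callback) timers and run
    whichever ones are due, roughly the shape Luca describes."""
    def __init__(self):
        self._timers = []
        self._counter = itertools.count()  # tie-breaker so callbacks are never compared

    def set_timeout(self, delay_seconds, callback):
        due = time.monotonic() + delay_seconds
        heapq.heappush(self._timers, (due, next(self._counter), callback))

    def run(self):
        while self._timers:
            due, _, callback = heapq.heappop(self._timers)
            now = time.monotonic()
            if due > now:
                time.sleep(due - now)  # nothing due yet; a real loop would also poll for IO here
            callback()

loop = ToyEventLoop()
loop.set_timeout(1.0, lambda: print("ran after ~1 second"))
loop.set_timeout(0.5, lambda: print("ran after ~0.5 seconds"))
loop.run()
```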
Leaguepedia is a MediaWiki instance that covers tournaments, teams, and players in the League of Legends esports community. It's relied on by fans, analysts, and broadcasters from around the world. Megan "River" Cutrofello joined Leaguepedia in 2014 as a community manager and by the end of her tenure in 2022 was the lead for Fandom's esports wikis. She built up a community of contributing editors in addition to her role as the primary MediaWiki developer. She writes on her blog and is a frequent speaker at the Enterprise MediaWiki Conference Topics covered: When to use MediaWiki Visual vs code editor MediaWiki's rough syntax Templates and markup Limiting user input to simplify pages Choosing not to transliterate long player names in certain languages Handling mobile clients Building aliases for search results Creating a single source of truth Roster changes and caching Cargo (Query data in MediaWiki templates using SQL) Hiding implementation details from editors Optimizing for the editor, not a clean codebase Training your users to use workarounds MediaWiki only supports es5 The wiki aesthetic Who is working on the wiki + onboarding Who is using the wiki The future of Leaguepedia How Megan got into wiki development Issues as opportunities to onboard Related Links River Writes - Megan's Blog Leaguepedia - League of Legends esports wiki MediaWiki VisualEditor VueJS in MediaWiki Open issue to support ES6 in MediaWiki Whitespace programming language Lua MediaWiki extensions CharInsert - Add code snippets into the MediaWiki editor Semantic MediaWiki (SMW) - Store and query data inside Wiki pages Cargo - Replaced SMW at Leaguepedia Conference Talks Usage of Cargo with Lua on LoL Gamepedia Mediawiker SublimeText plugin Cargo/Lua Best Practices, and When Not To Use Them MediaWiki Lua Tutorial Editing your wiki with Python is easier than you think Other podcast appearances Between the Brackets Transcript You can help edit this transcript on GitHub. [00:00:00] Jeremy: Today I'm talking to Megan Cutrofello. She managed the Leaguepedia eSports wiki for eight years, and in 2017 she got an award for being the unsung hero of the year for eSports. So Megan, thanks for joining me today. [00:00:17] Megan: Thanks for having me. [00:00:19] Jeremy: A lot of the people I talk to are into web development, so they work with web frameworks and things like that. And I guess when you think about it, wikis are web development, but they're kind of their own world, I suppose. for someone who's going to build some kind of a site, like when does it make sense for them to use a wiki versus, uh, a content management system or just like a more traditional web framework? [00:00:55] Megan: I think it makes the most sense to use a wiki if you're going to have a lot of contributors and you don't want all of your contributors to have access to your server. also if your contributors aren't necessarily as tech savvy as you are, um, it can make sense to use a wiki. if you have experience with MediaWiki, I guess it makes sense to use a Wiki. Anytime I'm building something, my instinct is always, oh, I wanna make a Wiki (laughs) . Um, so even if it's not necessarily the most appropriate tool for the job, I always. My, my first thought is, hmm, let's see, I'm, I'm making a blog. Should I make my blog in in MediaWiki? Um, so, so I always, I always wanna do that. 
but I think it's always, when you're collaborating is pretty much, you always wanna do MediaWiki [00:01:47] Jeremy: And I, I think that's maybe an important point when you say people are collaborating. When I think about Wikis, I think of Wikipedia, uh, and the fact that I can click the edit button and I can see the markup right there, make a change and, and click save. And I didn't even have to log in or anything. And it seems like that workflow is built into a wiki, but maybe not so much into your typical CMS or WordPress or something like that. [00:02:18] Megan: Yeah. Having a public ability to solicit contributions from anyone. so for Leaguepedia, we actually didn't have open contributions from the public. You did have to create an account, but it's still that open anyone can make an account and all you have to do is like, go through that one step of create an account. Admittedly, sometimes people are like, I don't wanna make an account that's so much work. And we're like, just make the account. Come on. It's not that hard. but, uh, you still, you're a community and you want people to come and contribute ideas and you want people to come and be a part of that community to, document your open source project or, record the history of eSports or write down all of the easter eggs that you find in a video game or in a TV show, or in your favorite fantasy novels. Um, and it's really about community and working together to create something where the whole is bigger than the sum of its parts. [00:03:20] Jeremy: And in a lot of cases when people are contributing, I've noticed that on Wikipedia when you edit, there's an option for a, a visual editor, and then there's one for looking at the raw markup. in, in your experience, are people who are doing the edits, are they typically using the visual editor or are they mostly actually editing the, the markup? [00:03:48] Megan: So we actually disabled the Visual editor on Leaguepedia, because the visual editor is not fantastic at knowing things about templates. Um, so a template is when you have one page that gets its content pulled into the larger page, and there's a special syntax for that, and the visual editor doesn't know a lot about that. Um, so that's the first reason. And then the second reason is that, there's this, uh, one extension that we use that allows you to make a clickable, piece of text. It's called (https://www.mediawiki.org/wiki/Extension:CharInsert) CharInserts, uh, for character inserts. so I made a lot of these things that is sort of along the same philosophy as Visual Editor, where it's to help people not have to have the same burden of knowledge, of knowing every exact piece of source that has to be inserted into the page. So you click the thing that says like, um, insert a pick and band prefill, and then a little piece of JavaScript fires and it inserts a whole bunch of Wiki text and then you just enter the champions in the correct places. In the prefills of champions are like the characters that you play in, uh, league of Legends. And so then you have like the text is prefilled for you and you only have to fill in into this outline. so Visual Editor would conflict with CharInserts, and I much preferred the CharInserts approach where you have this compromise in between the never interacting with source and having to have all of the source memorized. So between the fact that Visual Editor like is not a perfect tool and has these bugs in it, and also the fact that I preferred CharInserts, we didn't use Visual Editor at all. 
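The "prefill" idea Megan describes, where a button drops in a skeleton of wiki text so the editor only fills in the blanks, can be sketched roughly like this. The template name and parameters are invented for the example; only the [[...]] link and {{...}} transclusion syntax are standard MediaWiki markup.

```python
# A rough sketch of what a CharInsert-style prefill produces: a skeleton of
# wiki text the editor fills in, instead of typing the syntax from memory.
# "PickBanPrefill" and its parameter names are made up for illustration.

def pick_ban_prefill(num_bans=3):
    lines = ["{{PickBanPrefill"]
    for i in range(1, num_bans + 1):
        lines.append(f"|blueban{i}= |redban{i}=")
    lines.append("|bluepick1= |redpick1=")
    lines.append("}}")
    return "\n".join(lines)

def wiki_link(page, label=None):
    """Standard MediaWiki link syntax: [[Page]] or [[Page|label]]."""
    return f"[[{page}|{label}]]" if label else f"[[{page}]]"

print(pick_ban_prefill())
print(wiki_link("2023 Season World Championship", "Worlds 2023"))
```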
I know that some wikis do like to use Visual Editor quite a bit, and especially if you're not working with these templates where you have all of these prefills, it can be a lot more preferred to use Visual Editor. Visual Editor is an experience much more similar to editing something like Microsoft Word, It doesn't feel like you're editing code. and editing code is, I mean, it's scary. Like for, and when I said like, MediaWiki is when you have editors who aren't as tech savvy, as the person who set up the Wiki. for people who don't have that experience, I mean, when you just said like you have to edit a wiki, like someone who's never done that before, they can be very intimidated by it. And you're trying to build a sense of community. You don't want to scare away your potential editors. You want everyone to be included there. So you wanna do everything possible to make everyone feel safe, to contribute their ideas to the Wiki. and if you make them have to memorize syntax, like even something that to me feels as simple as like two open brackets and then the name of a page, and then two closed brackets means linking the page. Like, I mean, I'm used to memorizing a lot of syntax because like, I'm a programmer, but someone who's never written code before, I mean, they're not used to memorizing things like that. So they wanna be able to click a button that says insert link, and then type the name of the page in the middle of the things that pop up there. Um, so visual editor is. It's a lot safer to use. so a lot of wikis do prefer that. and if it, if it didn't have the bugs with the type of editing that my Wiki required, and if we weren't using CharInserts so much, we definitely would've gone for it. But, um, it wasn't conducive to the wiki that I built, so we didn't use it at all. [00:07:42] Jeremy: And the, the compromise you're referring to, is it where the editor sees the raw markup, but then they can, there's like little buttons on the side they can click and they'll know, okay, if I click this one, then it's going to give me the text for creating a list or something like that. [00:08:03] Megan: Yeah, it's a little bit more high level than creating a list because I would never even insert the raw syntax for creating a list. It would be a template that's going to insert a list at the very end. but basically that, yeah, [00:08:18] Jeremy: And I, I know for myself, even though I do software development, if I click edit on a wiki and there's all the different curly brace tags, there's the square tags, and. I think if you spend some time with it, you can kind of get a sense of what it means. But for the average person who doesn't work with software in their day to day, do, do you find that, is that a big barrier for them where they, they click edit and there's all this stuff that they don't really understand? Is that where some people just, they go, oh, I don't, I don't know what to do. [00:08:59] Megan: I think the biggest barrier is actually clicking edit in the first place. so that was a big barrier
Victor is a software consultant in Tokyo who describes himself as a yak shaver. He writes on his blog at vadosware and curates Awesome F/OSS, a mailing list of open source products. He's also a contributor to the Open Core Ventures blog. Before our conversation Victor wrote a structured summary of how he works on projects. I recommend checking that out in addition to the episode. Topics covered: Most people should use Dokku or CapRover But he uses Kubernetes anyways Hosting a Database in Kubernetes Learning technology You don't really know a thing until something goes wrong History of Frontend Development Context from lower layers of the stack and historical projects Good project pages have comparisons to other products Choosing technologies Language choice affects maintainability Knowing an ecosystem Victor's preferred stack Technology bake offs Posting findings means you get free corrections Why people use medium instead of personal sites Victor VADOSWARE - Blog How Victor works on Projects - Companion post for this episode Awesome FOSS - Curated list of OSS projects NimbusWS - Hosted OSS built on top of budget cloud providers Unvalidated Ideas - Startup ideas for side project inspiration PodcastSaver - Podcast index that allows you to choose Postgres or MeiliSearch and compare performance and results of each Victor's preferred stack Docker - Containers Kubernetes - Container provisioning (Though at the beginning of the episode he suggests Dokku for single server or CapRover for multiple) TypeScript - JavaScript with syntax for types. Victor's default choice. Rust - Language he uses if doing embedded work, performance is critical, or more correctness is desired Haskell - Language he uses if correctness and type system is the most important for the project Postgresql - General purpose database that's good enough for most use cases including full text search. KeyDB - Redis compatible database for caching. Acquired by Snap and then made open source. Victor uses it over Redis because it is multi threaded and supports flash storage without a Redis Enterprise license. Pulumi - Provision infrastructure with the languages you're already using instead of a specialized one or YAML Svelte and SvelteKit - Preferred frontend stack. Previously used Nuxt. 
Search engines Postgres Full Text Search vs the rest Optimizing Postgres Text Search with Trigrams OpenSearch - Amazon's fork of Elasticsearch typesense meilisearch sonic Quickwit JavaScript build tools Babel SWC Webpack esbuild parcel Vite Turbopack JavaScript frameworks React Vue Svelte Ember Frameworks built on top of frameworks Next - React Nuxt - Vue SvelteKit - Svelte Astro - Multiple Historical JavaScript tools and frameworks Underscore jQuery MooTools Backbone AngularJS Knockout Aurelia GWT Bower - Frontend package manager Grunt - Task runner Gulp - Task runner Related Links Dokku - Open source single-host alternative to Heroku Cloud Native Buildpacks - Buildpacks created by Heroku and Pivotal and used by Dokku CapRover - An open source PaaS-like abstraction built on top of Docker Swarm Kelsey Hightower's tweet about being cautious about running databases on Kubernetes Settling the Myth of Transparent HugePages for Databases Kubernetes Container Storage Interface (CSI) Kubernetes Local Persistent Volumes Longhorn - Distributed block storage for Kubernetes Postgres docs Postgres TOAST Everything I've seen on optimizing Postgres on ZFS Kubernetes Workload Resources Kubernetes Network Plugins Kubernetes Ingress Traefik Kubernetes the Hard Way (Setting up a cluster in a way that optimizes for learning) How does TLS work Let's Encrypt Cert manager for Kubernetes Choose Boring Technology A Linux user's guide to Logical Volume Management Docker networking overview Kubernetes Scheduler Tauri - Build desktop applications with web technology and Rust ripgrep - CLI tool to recursively search directory for a regex pattern (Meant to be a rust replacement for grep) angle-grinder / ag - CLI tool to parse and process log files written in rust Object.observe ECMAScript Proposal to be Withdrawn Ruby on Rails - Ruby web framework Django - Python web framework Laravel - PHP web framework Adonis - JavaScript NestJS - JavaScript What is a NullPointerException, and how do I fix it? Mastodon Clap - CLI argument parser for Rust AWS CDK - Provision AWS infrastructure using programming languages Terraform - Provision infrastructure with terraform language URL canonicalization of duplicate pages and the use of the canonical tag - Used by dev.to to send google traffic to the original blogpost instead of dev.to Transcript You can help edit this transcript on GitHub. [00:00:00] Jeremy: This episode, I talk to Victor Adossi who describes himself as a yak shaver. Someone who likes trying a whole bunch of different technologies, seeing the different options. We talk about what he uses, the evolution of front end development, and his various projects. Talking to just different people it's always good to get where they're coming from because something that works for Google at their scale is going to be different than what you're doing with one of your smaller projects. [00:00:31] Victor: Yeah, the context. Of course in direct conflict with that statement, I definitely use Google technology despite not needing to at all right? Like, you know, 99% of people who are doing like people like to call it indiehacking or building small products could probably get by with just Dokku. If you know Dokku or like CapRover. Are two projects that'll be like, Oh, you can just push your code here, we'll build it up like a little mini Heroku PaaS thing and just go on one big server, right? Like 99% of the people could just use that. But of course I'm not doing that. So I'm a bit of a hypocrite in that sense. 
I know what I should be doing, but I'm not doing that. I am writing a Kubernetes cluster with like five nodes for no reason. Uh, yeah, I dunno, people don't normally count the controllers. [00:01:24] Jeremy: Dokku and CapRover, I think those are where it's supposed to create a heroku like experience I think it's based off of the heroku buildpacks right? At least Dokku is? [00:01:36] Victor: Yeah Buildpacks has actually been spun out into like a community thing so like pivotal and heroku, it's like buildpacks.io, they're trying to build a wider standard around it so that more people can get involved. And buildpacks are actually obviously fantastic as a technology and as a a process piece. There's not much else like them and you know, that's obvious from like Heroku's success and everything. I know Dokku uses that. I don't know that Caprover does, but I haven't, I haven't really run Caprover that much. They, they probably do. Like at this point if you're going to support building from code, it seems silly to try and build your own buildpacks. Cause that's what you will do, eventually. So you might as well use what's there. Anyway, this is like just getting to like my personal opinions at this point, but like, if you think containers are a bad idea in 2022, You're wrong, you should, you should stop. Like you should, you should stop. Think about it. I mean, obviously there's not, um, I got a really great question at an interview once, which is, where are containers a bad idea? That's probably one of the best like recent interview questions I've ever gotten cause I was like, Oh yeah, I mean, like, you can't, it can't be perfect everywhere, right? Nothing's perfect everywhere. So it's like, where is it? Uh, and of course the answer was networking, right? (unintelligible) So if you need absolute performance, but like for just about everything else. Containers are kind of it at this point. Like, time has born it out, I think. So yeah, I always just like bias at taking containers at this point. So I'm probably more of a CapRover person than a Dokku person, even though I have not used, I don't use CapRover. [00:03:09] Jeremy: Well, like something that I've heard with containers, and maybe it's changed recently, but, but something that was kind of holdout was when people would host a database sometimes they would oh we just don't wanna put this in a container and I wonder if like that matches with your thinking or if things have changed. [00:03:27] Victor: I am not a database administrator right like I read postgres docs and I read the, uh, the Postgres documentation, and I think I know a bit about postgres but I don't commit right like so and I also haven't, like, oh, managed X terabytes on one server that you are making sure never goes down kind of deal. But the stickiness for me, at least from when I've run, So I've done a lot of tests with like ZFS and Postgres and like, um, and also like just trying to figure out, and I run Postgres in Kubernetes of course, like on my cluster and a lot of the stuff I found around is, is like fiddly kernel things like sort of base kernel settings that you need to have set. Like, you know, stuff like should you be using transparent huge pages, like stuff like that. But once you have that settled. Containers are just processes with name spacing and resource control, right? Like, that's it. there are some other ins and outs, but for the most part, if you're fine running a process, so people ran processes, right? And they were just completely like unprotected. 
Then people made users for the processes and they limited the users and ran the processes, right? Then the next step is now you can run a process and then do the limiting the name spaces in cgroups dynamically. Like there, there's, there's sort of not a humongous difference, unless you're hitting something very specific. Uh, but yeah, databases have been a point of contention, but I think, Kelsey Hightower had that tweet yeah. That was like, um, don't run databases in Kubernetes. And I think he called it back. [00:04:56] Victor: I don't know, but I, I know that was uh, was one of those things that people were real
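Victor's point that containers are just processes with namespacing and resource control can be poked at directly from a shell; the sketch below drives the standard unshare and systemd-run tools from Python. It assumes a Linux box with util-linux and systemd installed and enough privileges to use sudo, and the 100M memory cap is an arbitrary example value, not a recommendation.

```python
import subprocess

# Run a command in fresh PID and mount namespaces (one half of a "container"):
# inside, `ps` sees the command as PID 1 with its own view of /proc.
subprocess.run([
    "sudo", "unshare", "--pid", "--fork", "--mount-proc",
    "ps", "-ef",
])

# Apply resource control (the other half) by wrapping a command in a transient
# scope unit with a memory cap enforced through cgroups.
subprocess.run([
    "sudo", "systemd-run", "--scope", "-p", "MemoryMax=100M",
    "python3", "-c", "print('running under a memory-capped cgroup')",
])
```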
Xe Iaso on Tailscale

2022-10-01 54:10

Xe Iaso is the Archmage of Infrastructure at Tailscale and previously worked at Heroku. This episode originally aired on Software Engineering Radio but includes some additional discussion about their blog near the end of the episode. Topics covered: Use cases for VPNs Simplifying service authentication by identifying users via IP Peer-to-peer vs centralized "Virtual Pain Networks" Tailscale's tech stack and why they forked the go compiler DERP relay servers Struggling with the iOS network extension size limit The surprisingly small amount of infrastructure required to run a VPN Running your company on your own product Working at Heroku vs Tailscale Using the socratic style of debate in technical blog posts Related Links @theprincessxena Xe's Blog ACL samples Go links origin story How Tailscale works Tailscale SSH How Tailscale assigns IP addresses Hey linker, can you spare a meg? My Blog is Hilariously Overengineered to the Point People Think it's a Static Site The Sheer Terror of PAM Transcript [00:00:00] Jeremy: Today I'm talking to Xe Iaso, they're the archmage of infrastructure at tailscale, and they also have a great blog everyone should check out. Xe, welcome to software engineering radio. [00:00:12] Xe: Thanks. It's great to be here. [00:00:14] Jeremy: I think the first thing we should start with, is what's a, a VPN, because I think some people they may have used it to remote into their workplace or something like that. But I think the, the scope of what it's good for and what it does is a lot broader than that. So maybe you could talk a little bit about that first. [00:00:31] Xe: Okay. a VPN is short for virtual private network. It's basically a fake network that's overlaid on top of existing networks. And then you can use that network to do whatever you would with a normal computer network. this term has been co-opted by companies that are attempting to get into the, like hide my ass style market, where, you know, you encrypt your internet information and keep it safe from hackers. But, uh, so it makes it really annoying and hard to talk about what a VPN actually is. Because tailscale, uh, the company I work for is closer to like the actual intent of a VPN and not just, you know, like hide your internet traffic. That's already encrypted anyway with another level of encryption and just make a great access point for, uh, three letter agencies. But are there, use cases, past that, like when you're developing a piece of software, why would you decide to use a VPN outside of just because I want my, you know, my workers to be able to get access to this stuff. [00:01:42] Xe: So something that's come up, uh, when I've been working at tailscale is that sometimes we'll make changes to something. And it'll be changes to like the user experience of something on the admin panel or something. So in a lot of other places I've worked in order to have other people test that, you know, you'd have to push it to the cloud. It would have to spin up a review app in Heroku or some terrifying terraform of abomination would have to put it out onto like an actual cluster or something. But with tail scale, you know, if your app is running locally, you just give like the name of your computer and the port number. 
And you know, other people are able to just see it and poke it and experience it. And that basically turns the, uh, feedback cycle from, you know, like having to wait for like the state of the world to converge, to, you know, make a change, press F five, give the URL to a coworker and be like, Hey, is this Gucci? they can connect to your app as if you were both connected to the same switch. [00:02:52] Jeremy: You don't have to worry about, pushing to a cloud service or opening ports, things like that. [00:02:57] Xe: Yep. It will act like it's in the same room, even when they're not it'll even work. if you're at both at Starbucks and the Starbucks has reasonable policies, like holy crap, don't allow devices to connect to each other directly. so you know, you're working on. Like your screenplay app at your Starbucks or something, and you have a coworker there and you're like, Hey, uh, check this out and, uh, give them the link. And then, you know, they're also seeing the screenplay editor. [00:03:27] Jeremy: in terms of security and things like that. I mean, I'm picturing it kind of like we were sitting in the same room and there's a switch and we both plugged in. Normally when you do something like that, you kind of have, full access to whatever else is on the switch. Uh, you know, provided that's not being blocked by a, a firewall. is there like a layer of security on top of that, that a VPN service like tailscale would provide. [00:03:53] Xe: Yes. Um, there are these things called access control lists, which are kind of like firewall rules, except you don't have to deal with like the nightmare of writing an IP tables rule that also works in windows firewall and whatever they use in Mac OS. The ACL rules are applied at the tailnet level for every device in the tailnet. So if you have like developer machines, you can put people into groups as things like developers and say that developer machines can talk to production, but not people in QA. They can only talk to testing and people on SRE have, you know, permissions to go everywhere and people within their own teams can connect to each other. you can make more complicated policies like that fairly easily. [00:04:44] Jeremy: And when we think about infrastructure for, for companies, you were talking about how there could be development, infrastructure, production, infrastructure, and you kind of separate it all out. when you're working with cloud infrastructure. A lot of times, there's the, I always forget what it stands for, but there's like IAM. There's like policies that you can set up with the cloud provider that says these users can access this, or these machines can access this. And, and I wonder from your perspective, when you would choose to use that versus use something at the, the network or the, the VPN level. [00:05:20] Xe: The way I think about it is that things like IAM enforce, permissions for like more granularly scoped things like can create EC2 instances or can delete EC2 instances or something like that. And that's just kind of a different level of thing. 
uh, tailscale, ACLs are more, you know, X is allowed to connect to Y or with tailscale, SSH X is allowed to connect as user Y.and that's really different than like arbitrary capability things like IAM offers.you could think about it as an IAM system, but the main permissions that it's exposing are can X connect to Y on Zed port.[00:06:05] Jeremy: What are some other use cases where if you weren't using a VPN, you'd have to do a lot more work or there's a lot more complexity, kind of what are some cases where it's like, okay, using a VPN here makes a lot of sense.(The quick and simple guide to go links https://www.trot.to/go-links) [00:06:18] Xe: There is a service internal to tailscale called go, which is a, clone of Google's so-called go links where it's basically a URL shortener that lives at http://go. And, you know, you have go/something to get to some internal admin service or another thing to get to like, you know, the company directory and notion or something, and this kind of thing you could do with a normal setup, you know, you could set it up and have to do OAuth challenges everywhere and, you know, have to put and make sure that everyone has the right DNS configuration so that, it shows up in the right place.And then you have to deal with HTTPS um, because OAuth requires HTTPS for understandable and kind of important reasons. And it's just a mess. Like there's so many layers of stuff like the, the barrier to get, you know, like just a darn URL, shortener up turns from 20 minutes into three days of effort trying to, you know, understand how these various arcane things work together.You need to have state for your OAuth implementation. You need to worry about what the hell a a JWT is (sigh) . It's it it's just bad. And I really think that something like tailscale with everybody has an IP address. In order to get into the network, you have to sign in with your, auth provider, your, a provider tells tailscale who you are.So transitively every IP address is tied to an owner, which means that you can enforce access permission based on the IP address and the metadata about it that you grab from the tailscale. daemon, it's just so much simpler. Like you don't have to think about, oh, how do I set up OAuth this time? What the hell is an oauth proxy?Um, what is a Kubernetes? That sort of thing you just think about like doing the thing and you just do it. And then everything else gets taken care of it. It's like kind of the ultimate network infrastructure, because it's both omnipresent and something you don't have to think about. And I think that's really the power of tailscale.[00:08:39] Jeremy: typically when you would spin up a, a service that you want your developers or your system admins, to be able to log into, you would have to have some way of authenticating and authorizing that user. And so you were talking about bringing in OAuth and having your, your service understand that.But I, I guess what you're saying is that when you have something like tailscale, that's kind of front loaded, I guess you, you authenticate with tail scale, you get onto the network, you get your IP. And then from that point on you can access all these different services that know like, Hey, because you're on the network, we know you're authenticated and those services can just maybe map that IP that's not gonna change to like users in some kind of table. 
Um, and not have to worry about figuring out how do I authenticate this user.[00:09:34] Xe: I would personally more suggest that you use the, uh, whois, uh, look up route in the tailscale daemon's local API, but basically, yeah, you don't really have to worry too much about like the authentication layer because
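
A minimal sketch of the kind of internal service this part of the conversation describes: a go/ style link redirector that does no OAuth at all, and instead asks the local tailscaled who owns the caller's IP and uses that as the identity. It assumes Tailscale's Go client library (tailscale.com/client/tailscale); the package path, the WhoIs method, and the response fields are recalled from memory and may differ slightly between versions, and the short-link table is obviously made up.

package main

import (
	"fmt"
	"log"
	"net/http"

	"tailscale.com/client/tailscale" // local tailscaled client (assumed import path)
)

// A tiny go/ style redirector: go/wiki, go/ci, and so on.
var links = map[string]string{
	"wiki": "https://wiki.example.internal/",
	"ci":   "https://ci.example.internal/",
}

func main() {
	var lc tailscale.LocalClient // talks to the local tailscaled over its LocalAPI

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Ask tailscaled which tailnet user owns the caller's IP.
		// No OAuth flow, no JWTs: identity arrives with the connection.
		who, err := lc.WhoIs(r.Context(), r.RemoteAddr)
		if err != nil {
			http.Error(w, "not reachable over the tailnet?", http.StatusForbidden)
			return
		}
		log.Printf("%s requested %s", who.UserProfile.LoginName, r.URL.Path)

		if dst, ok := links[r.URL.Path[1:]]; ok {
			http.Redirect(w, r, dst, http.StatusFound)
			return
		}
		fmt.Fprintf(w, "hi %s, no such link: %s\n", who.UserProfile.LoginName, r.URL.Path)
	})

	// In practice this would sit behind port 80 on a machine named "go"
	// via MagicDNS; :8080 keeps the sketch runnable without privileges.
	log.Fatal(http.ListenAndServe(":8080", nil))
}
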
Jonathan Shariat is the coauthor of the book Tragic Design and co-host of the Design Review Podcast. He's currently a Sr. Interaction Designer & Accessibility Program Lead at Google. This episode originally aired on Software Engineering Radio. Topics covered: How poor design kills in medical environmentsCausing harm with features meant to bring joyConsiderations during the product development cycleIndustry specific checklists and testing requirementsCreating guiding principles for a teamWhy medical software often has poor UXDesigning for crisis situationsWhy dark patterns can be bad in the long term Related Links @designuxuiTragic DesignHow Bad UX Killed JennyDesign Review podcastDeceptive Design Transcript You can help edit this transcript on GitHub. [00:00:00] Jeremy: Today I'm talking to Jonathan Shariat, he's the co-author of Tragic design. The host of the design review podcast. And he's currently a senior interaction designer and accessibility program lead at Google. Jonathan, welcome to software engineering radio. [00:00:15] Jonathan: Hi, Jeremy, thank you So much for having me on. [00:00:18] Jeremy: the title of your book is tragic design. And I think that people can take a lot of different meanings from that. So I wonder if you could start by explaining what tragic design means to you. [00:00:33] Jonathan: Hmm. For me, it really started with this story that we have in the beginning of the book. It's also online. Uh, I originally wrote it as a medium article and th that's really what opened my eyes to, Hey, you know, design has, is, is this kind of invisible world all around us that we actually depend on very critically in some cases. And So this story was about a girl, you know, a nameless girl, but we named her Jenny for the story. And in short, she came for treatment of cancer at the hospital, uh, was given the medication and the nurses that were taking care of her were so distracted with the software they were using to chart, make orders, things like that, that they miss the fact that she needed hydration and that she wasn't getting it. And then because of that, she passed away. And I still remember that feeling of just kind of outrage. And, you know, when we hear a lot of news stories, A lot of them are outraging. they, they touch us, but some of them, some of those feelings stay and they stick with you. And for me, that stuck with me, I just couldn't let it go because I think a lot of your listeners will relate to this. Like we get into technology because we really care about the potential of technology. What could it do? What are all the awesome things that could do, but we come at a problem and we think of all the ways it could be solved with technology and here it was doing the exact opposite. It was causing problems. It was causing harm and the design of that, or, you know, the way that was built or whatever it was failing Jenny, it was failing the nurses too, right? Like a lot of times we blame that end user and, and it caused it. So to me, that story was so tragic. Something that deeply saddened me and was regrettable and cut short someone's uh, you know, life and that's the definition of tragic, and there's a lot of other examples with varying degrees of tragic, but, um, you know, as we look at the impact technology has, and then the impact we have in creating those technologies that have such large impacts, we have a responsibility to, to really look into that and make sure we're doing as best of job as we can and avoid those as much as possible. 
Because the biggest thing I learned in researching all these stories was, Hey, these aren't bad people. These aren't, you know, people who are clueless and making these, you know, terrible mistakes. They're me, they're you, they're they're people. Um, just like you and I, that could make the same mistakes. [00:03:14] Jeremy: I think it's pretty clear to our audience where there was a loss of life, someone, someone died and that's, that's clearly tragic. Right? So I think a lot of things in the healthcare field, if there's a real negative outcome, whether it's death or severe harm, we can clearly see that as tragic. and I, I know in your book you talk about a lot of other types of, I guess negative things that software can cause. So I wonder if you could, explain a little bit about now past the death and the severe injury. What's tragic to you. [00:03:58] Jonathan: Yeah. still in that line of like of injury and death, And, you know, the side that most of us will actually, um, impact, our work day-to-day is also physical harm. Like, creating this software in a car. I think that's a fairly common one, but also, ergonomics, right? Like when we bring it back to something like less impactful, but still like multiplied over the impact of, multiplied over the impact of a product rather, it can be quite, quite big, right? Like if we're designing software in a way that's very repetitive or, you know, everyone's, everyone's got that, that like scroll, thumb, scroll, you know, issue. Right. if, uh, our phones aren't designed well, so there's a lot of ways that it can still physically impact you ergonomically. And that can cause you a lot of problem arthritis and pain, but yeah, there's, there's other, there's other, other ways that are still really impactful. So the other one is by saddening or angry. You know, that emotional harm is very real. And oftentimes sometimes it gets overlooked a little bit because it's, um, you know, physical harm is what is so real to us, but sometimes emotional harm isn't. But, you know, we talk about in the book, the example of Facebook, putting together this great feature, which takes your most liked photo, and, you know, celebrates your whole year by you saying, Hey, look at as a hero, you're in review this, the top photo from the year, they add some great, you know, well done illustrations behind it, of, of balloons and confetti and, people dancing. But some people had a bad year. Some people's most liked engaged photo is because something bad happened and they totally missed. And because of that, people had a really bad time with this where, you know, they lost their child that year. They lost their loved one that year, their house burnt down. Um, something really bad happened to them. And here was Facebook putting that photo of their, of their dead child up with, you know, balloons and confetti and people dancing around it. And that was really hard for people. They didn't want to be reminded of that. And especially in that way, and these emotional harms also come into the, in the play of, on anger. You know, we talk about, well, one, you know, there's, there's a lot of software out there that, that, um, tries to bring up news stories that anger us and which equals engagement. Um, but also ones that, um, use dark patterns to trick us into purchasing and buying and forgetting about that free trial. So they charge us for a yearly subscription and won't refund us. Uh, if you've ever tried to cancel a subscription, you start to see some real their their real colors. 
Um, so emotional harm and, uh, anger is a, is a big one. We also talk about injustice in the book where there are products that are supposed to be providing justice. Um, and you know, in very real ways like voting or, you know, getting people the help that they need from the government, or, uh, for people to see their loved ones in jail. Um, or, you know, you're getting a ticket unfairly because you couldn't read the sign was you're trying to read the sign and you, and you couldn't understand it. so yeah, we look at a lot of different ways that design and our saw the software that we create can have very real impact on people's lives and in a negative way, if we're not careful. [00:07:25] Jeremy: the impression I get, when you talk about tragic design, it's really about anything that could harm a person, whether physically, emotionally, you know, make them angry, make them sad. And I think the, the most liked photo example is a great one, because like you said, I think the people may be building something that, that harms and they may have no idea that they're doing it. [00:07:53] Jonathan: Exactly like that. I love that story because not, not to just jump on the bandwagon of saying bad things about like Facebook or something. No, I love that story because I can see myself designing the exact same thing, like being a part of that product, you know, building it, you know, looking at the, uh, the, the specifications, the, um, the, the PM, you know, put it that put together and the decks that we had, you know, like I could totally see that happening. And just never, I think, never having the thought, because our we're so focused on like delighting our users and, you know, we have these metrics and these things in mind. So that's why, like, in the book, we really talk about a few different processes that need to be part of. Product development cycle to stop, pause, and think about like, well, what are the, what are the negative aspects here? Like what are the things that could go wrong? What are the, what are the other life experiences that are negative? Um, that could be a part of this and you don't need to be a genius to think of every single thing out there. You know, like in this example, I think just talking about, you know, like, oh, well, some people might've had, you know, if they would have taken probably like, you know, one hour out of their entire project, or maybe even 10 minutes, they might've come up with like, oh, there could be bad thing. Right. But, um, so if you don't have that, that, that moment to pause that moment to just say, okay, we have time to brainstorm together about like how this could go wrong or how, you know, the negative of life could be impacted by this, um, feature that that's all that it takes. It doesn't necessarily mean that you need to do. You know, giant study around the impact, potential impact of this product and all the, all
This episode originally aired on Software Engineering Radio.Randy Shoup is the VP of Engineering and Chief Architect at eBay. He was previously the VP of Engineering at WeWork and Stitch Fix, a Director of Engineering at Google Cloud where he worked on App Engine, and a Chief Engineer and Distinguished Architect at eBay in 2004. Topics covered:eBay’s origins as a single C++ classThe five-year migration to Java servicesSharing a database between the old and new systemsBuilding a distributed tracing systemWorking with bare metalWhy most companies should stick to cloudWhy individual services should own their own data storageHow scale has caused solutions to changeRejoining a former companyThe Accelerate BookImproving delivery time. Related Links:@randyshoupOpenTelemetryLightStepHoneycombAccelerate BookThe MemoValue Stream MappingThe Epic Story of Dropbox’s Exodus from the Amazon Cloud EmpireTranscript:[00:00:00] Jeremy: Today, I'm talking to Randy Shoup, he's the VP of engineering and chief architect at eBay.[00:00:05] Jeremy: He was previously the VP of engineering at WeWork and stitch fix, and he was also a chief engineer and distinguished architect at eBay back in 2004. Randy, welcome back to software engineering radio. This will be your fifth appearance on the show. I'm pretty sure that's a record.[00:00:22] Randy: Thanks, Jeremy, I'm really excited to come back. I always enjoy listening to, and then also contributing to software engineering radio.Back at, Qcon 2007, you spoke with Markus Volter he's he was the founder of SE radio. And you were talking about developing eBay's new search engine at the time.[00:00:42] Jeremy: And kind of looking back, I wonder if you could talk a little bit about how eBay was structured back then, maybe organizationally, and then we can talk a little bit about the, the tech stack and that sort of thing.[00:00:53] Randy: Oh, sure. Okay. Yeah. Um, so eBay started in 1995. I just want to like, you know, orient everybody. Same, same as the web. Same as Amazon, same as a bunch of stuff. So E-bay was actually almost 10 years old when I joined. That seemingly very old first time. Um, so yeah. What was ebay's tech stack like then? So E-bay current has gone through five generations of its infrastructure.It was transitioning between the second and the third when I joined in 2004. Um, so the. Iteration was Pierre Omidyar, the founder three-day weekend three-day labor day weekend in 1995, playing around with this new cool thing called the web. He wasn't intending to build a business. He just was playing around with auctions and wanted to put up a webpage.So he had a Perl backend and every item was a file and it lived on this little 486 tower or whatever you had at the time. Um, so that wasn't scalable and wasn't meant to be. The second generation of eBay's architecture was what we called V2 very, you know, creatively, uh, that was a C++ monolith. Um, an ISAPI DLL with essentially well at its worst, which grew to 3.4 million lines of code in that single DLL and basically in a single class, not just in a single, like repo or a single file, but in a single class.So that was very unpleasant to work in. As you can imagine, um, eBay had about a thousand engineers at the time and they were, you know, as you can imagine, like really stepping on each other's toes and not being able to make much forward progress. So starting in, I want to call it 2002. 
So two years before I joined, um, they were migrating to the creatively named V3 and V3 architecture was Java, and.you know, not microservices, but like we didn't even have that term, but it wasn't even that it was mini applications. So I'm actually going to take a step back. V2 was a monolith. So like all of eBay's code in that single DLL and like that was buying and selling and search and everything. And then we had two monster databases, a primary and a backup big Oracle machines on some hardware that was bigger, you know, bigger than refrigerators and that ran eBay for a bunch of years, before we changed the upper part of the stack, we, um, chopped up the, that single monolithic database into a bunch of, um, domain specific databases or entity specific databases, right?So a set of databases around users, you know, sharded by the user ID could talk about all that. If you want, you know, items again, sharded by item ID transactions, sharded by transaction ID... I think when I joined, it was the several hundred instances of, uh, Oracle databases, um, you know, spread around, but still that monolithic front end.And then in 2002, I wanna say we started migrating into that V3 that I was saying, okay. So that's, uh, that was a rewrite in Java, again, many applications. So you take the front end and instead of having it be in one big unit, it was this, uh, ER file, EAR, file, if run and people remember back to, you know, those stays in Java, um, you know, 220 different of those.So like here is the, you know, one of them for the search pages, you know, so the, you know, one application be the search application and it would, you know, do all the search related stuff, the handful of pages around search, uh, ditto for, you know, the buying area, ditto for the, you know, checkout area, ditto for the selling area...220 of those. Um, and that was again, domain, um, vertically sliced domains. And then the relationship between those V3, uh, applications and the databases was a many to many things. So like many applicants, many of those applications interact with items. So they would interact with those item databases. Many of them would interact with users.And so they would interact with a user databases, et cetera, uh, happy to go into as much gory detail as you want about all that. But like, that's what, uh, but we were in the transition period. You know, when I, uh, between the V2 monolith to the V3 mini applications in, uh, 2004, I'm just going to pause there and like, let me know where you want to take it.[00:05:01] Jeremy: Yeah. So you were saying that it was, um, it started as Perl, then it became a C++, and that's kind of interesting that you said it was all in one class, right? So it's wow. That's gotta be a gigantic [00:05:16] Randy: I mean, completely brutal. Yeah. 3.4 million lines of code. Yeah. We were hitting compiler limits on the number of methods per class.[00:05:22] Jeremy: Oh my gosh.[00:05:23] Randy: I'm, uh, uh, scared that I have that. I happen to know that at least at the time, uh, Microsoft allowed you 16 K uh, methods per class, and we were hitting that limit.So, uh, not great.[00:05:36] Jeremy: So it's just kind of interesting to think about how do you walk through that code, right? You have, I guess you just have this giant file.[00:05:45] Randy: Yeah. I mean, there were, you know, different methods. Um, but yeah, it was a big man. I mean, it was a monolith, it was, uh, you know, it was a spaghetti mess. 
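
Randy doesn't spell out eBay's routing logic here, so the following is only a generic sketch of what "sharded by user ID" usually means in practice: hash the entity ID, take it modulo the number of shards, and use the result to pick which physical database to query. The host names are placeholders.

package main

import (
	"fmt"
	"hash/fnv"
)

// One logical "users" database split across several physical instances.
var userShards = []string{
	"users-db-00.internal:5432",
	"users-db-01.internal:5432",
	"users-db-02.internal:5432",
	"users-db-03.internal:5432",
}

// shardFor maps a user ID onto one of the physical shards. Every
// application that touches user data has to agree on this function.
func shardFor(userID string) string {
	h := fnv.New32a()
	h.Write([]byte(userID))
	return userShards[int(h.Sum32())%len(userShards)]
}

func main() {
	fmt.Println(shardFor("user-8675309")) // always routes to the same shard
}
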
Um, and you know, as you can imagine, Amazon went through a really similar thing by the way. So this wasn't soup. I mean, it was bad, but like we weren't the only people that were making that, making that a mistake.Um, and just like Amazon, where they were, uh, they did like one update a quarter (laughs) , you know, at that period, like 2000, uh, we were doing something really similar, like very, very slow. Um, you know, updates and, uh, when we moved to V3, you know, the idea was to get to do changes much faster. And we were very proud of ourselves starting in 2004 that we, uh, upgraded the whole site every two weeks.And we didn't have to do the whole site, but like each of those individual applications that I was mentioning, right. Those 220 applications, each of those would roll out on this biweekly cadence. Um, and they had interdependencies. And so we rolled them out in this dependency order in any way, lots of, lots of complexity associated with that.Um, yeah, there you go.[00:06:51] Jeremy: the V3 that, that was written in Java, I'm assuming this was a, as a complete rewrite. You, you didn't use the C++ code at all.[00:07:00] Randy: Yeah. And, uh, it was, um, we migrated, uh, page by page. So, uh, you know, in the transition period, which lasted probably five years, um, there were pages, you know, in the beginning, all pages were served by V2. In the end, all pages are served by V3 and, you know, over time you iterate and you like rewrite in parallel, you know, rewrite and maintain in parallel the V3 version of XYZ page and the V2 version of XYZ page.Um, and then when you're ready, you start to test out at low percentages of traffic, you know, what would, what does V3 look like? Is it correct? And when it isn't do you go and fix it, but then ultimately you migrate the traffic over, um, to fully take, get fully be in the V3 world, and then you, you know, remove or comment out or whatever.The, the code that supported that in the V2 monolith.[00:07:54] Jeremy: And then you had mentioned using Oracle databases. Did you have a set for V2 and a separate V3 and you were kind of trying to keep them in sync?[00:08:02] Randy: Oh, great question. Thank you for asking that question. No, uh, no. We had the databases. Um, so again, as I mentioned, we had pre-demonolith that's my that's a technical term, uh, pre broken up the databases starting in, let's call it 2000. Uh, actually I'm almost certain that's 2000. Cause we had a major site outage in 1999, which everybody still remembers who was there at the time.Uh wasn't me or I wasn't there at the time. Uh, but you know, you can look it up. Uh, anyway, so yeah, starting in 2000, we broke up that monolithic database into what I was telling you before those entity aligned databases. Again, one set for items, one set for users, one set for transactions, you know, dot dot, dot, um, and that division of those databases was shared.You know, those databases were shared between. The three using those things and then V sorry, V2 using those things and V3 using those things. Um, and then, you know, so
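
The "test out at low percentages of traffic" step Randy mentions is commonly implemented with a deterministic bucket check, roughly like the sketch below: hash something stable about the request (here a user ID) and send a configurable percentage of buckets to the rewritten V3 page while everything else keeps hitting V2. This is a generic illustration, not eBay's actual mechanism.

package main

import (
	"fmt"
	"hash/fnv"
)

// rolloutPercent is how much traffic the rewritten (V3) page should get.
var rolloutPercent = uint32(5) // start small, ramp up as confidence grows

// useV3 deterministically buckets a user so they see a consistent version
// across requests while the rollout percentage stays the same.
func useV3(userID string) bool {
	h := fnv.New32a()
	h.Write([]byte(userID))
	return h.Sum32()%100 < rolloutPercent
}

func main() {
	for _, id := range []string{"u1", "u2", "u3", "u4"} {
		if useV3(id) {
			fmt.Println(id, "-> V3 search page")
		} else {
			fmt.Println(id, "-> V2 search page")
		}
	}
}
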
Ant Wilson on Supabase

2022-05-11 · 59:08

This episode originally aired on Software Engineering Radio.A few topics coveredBuilding on top of open sourceForking their GoTrue dependencyRelying on Postgres features like row level securityAdding realtime support based on Postgres's write ahead logGenerating an API layer based on the database schema with PostgRESTCreating separate EC2 instances for each customer's databaseHow Postgres could scale in the futureMonitoring postgresCommon support ticketsPermissive open source licensesRelated Links@antwilsonSupabaseSupabase GitHubFirebaseAirtablePostgRESTGoTrueElixirPrometheusVictoriaMetricsLogflareBigQueryNetlifyY CombinatorPostgresPostgreSQLWrite-Ahead LoggingRow Security Policiespg_stat_statementspgAdminPostGISAmazon AuroraTranscript You can help edit this transcript on GitHub. [00:00:00] Jeremy: Today I'm talking to Ant Wilson, he's the co-founder and CTO of Supabase. Ant welcome to software engineering radio.[00:00:07] Ant: Thanks so much. Great to be here. [00:00:09] Jeremy: When I hear about Supabase, I always hear about it in relation to two other products. The first is Postgres, which is a open source relational database. And second is Firebase, which is a backend as a service product from Google cloud that provides a no SQL data store.It provides authentication and authorization. It has a functions as a service component. It's really meant to be a replacement for you needing to have your own server, create your own backend. You can have that all be done from Firebase. I think a good place for us to start would be walking us through what supabase is and how it relates to those two products.[00:00:55] Ant: Yeah. So, so we brand ourselves as the open source Firebase alternativethat came primarily from the fact that we ourselves do use the, as the alternative to Firebase. So, so my co-founder Paul in his previous startup was using fire store. And as they started to scale, they hit certain limitations, technical scaling limitations and he'd always been a huge Postgres fan.So we swapped it out for Postgres and then just started plugging in. The bits that we're missing, like the real-time streams. Um, He used the tool called PostgREST with a T for the, for the CRUD APIs. And sohe just built like the open source Firebase alternative on Postgres, And that's kind of where the tagline came from.But the main difference obviously is that it's relational database and not a no SQL database which means that it's not actually a drop-in replacement. But it does mean that it kind of opens the door to a lot more functionality actually. Um, Which, which is hopefully an advantage for us. [00:02:03] Jeremy: it's a, a hosted form of Postgres. So you mentioned that Firebase is, is different. It's uh NoSQL. People are putting in their, their JSON objects and things like that. So when people are working with Supabase is the experience of, is it just, I'm connecting to a Postgres database I'm writing SQL.And in that regard, it's kind of not really similar to Firebase at all. Is that, is that kind of right?[00:02:31] Ant: Yeah, I mean, the other thing, the other important thing to notice that you can communicate with Supabase directly from the client, which is what people love about fire base. You just like put the credentials on the client and you write some security rules, and then you just start sending your data. 
Obviously with supabase, you do need to create your schema because it's relational.But apart from that, the experience of client side development is very much the same or very similar the interface, obviously the API is a little bit different. But, but it's similar in that regard. But I, I think, like I said, we're moving, we are just a database company actually. And the tagline, just explained really, well, kind of the concept of, of what it is like a backend as a service. It has the real-time streams. It has the auth layer. It has the also generated APIs. So I don't know how long we'll stick with the tagline. I think we'll probably outgrow it at some point. Um, But it does do a good job of communicating roughly what the service is.[00:03:39] Jeremy: So when we talk about it being similar to Firebase, the part that's similar to fire base is that you could be a person building the front end part of the website, and you don't need to necessarily have a backend application because all of that could talk to supabase and supabase can handle the authentication, the real-time notifications all those sorts of things, similar to Firebase, where we're basically you only need to write the front end part, and then you have to know how to, to set up super base in this case.[00:04:14] Ant: Yeah, exactly. And some of the other, like we took w we love fire based, by the way. We're not building an alternative to try and destroy it. It's kind of like, we're just building the SQL alternative and we take a lot of inspiration from it. And the other thing we love is that you can administer your database from the browser.So you go into Firebase and you have the, you can see the object tree, and when you're in development, you can edit some of the documents in real time. And, and so we took that experience and effectively built like a spreadsheet view inside of our dashboard. And also obviously have a SQL editor in there as well.And trying to, create this, this like a similar developer experience, because that's where Firebase just excels is. The DX is incredible. And so we, we take a lot of inspiration from it in, in those respects.[00:05:08] Jeremy: and to to make it clear to our listeners as well. When you talk about this interface, that's kind of like a spreadsheet and things like that. I suppose it's similar to somebody opening up pgAdmin, I suppose, and going in and editing the rows. But, but maybe you've got like another layer on top that just makes it a little more user-friendly a little bit more like something you would get from Firebase, I guess.[00:05:33] Ant: Yeah.And, you know, we, we take a lot of inspiration from pgAdmin. PG admin is also open source. So I think we we've contributed a few things and, or trying to upstream a few things into PG admin. The other thing that we took a lot of inspiration from for the table editor, what we call it is airtable.And because airtable is effectively. a a relational database and that you can just come in and, you know, click to add your columns, click to add a new table. And so we just want to reproduce that experience again, backed up by a full Postgres dedicated database. [00:06:13] Jeremy: so when you're working with a Postgres database, normally you need some kind of layer in front of it, right? That the person can't open up their website and connect directly to Postgres from their browser. And you mentioned PostgREST before. I wonder if you could explain a little bit about what that is and how it works.[00:06:34] Ant: Yeah, definitely. 
so yeah, PostgREST has been around for a while. Um, It's basically an, a server that you connect to, to your Postgres database and it introspects your schemas and generates an API for you based on the table names, the column names. And then you can basically then communicate with your Postgres database via this restful API.So you can do pretty much, most of the filtering operations that you can do in SQL um, uh, equality filters. You can even do full text search over the API. So it just means that whenever you obviously add a new table or a new schema or a new column the API just updates instantly. So you, you don't have to worry about writing that, that middle layer which is, was always the drag right.When, what have you started a new project. It's like, okay, I've got my schema, I've got my client. Now I have to do all the connecting code in the middle of which is kind of, yeah, no, no developers should need to write that layer in 2022.[00:07:46] Jeremy: so this the layer you're referring to, when I think of a traditional. Web application. I think of having to write routes controllers and, and create this, this sort of structure where I know all the tables in my database, but the controllers I create may not map one to one with those tables. And so you mentioned a little bit about how PostgREST looks at the schema and starts to build an API automatically.And I wonder if you could explain a little bit about how it does those mappings or if you're writing those yourself. [00:08:21] Ant: Yeah, it basically does them automatically by default, it will, you know, map every table, every column. When you want to start restricting things. Well, there's two, there's two parts to this. There's one thing which I'm sure we'll get into, which is how is this secure since you are communicating direct from the client.But the other part is what you mentioned giving like a reduced view of a particular date, bit of data. And for that, we just use Postgres views. So you define a view which might be, you know it might have joins across a couple of different tables or it might just be a limited set of columns on one of your tables. And then you can choose to just expose that view. [00:09:05] Jeremy: so it sounds like when you would typically create a controller and create a route. Instead you create a view within your Postgres database and then PostgREST can take that view and create an end point for it, map it to that.[00:09:21] Ant: Yeah, exactly (laughs) . [00:09:24] Jeremy: And, and PostgREST is an open source project. Right. I wonder if you could talk a little bit about sort of what its its history was. How did you come to choose it? [00:09:37] Ant: Yeah.I think, I think Paul probably read about it on hacker news at some point. Anytime it appears on hacker news, it just gets voted to the front page because it's, it's So awesome. And we, we got connected to the maintainer, Steve Chavez. At some point I think he just took an interest in, or we took an interest in Postgres and we kind of got acquainted.And then we found out that, you know, Steve was open to work and th
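
To ground the PostgREST discussion, here is roughly what "the API updates itself from the schema" looks like from a client. It assumes a hypothetical appointments table (or a view exposing a subset of its columns) behind a Supabase-style endpoint; the project URL, key, and column names are placeholders, while the eq. filter and the select parameter are standard PostgREST query syntax.

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Placeholders: the project URL and anon key come from the dashboard.
	base := "https://YOUR-PROJECT.supabase.co/rest/v1"
	anonKey := "YOUR-ANON-KEY"

	// PostgREST turns the table (or view) into an endpoint automatically:
	// filter with column=eq.value, pick columns with select=...
	url := base + "/appointments?status=eq.open&select=id,starts_at,patient_name"

	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("apikey", anonKey)
	req.Header.Set("Authorization", "Bearer "+anonKey)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body)) // JSON rows, shaped by the schema or view
}

The "reduced view" idea from the conversation is just a Postgres view, something like CREATE VIEW open_appointments AS SELECT id, starts_at FROM appointments WHERE status = 'open'; exposing that view instead of the base table yields the same kind of automatic endpoint with fewer columns.
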
Testing with Jason Swett

2022-04-04 · 01:09:17

Jason Swett is the author of the Complete Guide to Rails Testing. We covered Jason's experience with testing while building relatively small Ruby on Rails applications. Our conversation applies to just about any language or framework so don't worry if you aren't familiar with Rails. A few topics covered:- Listen to advice but be aware of its context. Something good for a large project may not apply to a small one- Fast feedback loops help us work quicker and tests are great for this- If you don't involve things like the database in any of your tests your application may not work at all despite your tests passing- You may not need to worry about scaling at the start for smaller or internal applications - Try to break features into the smallest pieces possible so they can be checked in and reviewed quickly- Jason doesn't remember the difference between a stub and a mock because he rarely uses them Related Links:- Code with Jason- The Complete Guide to Rails Testing- Code With Jason Podcast Transcript: [00:00:00] Jeremy: today I'm talking to Jason Swett, he's the author of the complete guide to rails testing, a frequent trainer and conference speaker. And he's the host of the code with Jason podcast. So Jason, welcome to software sessions. [00:00:13] Jason: Thanks for having me. [00:00:15] Jeremy: from listening to your podcast, I get a sense that the size of the projects you work on they're, they're relatively modest. Like they're not like a super huge thing. There, there may be something that you can fit all within your head. And I was wondering if you could talk a little bit to that first, so that we kind of get where your perspective is and the types of projects you work on are. [00:00:40] Jason: Yeah. Good question. So that is true. Most of my jobs have been at small companies and I think that's probably typical of the typical developer because most businesses in the world are small businesses. You know, there's, there's a whole bunch of small businesses for every large business. And so most of the code bases I've worked on have been not particularly huge. And most of the teams I've worked on have been relatively small And sometimes so small that it's just me. I'm the only person working on the application. I, don't really know any different. So I can't really compare it to working on a larger application. I have worked at, I worked at AT&T so that was a big place, but I was at, AT&T just working on my own solo project so that wasn't a big code base either. So yeah, that's been what my experience has been like. [00:01:36] Jeremy: Yeah. And I, I think that's interesting that you mentioned most people work in that space as well, because that's basically where I fall as well. So when I listened to your podcast and I hear you talking about like, oh, I have a, I have a rails project where I just have a single server and you know, I have a database and rails, and maybe I have nginx in front, maybe redis it's sort of the scale that I'm familiar with versus when I hear podcasts or articles, you know, I'm reading where they're talking about, oh, we have 500 microservices or we have 200 instances of the application. That's, that's not a space that I've, I've worked in. So I, I found it helpful to, to hear, you know, from you on your show that like, Hey, you know, not everybody is working on these gigantic projects. [00:02:28] Jason: Yeah. Yeah. It's not terribly relatable when you hear about those huge projects. 
And obviously, sometimes, maybe people earlier in their career can get the wrong idea about what's applicable to their situation. I feel like one of the most dangerous kinds of advice is advice that's good advice, but it's good advice for somebody else. And then I've, I've. Been victim of that, where I get some advice and maybe it's genuinely good advice, but it's not good advice for me where I am doing what I'm doing. And so, I apply the advice, but it's not the right thing. And so it doesn't work out for me. So I'm always careful to like asterisk a lot of the things I say where it's like, Hey, this is, this is good advice if you're in this particular situation, but maybe not for everybody. And really the truth is I, I try not to give advice these days because like advice is dangerous stuff for that very reason. [00:03:28] Jeremy: so, so when you mentioned you try not to give advice and you have this book, the complete guide to rails testing, would you not describe what's in the book as advice? I'm kind of curious what the distinction is there. [00:03:42] Jason: Yeah, Jeremy, right after I said that, I'm like, what am I talking about? I give all kinds of advice. So forget, I said that I totally give advice. But maybe not in certain things like like business advice or anything like that. I do give a lot of advice around testing and various programming things. So, yeah, ignore that part of what I said. [00:04:03] Jeremy: something that I found a little bit unique about rails testing was that a lot of the tests are centered around I guess you could call it like a full integration test, right? Because I noticed when working with rails, if I write a test, a lot of times it's talking to the database, it's talking to if, if I. Have an API or I have a website it's actually talking to the API. So it's actually going through all the layers and spinning up a database and all that. And I wonder if you, you knew how that work, like each time you run a test, is it creating a new database? So that each test is isolated or how does all that stuff actually work? [00:04:51] Jason: Yeah, good question. First. I want to mention something about terminology. So I think one of the most important things for somebody who's new to testing to learn is that in our industry, we don't have a consensus around terminology. So what you call an integration test might be different from what I call an integration test. The thing you just described as an integration test, I might call an acceptance test. Although I happen to also call it an integration test cause I use that terminology too, but I just wanted to give that little asterisk for the listener, because if they're like, wait, I thought an integration test was this. And not that anyway, you asked how does that work? So. It is true that with those types of rails tests, and just to add more terminology into the mix, they call those system tests or system specs, depending on what framework you're using. But those are the tests that actually instantiate a browser and simulating user input, exercise, the UI of the application. And those are the kinds of tests that like show you that everything works together. And mechanically how that works. One layer of it is that each test runs in a database transaction. So when you, you know, in order to run a certain test, maybe you need certain records like a user. And then I don't know if it's a scheduling test, you might need to create an appointment and whatever. 
All those records that you create specifically for that test that's happening inside of a database transaction. And then at the end of the test, the transaction is aborted. So that none of the data you create during the test actually gets persisted to the database. then regarding the whole database, it's not actually like creating a new database instance at the start of each test and then blowing it away. It's still the same database instance is just the data inside of each test is not being persisted at. [00:07:05] Jeremy: Okay. So when you run. What you would call, I guess you called it an acceptance test, right? Where it's going, it's opening up your website, it's clicking through the website, creating records, things like that. That's happening in a database instance that's created for, I guess, for all your tests that all your tests get to reuse and rails is automatically wrapping your test in a transaction. So even if you're doing five or 10 database queries at the end of all, that they all get rolled back because they're all within the same transaction. [00:07:46] Jason: Exactly. And the reason why we want to do that. Is because of a testing principle that you want your tests to be runnable in any order. And the key thing is you want your tests to be deterministic. So deterministic means that the starting state determines the in-state and it's the same every time, no matter what. So if you have tests a, B and C, it shouldn't be the case that you can run them in the order, ABC, and they all pass. But if you do it CBA, then test a fails because it should only fail. If something's actually wrong, it shouldn't fail for some other reason, like the order in which you run the tests. And so to ensure that property of deterministic newness we need to make it so that each test doesn't leak into the other tests. Cause imagine if that. Database transaction. thing didn't happen. And it's, it's only incidental that that's achieved via database transactions. It could conceivably be achieved some other way. That's just how this happens to work in this particular case. But imagine if no measure was taken to clean up afterward and I, I ran a test and it generated an appointment. And then the test that runs after that does some tests that involves like doing a count of appointments or something like that. And maybe like, coincidentally, my second test passes because I've always run the tests in a certain order. and so unbeknownst to me, test B only passes because of what I did in test a that's bad because now the thing that's happening is different from what I think is happening. And then if it flipped and when we ran it, test B and then test a. It wouldn't work anymore. So that's why we make each test isolated. So it can be deterministic. [00:09:51] Jeremy: and I wonder if you've worked with any other frameworks or any other languages, and if you found that the approaches and those frameworks or languages is similar to rails, like where it creates these, the transaction for you, does the rol
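
Rails handles the transaction wrapping for you, but the underlying pattern Jason describes is easy to see in any stack. The sketch below is not Rails; it is a hedged, minimal Go version of the same idea: open a transaction at the start of a test, do all the inserts and queries through it, and roll it back at the end so nothing leaks into the next test. The connection string and schema are placeholders for a local test database.

package yourapp

import (
	"database/sql"
	"testing"

	_ "github.com/lib/pq" // Postgres driver; any database/sql driver works
)

// withRollback runs fn inside a transaction that is always rolled back,
// so whatever records the test creates never persist. This is the same
// isolation trick Rails' transactional tests rely on.
func withRollback(t *testing.T, db *sql.DB, fn func(tx *sql.Tx)) {
	t.Helper()
	tx, err := db.Begin()
	if err != nil {
		t.Fatal(err)
	}
	defer tx.Rollback() // always undo, even if the test fails partway through
	fn(tx)
}

func TestAppointmentCount(t *testing.T) {
	db, err := sql.Open("postgres", "postgres://localhost/app_test?sslmode=disable")
	if err != nil {
		t.Fatal(err)
	}
	defer db.Close()

	withRollback(t, db, func(tx *sql.Tx) {
		if _, err := tx.Exec(`INSERT INTO appointments (title) VALUES ('checkup')`); err != nil {
			t.Fatal(err)
		}
		var n int
		if err := tx.QueryRow(`SELECT count(*) FROM appointments WHERE title = 'checkup'`).Scan(&n); err != nil {
			t.Fatal(err)
		}
		if n != 1 {
			t.Fatalf("want 1 matching appointment inside the transaction, got %d", n)
		}
	})
	// After the rollback the appointments table is untouched, so test order
	// doesn't matter: each test starts from the same state.
}
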
Swizec is the author of the Serverless Handbook and a software engineer at Tia.SwizecSwizec's personal siteServerless HandbookAWSLambdaAPI GatewayOperating Lambda (The cold start problem)Provisioned ConcurrencyDynamoDBRelational Database ServiceAuroraSimple Queue ServiceCloudFormationCloudWatchOther serverless function hosting providersGatsby Cloud FunctionsVercel Serverless FunctionsNetlify FunctionsCloud Functions for FirebaseRelated topicsServerless FrameworkJamstackLighthouseWhat is a Static Site Generator?What is a CDN?Keeping Server-Side Rendering Cool With React HydrationTypeScriptTranscriptYou can help edit this transcript on GitHub.[00:00:00] Jeremy: Today, I'm talking to Swiz Teller. He's a senior software engineer at Tia. The author of the serverless handbook and he's also got a bunch of other courses and I don't know is it thousands of blog posts now you have a lot of them.[00:00:13] Swizec: It is actually thousands of, uh, it's like 1500. So I don't know if that's exactly thousands, but it's over a thousand.I'm cheating a little bit. Cause I started in high school back when blogs were still considered social media and then I just kind of kept going on the same domain.Do you have some kind of process where you're, you're always thinking of what to write next? Or are you writing things down while you're working at your job? Things like that. I'm just curious how you come up with that. [00:00:41] Swizec: So I'm one of those people who likes to use writing as a way to process things and to learn. So one of the best ways I found to learn something new is to kind of learn it and then figure out how to explain it to other people and through explaining it, you really, you really spot, oh shit. I don't actually understand that part at all, because if I understood it, I would be able to explain it.And it's also really good as a reference for later. So some, one of my favorite things to do is to spot a problem at work and be like, oh, Hey, this is similar to that side project. I did once for a weekend experiment I did, and I wrote about it so we can kind of crib off of my method and now use it. So we don't have to figure things out from scratch.And part of it is like you said, that just always thinking about what I can write next. I like to keep a schedule. So I keep myself to posting two articles per week. It used to be every day, but I got too busy for that. when you have that schedule and, you know, okay on Tuesday morning, I'm going to sit down and I have an hour or two hours to write, whatever is on top of mind, you kind of start spotting more and more of these opportunities where it's like a coworker asked me something and I explained it in a slack thread and it, we had an hour. Maybe not an hour, but half an hour of back and forth. And you actually just wrote like three or 400 words to explain something. If you take those 400 words and just polish them up a little bit, or rephrase them a different way so that they're easier to understand for somebody who is not your coworker, Hey, that's a blog post and you can post it on your blog and it might help others.[00:02:29] Jeremy: It sounds like taking the conversations most people have in their day to day. And writing that down in a more formal way. [00:02:37] Swizec: Yeah. not even maybe in a more formal way, but more, more about in a way that a broader audience can appreciate. 
if it's, I'm super gnarly, detailed, deep in our infrastructure in our stack, I would have to explain so much of the stuff around it for anyone to even understand that it's useless, but you often get these nuggets where, oh, this is actually a really good insight that I can share with others and then others can learn from it. I can learn from it. [00:03:09] Jeremy: What's the most accessible way or the way that I can share this information with the most people who don't have all this context that I have from working in this place. [00:03:21] Swizec: Exactly. And then the power move, if you're a bit of an asshole is to, instead of answering your coworkers question is to think about the answer, write a blog post and then share the link with them.I think that's pushing it a little bit.[00:03:38] Jeremy: Yeah, It's like you're being helpful, but it also feels a little bit passive aggressive.[00:03:44] Swizec: Exactly. Although that's a really good way to write documentation. One thing I've noticed at work is if people keep asking me the same questions, I try to stop writing my replies in slack and instead put it on confluence or whatever internal wiki that we have, and then share that link. and that has always been super appreciated by everyone.[00:04:09] Jeremy: I think it's easy to, have that reply in slack and, and solve that problem right then. But when you're creating these Wiki pages or these documents, how're people generally finding these. Cause I know you can go through all this trouble to make this document. And then people just don't know to look or where to go. [00:04:30] Swizec: Yeah. Discoverability is a really big problem, especially what happens with a lot of internal documentation is that it's kind of this wasteland of good ideas that doesn't get updated and nobody maintains. So people stop even looking at it. And then if you've stopped looking at it before, stop updating it, people stop contributing and it kind of just falls apart.And the other problem that often happens is that you start writing this documentation in a vacuum. So there's no audience for it, so it's not help. So it's not helpful. That's why I like the slack first approach where you first answered the question is. And now, you know exactly what you're answering and exactly who the audiences.And then you can even just copy paste from slack, put it in a conf in JIRA board or wherever you put these things. spice it up a little, maybe effect some punctuation. And then next time when somebody asks you the same question, you can be like, oh, Hey, I remember where that is. Go find the link and share it with them and kind of also trains people to start looking at the wiki.I don't know, maybe it's just the way my brain works, but I'm really bad at remembering information, but I'm really good at remembering how to find it. Like my brain works like a huge reference network and it's very easy for me to remember, oh, I wrote that down and it's over there even if I don't remember the answer, I almost always remember where I wrote it down if I wrote it down, whereas in slack it just kind of gets lost.[00:06:07] Jeremy: Do you also take more informal notes? Like, do you have notes locally? You look through or something? That's not a straight up Wiki. [00:06:15] Swizec: I'm actually really bad at that. I, one of the things I do is that when I'm coding, I write down. so I have almost like an engineering log book where everything, I, almost everything I think about, uh, problems I'm working on. 
I'm always writing them down on by hand, on a piece of paper. And then I never look at those notes again.And it's almost like it helps me think it helps me organize my thoughts. And I find that I'm really bad at actually referencing my notes and reading them later because, and this again is probably a quirk of my brain, but I've always been like this. Once I write it down, I rarely have to look at it again.But if I don't write it down, I immediately forget what it is.What I do really like doing is writing down SOPs. So if I notice that I keep doing something repeatedly, I write a, uh, standard operating procedure. For my personal life and for work as well, I have a huge, oh, it's not that huge, but I have a repository of standard procedures where, okay, I need to do X.So you pull up the right recipe and you just follow the recipe. And if you spot a bug in the recipe, you fix the recipe. And then once you have that polished, it's really easy to turn that into an automated process that can do it for you, or even outsource it to somebody else who can work. So we did, you don't have to keep doing the same stuff and figuring out, figuring it out from scratch every time.[00:07:55] Jeremy: And these standard operating procedures, they sound a little bit like runbooks I guess. [00:08:01] Swizec: Yep. Run books or I think in DevOps, I think the big red book or the red binder where you take it out and you're like, we're having this emergency, this alert is firing. Here are the next steps of what we have to check.[00:08:15] Jeremy: So for those kinds of things, those are more for incidents and things like that. But in your case, it sounds like it's more, uh, I need to get started with the next JS project, or I need to set up a Postgres database things like that. [00:08:30] Swizec: Yeah. Or I need to reset a user to initial states for testing or create a new user. That's sort of thing.[00:08:39] Jeremy: These probably aren't in that handwritten log book.[00:08:44] Swizec: The wiki. That's also really good way to share them with new engineers who are coming on to the team.[00:08:50] Jeremy: Is it where you just basically dump them all on one page or is it where you, you organize them somehow so that people know that this is where, where they need to go. [00:09:00] Swizec: I like to keep a pretty flat structure because, I think the, the idea of categorization outlived its prime. We have really good search algorithms now and really good fuzzy searching. So it's almost easier if everything is just dumped and it's designed to be easy to search. a really interesting anecdote from, I think they were they were professors at some school and they realized that they try to organize everything into four files and folders.And they're trying to explain this to their younger students, people who are in their early twenties and the young students just couldn't understand. Why would you put anything in a folder? Like what is a folder? What is why? You just dump everything on your desktop and then command F and you find it. Why would you, why would you even w
Alexander Pugh is a software engineer at Albertsons. He has worked in Robotic Process Automation and the cognitive services industry for over five years.

This episode originally aired on Software Engineering Radio.

Related Links
Alexander Pugh's personal site
Enterprise RPA Solutions: Automation Anywhere, UiPath, Blue Prism
Enterprise "Low Code/No Code" API Solutions: Appian, MuleSoft, Power Automate
RPA and the OS: Office primary interop assemblies, Office Add-ins documentation, Task Scheduler for developers, The Component Object Model, The Document Object Model

Transcript
You can help edit this transcript on GitHub.

[00:00:00] Jeremy: Today, I'm talking to Alexander Pugh. He's a solutions architect with over five years of experience working on robotic process automation and cognitive services. Today, we're going to focus on robotic process automation. Alexander, welcome to Software Engineering Radio.

[00:00:17] Alex: Thank you, Jeremy. It's really good to be here.

[00:00:18] Jeremy: So what does robotic process automation actually mean?

[00:00:23] Alex: Right. It's a, it's a very broad, nebulous term. When we talk about robotic process automation as a concept, we're talking about automating things that humans do in the way that they do them. So that's the robotic, an automation that is, um, done in the way a human does a thing. Um, and then process is that thing, um, that we're automating. And then automation is just saying, we're turning this into an automation, we're orchestrating this and automating this. And the best way to think about that is to think of a factory or a car assembly line. So initially, when we went in and we automated a car factory automation line, what they did is essentially they replicated the process as a human did it. So one day you had a human that would pick up a door and then put it on the car and bolt it on with their arms. And so the initial automations that we had on those factory lines were a robot arm that would pick up that door from the same place and put it on the car and bolt it on there. Um, so the same can be said for robotic process automation. We're essentially looking at these processes that humans do, and we're replicating them with an automation that does it in the same way. Um, and where we're doing that is the operating system. So robotic process automation is essentially going in and automating the operating system to perform tasks the same way a human would do them in an operating system. So that's, that's RPA in a nutshell.

Jeremy: So when you say you're replicating something that a human would do, does it mean it has to go through some kind of GUI or some kind of user interface?

[00:02:23] Alex: That's exactly right, actually. When we're talking about RPA and we look at a process that we want to automate with RPA, we say, okay, let's watch the human do it. Let's record that. Let's document the human process. And then let's use the RPA tool to replicate that exactly in that way. So go double click on Chrome, launch that, click in the URL line and send key in www.cnn.com or what have you, or ServiceNow, hit enter, wait for it to load, and then click, you know, where you want to, you know, fill out your ticket for ServiceNow. Send key in. So that's exactly how an RPA solution at the most basic can be achieved. Now, any software engineer knows if you sit there and look over someone's shoulder and watch them use an operating system, uh, you'll say, well, there's a lot of ways we can do this more efficiently without going over here, clicking that. You know, we can use a lot of services that the operating system provides in a programmatic way to achieve the same ends, and RPA solutions can also do that. The real key is making sure that it is still achieving something that the human does, and that if the RPA solution goes away, a human can still achieve it. So if you're trying to replace or replicate a process with RPA, you don't want to change that process so much that a human can no longer achieve it as well. That's something where, if you get a very technical and very fluent software engineer, they lose sight of that because they say, oh, you know what? There's no reason why we need to go open a browser and go to, you know, the ServiceNow portal and type this in when I can just directly send information to their backend, which a human could not replicate. Right? So that's kind of where the line gets fuzzy. How efficiently can we make this RPA solution?

[00:04:32] Jeremy: I think a question that a lot of people are probably having is, a lot of applications have APIs now. But what you're saying is that for it to be, I suppose, true RPA, it needs to be something that a user can do on their own, and not something that the user can do by opening up dev tools or making a post to an endpoint.

[00:04:57] Alex: Yeah. And so this is probably really important right now to talk about why RPA, right? Why would you do this when you could put on a server a really good API ingestion point or trigger or a webhook that can do this stuff? So why would we ever pursue RPA? There are a lot of good reasons for it. RPA is very, very enticing to the business. RPA solutions and tools are marketed as a low code, no code solution for the business to utilize, to solve their processes that may not be solved by an enterprise solution, and the in-between processes, in a way. You have, uh, a big enterprise finance solution that everyone uses for the finance needs of your business, but there are some things that it doesn't provide for that you have a person that's doing a lot of, and the business says, okay, well, this thing this human is doing is really beneath their capability. We need to get a software solution for it, but our enterprise solution just can't account for it. So let's get an RPA capability in here. We can build it ourselves, and then there we go. So there are many reasons to do that. Financially, IT might not have, um, the capability or the funding to actually build and solve the solution. Or it's at a scale that is too small to open up, uh, an IT project to solve for. Um, so, you know, a team of five is just doing this and they're doing it for, you know, 20 hours a week, which is, uh, large, but in a big enterprise, that's not really, maybe, um, worth building an enterprise solution for. Or, and this is a big one, there are regulatory constraints and security constraints around being able to access this or communicate some data or information in a way that is non-human or programmatic. So that's really where, um, RPA is correctly and best applied, and that's where you'll see it most often. So what we're talking about there is in finance, in healthcare, or in big companies where they're dealing with a lot of user data or customer data in a way. So when we talk about finance and healthcare, there are a lot of regulatory constraints and security reasons why you would not enable a programmatic solution to operate on your systems. You know, it's just too hard. We're not going to expose our databases or our data to any other thing. It would take a huge enterprise project to build out that capability, secure that capability, and ensure it's going correctly. We just don't have the money, the time, or the strength, honestly, to afford it. So they say, well, we already have a user pattern. We already allow users to talk to this information and communicate this information. Let's get an RPA tool, which for all intents and purposes will be acting as a user, and then it can just automate that process without us exposing, to queries or any other thing, an enterprise solution or programmatic, um, solution. So that's really why RPA, and where and why you would apply it: there's just no capability at enterprise, for one reason or another, to solve for it.

[00:08:47] Jeremy: As software engineers, when we see this kind of problem, our first thought is, okay, let's build this custom application or workflow that's going to talk to all these APIs. And what it sounds like is, in a lot of cases there just isn't the time, there just isn't the money, to put in the effort to do that. And it also sounds like this is a way of being able to automate that, and maybe introducing less risk, because you're going through the same security, the same workflow that people are doing currently. So, you know, you're not going to get into things that they're not supposed to be able to get into, because all of that's already put in place.

[00:09:36] Alex: Correct. And it's an already accepted pattern, and it's kind of odd to apply that kind of very IT software engineer term to a human user, but a human user is a pattern in software engineering. We have patterns that do this and that, and, you know, databases and not, and then the user journey or the user permissions and security and all that is a pattern. And that is accepted by default when you're building these enterprise applications: okay, what's the user pattern? And so since that's already established and well-known, and all the, hopefully, you know, walls are built around that to enable it to correctly do what it needs to do, it's saying, okay, we've already established that. Let's just use that instead of, you know, building a programmatic solution where we have to go and find: do we already have an appropriate pattern to apply to it? Can we build it in a safe way? And then can we support it? You know, all of a sudden we, you know, we have the support teams that, you know, watch our Splunk dashboards and make sure nothing's going down with our big enterprise application. And then you're going to build another capability. Okay, where's that support going to come from? And now we've got to talk about change access boards, user acceptance testing and, uh, you know, UAT, dev, production environments and all that. So it becomes untenable, depending on your organization, to do that for things that might fal...
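To make the distinction Alex keeps drawing concrete, here is a small hypothetical Python sketch, not from the episode: the first function drives the screen the way a person would, using the pyautogui library to type and click, while the second posts straight to a backend endpoint with requests. Only the first is RPA in the sense discussed here, and the URL, coordinates, and field layout are all made up for illustration.

# Hypothetical comparison: the same task done "as a human would" (RPA-style
# UI automation) versus programmatically against an API. Not a real system.
import time

import pyautogui   # drives the mouse and keyboard like a user
import requests    # talks to a backend directly

def file_ticket_like_a_human(summary: str) -> None:
    # Focus the browser's address bar, navigate, wait, then type into the form.
    pyautogui.hotkey("ctrl", "l")                                  # made-up flow
    pyautogui.write("https://tickets.example.test/new", interval=0.02)
    pyautogui.press("enter")
    time.sleep(5)                                                  # wait for the page to load
    pyautogui.click(400, 300)                                      # made-up coordinates
    pyautogui.write(summary, interval=0.02)
    pyautogui.press("enter")

def file_ticket_programmatically(summary: str) -> None:
    # Same outcome via an API call, which a human could not replicate by hand.
    requests.post("https://tickets.example.test/api/tickets",
                  json={"summary": summary}, timeout=10)

The first version stays entirely inside the existing user pattern, same screens, same permissions, which is why the regulated teams Alex mentions reach for it even though the second is more efficient.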
Josef Strzibny is the author of Deployment from Scratch and a current Fedora contributor. He previously worked on the Developer Experience team at Red Hat.

This episode originally aired on Software Engineering Radio.

Links:
Deployment from Scratch
@strzibnyj
systemd
Introduction to Control Groups
SELinux
Fedora
Rocky Linux
Puma
AppSignal
Datadog
Rollbar
Skylight
Bootstrapping a multiplayer server with Elixir at X-Plane
StackExchange Performance
Chruby
Password Safe
Vault
Rails Custom Credentials

Transcript:
You can help edit this transcript on GitHub.

[00:00:00] Jeremy: Today, I'm talking to Josef Strzibny. He's the author of the book Deployment from Scratch, a Fedora contributor, and he previously worked on the developer experience team at Red Hat. Josef, welcome to Software Engineering Radio.

[00:00:13] Josef: Uh, thanks for having me. I'm really happy to be here.

Jeremy: There are a lot of commercial services for hosting applications these days. One that's been around for quite a while is Heroku, but there are also services like Render and Netlify. Why should a developer learn how to deploy from scratch, and why would a developer choose to self host an application?

[00:00:37] Josef: I think that as web engineers and backend engineers, we should know a little bit more about how we run our own applications that we write. But there is also a business case, right? For a lot of people, this could be, uh, saving money on hosting, especially with managed databases that can go high in price very quickly. And for people like me that, apart from a daily job, have also some side project, some little project they want to start and maybe turn into a successful startup, you know, but it's at the beginning, so they don't want to spend too much money on it, you know? And I can deploy and serve my little projects from $5 virtual private servers in the cloud. So I think that's another reason to look into it. And business wise, if you are, let's say, a bigger team and you have the money, of course you can afford all these services. But then what happened to me when I was leading a startup, we were at somewhere (?) and people are coming and asking us, we need to self host their application. We don't trust the cloud. And then if you want to prepare this environment for them to host your application, then you also need to know how to do it, right? I completely understand the point of not knowing it, because already backend development can be huge. You know, you can learn so many different databases, languages, whatever, and learning also operations and servers, it can be overwhelming. I want to say you don't have to do it all at once. Just, you know, learn a little bit, uh, and you can improve as you go. Uh, you will not learn everything in a day.

[00:02:28] Jeremy: So it sounds like the very first reason might be to just have a better understanding of how your applications are running. Because even if you are using a service, ultimately that is going to be running on a bare machine somewhere or on a virtual machine somewhere. So it could be helpful maybe for just troubleshooting or a better understanding of how your application works. And then there's what you were talking about with some companies wanting to self-host, and just the cost aspect.

[00:03:03] Josef: Yeah. For me, really, the primary reason would be to understand it, because, you know, when I was starting programming, oh, well, first there was PHP and I, I used some shared hosting thing, just some SFTP. Right. And they would host it for me. It was fine. Then I switched to Ruby on Rails, and at the time, uh, people were struggling with deploying it, and I was asking myself, so, okay, so you ran rails s like for a server, right? It starts in development, but can you just do that on the server for, for your production? You know, can you just rails server and is that it, or is there more to it? Or when people were talking about, uh, Linux hardening, I was like, okay, but you know, your Linux distribution has some good defaults, right? So why do you need some further hardening? What does it mean? What to change? So for me, I really wanted to know, uh, the reason I wrote this book is that I wanted to like double down on my understanding that I got it right.

[00:03:52] Jeremy: Yeah, I can definitely relate in the sense that I've also used Ruby and Ruby on Rails as well. And there's this huge gap between just learning how to run it in a development environment on your computer versus deploying it onto a server, and it's pretty overwhelming. So I think it's really great that you're putting together a book that really goes into a lot of these things that usually aren't talked about when people are just talking about learning a language.

[00:04:39] Josef: You can imagine that there are a lot of components you can have in these applications, right? You have one database, maybe you have more databases. Maybe you have a redis key-value store. Uh, then you might have load balancers and all that jazz. And I just want to say that there's one thing I also say in the book: try to keep it simple. If you can just deploy one server, if you don't need to fulfill some SLA, uh, uptime, just do the simplest thing first, because you will really understand it. And when there is an error you will know how to fix it, because when you make things complex for yourself, then it will be kind of lost very quickly. So I try to really make things as simple as possible to stay on top of them.

[00:05:25] Jeremy: I think one of the first decisions you have to make when you're going to self host an application is which distribution you're going to use. And there's things like Red Hat and Ubuntu and Debian and all these different distributions. And I'm wondering, for somebody who just wants to deploy their application, whether that's Rails, Django, or anything else, what are the key differences between them, and how should they choose a distribution?

[00:05:55] Josef: If you already know one particular distribution, there's no need to constantly be on the hunt for a more shiny thing, you know. Uh, it's more important that you know it well and, uh, you are not lost. Uh, that said, there are differences, you know, and there could be a long list, from goals and philosophy to who makes it, whether community or company, if it's a rolling distribution or not, the length of support, especially for security updates, uh, the kind of init system that is used, the kind of C library that is used, packaging format, package manager, and, for what I think most people will care about, number of packages and their quality or version, right? Because essentially the distribution is a distribution of software. So you care about the software. If you are putting your own stuff on top of it, you maybe don't care. You just care about it being a Linux distribution and that's it. That's fine. But if you are using more things from the distribution, you might start caring a little bit more. You know, the other thing is maybe support for some mandatory access control, or, in the, you know, world of Docker, maybe the most minimal image you can get, because you will be building, a lot of times, the Docker image from the Dockerfile. And I would say that the two main families of systems that people probably know are the ones based on Fedora and those based on Debian. From Fedora, you have, uh, Red Hat Enterprise Linux, CentOS, uh, Rocky Linux. And on the Debian side you have Ubuntu, which is maybe the most popular cloud distribution right now. And, uh, of course, as a Fedora packager I'm kind of, uh, in the Fedora world. Right. But if I can, if I can mention two things that I think make sense, or are like an advantage of Fedora based systems: I would say one is modular packages, because traditionally these systems shipped, for a long time, only one version of a particular component, like, let's say, postgresql, uh, or Ruby, uh, one big version. So that means, uh, either it worked for you or it didn't, you know. With databases, maybe you could make it work. With Ruby and Python versions, usually you start looking at some version manager to compile your own version, because the version was old or simply not the same one your application uses. And with modular packages, this changed, and now in Fedora and RHEL and all this, we now have several options to install. There are like four different versions of postgresql, for instance, you know, four different versions of redis, but also different versions of Ruby, Python. Of course, you still don't get all of the versions you want, so for some people it still might not work, but I think it's a big step forward, because even when I was working at Red Hat, we were working on a product called Software Collections. This was kind of trying to solve this thing for enterprise customers, but I don't think it was particularly a good solution. So I'm quite happy about this modularity effort, you know, and I think the modular packages, I looked into them recently, are very much better. But I will say one thing: don't expect to use them in the way you use your regular version manager for development. So if you want to be switching between versions for different projects, that's not the use case for them, at least as I understand it, not for now, you know. But for a server, that's fine. And the second good advantage of Fedora based systems, I think, is good initial SELinux profile settings. You know, SELinux is Security-Enhanced Linux. What it really is, is mandatory access control. So on a usual distribution, you have discretionary permissions that users set themselves on their directories and files, you know, but this mandatory access control means that it's kind of a profile that is there beforehand, that the administrator prepares. And it's kind of orthogonal to those other security, uh, boundaries you have there. So that will help you to protect your most vulnerable, uh, processes...
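To keep with Josef's advice to start with one simple server, here is a deliberately minimal deployment sketch, not taken from the book: it assumes a single VPS reachable over SSH, application code already cloned at /srv/myapp, and a systemd unit named myapp, and it uses the Python Fabric library to run the same commands you could type by hand. Every host name, path, and service name is a placeholder.

# Illustrative single-server deploy (assumed layout, not the book's recipe).
# Requires: pip install fabric
from fabric import Connection

HOST = "deploy@203.0.113.10"   # hypothetical $5 VPS

def deploy(ref: str = "main") -> None:
    c = Connection(HOST)
    # Update the code on the server to the requested branch or tag.
    c.run(f"git -C /srv/myapp pull origin {ref}")
    # Install dependencies and run migrations (a Rails-style app is assumed).
    c.run("cd /srv/myapp && bundle install --deployment")
    c.run("cd /srv/myapp && bundle exec rails db:migrate RAILS_ENV=production")
    # Restart the app server (for example Puma) through systemd.
    c.sudo("systemctl restart myapp")

if __name__ == "__main__":
    deploy()

Every step here is something you could run manually over SSH, which is in the spirit of what Josef describes: understand the individual steps first, then wrap them in whatever automation you like.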
Michael Ashburne and Maxwell Huffman are QA Managers at Aspiritech.

This episode originally aired on Software Engineering Radio.

Related Links:
Aspiritech
Section 508 Test for Accessibility
ANDI Accessibility Testing Tool
Windows Hardware Compatibility Program
Audio over Bluetooth

Transcript
You can help edit this transcript on GitHub.

Jeremy: [00:00:00] Today I'm joined by Maxwell Huffman and Michael Ashburne. They're both QA managers at Aspiritech. I'm going to start with defining quality assurance. Could one of you start by explaining what it is?

Maxwell: [00:00:15] So when I first joined Aspiritech, I was kind of curious about that as well. One of the main things that we do at Aspiritech, besides quality assurance, is we also give meaningful employment to individuals on the autism spectrum. I myself am on the autism spectrum, and that's what initially attracted me to the company. Quality assurance, in a nutshell, is making sure that products and software are not defective, that they function the way they were intended to function.

Jeremy: [00:00:47] How would somebody know when they've met that goal?

Michael: [00:00:50] It all depends on the client's objectives, I guess. Quality assurance testing is always about trying to mitigate risk. There's only so much testing that is realistic to do, you know. You could test forever and never release your product, and that's not good for business. It's really about, you know, balancing: how likely is it that the customer is going to encounter defect X, how much time and energy would be required to fix it, overall company reputation impact, there's all sorts of different metrics. Uh, and every customer is unique, really. They get to set the pace.

Maxwell: [00:01:30] Does the product work well? Is the user experience frustrating or not? That's always a bar that I look for. One of the main things that we review in the different defects that we find is customer impact, and how much of this is going to frustrate the customers. And when we're going through that analysis, is this cost effective or not? The client, they'll determine it's worth the cost of the quality assurance and of the fix of the software to make sure that that customer experience is smooth.

Jeremy: [00:02:03] When you talk to software developers now, a lot of them are familiar with things like, they need to test their code, right? They have things like unit tests and integration tests that they're running regularly. Where does quality assurance fit in with that? Like, is that considered a part of quality assurance, or is quality assurance something different?

Michael: [00:02:24] We try to partner with our clients, because the goal is the same, right? It's to release a quality product that's as free of defects as, you know, as possible. We have multiple clients that will let us know, these are clients typically that we've worked with for a long time that have sort of established a rhythm, they'll let us know when they've got a new product in the pipeline, and as soon as they have available, uh, software requirements documentation, specs, user guides, that kind of thing, they'll provide that to us to be able to then plan. Okay, you know, what are these new features? Uh, what defects have been repaired since the last build? Or, you know, it all depends on what the actual product is. And we start preparing tests even before there may be, uh, a version of the software to test, you know. Now that's more of a, what they call a waterfall approach, where it's kind of a back and forth where, you know, the client preps the software, we test the software, if there's something amiss, the client makes changes, then they give us a new build. But we just as well work in, uh, iterative design, or agile is a popular term, of course, where we have embedded testers that are, you know, on a daily basis, interacting with, uh, client developers to address, you know, to verify certain parts of the code as it's being developed. Because of course the problem with waterfall is you find a defect and it could be deep in the code or some sort of linchpin aspect of the code, and then there's a lot of work to be done to try to fix that sort of thing. Whereas, you know, embedded testers can identify a defect, or even just like a friction point, as early as possible. And so then they don't have to, you know, tear it all down and start over, and it's just, oh, fix that, you know, while they're working on that part, basically.

Jeremy: [00:04:18] So I think there's two things you touched on there. One is the ability to bring in QA early into the process. And if I understand correctly, what you were sort of describing is, even if you don't have a complete product yet, if you just have an idea of what you want to build, you were saying you start to generate test cases, and it almost feels like you would be a part of generating the requirements, generating, like, what are the things that you need to build into your software before, uh, the team that's building it necessarily knows themselves. Did I sort of get that?

Maxwell: [00:04:55] I've been in projects where we've worked with the product from cradle to grave. A lot of them haven't gotten all the way to a grave yet, but, um, some of them, the amount of support that they're offering, it's reached that milestone in its life cycle where they're no longer going to, um, address the defects in the same way. They want to know that they're there, they want to know what exists, but then now there are new products that are being created, right? So we are, um, engaged in embedded testing, which is testing certain facets of the code actively and making sure that it's doing what it needs to do, and we can make that quick patch on that code and put it out to market. And we're also doing that at earlier stages, in earlier development, where, before it's even a fully formed design concept, we're offering suggestions and recommending that, you know, this doesn't follow the design strategy and the concept design. So that part of embedded testing or unit testing can be involved at earlier stages as well, for sure.

Michael: [00:06:08] Of course, with those, you have to be very careful. You know, we wouldn't necessarily make blanket recommendations to a new client. A lot of the clients that we have have been with us for several years, and so, you know, you develop a rhythm, a common vocabulary. You know, generally speaking, which goals weigh more than other goals and things like that, from client to client, or even coder to coder. It's only once we've really developed that shared language that, you know, we would say, by the way, you know, such and such is missing from blankety blank, to give a great example with a bunch of non-words in it, but I think you get the picture.

Jeremy: [00:06:48] When you're first starting to work on a project, you don't know a whole lot about it, right? You're trying to understand how this product is supposed to work. What does that process look like? Like, what should a company be providing to you? What are the sorts of meetings or conversations you're having, that sort of thing?

Michael: [00:07:08] We'll have an initial meeting with a handful of people from both sides and just sort of talk about both what we can bring to the project and what their objectives are, and, you know, the thing that they want us to test, if you will. And if we reach an agreement that we want to move forward, then the next step would be like a product demo, basically. We would come together and we would start to fold in, you know, leads and some other analysts, you know, people that might be a good match for the project, say. And we always ask our clients, and they're usually pretty accommodating, if we can record the meeting. You know, now everyone's meeting on Google Meet and virtually and so forth, and so, uh, that makes it a little easier. But a lot of our analysts, everyone has their own learning style, right? You know, some people are more auditory, some people are more visual. So we preserve, you know, the client's own demonstration of what it's either going to be like, or is like, or is wrong, or whatever they want us to know about it. And then we can add that file to our secure project folder, and anybody down the road that's being onboarded, like, that's a resource, an asynchronous resource, that they can turn to, right? A person doesn't have to re-demonstrate the software to onboard them. Or sometimes, you know, by the time we're onboarding new people, the software has changed enough that we have to set those aside, actually, and then you have to do a live, in-person kind of deal.

Maxwell: [00:08:32] And you really want to consider individuals on the spectrum. The different analysts and testers, they do have different learning styles. We do want to ask for as many different resources as are available, to accommodate for that, but also to have us be the best enabled to be the subject matter experts on the product. So what we've found is that what we're really involved in is writing test cases, and rewriting test cases, to humanize the software, to really get at: what are you asking this software to do? That, in turn, is what the product is doing. A lot of the testing we do is black box testing, and we want to understand what the original design concept is. So that involves the user interface design document, right? Early stages of that, if available, or just that dialogue that Michael was referring to, to get that common language of: what do you want this product to do? What are you really asking this code to do? Having recordings or any sort of training material is absolutely essential to being the subject matter experts and then developing the kind of testing that's required for that.

Michael: [00:09:48] And all sorts of different clients have different, dif...