On Rails

Alexander Stathis: Scaling a Modular Rails Monolith at AngelList

Rails Foundation, Robby Russell

In this episode of On Rails, Robby is joined by Alexander Stathis, a Principal Software Engineer at AngelList, where Rails powers complex investment, accounting, and banking business logic across a modular monolith structure. They explore how AngelList maintains conceptual boundaries in their codebase, uses gradual typing to influence their Ruby style away from Rails “magic,” and why they’ve adopted multiple async job solutions for different types of work rather than seeking a one-size-fits-all approach. Alex shares insights on consolidating microservices back into their monolith, creating the Boba gem to extend type generation capabilities, using production data subsetting tools for local development, and successfully onboarding engineers without Rails experience in under a month while staying current on Ruby 3.4 and Rails 7.2.

Tools & Libraries Mentioned

Active Job – Framework-agnostic job API built into Rails.

ASDF – Tool version manager.

Boba – AngelList’s Sorbet compiler extension.

Delayed Job – Database-backed job processor.

FactoryBot – Test data builder.

GoodJob – Postgres-backed Active Job processor.

GraphQL Batch Loader – Batching utility for GraphQL.

GraphQL Ruby – Ruby GraphQL implementation.

Linear – Issue tracking tool.

Money – currency handling library.

Packwerk – Shopify’s modular boundary enforcement tool.

Paperclip – Legacy file attachment gem for Rails (deprecated).

RSpec – Ruby testing framework.

Sidekiq – Redis-backed job framework.

Solid Queue – Rails 8 Active Job adapter.

Sorbet – Gradual static type checker for Ruby.

State Machines – Finite state machine support.

Tapioca – Sorbet RBI file generator.

Temporal – Workflow orchestration system.

Tonic – De-identified datasets platform.

On Rails is a podcast focused on real-world technical decision-making, exploring how teams are scaling, architecting, and solving complex challenges with Rails.

On Rails is brought to you by The Rails Foundation, and hosted by Robby Russell of Planet Argon, a consultancy that helps teams modernize their Ruby on Rails applications.

[00:00:00.000] - Robby Russell
Welcome to On Rails, the podcast where we dig into the technical decisions behind building and maintaining production Ruby on Rails apps. I'm your host, Robby Russell. In this episode, I'm joined by Alexander Stathis, a Principal Software Engineer at AngelList. Alex's team has helped evolve a complex Rails monolith over the last several years, from deep survey integrations to merging large apps into a modular engine structure. AngelList's back end powers some of the most intricate business logic in investing, accounting, and banking, and they've done it all while staying committed to Ruby on Rails. Alex joins us from Chattanooga, Tennessee. All right, check for your belongings. All aboard. 
Alexander Stathis, welcome to On Rails.
 
[00:00:51.380] - Alexander Stathis
Hey, Robby. Happy to be here.
 
[00:00:53.600] - Robby Russell
I want to ask you a quick question. What keeps you on Rails?
 
[00:00:57.940] - Alexander Stathis
Yeah, it's a good question. I think I actually had another engineer, we were working on a new product, message me this weekend. He said something along the lines of, Oh, there's nothing quite like rails new. I think Rails is just really high quality. A lot of the gems have been around for a really long time or been well-tested. Everything just works. You can't beat Active Record when it comes to interacting with the database. That would be why I'm on Rails.
 
[00:01:22.600] - Robby Russell
Active Record is very much a special thing. I mean, admittedly, it's one of my favorite things about Ruby on Rails as well. Something I wish I had had prior to my experience with Rails. What other types of frameworks or stacks do you have some experience in prior to Rails? Presumably you did work with other programming languages.
 
[00:01:42.400] - Alexander Stathis
Yeah, in the distant past, maybe 10 years or something ago, I'd used Entity Framework, .NET Core, C#. Honestly, I like it. LINQ has a cool, expressive way of building queries. More recently, we've used Prisma quite a bit. We have some Node back ends. I don't like Prisma as much. Try not to say anything bad on the internet. But yeah, working in Active Record just has an ergonomics to it that is pretty hard to beat for sure.
 
[00:02:10.040] - Robby Russell
Definitely. So one of the reasons I wanted to have you on the podcast was I want to talk about how AngelList is approaching using Rails. I believe you have maybe one or two Rails monoliths there. For our listeners, could you paint a picture of the architecture you're working with today at AngelList, and approximately how large is the engineering team at this point in time?
 
[00:02:28.360] - Alexander Stathis
The engineering team, I think, is maybe 40-ish engineers. Don't quote me on that, give or take 10, but not super large. Our core product is entirely Rails-backed. More recently, we've acquired a company that had a product built in Node, and so we've got some code there as well. But the core product is all Rails. So we have a Rails monolith with a few Rails microservices floating around. We power all of our authentication that way. We use Devise, we use GraphQL Ruby, and then we've got some Next.js front ends that interact with that.
 
[00:03:01.460] - Robby Russell
Are you doing that where the... So are the Next.js apps in separate repositories, on a basic level, or are those all... Same repo, obviously a different application, right?
 
[00:03:09.910] - Alexander Stathis
The Next.js apps sit in a /frontend directory or whatever, and the Ruby app, the Rails app, is in /backend. And we use a couple of different Rails engines that we serve up through a single application.
 
[00:03:21.640] - Robby Russell
Interesting. Was that all intentionally from the beginning put into a... Is that a monorepo or are those separated deployment processes?
 
[00:03:30.000] - Alexander Stathis
Yeah, definitely not. So we had one main monolith, and then we had some microservices. In the last year or two, we'd actually taken one of those microservices and refactored it into a Rails engine and then moved it into the monolith application. I would say the reasoning for that was less technical. It was because over time, the two had become commingled in a way that we wanted to undo. And there was a lot of complexity to deal with networking and transient failures, standard network stuff. And it was just easier to be able to call the Ruby code directly. We still have them as separate modules. We use Packwerk, which is a static analysis tool that I think Shopify built. And we use Packwerk to manage the dependencies between the various modules inside of our Rails monolith. We also have a few other small products that we've built and shipped that are also built as separate Rails engines inside of the same monolith.
 
[00:04:23.340] - Robby Russell
You mentioned that you acquired a Node app. And so what was that process? Did the team come along with that then to some extent?
 
[00:04:30.960] - Alexander Stathis
Yeah, a few. But most of the people working on the Node app now we've hired more recently, and we have some engineers that were pre-existing at AngelList that are working on it as well. When you acquire a company, obviously, the two things are totally distinct. And over time, part of what we're working on now is killing the duplicative parts. So we're moving some things one way and some things the other way and trying to really clearly define the boundaries between the products, whereas before there was some overlap, which is what made it such a nice fit for us.
 
[00:05:00.000] - Robby Russell
Were you around when the decision to start using Packwerk came about?
 
[00:05:03.920] - Alexander Stathis
Yeah, I drove a lot of that. The reason it was a microservice to begin with was primarily because it really was supposed to be a separate set of concerns. I'm trying not to get overly specific about the details, I'm trying not to put anybody to sleep, but the separate microservice was really intended to house a lot of our legal and economic structures. AngelList is a platform that helps founders and general partners raise and deploy venture capital. And so we track a lot of legal and economic structures. Our ERD is very complex, and our separate microservice was largely intended to house a lot of that. That's distinctly different from our marketing site or a lot of the stuff that happens before we formally define a fund or collect capital into it. We were very intentional about separating those two things, and we're still very intentional about it. And over time, as you build your startup, you ship stuff. And that stuff doesn't necessarily always conform to the very precise, preconceived notions of what should live where. You make concessions to meet timelines, you do stuff that's hacky, you make it work. So part of why we consolidated the two things was not necessarily because we wanted to merge them together and turn them into a bigger ball of mud; it was actually because they'd already, over time, turned into a ball of mud.
 
[00:06:20.730] - Alexander Stathis
And it's much easier to unwind that when you're working in the same repository versus managing it across an API when it's become deeply integrated already. That was a large motivating factor, reducing complexity, increasing simplicity. Back to the original question, we kept Packwerk, and we implemented Packwerk specifically to hold those conceptual boundaries very neatly.
 
[00:06:42.540] - Robby Russell
When you're thinking about setting those boundaries, how much of that is strictly just the architecture or the data related to those different domains versus how your team might separate boundaries of teams themselves? Who's working on what?
 
[00:06:59.500] - Alexander Stathis
Yeah, a really good question. I can't remember what it's called, but it's the law that you ship your org structure or whatever it is. I would say we really don't have a culture at AngelList of, I'm a this engineer. I mean, it certainly happens that way, and you can't have totally fungible engineers all the time. It's really expensive to do that, especially when you're working across many stacks. I wouldn't say that the point of it was separating out the teams or being able to have one team operate in a silo from another. It really was mostly designed around separating the concepts and the business domain and keeping things organized in that way. This is probably a naive statement, but we work in a very complex domain. I'm sure others out there are like, Ha ha, right? Mine's more complex or whatever. But we deal with a lot of legal requirements, a lot of economic requirements. The relationships are very complex. Our data graphs are very highly connected, highly interconnected. I'm a big believer in getting the ERD right. And if you get the ERD right, then the code falls out of it. But if you get it wrong, then you have to fight really hard to make the logic work.
 
[00:08:09.720] - Alexander Stathis
I think the major driver here is really around establishing and maintaining the conceptual, the business domain boundaries, and holding firm on that.
 
[00:08:17.400] - Robby Russell
For the few listeners who might be wondering what ERD means.
 
[00:08:20.460] - Alexander Stathis
Oh, yeah. Entity Relationship Diagram. Just the graph of our models. In Rails speak, you have your models, your Active Record base classes, and then you have your associations. The associations are the edges and the models would be the nodes.
 
[00:08:34.740] - Robby Russell
Thanks for helping clarify that. When it comes to using things like Packwerk, I think it's just been a trend, since I'm speaking with a lot of people, and this is something I want to clarify for the audience: I'm talking with a lot of companies that are maybe a little larger in scale. So I want to make sure that everybody's thinking about when there's a benefit to using Packwerk, versus, should everybody be using Packwerk? Because everybody that's been on On Rails so far has mentioned Packwerk, but that might not apply. Do you think there's a point where an organization or an application gets to a certain scale or a certain level of complexity that something like that can be really helpful? Do you feel like it would have hindered the project earlier on in the process? Is it something you should tack on later if and when you maybe need it? Where's that distinction in your mind?
 
[00:09:15.900] - Alexander Stathis
Yeah, it's a good question. I wouldn't be too prescriptive here. I guess my general philosophy here is, if you can, you should enforce a rule with a static analysis tool, something you can run in CI. This is why we use formatters. This is why we use linters. This is why at AngelList, we use Sorbet, right? Because it allows us to not have to be as disciplined when we approach these types of problems. I think Packwerk fits into that category very nicely. If you have different product verticals or if you have different, I don't know, domains like we do, it's really helpful to just have something that consistently checks and enforces those boundaries. Packwerk has gone through its own evolution. They tried to do this privacy thing, and then they pulled it out. At the end of the day, what it does is it just defines a dependency graph between modules in a Ruby application. For us, we use that to make sure that we aren't calling across. We maintain almost a linear dependency tree, and that's what we use it primarily for. And it helps, it catches things. It's like type checking also.
 
[00:10:23.820] - Alexander Stathis
Most people are pretty good about not shipping bad stuff, but every once in a while, you make a mistake, and it's nice to have that CI running to keep you honest. It's easier to catch it with a tool than it is in review. Let's put it that way, right?
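
For readers who haven't used Packwerk, the boundary enforcement Alex describes is driven by a package.yml file in each module. Here's a minimal sketch with invented package names (AngelList's actual configuration isn't public); with `enforce_dependencies` on, CI fails if this package references a package not listed under `dependencies`:

```yaml
# packs/fund_admin/package.yml -- hypothetical package in a Packwerk setup
enforce_dependencies: true
dependencies:
  - packs/fund_accounting   # allowed: lower in the linear dependency tree
  - packs/shared_kernel     # allowed: common models and utilities
# Any constant reference outside these packages becomes a CI violation.
```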
 
[00:10:35.760] - Robby Russell
You just mentioned Sorbet. How does using something like Sorbet influence your Ruby style as an organization?
 
[00:10:41.820] - Alexander Stathis
Yeah, good question. I think it was Jake Zimmerman, one of the core Sorbet guys, who said something along the lines of, and I'm totally poorly paraphrasing, but the point of Sorbet is not so that you can keep writing Ruby the way you want to write it. Sorbet, the type system, is supposed to influence the way you write the code, and I find that to be very true. For instance, something that I've had great difficulty trying to add typing to is ActiveSupport::Concern. The included hook, just like anything that uses class_eval or whatever, it's not easy for Sorbet to understand, at least statically. Metaprogramming is such a big part of the Ruby culture, and that type of stuff can be so prominent in Ruby code, but it just doesn't fly in Sorbet, or it can, but you have to get pretty mean with it. And so, yeah, we tend to write very different Ruby code, I would say, than your general Ruby. We don't use a lot of multiple inheritance. We don't even really use a lot of instances. I mean, of course, our Active Record models are instances of classes, but we use a lot of class methods on modules, right?
 
[00:11:44.570] - Alexander Stathis
To define service logic to try to make it more functional. One, it works better with the typing system, certainly, and then it also makes it a little more functional for us, which is a little easier to understand.
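
A minimal sketch of the module-with-class-methods style Alex describes. The module and method names are invented, and the sorbet-runtime `sig` blocks are shown as comments so the sketch runs without the gem:

```ruby
# Illustrative only: "functional" service logic as class methods on a
# module, rather than instance methods on objects. In a Sorbet codebase
# each method would carry a real `sig` block from sorbet-runtime.
module FeeCalculator
  # sig { params(amount_cents: Integer, rate: Float).returns(Integer) }
  def self.management_fee(amount_cents:, rate:)
    (amount_cents * rate).round
  end

  # sig { params(amount_cents: Integer).returns(Integer) }
  def self.carry(amount_cents:)
    # Pure function calling a pure function: no state, easy to type-check.
    management_fee(amount_cents: amount_cents, rate: 0.20)
  end
end

FeeCalculator.carry(amount_cents: 10_000)  # => 2000
```

Because nothing is mixed in via `include` or metaprogramming, both a reader and a static checker can resolve every call site directly.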
 
[00:11:55.140] - Robby Russell
We had a previous call, for our listeners, to talk through maybe some of the conversation topics for this interview. One of the things you mentioned in that conversation is that your team intentionally avoids a lot of, I'm air quoting, Rails magic. What does that look like in practice?
 
[00:12:10.240] - Alexander Stathis
Yeah, in practice, there's a pretty funny video of some engineer from five years ago or something, describing his experience debugging a series of Active Record callbacks, where you make some change somewhere, and then it's like, how did this occur? And you're jumping through many chains of callback calls. So yeah, we don't really use callbacks. We're not ultra dogmatic on it. There are times when callbacks are appropriate, but we really try to avoid anything that isn't explicit and obvious. So callbacks, I would argue, are a little more implicit behavior. Things happen and then you have these side effects. It's not like you have a clearly defined method. In order to understand callbacks really well, you have to understand how all the hooks work, what order they're called in. You have after_commit, after_save, after_validation. Another component of this: a lot of our engineers coming in don't know Rails or Ruby very well. And so I think we've partly developed a style that caters more to a more imperative or, like, procedural coding. So we try to avoid metaprogramming. Monkey patching is amazing, but also a little bit evil. It's definitely a double-edged sword. We avoid callbacks if possible.
 
[00:13:24.560] - Alexander Stathis
We don't use a bunch of multiple inheritance. When you look at Rails, for instance, you see these deep, basically nested includes, and you have to figure out where this method is coming from, chasing references around like that. It's a little easier with Sorbet now, but definitely at the time we adopted the practice, it was pretty intense to try to figure out what was going on unless you knew already how the internals worked. I think that's what people really struggle with, with magic. It's like you have to have this deep knowledge of how the internals work, and you can't see it, and that's why it feels magical. We write in a way that tries to avoid that.
 
[00:13:58.120] - Robby Russell
What does that look like on, let's say, a typical CRUD type process? If you're saving some data, whether that's coming from one of your Next.js apps, or you have a Rails interface or some view in your Rails application, and you're saving some data and it needs to trigger some additional things that happen, whether it's syncing to another system, maybe notifying people. How does your team think about that process then, if you're not going to lean on the callbacks? Are you doing that primarily within code in the controllers? Do you use something like a service object type approach? When you say procedural, talk us through that.
 
[00:14:30.900] - Alexander Stathis
Yeah, good question. We use GraphQL Ruby, so everything comes in as a query or a mutation. Then we use thin models, or, I don't know, you might even call these anemic models, right? If you were Martin Fowler, I'm not sure. And then we use pretty thin controllers, pretty thin presentation logic, and try to keep our business logic very clearly in the business service layer. So the majority of our business logic lives inside of /services, and we have a lot of modules with class methods defined on them that implement the various business logic methods. An example of that might be, it's a very common pattern to have a create service. So you might call an object create service, Widgets::CreateService.create, right, and pass in the things that you need, and that would create the model. I wouldn't say we're dogmatic, but we're pretty consistent about using .create!. We use a lot of bang methods, and not the instantiate-and-save, I mean, we do this a little bit. And then anything that needed to happen after that, we would structure as sequential calls inside of that logic. In our most complex cases, we might have something where we have an after-commit helper method, where it's like a callback, but it's just an actual method in the service.
 
[00:15:40.680] - Alexander Stathis
Maybe it's private in the class or in the module with the create method, and then we would call that after the transaction wrapping the things that we're creating. We would generally structure this as just sequential calls. And then if we need to make something asynchronous or if we want something to fire and forget, or a callback would typically be something where we would fire and forget it, we shoot off a Sidekiq job or shoot off a GoodJob job or whatever, and let it run and not have it impact the end user experience at all. We'll recover it separately if it fails or whatever, and it's not mission critical. So that's about how we do that.
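
The pattern Alex walks through might look something like this toy sketch. All names are invented, and Active Record and the job backend are stubbed out so it runs standalone; the point is the shape: a bang-method entry point, an explicit sequential follow-up instead of a callback, and a fire-and-forget enqueue at the end:

```ruby
# Toy create-service sketch (not AngelList's actual code). A real service
# would call Widget.create! inside a transaction and enqueue a Sidekiq or
# GoodJob job; both are stubbed here.
ENQUEUED = []  # stand-in for a job queue

Widget = Struct.new(:name, keyword_init: true)

module Widgets
  module CreateService
    # Bang-style entry point: raises on failure instead of returning nil.
    def self.create!(name:)
      raise ArgumentError, "name required" if name.to_s.empty?

      widget = Widget.new(name: name)  # stand-in for Widget.create!
      after_commit(widget)             # explicit, sequential follow-up
      widget
    end

    # Plays the role of an after_commit callback, but as a plain private
    # method called explicitly, so the control flow stays readable.
    def self.after_commit(widget)
      ENQUEUED << [:sync_index_job, widget.name]  # fire-and-forget stand-in
    end
    private_class_method :after_commit
  end
end

w = Widgets::CreateService.create!(name: "fund-123")
```

A controller or GraphQL mutation would call `Widgets::CreateService.create!` rather than touching the model directly.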
 
[00:16:13.220] - Robby Russell
So in, say, a create action in your controller, you're not directly interfacing with the model then? You're interfacing with the create service type thing, and then it's a little layer between?
 
[00:16:24.340] - Alexander Stathis
Yeah, exactly. We don't end up with a lot of thin wrappers, though, because like I said, our business domain is very complicated. It's very complex, and the graphs are highly interconnected. And so typically, it's hard to create something in a void. You're creating some object, and it usually has follow-on objects that you need to create with it for that state to be consistent. Our models, we try to map very closely to the business domain. That just makes it a lot easier. It closely ties the business concepts to the code, which is a feature for us because the business concepts are very complicated and hard to understand. And so being able to read them out of the code is helpful for engineers to figure it out. The point is that we do have a lot of logic that is included in those create services, and it's not just a lot of thin wrappers that call Active Record create and then return. It's usually, create a couple of things, maybe wrap it in a transaction, maybe kick off a sync job or something and update an index or whatever it is.
 
[00:17:17.680] - Robby Russell
When those things fail, how do you handle when things aren't saving properly and bringing things back? Does that come back to the controller level? Are you rescuing things or raising exceptions and things of that nature? Or can you rely a little bit on Active Record in those situations?
 
[00:17:30.220] - Alexander Stathis
I would say generally when we encounter errors, like when our create fails, it's usually due to some validation, right? And we'd probably just surface that back up to the user, either in an explicit error message coming from the validation, right? Or through some translated error domain. It depends on how friendly the error message is. Some error messages are not super friendly, and you don't really want to show them to your end user. You just want to show them, I don't know, what I would say is the 500, the red screen of death thing. But we want our errors to be relatively loud because for us, it's actually pretty bad when we get these inconsistent states, because our data graph is so complex and there are so many objects that all have to be kept together. When you get into a bad state, it's actually more expensive to recover than it is to just roll back and be like, Hey, that didn't work. I'd rather debug why it didn't work and then just redo it, make it idempotent, right, than try to recover midway, which can be very expensive when you have a lot of pretty complex business logic that's running and needs to run in the right order.
 
[00:18:31.030] - Alexander Stathis
There's a lot of assumptions throughout the system that these things exist or that they exist in a certain way. When you allow them to get created halfway and then either swallow that error or don't transactionalize, don't make it atomic, then other parts of your system start behaving badly as well. And that's definitely not good. I'd rather one user see an error than take down the admin view or whatever.
 
[00:18:51.910] - Robby Russell
Sure. So you're typically wrapping those in a transaction, and that way, if something triggers it, it'll just roll back everything and then have to figure it out and solve that problem and then go through that process again.
 
[00:19:02.960] - Alexander Stathis
Yeah, definitely when we have these states that need to stay consistent, if we're creating multiple models and they need to be created all at once together or updated all at once or whatever, definitely wrapping those in transactions. We actually have multiple different databases, so we run into some issues where we need those to actually be atomic, and then that gets very complex because you start having to do some gnarly things like stacking transactions. You might start a transaction on one database and then immediately open another transaction on the other database and nest them that way. You also get a lot of complexity from nested transactions, which are automatically flattened. So in Active Record, if you open a transaction and then maybe you call some other service method and open another transaction inside of that service method, those get flattened into a single database transaction. And so if they fail deep, right? Then it unrolls the whole thing and you can get into weird, goofy states that way. So I guess what I would say is I personally try to use a lot of very narrow transactions, ones that are scoped very tightly to just the actions that we're trying to perform and keep in sync.
 
[00:20:03.000] - Alexander Stathis
That, of course, is a totally conceptual thing that does not work in practice, and not everyone's so disciplined about how transactions are used, and we're trying to ship stuff. We're trying to deliver value, right? We're not trying to be ultra nitpicky about our use of transactions in our application. We have a lot of atomic states that we keep glued together with transactions, and then when those fail, we roll them back. We also just have a lot of logic that runs sequentially, and it's okay for it to fail in the middle. It is recoverable. It's just idempotent by nature. We don't need pessimistic locking or whatever it is to try to keep that in line.
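
The flattening behavior Alex describes can be illustrated with a toy model. This is not Active Record's implementation, just the semantics: a nested `transaction` call joins the outer one, so a failure deep inside unrolls everything back to the outermost BEGIN:

```ruby
# Toy model of Active Record's nested-transaction flattening.
class ToyConnection
  attr_reader :log

  def initialize
    @log = []
    @depth = 0
  end

  def transaction
    @depth += 1
    outermost = (@depth == 1)
    @log << "BEGIN" if outermost      # nested calls emit no new BEGIN
    yield
    @log << "COMMIT" if outermost
  rescue => e
    @log << "ROLLBACK" if outermost   # one rollback undoes all nested work
    raise e
  ensure
    @depth -= 1
  end
end

conn = ToyConnection.new
begin
  conn.transaction do
    conn.log << "INSERT a"
    conn.transaction do               # flattened into the outer transaction
      conn.log << "INSERT b"
      raise "deep failure"
    end
  end
rescue RuntimeError
  # the whole unit rolled back, not just the inner block
end
```

In real Active Record, passing `requires_new: true` to the inner `transaction` call opens a savepoint instead of flattening, which is one way around the "fail deep, unroll everything" behavior.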
 
[00:20:37.140] - Robby Russell
Out of curiosity, what scenarios do you think having multiple databases came into play? Is it because there's sensitive data in some area where you need to keep things more locked down? Or is it more like some companies I talk to: well, we have some HIPAA compliance requirements, so they isolated that. It's a different database, they still interact with it, but they keep the private information separate. What does that look like in your world?
 
[00:20:56.280] - Alexander Stathis
So, yeah, we have a few different services. And like I said before, we actually consolidated one of them into the same repo, into the monolith. So not just the same repo, but the same application, actually, as an engine. And those as separate services naturally had different databases. We also have a bit of tech debt where a lot of our older databases are MySQL. I'm not personally a huge fan of MySQL. I have had plenty of fun times debugging 5.7 query plans and trying to deal with that crap. So we also have a Postgres database that's floating around, and we're trying to do this long, staged migration. And so we have some models living in one and some in the other. So I would say one of our services has multiple databases because we want to eventually... Well, really, we adopted it because we wanted to use GoodJob. Although in hindsight, if I had known about Solid Queue, I would have used Solid Queue instead. But I guess even... I apologize, I forget her name, but she even said from episode one or earlier, she even said, I think they use it for databases at... I can't remember any of the specifics.
 
[00:21:58.640] - Alexander Stathis
I'm sorry, I'm totally flubbing this, but...
 
[00:22:00.420] - Robby Russell
That was Rosa, Rosa Gutierrez.
 
[00:22:02.340] - Alexander Stathis
Yeah, Rosa. Rosa, author of Solid Queue, I think she said that they use separate databases for it, although I forget, it sounded like they had gone back and forth a little bit. And so we originally adopted Postgres in one of our services so that we could use GoodJob. And it was a means to an end. Sidekiq was not doing what we wanted it to do. We wanted some observability, right? Why are these things failing? Where are they going? We wanted to be able to query and have a little more interactiveness with the queue itself. And then we also had these separate services going on that had separate databases. And so now we have just a few databases floating around, and we have to try and keep all of those in sync. And honestly, it's a real pain. I hate it. But I think I would still keep separate databases for the separate services, like what you mentioned. There's a conceptual domain thing. You almost want a separate split there. If I could, I would consolidate to a single instance and just use different databases in the same instance rather than actual physically different databases, like in AWS or whatever.
 
[00:23:03.140] - Alexander Stathis
But we have a lot of other problems to solve before I think we unwind that one.
 
[00:23:08.300] - Robby Russell
I know that you mentioned using Sidekiq, GoodJob. And I think in our previous conversation, you mentioned Temporal. Is that right? That you're using there? Yeah. So it sounds like you've experimented with multiple background job tools. So what have you learned along the way?
 
[00:23:22.400] - Alexander Stathis
I've learned that there is no one-size-fits-all solution for this type of work. Sidekiq has a lot of packages and gems that add on to it that provide different functionalities, and you can install them at will and whatever. We've had some trouble with Sidekiq Unique Jobs, for instance, not to try and pick on anybody, but we've just had trouble with the digest getting stuck, and then you have to go and manually clear the digest, and you're like, Why didn't this thing update? It's like, Oh, because the job didn't run, and then whatever. It's hard to recover from that type of stuff because it's hard to even know that it's happening. We also had no observability. Why is this even happening? I think adopting GoodJob was primarily because there wasn't a lot of transparency. I always say it wrong, and everybody gives me a hard time. Redis. Redis is an incredible database, incredible tool, but you can't look at the jobs after. They're just gone. The queue is gone once it's processed. Having the ability to introspect on what happened before is a really useful tool when you're debugging jobs that have more complex stuff going on.
 
[00:24:31.540] - Alexander Stathis
You want to know when they failed or why they're failing, for all of these reasons. And it was just hard for us to do that in Sidekiq. It was a lot easier if we just had an actual database table backing that. So we pulled GoodJob in. The problem with GoodJob is that it's not super performant when you're interacting with a bunch of really small jobs, like fire-and-forget type stuff. There's an active issue with the GoodJob dashboard. You install the GoodJob gem and you can add this dashboard to your routes.rb, right? And it's like an admin panel, and it's incredibly slow for us. It never loads. You have to load it a bunch of times. You have to warm the DB cache up so that it can actually serve the query. And it's just a pain. And then we run into issues where it will acquire a lock on the table, and then suddenly the whole table is locked up. And now we've got to figure out why the lock is still holding and what's running and how it's going. I would say that where we're leaning now is we're going to try and move, I think, and this is totally really speculative because we haven't actually done any of this yet, and everything is the best laid plans.
 
[00:25:34.370] - Alexander Stathis
I would like to see us try to move any of our more complex work that we're doing asynchronously, in a worker, in a job or whatever, into Temporal. And so Temporal, I think it came out of Coinbase. It's just a really powerful workflow orchestration tool. We use Temporal's managed solution, I think it's temporal.io. The downside there is it's expensive, and also the ergonomics are not great. It's not Active Job. I mean, this is the thing, why am I on Rails? Active Job, right? Active Job is awesome. I think it'd be cool to build an Active Job connector for Temporal. I'm going a little tangential here, but I think it'd be really cool to see an Active Job connector come out of that that would work for Temporal. It's something I've noodled on for a while, but just haven't sat down to actually try to write. But yeah, I'd like to see us move a lot of our more complex workflows, things where we need guaranteed delivery, things where we need observability, stuff that we're not comfortable just firing and forgetting, that we need to run for maybe a little longer, right?
 
[00:26:34.890] - Alexander Stathis
Because with Sidekiq, you don't want your jobs to run forever anyways. I'd like to see us move that into Temporal. I think we have a couple of cases where we might either keep GoodJob or adopt something else. I think we also have Delayed Job, although I've been less involved in that, in some other services. We've really gone through them all. And I think it would be cool to maybe consolidate those with Solid Queue, maybe when Rails 8 comes out, and use that for our quick fire-and-forget stuff, or use something like Solid Queue or GoodJob, or even Sidekiq, for our stuff that is going to be quick: queue up this email or whatever. But yeah, coming back to the original question, as much as it offends me conceptually, there's no perfect asynchronous solution, at least that I've found. I don't know. Maybe the listeners will write in and complain that this other thing is perfect. But for us, I think we're going to end up probably with multiple solutions. And coming to terms with that more recently, I think it's okay. There's probably some ergonomics we need to figure out, and then you have to define for people when to use what.
 
[00:27:36.370] - Alexander Stathis
But ultimately, I think it'll be the better solution.
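The "when to use what" question above can be sketched in plain Ruby. This is purely illustrative — the backend names, the two job characteristics, and the thresholds are hypothetical, not AngelList's actual routing logic:

```ruby
# Hypothetical sketch of routing async work to different backends by
# its characteristics, as discussed above. Names are illustrative.
JobProfile = Struct.new(:durable, :long_running, keyword_init: true)

def pick_backend(profile)
  # Long-running work that must not silently fail -> an orchestrator
  # like Temporal.
  return :temporal if profile.long_running && profile.durable
  # Durable but quick work -> a DB-backed queue (GoodJob / Solid Queue).
  return :db_queue if profile.durable
  # Cheap fire-and-forget -> something Redis-backed like Sidekiq.
  :redis_queue
end

puts pick_backend(JobProfile.new(durable: true, long_running: true))   # temporal
puts pick_backend(JobProfile.new(durable: true, long_running: false))  # db_queue
puts pick_backend(JobProfile.new(durable: false, long_running: false)) # redis_queue
```

The point of the sketch is the decision axis he describes: durability and expected runtime, not the payload, decide the backend.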
 
[00:27:38.860] - Robby Russell
Yeah, that was going to be my next question. How does your team then go about deciding what to use where? I hadn't heard of Temporal before our conversation recently. That's a platform people can use. I think there are open-source versions that you can use as well. They've got a bunch of examples. You can use this with lots of different programming languages. Are you actually triggering anything with your Node apps? Or does almost all of your background processing, async jobs, get triggered by something within your Ruby or Rails processing?
 
[00:28:06.200] - Alexander Stathis
Yeah, this is an active conversation in our Node apps as well. It's like, what do we do there? We do have a Temporal setup, but again, we're running into the same issue where you have this more complex processing job versus a short fire-and-forget type job. So Temporal works by defining workflows and activities. Workflows are basically comprised of some logic, although it needs to be totally idempotent, and then activities that they can call to execute actual work. So I think it's possible to set up, and I don't know, I haven't done this, this is totally conjectural, to set up a workflow from one service that cobbles together activities from other services, I think. But I'm not sure. I think that's true. We're not doing that. We're explicit, we're straightforward, we're like, no magic. Right now, we're just using workflows and activities defined locally in the same service and running those. But we do have a lot of complex workflows that we need to run asynchronously in these jobs, like processing jobs, basically. And it's important that they don't fail in the middle, or that we know when they do so we can recover or whatever it is.
 
[00:29:11.360] - Alexander Stathis
And Temporal is really useful for that.
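The workflow/activity split and the idempotency requirement he describes can be illustrated with a tiny plain-Ruby stand-in (not the real Temporal SDK — an assumption-laden toy): completed activities are recorded in a history, and on replay their recorded results are returned instead of re-executing, which is why workflow logic must be deterministic.

```ruby
# Plain-Ruby stand-in for Temporal's replay model (not the real SDK).
# Finished activities live in a history; when a workflow is replayed
# after a crash, those activities return their recorded results
# instead of running again.
class FakeWorkflow
  def initialize(history = {})
    @history = history   # activity name -> recorded result
    @runs = Hash.new(0)  # how many times each activity actually executed
  end

  attr_reader :history, :runs

  def activity(name)
    return @history[name] if @history.key?(name)  # replay: skip re-execution
    @runs[name] += 1
    @history[name] = yield
  end
end

wf = FakeWorkflow.new
wf.activity(:fetch)   { "file.csv" }
wf.activity(:process) { "42 rows" }

# Simulate a crash-and-replay using the recorded history:
replay = FakeWorkflow.new(wf.history)
replay.activity(:fetch)   { raise "should not re-run" }
replay.activity(:process) { raise "should not re-run" }
```

On replay, neither block executes; the workflow just picks up where the history ends, which is the guarantee he wants for bank-file processing jobs.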
 
[00:29:13.540] - Robby Russell
Was that something that the Node app that your organization acquired, were they already using that? Was that how it got introduced to you or is it something you implemented after that process?
 
[00:29:22.660] - Alexander Stathis
I don't think they were using it prior. No, we implemented it after. I think the idea was: we need an async solution, like a processing worker solution, and we're already using Temporal, so let's use it. There's been some issues with trying to adopt it. We're getting out of my expertise here, but we have some patches to make it work on an NPM package, because, I don't know, this is normal technical stuff. You have to do this stuff to make stuff work.
 
[00:29:51.100] - Robby Russell
Sure. That is always the case. There are always interesting little edge cases or things you have to figure out how to make work, whatever you're doing in your apps, when you're using some third-party service. Does using a third... I'm air-quoting, "outsourcing" work like that to something like Temporal. How does that then translate back? If they run some processing, is it then orchestrating and triggering some API endpoints in your system? Like, okay, once this is finished, tell us that this is done. What does that end up looking like? Or are you just checking Temporal? Without having used it or looked too much into the thing: if you trigger a thing in Temporal, are you then just checking the status, whether it's finished or not, and then continuing? Or is it sending some callback to let you know that it's finished doing whatever you're trying to do? Or am I completely misunderstanding how it works?
 
[00:30:35.400] - Alexander Stathis
It's just like Sidekiq or GoodJob or probably Solid Queue, although I haven't used Solid Queue yet, but you still have to run worker instances. So these are deployed on our own infrastructure in AWS; they're pods in our Kubernetes cluster. And these actually take the workflows, take the activities, and process them. This is actually our code running. The database that actually stores the queue, right, is managed. Managed through temporal.io. You can self-host, you can do all of that. It's just a lot of complexity from a DevOps perspective, I guess, to do that. And so for us, it was easier to just have a managed solution where you just connect to it and push your jobs there. They have a really nice UI. I think it's super cool. There's a burn-down or a time chart. It shows you where each activity is running and what stage it's in and why it failed and this and that. I think you can also query on it. You could certainly have it do callbacks. We don't do a ton of that. We just need to make sure that the jobs actually execute and that they execute in the right order or whatever it is.
 
[00:31:37.040] - Robby Russell
Do you think that Rails needs more, say, stronger opinion around asynchronous processing?
 
[00:31:43.360] - Alexander Stathis
I don't think so. I mean, this is one of the beauties of Active Job, right? The thing just plugs into whatever. I think it is an interesting part of the ecosystem. So in NPM land, you get a thousand packages that do the same thing, and they're all tiny forks of each other. You've got Joe's whatever, and you've got Frank's whatever, and they do the same thing slightly differently. In Ruby land, I think that's not quite so true. You generally don't, right? You actually have these long-standing packages, these gems that have been around forever, and they just work and they do the thing that they're supposed to do. The Ruby community is very good at consolidating on these efforts and dumping a lot of energy into them and making them very good. If I can talk in a totally unfounded way, I think the reason there are so many different solutions here is partly because Active Job is pretty flexible, but I think it's partly what I said before, which is that, depending on what you're doing, asynchronous work comes in a lot of forms. It's all asynchronous work, and so it's similar in that way.
 
[00:32:42.320] - Alexander Stathis
But depending on whether you're doing long-running processing jobs, you might want a different solution than if you're doing quick little fire-and-forgets that you don't care about. If you're sending a two-factor code email, you want that to land. You don't want that one to go missing. If it's, I don't know, something else, maybe you don't care. If you're just updating an Elasticsearch index that you know the cron will run and fix later today or whatever, maybe you don't care as much. Versus if you're pulling in a file off of an SFTP server from a bank to process, because this is still how a lot of the financial industry works today. Then you're going to do some heavy compute on that and pull a bunch of records into the database, and you need to know if it fails, you need to know where and why, and you need to be able to recover and all of this. Then you want a different solution than maybe sending a transactional email. I guess if I were to just guess why this is the one area in the Ruby ecosystem where you see a lot of diversity, I think it's because asynchronous jobs probably have more differences than they have in common.
 
[00:33:48.360] - Alexander Stathis
They tend to be more different than they are the same. It's just that they're all asynchronous work. They all, in some sense, kick off the job and run it later. I think opinionatedness here would probably hinder people and would probably not be so useful. I think the work defines the tech and the tooling, right? And I think that's a pretty basic engineering principle. You want to build the right things with the right tools, right? And I think that's how async workers work. And I think that's what we've definitely found at AngelList. We keep adopting new ones, hoping they'll solve all the problems. But at the end of the day, we're just going to need two or three or whatever it is, right? Different solutions for different problems.
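The Active Job flexibility he credits is what makes "different solutions for different problems" workable inside one app: recent Rails versions let each job class pick its own queue adapter. A hypothetical config fragment (the job names and adapter choices are illustrative, and this only runs inside a Rails app with the corresponding adapter gems installed):

```ruby
# Hypothetical Rails fragment: per-class queue adapters let multiple
# backends coexist behind the single Active Job API.
class TwoFactorEmailJob < ApplicationJob
  self.queue_adapter = :good_job  # durable, DB-backed: this email must land
end

class ReindexSearchJob < ApplicationJob
  self.queue_adapter = :sidekiq   # cheap fire-and-forget; a cron fixes misses
end
```

Callers still just write `TwoFactorEmailJob.perform_later(user_id)`; the routing decision lives in one line per job class.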
 
[00:34:26.340] - Robby Russell
I think that's a good point. I always feel like most of the teams and developers I talk to are always trying to be like, which one do you recommend, so I can just start using that one, or what have you? And so Rails is now obviously shipping with Solid Queue and things like that as well, as maybe a good, reliable default for this type of work. But knowing that with Active Job you can then use a lot of different types of platforms to accomplish that work. So I'm always a little hesitant to be like, this is the one you should be using, this is the best one, or which one is the most popular. We run a survey every couple of years. We can see how the landscape changes there. But hearing someone actually say, no, we probably actually need to run a couple of different ones for very different reasons, I think is something I haven't heard that much from a lot of the people I've talked to, because they're always trying to just find that one solution. Is that something you feel like you've just come to understand and concede, or is it just like, no, this makes the most sense for us, and maybe for other companies that are...
 
[00:35:21.840] - Robby Russell
Do you think that they should be thinking about embracing multiple different paths rather than just be like, We need to migrate all over to X because we're done with Y?
 
[00:35:29.640] - Alexander Stathis
Yeah, I mean, we've certainly tried to just migrate everything to X because we're done with Y. The thing about migrations is you end up migrating forever. So doing that is not super successful. I mean, it can be, but it takes a lot of concerted effort. Migrations are hard. Everybody knows this. I think what we found is that we have really diverse asynchronous work. Back to the point here. I think we have a lot of different stuff going on in workers, and a lot of it tends to be expensive compute, stuff that's not super cheap. And so it's not well suited to something like Sidekiq, which is backed by Redis, however you say it. So yeah, as someone who really values clean, organized, really precise stuff, it's a little offensive to me that I need multiple of these things, right? And I think that's probably what you're detecting also. I think engineers are very similar in that way. We all want the one solution that works. We don't want to have to cobble together stuff. But I think for us, we just have different work, and it requires different tools.
 
[00:36:31.980] - Alexander Stathis
And asynchronous processing, asynchronous workers, asynchronous jobs, whatever, is a tool, but it comes in a lot of different shapes, right? So you wouldn't use a sledgehammer in the same way you'd use a framing hammer, right? And I'm sure if you knew a lot more about hammers than I do, there'd be even different framing hammers you might use. I think using Sidekiq to run jobs that may run for hours is like using a framing hammer on the thing you need a sledgehammer for. Whereas Temporal, a great solution that can literally do it all, is a little bit abstract. The ergonomics are not super great. And is it great for us as a managed solution? Cost is a factor we have to think about. You pay per workflow run or whatever it is, and so you've got to think about that as well. It's maybe not the best solution for little fire-and-forget, update-this-cache jobs that you're not actually that worried about landing. We just have very distinct, different needs in that realm. And we've tried the let's-adopt-this-one and let's-adopt-that-one, and let's push all the new jobs, and let's migrate, let's do it.
 
[00:37:37.880] - Alexander Stathis
And then we keep running into the limits of that system, of whatever system we're migrating into. And it's really tough to do a migration like that and then be like, oh man, did we mess up? Should we have left this work in Sidekiq, because it's just better suited for it and would make more sense? That's a painful thing. That's a pretty awful thing to feel. But yeah, I'm coming around to the fact that I think we just need multiple solutions. We just need multiple different types of asynchronous work. Then you asked earlier, how do you decide? I don't know.
 
[00:38:10.580] - Robby Russell
How do you onboard people into this world? They're like, well, we've got three different things. And you even mentioned Delayed Job there, which has been around but maybe not so in vogue for a long time. I'm curious, have any of those decisions been based off of... Is anything hindering you from keeping things relatively up to date with, say, Ruby and Rails versions themselves?
 
[00:38:30.780] - Alexander Stathis
Yeah, no, not at all.
 
[00:38:32.140] - Robby Russell
That's good.
 
[00:38:32.840] - Alexander Stathis
I think we're on Ruby 3.4. We're on Rails 7.2. I think we are on 7.2.2-something. We're pretty leading edge, I would say. In a related realm, we've also adopted Falcon in one of our microservices. So I don't think so. We're pretty good about really grinding this stuff out. Honestly, this is fun. I like this stuff, personally. I did a lot of this stuff as side projects in parallel with product work, where I would just be like, okay, I'm going to upgrade this thing from Rails 6 to Rails 7. And it's a little sketchy. It's a little painful. Stuff breaks on you that you don't anticipate, but it's fun work, and I like it anyways. And it gets you really deep into Rails itself or Ruby itself, because you're forced to contend with these idiosyncratic things that you never knew. And I like that. So no, it hasn't really held us back. We're pretty cutting edge with all of that stuff.
 
[00:39:30.740] - Robby Russell
This episode of On Rails is brought to you by ApplicationController, where all roads lead and no logic escapes. Do you need to share a method across all controllers? Add it here. Want to run a callback before every action? You know where to put it. Do you need a place for auth logic, flash helpers, and a little panic code? It's waiting for you. ApplicationController is the one file that's holding your app together. And also maybe holding it a little hostage. Side effects may include confusion, long scroll sessions, and blaming something in here for half of your bugs. ApplicationController: because if you don't know where it goes, it probably goes here.
 
[00:40:08.580] - Robby Russell
I'm curious, if you're willing to answer this question, do other developers and engineers on your team have much experience also participating in those upgrades, or is it primarily you doing a lot of the heavy lifting there?
 
[00:40:23.260] - Alexander Stathis
We have a couple of people, me and another person, that I would say drive the majority of it, or have driven the majority of it, because I like it. It's easy for me to just do it, and it's not something that you have to do that often. Rails isn't releasing new versions every day. I mean, honestly, Ruby ships more versions than I realized. If you'd asked me five years ago, before I started working in Ruby, how often does a new Ruby version come out? I'd probably have told you every 10 years or something, right? But it's actually every few months or every couple of months or whatever. But it's not that hard with patch versions, with minors, usually, to upgrade. You usually don't run into that many issues. And so really, it's just the majors that are tricky. Rails is a little different. Rails will ship stuff in minors that requires some effort. But typically, I would say the hardest part about upgrading has two really key facets. One is fixing all the tech debt and monkey patches to the internals of whatever gems, Rails or otherwise, that you're reliant on.
 
[00:41:20.400] - Alexander Stathis
And the other is figuring out how to get the dependencies to come along with it. Because usually when you bump a Rails version, you also have to bump a bunch of other packages. And that can come with other stuff for sure.
 
[00:41:32.480] - Robby Russell
Do you have any team guidelines, or at least scenarios you evaluate, for when it makes sense to bring an external Ruby gem dependency into your applications versus just doing something yourself, so you're not so dependent on that gem potentially being a bottleneck for keeping things updated, or giving you too much, or not enough?
 
[00:41:54.340] - Alexander Stathis
Yeah, no, I don't think so. Historically, AngelList has been very high-trust engineering. And so if you have some problem and a gem solves it, let's pull that in. It's nice in Ruby, like I said, because the ecosystem is pretty high quality. In NPM, you get these packages that are eight lines of code, and it's like, oh, I'm pulling in this one method through a package. Each function is its own package. In Ruby, it's less like that. I mean, we certainly have adopted our fair share of technical debt from some packages. Like MimeMagic: we have our own fork of MimeMagic. We have some forks and stuff of these core dependencies that are not updated very often, and you end up with it. I'm a pretty big proponent of upstreaming stuff when you can. It's hard to do, right? And then also the gems themselves have to be maintained, which is another technical complexity. But I always try. I think we should always try. I would rather upstream something than fork. But usually what we do is we fork, slot our own thing in, and then begin this slow grind to try to upstream it so that we can get unreliant on the fork.
 
[00:42:55.640] - Alexander Stathis
But yeah, I would say, generally, I'm pretty open. I'll approve any PR that has reasonable justification and isn't sketchy. I don't know, maybe that sounds too willy-nilly. Maybe I should back off of that one.
 
[00:43:07.180] - Robby Russell
I don't know the answer to that, if there's one true way to do that either. I think, obviously, as a developer, it depends. But how does your team actually then keep track? I mean, obviously, you can look at your Gemfile and see which ones you have forked, because it's probably pointing to a different repository. Do you have a regular process or anything, or is it just top of mind? Everyone's like, oh, right, we have that thing. Let's see if that has already been addressed upstream, because maybe we're just waiting on that gem to eventually get updated so we can run it on the latest version of Rails or Active Record or whatever is potentially preventing you from just bumping up the gem itself. How does that process happen? Does it just materialize when you or someone comes across it and you're like, oh, right, we have this thing, I wonder if we can go back? Or is there a task to remind yourself every couple of months: let's go back and review these things?
 
[00:43:51.160] - Alexander Stathis
Yeah, it's a good question. I wouldn't say we have a formal process for it. I generally try to be pretty precise and excise any technical debt that I can find when I'm touching it. But that's me, right? That's certainly not true universally. Some people are just trying to ship the thing they're trying to ship, and that's totally okay. That's a reasonable tack to take. When I monkey patch, for instance, I try to leave comments with the context of the monkey patch in there. So like, hey, we're doing this because... And once they do this, or once this PR ships, or once this issue is resolved or whatever it is, we can remove this monkey patch and upgrade. It's not perfect. You end up with this stuff. It just becomes dead. I wouldn't say that we have the velocity of something like Shopify. And I don't know, I've never seen their repos, but I imagine they have thousands of engineers shipping stuff into these things. And so they probably have to be a little more careful about this stuff. I wouldn't say that we have enough people shipping enough code where we're growing exponentially in our number of monkey patches or forked gems or whatever.
 
[00:44:54.120] - Alexander Stathis
We have a bunch of internal gems that we've written, but mostly internal APIs and clients and stuff like that. But our number of forks is relatively low. It's probably less than 10, maybe even less than five. And so it's not incredibly hard to maintain. I mean, it's usually packages that aren't maintained or things that are very old. I think we forked Money at some point, which is a gem for manipulating money. MimeMagic comes to mind. We have a few different ones that we've acquiesced on over the years, for sure.
 
[00:45:24.140] - Robby Russell
Could you tell us a little bit about the story behind creating the Boba gem and how that relates to Sorbet?
 
[00:45:28.700] - Alexander Stathis
Yeah, totally. Sorbet is a gradual type system for Ruby, in case you're unfamiliar. And what that means is that you can enable it on a file-by-file or even a method-by-method basis. It's like TypeScript. If you're familiar with TypeScript, I would say it's not as mature as TypeScript, and it has some expressiveness issues because Ruby is such a dynamic language that it's really hard to... There's a great post by Jake Zimmerman on this. Definitely go read that if you're interested in this. But yeah, so Sorbet is a gradual type system. We use it very heavily at AngelList. It's awesome at preventing stupid mistakes. One of the core pieces of it is the static analysis tool, which is one very important half of it. It relies on type files for any gems or any DSLs, domain-specific languages, like Rails, for instance, that provide code at runtime. For instance, associations. By default, the static checker doesn't know about these associations, because they're generated from a method that's run on the class when it's instantiated. And so you can either hand-code these type files, RBIs or RBS, or you can generate them.
 
[00:46:38.760] - Alexander Stathis
And Shopify built a really great tool called Tapioca, which is playing on the Sorbet theme. What it does is it allows you to define compilers, and it ships with a preset collection of compilers that generate these RBI files for you. It runs a copy of your Rails app, and it reflects on the classes, and it uses this to generate these RBI files for you. So you make an update to your model, you add an association, whatever. We use Makefiles, so we have a make command, make rbi, and it just reruns it and shoots out the new RBI files. So the beauty of Tapioca is that it's extensible. If you have a domain-specific language that you need to generate an RBI file for, you can define your own compiler, and you can ship a compiler with it. It also ships with some default compilers for Active Record, for Rails, for various other gems that Shopify uses. I'm trying to pick my language here carefully. I'm not trying to offend anybody at Shopify. One of the things that's very annoying for us is that the default Active Record compiler is 100% type safe.
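What "100% type safe" means here can be shown with a hypothetical RBI fragment for a `User#email` column that is NOT NULL in the database (type declarations only, not runnable code; the model and column are invented for illustration):

```ruby
# Tapioca's strict default: even a NOT NULL column is typed nilable,
# because User.new.email really can be nil before the record persists.
sig { returns(T.nilable(String)) }
def email; end

# The "assume persisted" alternative a custom compiler can emit instead,
# trusting the database constraint:
sig { returns(String) }
def email; end
```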
 
[00:47:47.360] - Alexander Stathis
And anybody listening is probably like, that's a weird thing to find annoying, right? But in Rails, at least in our Rails app, it's very typical to assume that your objects, your Active Record models, are actually persisted. They're coming from the database. And because they're coming from the database, that means that they're subject to validations. They're subject to unique indexes or non-null constraints on your database. You can assume certain things exist. Also, Sorbet is not sophisticated, it's not mature enough, to be able to generate type files that can know whether something is coming from the database and should be beholden to these... For instance, if it has a presence validator, you might know that that attribute exists. It's not sophisticated enough to split the types, to know that that thing exists unless you tell it. So the default compiler generates everything assuming everything is nilable, because you can always do .new, and then even if you're an Active Record model, the ID could be nil. It could be nil. And so that's fine. It's 100% type safe. It's totally reasonable, but it's very annoying because you have to assert everywhere. And the way to do this in Ruby is very...
 
[00:49:02.180] - Alexander Stathis
It's not like TypeScript, where you can just add a bang, right? That's valid Ruby code, so you can't just add a bang. So you have to do T.must, or you have to explicitly check it, return if nil. And it's just super unergonomic, really unergonomic. I don't think I'm alone. I don't think we at AngelList are alone in this pain, because I'd seen a number of issues over the years and attempts. The backstory is we used a gem called sorbet-rails to generate our RBIs. That's now deprecated. It's been replaced by Tapioca. When we were trying to convert, the option was either go through and fix all 20,000 violations of this, put T.must 20,000 times, or figure out how to write a Tapioca compiler. So I wrote up what I thought were pretty nice changes, and I attempted to get them upstreamed into Tapioca. And there was some back and forth on the threads, and ultimately, I think it was the right call for them to make, but they decided not to accept those changes. That was a very long-winded story of saying, I created my own gem. So Boba is another play on Tapioca, because boba pearls are tapioca or whatever.
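Sorbet's `T.must` returns its argument and raises if the argument is nil. Since sorbet-runtime isn't loaded here, a minimal stdlib stand-in with the same shape shows the ergonomics being described:

```ruby
# Stand-in mirroring T.must's behavior (not the real sorbet-runtime API):
# pass non-nil values straight through, raise on nil.
def must(value)
  raise TypeError, "unexpected nil" if value.nil?
  value
end

email = "a@example.com"
puts must(email)  # non-nil values pass straight through
# must(nil) would raise TypeError, just as T.must(nil) raises in Sorbet
```

Sprinkling a wrapper like this around every attribute read is exactly the 20,000-call-site chore he's describing.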
 
[00:50:09.920] - Alexander Stathis
Well, one, it has our custom compilers that we've built at AngelList, right? Like sharing with the community. And it's intended to be a repo where people can contribute their compilers for other gems that Tapioca is not willing to accept into the main repo. It felt like there was a lot of demand for the ones we had built, and people were circling around the issue a bunch. So I created Boba and put them up there.
 
[00:50:33.580] - Robby Russell
Nice. I'll share a link, obviously, in the show notes for our listeners. And I was just looking at the list of compilers that are available in Boba so far. And I see things, aside from the Active Record things you mentioned: there's Kaminari, Money, even Paperclip. So you're accounting for these different scenarios there. And is there maybe even state machine-related stuff in there as well?
 
[00:50:55.520] - Alexander Stathis
Yeah. So there's state_machines, which we use, which is a gem that extends your Active Model objects to have these state methods on them. So a good example here is Money. Shopify actually has their own internal money gem that they wrote. It works a little differently than the standard Ruby Money gem that we're using. And so they're probably not interested in accepting that compiler. Actually, I think I'm aligned with them long term on the end state here, which is that ideally all of these gems would actually ship with their compiler. So it would be part of the gem, right? Money would ship with the Money compiler, and then Tapioca knows how to just load it up and run it, and you're using Money, so it works.
 
[00:51:37.240] - Robby Russell
Interesting.
 
[00:51:38.120] - Alexander Stathis
I think that's a pretty beautiful end vision, but we're not there yet. Sorbet, I don't think quite has the adoption or the buy-in from the general Ruby community quite yet. There's still some open questions to solve there around adopting it, like RBS and these other competing things. And then again, some of these gems are just not well maintained.
 
[00:52:00.100] - Robby Russell
Why do you think the community is still a little cautious about static typing?
 
[00:52:04.980] - Alexander Stathis
Matz. Matz has an opinion, right? I watched a Ruby World Conference talk or whatever it is a little while back, and he was like, I don't need it. I don't really want it, but I see why it could be useful, and I'm okay with it in certain cases, right? I think that's about as good as we're going to get. I think, like I said, it changes the way you write Ruby. But fundamentally, you have to alter the way you write Ruby. I think a lot of Rubyists use Ruby because they love Ruby. They love the stuff that lives outside of that Venn diagram of Sorbet and Ruby, outside the Sorbet-Ruby circle. The other part is, I saw a thing with, I think it might have been DHH, and he was talking about not using TypeScript or something, because he was like, I don't need it, and it just slows me down and whatever. I think people that like working in Ruby tend to like the fact that it's not so opinionated on how you do this type of stuff. It's like pointer arithmetic in C. It's bad, but everybody does it because it's useful.
 
[00:53:05.320] - Alexander Stathis
I think it's an uphill battle, but I think there's evidence of success there. We've used it to great effect. I mean, our new engineers coming in who don't know Ruby, especially coming from languages like C# or even TypeScript now, where you're expecting things to have types, and you're expecting the compiler or the runtime or whatever to protect you a little bit from yourself. It's really valuable in that regard. I can't speak for why anybody would or wouldn't adopt it in their specific projects, but I think culturally, it just changes the way you write Ruby. And I think generally, Rubyists like Ruby the way it is. They don't like that.
 
[00:53:41.840] - Robby Russell
Has using Sorbet changed how you approach testing and refactoring as a team?
 
[00:53:46.500] - Alexander Stathis
I mean, it has definitely made refactoring easier for us, for sure. That's one of the big reasons. I would say we don't use instance-based service logic super heavily. So we wouldn't create a service class and then instantiate it and then call methods on that thing with internal state. I went through an exercise recently where I was looking for source, the method source, S-O-U-R-C-E. And you'd be amazed how many sources there are in a codebase. At least in ours there are. Doing a refactor where you have to fundamentally alter source, and it exists across all these models, and it's used all over the place, and you don't have any type system or type safety... Having a static analysis tool like Sorbet is super powerful. It gives you a lot more confidence. It makes broad refactors way more chill. You can literally comment out the method and see what errors, and then suddenly you have a pretty good idea of what you need to change, what you need to update. In terms of testing, not really. It hasn't altered that a ton. We actually don't have Sorbet enabled in tests, because we use FactoryBot a lot, which is a gem that helps mock out Active Record data, and it actually creates the records in the database.
 
[00:55:03.320] - Alexander Stathis
It has a really nice feature. I mean, if you're not familiar with FactoryBot or haven't used it, it's a pretty popular gem for this type of thing. But it's really hard for Sorbet to handle, because you use create as a method and you pass in a symbol, which is the name of the factory, and it pops out an Active Record object. So we could use it. It would still give us type safety on our specs. You can generate... Even RSpec is not super well supported. Like I said, you have to generate type files for all of these DSLs and stuff. And so we just have it disabled in our test files. But I think your question was more around, do we write fewer tests or do we write more tests or whatever? And I would say, no, we write the same tests. I mean, we like to have unit tests or smoke tests, but I would say we don't have a ton of smoke testing. What we have more is we want to prevent regressions. We have very complex business logic. We were talking about this earlier. And we want to make sure that the nuances of that logic are captured in tests so that other engineers can have confidence in making changes.
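To make the typing problem concrete, here is a toy, plain-Ruby sketch of the shape of the issue. It is not the real FactoryBot API; the factory registry and the `Widget` and `Gadget` classes are invented for illustration:

```ruby
# A factory lookup keyed by symbol: the return type depends on a runtime
# value, which is exactly what a static checker like Sorbet cannot see.
Widget = Struct.new(:name)
Gadget = Struct.new(:size)

FACTORIES = {
  widget: -> { Widget.new("default widget") },
  gadget: -> { Gadget.new(3) },
}.freeze

# Mimics FactoryBot's `create(:widget)` call shape.
def create(factory_name)
  FACTORIES.fetch(factory_name).call
end

p create(:widget).class # prints Widget, but a static checker sees T.untyped
```

Statically, `create` returns "whatever the looked-up factory returns," so without generated RBI files every call site comes back untyped, which is why disabling Sorbet in spec files is a pragmatic choice.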
 
[00:56:03.620] - Alexander Stathis
Because ultimately, this allows us to move faster as an engineering organization. So, yeah, they work hand in hand. I think they work together. I don't think you need to change your testing.
 
[00:56:14.760] - Robby Russell
When your team is thinking about testing and preventing regressions, how often, when a weird edge case pops up, do you then make sure that you include that in your tests from here on out?
 
[00:56:26.480] - Alexander Stathis
We use RSpec. We use FactoryBot to mock stuff. We run our full suite of tests on every branch in CI; it has to all pass before it merges. I don't know if this is standard practice or not, but that's what we do. If you add a test, it now gets run forever. That's good if it works. It's bad if it's a flake. But I would say that if I hit an edge case in very tricky logic and it's not clear to me at the outset why it is that way, I will 100% add a test to try to enforce that. When you have stuff that's like, Hey, calculate the amount of money this person is going to get when they... whatever. And the state is super complex and you're modeling an incredibly difficult allocation waterfall. Maybe that means something to you, maybe it doesn't. But you're doing this very complex calculation and a number pops out, and the inputs are this huge data graph. You tend to want to try to capture the edges of those in a way that the next person who has to touch this thing is not going to shoot themselves in the foot by being like, Why is this weird thing here?
 
[00:57:28.320] - Alexander Stathis
Let me just remove that. That's unnecessary. And then that creates a regression and some number is wrong, and you've got to figure out how to solve that in an incident or whatever.
 
[00:57:38.180] - Robby Russell
Out of curiosity, you've touched on this a number of times, saying how complicated your domain modeling is and your type of business there, and you're dealing with finances, money, all that, making sure people get the right amount of money or things get calculated in an appropriate way. I guess what I'm trying to get at is, do you have a lot of hard-coded logic for very weird, specific things in there, or are you able to abstract that in a pretty healthy way, do you feel like?
 
[00:58:05.440] - Alexander Stathis
Yeah, it's a good question. I'm sure at this point, I sound like a jerk. Everybody's like, Oh, your business is hard. I don't know. It's hard for me. That's where I'm coming from. For me, it's pretty thoughtful stuff. I have to spend a lot of time thinking about it to get it right. But maybe smarter people than I wouldn't find it difficult. We do have some hard coding, but it's usually because our product is built in a way to assume a lot. How do I say this? In the legal world, at the end of the day, what we're trying to do is implement the terms of a legal and economic contract. That legal contract is pretty precise about a lot of stuff, but it can be pretty imprecise about other stuff. It can really come down to, what was the economic intent that everyone agreed on in some email? Or what was the preconception that someone had coming into the agreement? Our system works in the volume case, in the general case. It works the way that we expect it to work. But every once in a while, you have somebody come in and they're like, Actually, I thought this.
 
[00:59:11.830] - Alexander Stathis
My lawyer interprets this clause slightly differently. Really, if it can be written in a legal contract and we agree that we're willing to take on the fund or whatever it is, we have to support it. In the business that we're in, you don't get the option of being like, Well, your lawyers are wrong. You can, but that's lawsuit territory. You're just like, No, that's not where you want to be. We've had to bend over backwards a little bit in some ways to try to support things, and we do have that to some extent. I think we'd like to build in a way that's a little more configurable over rigid, but it doesn't always work out that way. You do sometimes have to have hard codes. But I would say typically hard codes happen for us not in our deep business logic. They end up in the presentation layer. Hey, our customer would really like it if this was displayed a little differently, or they want to use this custom whatever. Can you make that work? And it's like, Yeah, sure. We'll just ship it.
 
[01:00:08.340] - Robby Russell
I was thinking, without having seen anything about your existing code base there, I work in the consulting space, so I get invited into situations. And you mentioned you've organized everything into, say, services. Are there files with specific clients, or whatever you call your, air-quoting, customers or clients, with their names in the thing? You've got a collection of 50 different files for each of these clients? There's a file that's almost copy-pasted, and then you're modifying it for that particular business entity? No, of course not, Robby.
 
[01:00:40.440] - Alexander Stathis
We're premier. All of our engineers are perfect.
 
[01:00:43.340] - Robby Russell
Is that even a bad thing? It's like, that's the business logic for that particular client.
 
[01:00:49.540] - Alexander Stathis
Totally, yeah. We have some of that. It's not written because we're like, Oh, we have this one whale, for lack of a better term, client, and we're going to just build stuff to make it work for them. In the past, we've generally had this attitude of, If we're going to build it, we want it to work for anyone, because it becomes a value prop for all of our future customers as well. That said, we do have a lot of code where we just shipped it to make it work because we were trying to make the sale. We're a startup. You live or die on the next sale sometimes. So over the years, we've definitely shipped a lot of code and we've done some things, and we've gotten ourselves into some situations where there's definitely a customer-specific service here or there. We have an edge cases service where we try to pull in any super gnarly logic that needs to live somewhere common. We also collect our monkey patches in a similar vein. If we're going to monkey patch something, we want to know where it is. We want to know where to look when we're thinking about why something is behaving differently than expected.
 
[01:01:51.240] - Alexander Stathis
We're not perfect about it, but if a customer wants something and we're agreeing to build it, we generally try to build it so that we can give value to our other customers as well. And then if we're not and we're breaking the rules and just making it work and getting somebody over the line or whatever it is, then we try to keep that contained, keep the footprint narrow.
 
[01:02:10.560] - Robby Russell
That makes sense. Alex, another thing I wanted to go back on: you've mentioned a number of times how you're using GraphQL. Do you also have any plain RESTful API endpoints, or are you primarily doing everything through GraphQL? And is that primarily to support the Next.js apps? I'm trying to understand how that came into play. But also, do you think that's making things easier for you or harder as a team?
 
[01:02:33.200] - Alexander Stathis
We do have some older controllers, certainly a lot of internal APIs. Those are RESTful. We're not using GraphQL. Well, they're more RPC, right? But we have a bunch of internal controllers that we're using to communicate between services. GraphQL comes with a little bit of performance overhead. You have to parse the query and all the JSON and whatever, which you can avoid with internal stuff. You don't need the whole powerful query language built on top of that either. So it's mostly for our Next.js apps, mostly for our consumer-facing products. I would say the big thing that GraphQL really provides us is that it allows the client to define the data that it needs. This is one of its core features. That's really useful for us because we have really complex data that can be computationally heavy. We can be really precise about which fields we want or not, and really flexible in this view or that view, this component or that component, about querying this or that field. When you're working with a RESTful or RPC-style API, you either have to cut a separate endpoint for each of those different use cases or you have to modulate the response, which is a little awkward in a RESTful situation.
 
[01:03:37.600] - Alexander Stathis
GraphQL just really gives us the ability to be really precise about the data that we want from the backend, and also to move very quickly. I can set up a new view and all the data is already there, the type is already defined. I just need to write the query on it. The trade-off comes from the fact that you don't know what you want beforehand. If the query is nested very deeply, it's tricky. You don't know what it is beforehand. You don't know what the client is requesting. And so you can't just naively preload it. If you have a bunch of deeply nested Active Record objects, you'd typically use an Active Record preloader and preload the objects beforehand to prevent N+1 queries. We use a gem called GraphQL Batch Loader. It was written by an ex-AngelList engineer, Jeff Genny. It's used by some other people. There's a bunch of solutions out there now for this. There's also one called Batch GraphQL Loader or some other incantation of those words. GraphQL Ruby has their own thing called Dataloader that they've come out with now. We tried to switch to that. It didn't work super well for us.
 
[01:04:34.510] - Alexander Stathis
We ran into some other issues. But what it does is it essentially allows you to wrap a field in your GraphQL schema and batch those together, so they're lazily evaluated. So instead of querying the collection of widgets and then loading an association for each widget one after another, doing the classic N+1, you can wrap that field on the widget type in the batch loader, and it will lazily evaluate them all together and load them all up at once. And so we can do that to prevent those N+1s. We have a little bit of a custom field class that we've written to make it easier to add these types of preloads to each of the fields. That code, I'm pretty sure, I found on a Stack Overflow thread somewhere, although it's since gotten boosted. One of our DevOps wizard guys, Corbin, who I love, he's incredible, looked at how Rails handles through associations, how it handles preloading in that case, and how it predictably reflects to determine what's a preload, and added all of that to it, too.
 
[01:05:39.840] - Alexander Stathis
So now you don't even always need to specify exactly what the preloads are. Sometimes it can just get it right, which is pretty slick.
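As a rough illustration of the mechanism being described, here is a self-contained toy batch loader in plain Ruby. It is not the actual graphql-batch API, just the underlying idea: queue keys, hand back lazy handles, and resolve every queued key with one batched load on first access.

```ruby
# Minimal lazy batch loader: collect keys now, load them all in one shot later.
class ToyBatchLoader
  def initialize(&batch_fn)
    @batch_fn = batch_fn # receives every queued key at once, returns {key => value}
    @queue = []
  end

  # Queue a key and return a lazy handle instead of loading immediately.
  def load(key)
    @queue << key
    -> { resolved[key] }
  end

  private

  # First call triggers ONE batched load for all queued keys, then memoizes.
  def resolved
    @resolved ||= @batch_fn.call(@queue.uniq)
  end
end

db_calls = 0
tag_loader = ToyBatchLoader.new do |widget_ids|
  db_calls += 1 # stands in for one `Tag.where(widget_id: widget_ids)` query
  widget_ids.to_h { |id| [id, "tag-for-#{id}"] }
end

# Resolving a field for five widgets: five lazy handles, one "query".
lazy_tags = (1..5).map { |id| tag_loader.load(id) }
puts lazy_tags.map(&:call).inspect
puts db_calls # one batched load instead of five
```

A GraphQL field wrapped this way gets resolved alongside every sibling that queued the same loader, which is how the classic N+1 collapses into a single query.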
 
[01:05:46.380] - Robby Russell
You mentioned earlier that a lot of the developers coming in might not already be Ruby on Rails developers. We could maybe get into the whys of that, or not. But if they're coming from other tech stacks and backgrounds, what does that onboarding experience look like for getting people ramped up with Rails? Do you have any onboarding tools that you use to help developers spin up their local development environments? If you have all these different services and you've got Delayed Job, you've got GoodJob, what does that environment look like at the moment?
 
[01:06:14.020] - Alexander Stathis
Yeah, totally. It's a great question. We have a Notion doc. It has the steps to reproduce a dev environment, 100% deterministic, definitely no issues there. It's all Apple Silicon. We're all on MacBooks. We use Docker.
 
[01:06:28.650] - Robby Russell
Install Homebrew.
 
[01:06:29.600] - Alexander Stathis
Yeah, exactly. Install brew, and then brew install, here's a list of things. And then we use asdf. It's a tool version manager, so you can define on a folder-by-folder basis what versions of what tools you need. It uses plugins, so we maintain our pnpm versions and Ruby versions through that, and you just go to the directory, run asdf install, and it installs all the things you need. So we've got it wired up a little bit. And then we use Docker to containerize some of our dependent services, like our DBs and Redis, or Reddis, again, I don't know how to say it, stuff like that. We use a company called Tonic, and I'm not sure what people's experiences with them or other solutions are, but what it does is it takes our production database and allows us to define which columns we want to scramble. Then it produces a subsetted version of our production database with scrambled data, so no PII, and then we can load that locally. So we have some Makefile scripts that we've written that make that pretty slick, some Argo jobs, stuff like that. The cool thing about it is that it preserves primary keys.
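For reference, asdf pins versions with a `.tool-versions` file at the repo root, and `asdf install` run in that directory installs everything listed. A hypothetical example (the version numbers are invented for illustration, not AngelList's actual pins):

```
# .tool-versions
ruby 3.4.1
nodejs 22.11.0
pnpm 9.15.0
```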
 
[01:07:37.240] - Alexander Stathis
If you have multiple services running, you can install consistent Tonic images for each database, and they all cross-reference correctly, assuming that the subsets overlap. There's still a little bit of work to do here. It's not magic. But you can get relatively good production-like environments. Like I said, for us it's really important that we do that, because our data graph is quite complex, and there are a lot of relationships in there, and it all has to be there or not. You can't just pull in one table. Our flows are pretty complicated. So trying to create new stuff is... We had for a while some docs that were like, Here's how you spin up a new thing like this or do that. But being able to just load up the app and have it work is a pretty sweet dev experience. It still requires some maintenance. Migrations still affect this thing. We have to regenerate it pretty regularly. I find myself in there every once in a while adding a relation. You still have to maintain the data. The data is still organic. It's still growing, still changing shape. New things are getting added, things are getting removed.
 
[01:08:41.500] - Alexander Stathis
And so you still have to maintain that. You can't get away from that, I don't think. It automates a lot of the ETL step of taking this thing and turning it into a thing you can use locally.
 
[01:08:51.400] - Robby Russell
Does Tonic then literally bring down that data snapshot that's been scrubbed, so there's no private information about any of the real user data? And is that loading into your local Docker containers and everything, if you've got Postgres or MySQL and things like that? Or are you making a network connection to an endpoint that Tonic is providing your developers?
 
[01:09:14.760] - Alexander Stathis
It uses a self-hosted, self-managed solution. So the data doesn't leave us. It's not going to Tonic. That's important. That's really, really important. Our data is not leaving. What it does is it loads up off of our production DB, the way we have it set up, and I'm sure you can do this different ways. It loads it up, transforms it, and then pushes it down to another database. And then we have an Argo job. Argo is our CI/CD slash cron tool. It runs a Rake task, which actually makes a physical copy of that database. It copies the database files into a tar archive, which goes to S3, and then you can download that tar and use it to just replace your local data. You stop your database container, replace all the file contents, then bring your database container back up, and there the data is.
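The snapshot hand-off can be sketched end to end with a toy version in plain Ruby, using local directories in place of S3 and the real Postgres volume; every path and filename here is invented for illustration:

```ruby
require "fileutils"
require "tmpdir"

workdir = Dir.mktmpdir
src = File.join(workdir, "pgdata_src") # stands in for the scrubbed DB's data files
dst = File.join(workdir, "pgdata_dst") # stands in for the local container's volume
FileUtils.mkdir_p(src)
File.write(File.join(src, "base.dat"), "snapshot-contents")

# Producer side (the CI job): tar up the database files and "upload" them.
tarball = File.join(workdir, "snapshot.tar")
system("tar", "-cf", tarball, "-C", src, ".", exception: true)

# Consumer side (the developer): stop the DB container, replace the data
# directory's contents from the tar, then start the container again.
FileUtils.rm_rf(dst)
FileUtils.mkdir_p(dst)
system("tar", "-xf", tarball, "-C", dst, exception: true)

puts File.read(File.join(dst, "base.dat")) # prints snapshot-contents
```

Replacing the raw data directory like this is why the container has to be stopped first: Postgres cannot have its files swapped out from under a running server.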
 
[01:10:09.040] - Robby Russell
Okay.
 
[01:10:09.520] - Alexander Stathis
So it's a little bit of a pipeline, but it works relatively well.
 
[01:10:14.420] - Robby Russell
That's good. I'm always curious about how teams are thinking about seeds, or keeping things efficient locally for developers, or like, Hey, we need to spin up a couple more of this type of user. How does that actually end up working? If you're building out a new set of functionality and the data for it doesn't already exist, what does that process look like? Are people using seeds in that scenario, or some rake tasks to spin up some data locally?
 
[01:10:39.960] - Alexander Stathis
Yeah, I mean, the way it's set up is that it's a live copy of your schema locally. You can run migrations on it. And so if someone's adding a feature on a branch or shipping something, that's not going to impact me. I can just run the migration. We have, again, Makefile scripts, make migrate or whatever, and it runs and applies the migrations to this database, and it works fine. Or if I want to pull in some data from Tonic, I can kick off a generation job in Tonic, and then some time goes by, and I can run make setup again or whatever, and it pulls the data down and installs the new data. So all that's really required there, at least with Tonic, is you've got to go into the UI and set up the relations so it knows what to pull into the subset or leave out of the subset. You can configure it a little bit. But then we've got the pipeline set up so that it's relatively straightforward. You can kick it off and then pull it down when it's done.
 
[01:11:31.520] - Alexander Stathis
It's not a fast process, and it certainly has its gaps. It's a solution. I think this is one of those things where, I don't know, if somebody has a great solution, definitely let me know. But I don't think there's a great answer here at all. I don't think you can just say, Oh, here's the end-all solution that everyone should use, and this is what works. I think this is just a tough problem. Your data is organic. You have to come up with states that are consistent. It's expensive to maintain, because the code is a living, breathing organism, changing over time, and it's hard to keep seeds or whatever solution you have in sync with that. The other way I would put it is that people still complain, but it works mostly, and it's relatively maintainable. And that's, I think, what you really want, right? It's something that you feel like you can keep on top of, because in the past, with hard-coded seeds or whatever it is, it gets really hard to keep on top of. It's just a real pain.
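The make targets mentioned above might look something like this; the target names and script paths are hypothetical, not AngelList's actual setup:

```make
migrate:  ## apply pending Rails migrations to the local snapshot DB
	bundle exec rails db:migrate

setup:    ## pull the latest scrubbed Tonic snapshot and swap it in
	./scripts/pull_tonic_snapshot.sh
	./scripts/replace_db_volume.sh
	bundle exec rails db:migrate
```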
 
[01:12:26.020] - Robby Russell
I'm curious, in those scenarios, do you have enough data, typically, to address a performance issue that only seems to happen when there's a high volume of data? Or is that much information something you're not going to be able to run locally, and you're maybe not even cloning that to some staging-type data environment?
 
[01:12:45.580] - Alexander Stathis
Yeah, sometimes. Production issues come in a lot of different flavors. Latency is a particularly gnarly one to try to unwind. Where is it coming from? What's slow? Sometimes the data is the issue, and sometimes you can nail it, if you have enough of it or whatever. Other times, you just have to get into Datadog and start looking at the flame graphs. One of the nice things about our GraphQL setup is we have it instrumented so that each field has its own span in Datadog. And because Rails is largely not asynchronous or parallel, you get these really pretty flame graphs of the fields resolving themselves, and you can easily identify the big ones. It's as simple as looking at the thing and seeing which one's longer. Sometimes it works locally, but a lot of the time I think you have to try to repro it in production, or you have to look in Datadog.
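A toy sketch of the per-field span idea, in plain Ruby rather than the actual Datadog or GraphQL Ruby instrumentation API; the field names and timings are invented:

```ruby
# Wrap each "resolver" in a timing span; because resolution is sequential,
# the recorded spans nest like the flame graphs described above.
spans = []

def time_field(name, spans)
  started = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  result = yield
  elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC) - started
  spans << { field: name, seconds: elapsed }
  result
end

time_field("widgets", spans) do
  sleep 0.01 # stands in for an expensive resolver
  time_field("widgets.tags", spans) { [:a, :b] } # a cheap nested field
end

# The big span is easy to spot: just sort by duration.
slowest = spans.max_by { |s| s[:seconds] }
puts slowest[:field] # prints widgets
```

Real instrumentation reports these spans to a tracer instead of an array, but the "outer field's time includes its children" nesting is what makes the slow field jump out visually.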
 
[01:13:37.540] - Robby Russell
That makes a lot of sense. I'm thinking about how different teams are navigating this, and there's always a lot of contextual things. I'm particularly trying to ask some questions around this with a lot of the different guests I'm bringing on the show, just to show that there's a lot of variety to it. But also to help share what is maybe the less exciting part of what we're solving sometimes: how do we just make the local developer environment a little bit simpler to get up and running? Because rails new is amazing, and you can start really quickly. But when you're coming into an existing job, there's existing code and existing data. It's a lot of things to wrap your head around as a developer. How do I have something somewhat realistic that I can start working with to understand everything? And there's not a strong opinion in the Rails ecosystem, necessarily, because we're relying on third-party tools or approaches, or people are literally just taking database dumps and scrubbing the data. There are gems that do some of that as well, but it's not glamorous, I think, in a lot of ways.
 
[01:14:28.140] - Alexander Stathis
Yeah, I wouldn't describe what we're doing as glamorous. It's definitely not. It still requires elbow grease. Again, I don't know that there's a great solution to this. I view it like writing a Notion doc or writing a document. You can write a document which explains how code works. That's good for usually a few days, maybe a week, and then that thing is no longer relevant. I mean, I hope your company is shipping enough that that's true, not to be too opinionated. But yeah, it's like a document. When you write a document that explains a technical thing or an area or a calculation or whatever it is you're trying to capture, it requires constant tending. You have to be willing to maintain it. And I just don't know that there's a way around that with code that's changing. You're adding models, you're removing models, your business logic is changing, the assumptions that you have around the relationships in the models are changing. I don't know how you can allow the code to evolve and organically grow, and in some cases shrink, because shrinking can be bad in seeds also.
 
[01:15:32.960] - Alexander Stathis
You end up with these things that don't work anymore. I don't know how you do that without a gardener, without someone to constantly tend the garden. With Tonic, it automates a little bit of it. You just pull the new table in and you can say, Get rid of these columns. And that works pretty well because it's automated. I don't have to go and write each column or generate data or whatever. It's just there. But it also requires its tending. You have to take care of it. And you also have to make sure that you're scrubbing the data that you need to be scrubbing and doing justice to the data itself.
 
[01:16:06.080] - Robby Russell
I think that makes a lot of sense. Were there other tools like Tonic that you recall evaluating?
 
[01:16:10.980] - Alexander Stathis
No, actually, I wasn't part of the decision to adopt it there. I'm not sure where that came from.
 
[01:16:15.610] - Robby Russell
Okay, interesting. Well, we'll definitely link to Tonic in the show notes. This is, I think, an ongoing conversation I want to have with a lot of people. Maybe I'll compile a list of a bunch of these different approaches at some point, because there's no one right way to do it.
 
[01:16:27.980] - Alexander Stathis
It'd be cool to hear if other people have alternatives that work for them that are similar or different or whatever. This is just what we've made work.
 
[01:16:35.740] - Robby Russell
I recently talked to a couple of developers who work at Doximity, and they were mentioning that they have some tooling like Tonic as well, but they also have a bunch of rake tasks behind a web UI. They're like, Okay, I'm working on this part of the application. I need five of these types of users, three of these types of things. And it'll go generate the data so that developer can work on that area of the code base. But I think they've also got like 40 or 50 services there. It's a much bigger environment, at least in number of engineers and number of services. I thought it was an interesting approach, but it's still not standardized; everybody's having to make up their own solution for this. I think that's just part of the job of having an engineering team: you're going to have to figure this stuff out.
 
[01:17:18.300] - Alexander Stathis
Yeah. It's one of those jobs that's not really owned by anybody. What I mean by that is, if I go add a feature and it requires, let's say, a new table, if I'm being good, if I put my halo on and I'm being the proper engineer that I should be, then I'm going to go grab the seeds.rake or whatever it is, and I'm going to update it and add the table and do the whole thing. But in the real world, I don't know, this is the constant gardening. If you're a relatively new engineer at a company, maybe you're not familiar with Rails, or maybe you're not familiar with that setup. You don't know that seeds.rake is a thing. So you add your new table or whatever, and you don't do that. And then the next new engineer who comes along is the first person to hit it, right? Because everybody else has already got their setup doing it, and they run into it. And so my take here would be that it doesn't matter what your solution is for this, maybe, but you need a gardener.
 
[01:18:10.200] - Alexander Stathis
You need someone who's willing to take that role on and be like, yes, I'm going to keep this thing healthy. And if you have that, then it'll work, whatever it is. Someone new will hit it, they hop on a huddle or a Zoom chat or whatever it is and debug it, update the seed so the next person doesn't hit the same issue. But it requires that proactive gardening, I think. And that's worked really well for us at AngelList. I do a lot of that gardening. I enjoy that type of work. There's a few other people that pitch in and help out with that as well.
 
[01:18:39.610] - Robby Russell
Do you keep a separate backlog of gardening type tasks?
 
[01:18:42.580] - Alexander Stathis
We have a platform team. We do a little bit. For a long time, I just had a... You can chat to yourself in Slack, and I would just keep a list of things that I wanted to do in Slack. We're a little more sophisticated now. We use Linear to track work. So, yeah, nowadays what we would do is create a Linear issue and push it to the correct team. We have a platform team, and that name's a little overloaded. In your typical environment, it would mean your DevOps, your deployments, CI/CD. Our team is a little more like, they own our core services, like authentication, or they own the design system that we use. We have our own internal design system. They'll own some of the CI/CD stuff. They'll own some of these in-between projects, like upgrading packages, things like that. That's probably where something like that would land. Dev experience would fall squarely in their domain.
 
[01:19:34.340] - Robby Russell
So at AngelList, you're using these different tech stacks now, but primarily Rails and Next.js, and you've got Node. Are you using TypeScript as well there, if I recall?
 
[01:19:44.960] - Alexander Stathis
Yeah, totally TypeScript.
 
[01:19:46.460] - Robby Russell
How do you find Rails to still be part of AngelList's, say, secret sauce? Or is that an assumption that it is?
 
[01:19:53.260] - Alexander Stathis
Yeah, I think it comes down to Rails being pretty easy, actually, to learn. Like I said, the ecosystem is super high quality. It's funny, because when I started at AngelList, I didn't know Rails, I didn't know Ruby. At my previous job, I had written a lot of SQL. Every engineer goes through this phase where they're like, SQL! And I came in, and I remember this thread where somebody was like, Hey, how can I get this? And I was like, Oh, here's how you'd write the SQL, but I don't know if you know whatever or whatever. And I just remember being so baffled by Active Record and the Rails console and all of this. And now I'm like, I don't even want to write SQL anymore. I just want to work in Active Record. And so I think one of the beauties of Rails is that the whole thing works like that. You start to get it and you start to work with it. And at least for me, the more I use it, the more I fall in love with just how easy it is. Like, rails new is magic. You do rails new and the thing just works, right?
 
[01:20:47.680] - Alexander Stathis
And so I think we can bring engineers in who, because of some trend shifts or whatever in the industry, don't have any Rails experience. And in less than three months, in a month even, they can be writing proficiently in our code base and shipping code. And it's not that hard to figure out how to add a GraphQL type in GraphQL Ruby or whatever it is. And so I would say our ability to continue to ship product and deliver value and build cool shit is due in large part to Rails. You could do that in Node. Express is not so complicated. But I think Rails has really organically evolved into this thing that's just really nice to use, and it just works. And you can bring anybody in and they figure it out pretty quickly and ship stuff.
 
[01:21:33.080] - Robby Russell
I like that. It's really refreshing to hear. Given that you came from a different tech stack, you found your way into Rails, you think it's cool, you can build cool shit with it, you're able to ship things for AngelList, and you're able to onboard people in a month or two who didn't have Rails experience. That's quite a testament not just to Rails, but also to how AngelList has been able to benefit from it, and your team culture there, I would imagine. I do talk to some people who, when they're thinking about trying to hire, are like, Well, it can sometimes be hard to find Ruby on Rails engineers, depending on the job market at that point in time. But AngelList has been able to make that not necessarily the requirement. Was that a conscious thing, as you recall? Are you just not requiring people to already have Ruby on Rails experience, and instead hiring for aptitude, knowing that people will figure this stuff out as well?
 
[01:22:24.620] - Alexander Stathis
Yeah. This is going to sound really opinionated, but I think any engineer worth hiring should be able to learn any language. I don't care if you write, I don't know, Scala, right? And you're using some gnarly functional Scala stuff. It's not easy for me to write Rails code and then immediately switch to Scala and write functional Scala. But if you gave me a month, or a couple of months, I feel like I could figure it out. And I feel like any engineer who's worth hiring should be able to make that switch, especially because, dude, we do not write crazy Ruby code here. We're not doing a lot of metaprogramming, stuff like that. So if we hire somebody in, I expect them to be able to read the Ruby code. It's not super different from reading any other code. I'm not super closely involved in hiring, but we do have some trouble finding Ruby engineers. Rails just is not in vogue the same way that it was maybe 10 or 20 years ago. This is what I'm told, right? And so it's hard to find, yeah, wizard Ruby people, Ruby poets, right?
 
[01:23:26.620] - Alexander Stathis
Or Rails poets. But like I said, it hasn't really held us back in terms of... At the end of the day, what we're trying to do is make cool things and deliver value to customers and innovate the industry and help startups start up and innovate and change the world. That's what AngelList is all about. And I don't really care if you know Ruby or not to do that. The tech stack is something separate. And so it does make hiring hard. This is what I've heard, and I can only relate this second-hand, but it does make hiring hard. It's really hard to find Ruby on Rails engineers in this day and age. But I also just think it's not that hard to learn or pick up, and any engineer worth hiring should be able to do that. So it hasn't really hindered us. It hasn't seemed like a problem.
 
[01:24:20.140] - Robby Russell
That's great to hear. And it's interesting, because I mentioned this earlier, but given that I work in the consulting side of things, prospective clients come to me and my team at Planet Argon expecting us to have people that are well-versed in Ruby and Rails. I don't really have the luxury of hiring someone and waiting a few months to see if they figure out Rails so they can show up and be, air-quoting, an expert in a consulting engagement. So we have to specifically find those types of people with that skillset. But we'll get called into situations where it's like, Hey, can you help our team ramp up on some of these Rails things? Or, We've got some inconsistencies, or some of the developers that were around early on when we started building this application are no longer here. We have a bunch of people on our team now that have inherited this tech stack. They understand a lot of it, but they don't really understand some of the Rails ethos, or they may not even know who DHH is. That actually happens. That is a reality of a lot of teams of developers.
 
[01:25:14.300] - Robby Russell
Like, Oh, I'm just working on this thing called Ruby on Rails that this team picked 15 years ago. I got a job, and that's how I got ramped up. All that to say, there are these different spectrums, and there are a lot of really talented Rails developers out there, and I'm trying to help highlight those people. We're sharing these stories, and maybe this will help AngelList find more of those Ruby and Rails poets in the future.
 
[01:25:37.160] - Alexander Stathis
Yeah, totally. We're hiring, and we'd love to have you. I would say most of our engineers are not... The poet analogy I really like, because it's like you can know a little bit of a language that can get you by in a country, or you can be conversational, or you can be well-read in the canon, or you can be a poet, however you want to grade it out. I think for any company to be successful in their tech stack, they have to have at least one. Probably there's some perfect mix. You have to have one person who can write poetry or whatever. Most of our engineers are not that way. They're not digging into the internals of Rails or doing any of this really complex stuff, but that's not necessary. At least for what we're doing, it's really not necessary. When I say people come in and they learn Rails in a few months or in a month or whatever it is, or Ruby or whatever, these aren't people who are like wizards. They're just like...
 
[01:26:30.810] - Robby Russell
They're not monkey patching yet.
 
[01:26:32.620] - Alexander Stathis
Yeah, but I think they come in and they can work in it. At the end of the day, when you're a software company, what's your goal? You're trying to deliver value to your customers. You're not trying to write, like, Rails poetry. You need that sometimes. That's just part of solving the problems. But most people can get conversational pretty quickly, and that works pretty well for us.
 
[01:26:54.600] - Robby Russell
All right. I've kept you long enough, Alex. I have a couple of quick last questions for you. Is there a programming book that you find yourself recommending to peers?
 
[01:27:03.160] - Alexander Stathis
Yeah, so I don't really read too many programming books. I did read semi-recently Staff Engineer by Will Larson, which is not a programming book, but it's maybe relevant to some people listening. It helped me conceptualize some of the things that I was doing as a senior engineer, and some of the things that I wanted to be doing or not wanted to be doing. And so I thought that was pretty useful.
 
[01:27:26.740] - Robby Russell
I'll definitely include links to that book, and you said that was Will Larson, in the show notes for everybody.
 
[01:27:31.440] - Alexander Stathis
Until recently, he was the CTO of Carta, which is one of our competitors. Although recently he's moved on to another role, another opportunity.
 
[01:27:40.400] - Robby Russell
Interesting. Where can folks follow your work or learn more about what you're building over at AngelList?
 
[01:27:46.060] - Alexander Stathis
Yeah, I unfortunately don't do a great job of having a public programming persona. You can check out my GitHub, I guess, but it's mostly in our private repos. I do occasionally do a little bit of side work or whatever for my own personal pleasure, but yeah, not a ton of public programming stuff.
 
[01:28:03.420] - Robby Russell
Again, that's one of the things I'm really trying to accomplish on this podcast: I'm trying to have conversations with people that are working in the weeds, who have a lot to share with the rest of the community, but who aren't necessarily hanging out on Twitter or Bluesky all day or broadcasting on an engineering blog. So thank you so much for stopping by to talk shop with us, Alex. I really appreciate that.
 
[01:28:24.040] - Alexander Stathis
Yeah, thanks. It's been great. I've enjoyed it a lot.
 
[01:28:28.280] - Robby Russell
That's it for this episode of On Rails. This podcast is produced by the Rails Foundation with support from its core and contributing members. If you enjoyed the ride, leave a quick review on Apple Podcasts, Spotify, or YouTube. It helps more folks find the show. Again, I'm Robby Russell. Thanks for riding along. See you next time.
