Unlocking Clojure’s Power: Insights from Sean Corfield

Our third episode is full of amazing learning opportunities. Sean Corfield is a veteran senior software architect who previously served on the ANSI C++ standards committee and has created or contributed to various open-source Clojure projects such as clojure.java.jdbc, next.jdbc, core.cache, core.memoize, tools.cli, clj-new, HoneySQL, and clj-time.

Host:

Hello, Sean, welcome to the podcast!

Sean:

Hi, thank you for having me on.

Host:

Very excited to have you on. This is something I’ve really been looking forward to, and I’m sure the people will as well. I want you to explain exactly who you are, and just give the people a short synopsis of where you work, what you do, and your experience—things of that nature.

Sean:

Sure, yeah. Sean Corfield, I'm a senior software architect. I’ve been in that role at various companies now for about two decades. I work for an online dating company right now, and we have about 40 dating sites in over a dozen languages with a global community of users. I was fortunate enough to get started with programming when I was a kid. My dad bought me a programmable calculator, and I was instantly hooked. I thought the ability to tell computers what to do was just fascinating. I went on to do math and computer science and got to learn a lot of fun languages at university, which sparked my interest in languages.

When I got started in the industry, I actually worked on compilers, runtime systems, and test suites. I got involved with standards—so I was on the C++ ANSI standards committee for eight years in the '90s. I then gradually drifted into web development because that’s what a lot of companies and clients I was working with were doing. Eventually, I ended up in the U.S. I was born and raised in the UK, and I ended up working for Macromedia for six years and Adobe for three years as their senior software architect overseeing all of the large internal projects. After that, I went freelance for a while, mostly advising people on coding standards and the architecture of systems. Then I ended up at the dating company, where I’ve been for the last decade. So, I've been enjoying that.

Host:

That’s amazing. You have quite a robust experience. So, if we talk about C++, what was it like back then? How did it feel during that time?

Sean:

It was really interesting. I got involved with the standards work in very early '92. I initially got involved through the British Standards Institution, just because I was working on compilers and language systems. Fairly quickly, once I'd been attending meetings, I got voted principal UK expert, representing the UK on the ISO committee at most of the meetings through the '90s. Then I decided to also join the ANSI C++ committee and work on that.

So, it was really interesting to see how the sausage is made: working behind the scenes, actually looking at language features and trying to figure out how to specify them. Of course, back then in the early '90s, the base documents covered the basics of the language, but some features were still being designed. Templates were still being worked on a lot. Namespaces were being worked on a lot. A lot of the stuff around those was things we had to go in and design and figure out how it should all work.

Talk about a room full of crazy-smart people; it was amazing to work with them. At my very first meeting, I went into the ISO meeting, which is always the night before the main meeting, and sat down at the table. This tall, slim guy across the table looked up from his book and said, "Who are you, and what are you doing here?"

I said, “I’m Sean Corfield. I’m the UK representative this year.”

A guy at the other end of the table said, "Oh, don't worry about him, that's Dag Brück. He's like that with everyone." Then he introduced himself, and it was Sam Harbison. Then a stream of people whose books and columns and magazine articles I'd read started coming in, and I was like, "Oh my God, why am I here? All these people are world-renowned experts."

They put you at ease, and you gradually get used to it. You find yourself working with them on really hard problems. I found that pattern repeated as new people joined the committee: they would come to it from their companies as a rep and be very much like, "Oh my God, look at all these authors and world-renowned speakers." Then they'd settle in, and you'd find various people leading various language subgroups. It was a really interesting experience for someone with an interest in languages, as I have. Absolutely amazing, and a very rare opportunity. What I got from it was learning what it's like to work with really smart people. I always advise people joining the industry: if you can find a group of very smart people, join them however you can. It will enrich who you are as a person and how you work.

Host:

So, could you talk a little bit of exactly what it was that made it so enriching? Like, what was your experience with that, especially at that level?

Sean:

It was definitely very interesting for me, because you obviously had Bjarne Stroustrup, the creator of the language, and there were parts of the language he had not yet fully specified. So, as a group, we broke down into little subcommittees and focused on different features of the language. We would take his concepts and ideas and fully flesh them out into a specification that compiler writers could actually implement. Sometimes they had to change a fair bit; sometimes they just needed additional specification. But it was also interesting to see how the different compiler vendors brought their own opinions to the table, because some vendors had pretty entrenched positions on certain language features. They'd say, "We've implemented it this way, and we've got thousands of users of our compiler who use this feature, and now you want to change it? We're going to argue against that. We're going to vote against this change to the language." And you had to find a way to navigate through that. I learned a huge amount about working with groups of people, coming to compromises, trying to reach consensus. Sometimes there's just no good solution to a problem, so you have to pick the least-bad solution. I stayed with the committee until the late '90s and then began to get very interested in Java, which was the young upstart. In fact, I started to do more work with Java and began to really enjoy it, so I drifted away from the C++ committee and had stopped using C++ by pretty much the end of the '90s.

Host:

Uh, when you were done with C++, I'm assuming your interest in Java then brought you over to Adobe. How were you able to leverage the information and the experiences you got there, working with smart people, essentially coming to consensus, communicating with them, things of that nature, and then going into Adobe and architecting and managing large teams?

Sean:

Yeah, it was definitely interesting. I went from a small company that I was representing at the time to Macromedia first, which was probably about, I don't know, 600 to 800 people when I joined, and grew to about 1,200 people. It was the early days of the dot-com bubble, of course, and then the bubble burst and there were massive layoffs. It was a strange time to be in the industry, but I'd gone into Macromedia in a very senior role, and I had a team of very smart architects working for me. Between us, we oversaw pretty much all of the large projects on the IT side of the company. So I found myself split between working with department heads, working with different divisional IT groups, and sometimes working with the C-level folks. It was a very interesting mix of high-level management and executive decision-making, right down to getting your hands dirty with the engineers and working on things. We rolled out an MQ messaging system to try to integrate a lot of the diverse systems Macromedia had, so there were times when I had to roll up my sleeves and actually build some of the adapters for different divisions to connect to it, and other times I would just be managing the architects as they oversaw projects. It was a very good experience, and I really enjoyed Macromedia. It was a very free-thinking company, one of those where it was okay to make mistakes and apologize afterwards, so there was a lot of innovation, and also, at times, not a lot of structure to how things were done. Then the acquisition by Adobe was a radical change for a lot of us. I'd gone from being, at that point, one of the oldest people in the division to being far from the oldest when Adobe acquired Macromedia. If you made your five-year anniversary, you got a watch, and if you made ten years, you got a fully loaded MacBook, which was a really nice gift and a big deal.
They got the whole company together for those. At Adobe, it was common to find people who had been there for 15, 20, even 25 years, so it was a very different culture. They liked a lot of what Macromedia had culturally, which was partly why they acquired the company, but I think ultimately the DNA of the two companies was very different, and I didn't enjoy my time at Adobe all that much; I left after a year.

Yeah, there was certainly some interesting work to be had. I ran a team that was working on core REST APIs that sat behind a lot of the document management systems that they were building out at the time. Things like Adobe Sign that you have now originated from the teams that I was working with back then. But it was such a change of culture and such a change in size of company—Adobe was four or five times bigger than Macromedia—that it just didn’t really suit me, and so I decided to quit and go freelance and did a whole bunch of freelance web architecture and coding standard work after that, all around the world.

Host:

So the journey so far: C++ standards, then working at a company that fit your style of intellectual discourse and creativity. Then the larger company comes in, acquires it, and does what big companies do, and you decided to hop off the train completely and go freelancing, which is basically another way of doing a startup. What ended up happening there? Did you travel anywhere interesting…

Sean:

I was fairly fortunate that most of my clients let me work remotely. I think my first client at the time was actually in Australia, and this is pre...

Oh yeah, this was 2007.

Right, and I've actually worked full-time remote from home almost continuously since 2007, so I've had about 14 years of working remotely now. I would help with architectural design on large web apps and e-commerce systems. Code reviews were very common; that was something I was engaged for a lot, and something I'd been doing back in England in the early days. After I'd been working with compilers, I went to work for a company that did static source-code analysis and coding standards. One of the things we would do is go around the world, which involved a lot of traveling, and analyze millions of lines of code at very large companies and highlight potential problems in the code, things like that. So I'd gone back to that in a small-scale way, advising on coding standards and architecture, and then a startup in the Bay Area engaged me, initially to do that sort of work and then to actually do development, build up a team, and create a new product for them.

Host:

And that shifted you from…

Sean:

I'd gone from C++ to Java, and that shifted me, and we built the front end with Flex, which was originally Macromedia technology and is now an Apache project. That was a fascinating product: a desktop collaboration system that let you share documents, chat, and do video. This was back in 2007, and the tech was pretty rough around the edges, so it was hard to get working and hard to get to scale. During that time, I really began to appreciate how much technology choices matter when it comes to scaling systems and handling concurrency.

And in the way of startups, one day the startup just imploded through lack of funds. The whole engineering team was pulled together, and they said, "Really sorry, but tomorrow is your last day." It was a tough moment, but I took it as a learning experience, and I ended up going back to freelancing again after that.

Host:

It’s better to fly too close to the sun and get burned than to not fly at all, yeah.

Sean:

I think in the Bay Area there are so many people who have that badge of honor of having worked for one or more startups that imploded. It wasn't the first startup I'd worked at that had imploded, and it wasn't the last, either. It's a lot of fun to work at startups, but obviously, where there's risk there's sometimes reward, though not always.

Host:

So, you freelance, you joined a few startups, I’m assuming, then you ended up somehow stumbling into what you do now and have been doing for the last 10 years, which is Clojure?

Sean:

Yeah, and I was very lucky with that, again, because I got involved initially as a consultant. A friend of mine was consulting with World Singles Networks, and he suggested me as an additional consultant. That was how I got involved, and we were designing and building out the second generation of their dating platform. It was actually built in ColdFusion, and partly I got involved with that because when we were at Macromedia, we had acquired a company called Allaire, the company that created ColdFusion. So we'd had ColdFusion in-house, running as a compile-on-demand language on the JVM, and that had stayed in my background for a while. One of the first things I did when I joined World Singles was tackle a particularly difficult problem that involved repeatedly scanning a database, producing XML packets, feeding them into a custom, proprietary search engine, and running searches and generating emails off that. It needed to be a continuous process, and that doesn't fit ColdFusion, which is much more web-based and request-response oriented.

Various people at the company had tried different technologies, and they said, "Okay, you've got free rein to do pretty much what you want." I was interested in Scala at the time; this goes back to my comment about learning a lot of languages at university, and I've continued learning languages throughout my career. They said, "If you want to try Scala, go ahead," so I built a version of the process in Scala. It was a nice small program that ran very fast and, in fact, was able to bring down the search engine because it could generate so many searches concurrently.

But at the time, Scala 2.7's built-in actor library had memory leaks, so the process ran, but we had to restart it every day or two. That was fine; we lived with that.

Once we eventually made the switch to Clojure, the REPL made it easy to experiment with parts of the system without stopping to recompile, which drastically shortened the development cycle, and the resulting system needed far less maintenance and tuning. But I'm getting ahead of myself.

Then we upgraded to Scala 2.8, which was horrendously painful. There was no binary compatibility between any of the milestone builds during the pre-release; it was just a nightmare. You had to recompile everything, basically. You had to upgrade all of your dependencies at the same time every time there was a new milestone, and it didn’t sit too well with some of the other engineers who were used to very dynamic languages. So I started to cast around for an alternative. I’d done a bit of Lisp at university and saw that there was now a Lisp on the JVM, Clojure. So I figured, let’s try re-implementing the Scala processing in Clojure. They’re both functional languages, both support concurrency, and that actually went really well as a proof of concept. We got pretty good throughput, and it didn’t need restarting due to memory leaks, so that was nice.

Once I got that in place, the other engineers were curious about it, so I started to cross-train them. We started to write lots of small, low-level pieces of the system in Clojure and gradually built up from that until the situation we're in now, where we have 115,000 lines of Clojure, have replaced pretty much all of the legacy codebase we had before, and the back end is now all Clojure.

As we continued to expand and evolve the system, Clojure's immutability and concurrency features let it scale as our needs grew, and as the team became more proficient we could handle more complex tasks with greater ease. The combination of functional programming and the JVM's capabilities let us move forward with more confidence and efficiency.

Host:

That's amazing. I think you've been doing this for quite a while now, almost 10 years. I'd like to talk a little bit about what you've contributed, as I understand you've worked a lot with open-source libraries. You maintained clojure.contrib.sql, which became clojure.java.jdbc, and you then created a successor to it called next.jdbc. You've also contributed to core.memoize and core.cache, as well as clj-time. I would say that Clojure owes quite a bit to your contributions. This is not a paid advertisement, I promise, but it's true, and it's one of the reasons I was really excited to have you on the podcast. Let's talk about your first major contribution, the contrib library that became clojure.java.jdbc, and why you felt the need to revamp that library into next.jdbc. Could you speak a little bit about the reasons, and then, of course, about the library itself?

Sean:

Sure. So when I got started with it, the Clojure JDBC library was going through quite a big change. Initially, Clojure had a monolithic, batteries-included standard library called contrib, which had about 60 different sub-projects that, back then, were all fairly ad hoc and didn't all have proper maintenance. Because of the work we were doing with Clojure at World Singles, I needed a solid JDBC library, a wrapper for the underlying JDBC machinery. No one was maintaining clojure.contrib.sql at the time, and I jumped up and down and pestered the Clojure core team, saying, "Look, I really want to have a well-maintained library."

They said, "Sure, okay, you can take over maintenance, and we'll spin that contrib library off into a separate project." That's how the monolithic contrib got broken up from 60 ad hoc projects into about 20 or 30 well-maintained, more focused projects, which made the libraries more modular and more maintainable.

So, I started working on that, and of course Clojure best practices were still evolving quite a lot. This is going back to 2012, 2013, and how libraries were designed was changing. So I evolved clojure.java.jdbc in line with what was emerging as best practice in Clojure, got it settled down with an API that I liked, and it began to get quite a lot of use. But one of the things I noticed was that, at scale, there was a lot of overhead involved in the translation between the raw JDBC types in Java and the Clojure data structures being constructed over them.

I worked with a few people who were performance specialists. They said, "Oh, if we could do this sort of thing or that sort of thing, it would be faster, and we could work with larger data sets." That got me thinking about a different kind of JDBC library that would work better with large streaming data sets, which were quite problematic with clojure.java.jdbc, and would also let you circumvent the overhead of converting everything to a Clojure data structure if all you needed was a certain part of the result.

Because Clojure has such a strong focus on backward compatibility, in libraries and in the language itself, the feeling is that you shouldn't make breaking API changes; you should create new names for things instead, whether that's a new namespace, a new library, or just a new function. That's part of the reason I moved from Scala to Clojure in the first place: the horrible breakages I'd experienced with Scala.

And I must admit, while I was still building up clojure.java.jdbc, I did change the API quite a bit, but back in those days not too many people were using JDBC with Clojure. It gradually got more popular the longer I maintained the library, and I think that was partly because early adopters of Clojure were doing more esoteric things, whereas I wanted to do very generalist stuff. I wanted to do regular MySQL database manipulation with it.

So, I started to sketch out what a new library would look like, based on Clojure's abstraction of a reducible collection. Clojure is all about abstractions within the language, and one of the ideas is a collection that knows how to reduce itself, so it does no work until you hand it to a process that performs a reduce operation over it.

That fits quite nicely with JDBC because you can say, “Okay, I plan to run this SQL statement with these parameters,” and then reduce can come along, and the plan says, “I’m going to set up the connection, I’m going to run the SQL, I’m going to start reading the result set data, and I’m going to feed you the result set data row by row as you need it,” in a way that lets you pull columns directly out of the underlying result set object.

Then, when the reduction completes, when the collection gets to the end, it tidies up all its resources, gives the connection back to the connection pool, and you're done. So this allows you to set up a query that lazily streams rows out of the database, letting you process an arbitrarily large data set with only a small batch of it in memory at a time.

So, although the reduce is eager, in that it runs the entire thing start to finish, the activity on the database is essentially lazy. I went back to the folks who were heavily into performance in Clojure, and they were able to get some very impressive benchmarks out of next.jdbc at that point, and that's where I've kept the library since. It has gradually added options, but always with a very strict eye on performance, ensuring that you shouldn't have to pay for an option you aren't using.
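That "collection that knows how to reduce itself" idea can be sketched in plain Clojure with no database at all. This is an illustrative sketch, not next.jdbc's actual implementation; the "connection" here is simulated with prints purely to show the shape that `plan` follows:

```clojure
;; Sketch of a reducible collection: nothing runs until reduce is called,
;; and the (simulated) resource is acquired and released around the reduction.
(defn reducible-rows
  "Returns a reducible over `rows` that opens/closes a fake connection."
  [rows]
  (reify clojure.lang.IReduceInit
    (reduce [_ f init]
      (println "open connection")            ; acquired only at reduce time
      (try
        (clojure.core/reduce f init rows)    ; feed rows one at a time
        (finally
          (println "close connection"))))))  ; always released afterwards

;; Building it does no work; everything happens inside the reduce:
(reduce (fn [acc row] (+ acc (:total row)))
        0
        (reducible-rows [{:total 1} {:total 2} {:total 3}]))
;; => 6, with "open connection" / "close connection" printed around it
```

The `try`/`finally` is the key design point: because the collection itself drives the reduction, it can guarantee cleanup even if the reducing function throws.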

Host:

Okay, so from what I'm understanding, you've separated intent from execution within that library. Could you explain how, having done that, a plan can be reduced eagerly while the database activity effectively happens lazily?

Sean:

That's slightly more complicated, because Clojure does have lazy sequences: you can set up a collection that is only realized on demand, so as you consume elements, more elements get realized and the collection is produced. The challenge arises in resource-backed systems like JDBC. If someone stops consuming the sequence, there's no automatic way to know that the connection should be closed, which can lead to resource leaks.

Lazy sequences are a powerful tool, but when you're working with external systems such as databases, they require careful management of side effects like connection handling. In clojure.java.jdbc, laziness wasn't the default behavior; you could override it to get lazy loading, but it then became your responsibility to consume the entire sequence so that the connection would be properly closed and resources tidied up.

The problem can bite you in the REPL, too. Lazy sequences are great for quickly exploring data interactively, but it's easy to partially realize one while experimenting and leave a resource dangling in a long-running process.
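A tiny, database-free illustration of that on-demand realization, using a side effect to show when elements are actually produced:

```clojure
;; Count how many elements of a lazy sequence have been realized.
(def realized (atom 0))

(def xs
  (map (fn [n]
         (swap! realized inc)   ; side effect marks each realization
         (* n n))
       (range 100)))

@realized             ;; => 0: building the lazy seq does no work
(doall (take 2 xs))   ;; consuming forces realization
@realized             ;; => 32: chunked seqs realize a whole chunk at a time
```

If `xs` were backed by a database connection instead of `range`, stopping after two elements would leave that connection open with nothing responsible for closing it, which is exactly the hazard being described here.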

In the end, balancing the use of lazy sequences with proper resource management is key to the stability and efficiency of your system, and understanding those trade-offs is essential for anyone designing Clojure software that talks to external resources.

So, I made a decision in next.jdbc not to allow that. You either have fully realized result sets, which have to fit in memory, or you use a plan and a reduction. If you're using a plan and a reduction, you're guaranteed that the resources are taken care of, and the whole result set never has to fit in memory. That makes handling large data sets much better, because it avoids unnecessary memory overhead even at large data volumes.

But if you're working with the other style of query, execute, it fully executes the query and realizes the entire result set into a Clojure data structure, and then you can do whatever you want with that data structure, because you're in pure Clojure land at that point. Getting lazy sequence evaluation to work with resource management is tricky enough that I try to discourage people from doing it.
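As a sketch, here are the two styles side by side. Assume `ds` is a next.jdbc datasource; the `event` table and its `size` column are hypothetical stand-ins:

```clojure
;; Hedged sketch contrasting next.jdbc's two query styles.
(require '[next.jdbc :as jdbc])

;; 1. execute!: eager. Realizes the whole result set as a vector of maps;
;;    fine when the results comfortably fit in memory.
(defn all-events [ds]
  (jdbc/execute! ds ["select * from event"]))

;; 2. plan + reduce: streaming. Rows flow through the reducing function one
;;    at a time and are never accumulated, so memory stays flat, and the
;;    connection is guaranteed to be closed when the reduction finishes.
(defn count-large-events [ds]
  (reduce (fn [n row] (if (< 1000 (:size row)) (inc n) n))
          0
          (jdbc/plan ds ["select size from event"])))
```

With `plan`, looking up `(:size row)` reads the column straight off the underlying `ResultSet`, avoiding the cost of building a full Clojure map for every row.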

Being explicit about which mode you want, fully realized or plan-plus-reduction, keeps resource management predictable and is key to building robust, scalable systems with Clojure.

Host:

Fair enough. Being specific about your intent within that library should really save people issues moving forward, now that the heavy lifting of resource management has been done with plans. At the least, the API for that is amazing.

So, I want to move on. We've gone rather deep, but I want to zoom out a little bit and talk about Clojure in general, what it's like to work with it at a startup, and how that has evolved. You've been working with Clojure where you are for about 10 years. Could you explain what it has meant to be productive in Clojure? Has it changed over time? Have you added things to make your team more productive? And maybe start with the size of your team, the distribution of work, and how you've been able to manage that?

Sean:

Sure. One of the things with Clojure is you'll hear people say we can get a lot done with very few people. That's not to say there aren't huge teams of Clojure people; if you look at some of the big success stories with Clojure, you'll see companies like Nubank, who have, I think, 300 or 400 developers all doing Clojure. But we are able to manage that 115,000-line codebase with just two developers on the back end. Over the time we've been using Clojure, we've had as many as three developers; we've never had more than three doing Clojure on the back end.

Part of what makes that possible is that Clojure is inherently a very high-level language based on abstractions, so you don't have to think about the nuts and bolts of algorithms or data transformations, because you're working with functions that naturally operate at the level of sequences and collections. You're working with a lot of higher-order functions, so map, reduce, and filter are the basic building blocks of what you're doing. And you're also working with a core set of immutable data structures: persistent, structurally shared data structures that allow very efficient manipulation. There's a whole class of problems you simply can't run into, because the data is immutable. It's thread-safe to work with, and the structural sharing makes it efficient.

All of the operations you have are going to work on a sequence or a map or a set or a vector, that sort of thing. So you tend to represent your data that way, and it becomes very easy to take your data and do these arbitrary transformations on it. In fact, just yesterday I was helping a beginner: they had a set of inventory data, and they wanted to do a sort of grouping transformation over it so they could pull out all of the prices on a per-store basis per product. Their initial approach had been a big nested loop structure, and I said, if you pre-transform the data into this simpler form, which you can do with one line, then you can group the data and you're very close to the result you want; all that's left is a small reduction. Their big piece of code came down to two small lines, and they were pretty blown away by that. I said, this is the sort of thing that comes to you over time as you work with Clojure. You start to recognize a lot of patterns where you can take a fairly complex data structure and see a simple way of transforming it, so you end up with small amounts of code doing a lot of heavy lifting. That simplicity through abstraction is core to Clojure software design: you focus on the problem at hand rather than the mechanics of the language.
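This isn't the beginner's actual code, but a sketch of the kind of pre-transform-then-group-then-reduce shape Sean describes, with invented inventory data:

```clojure
;; Hypothetical inventory rows: one price per store per product.
(def inventory
  [{:store "A" :product "widget" :price 9.99}
   {:store "B" :product "widget" :price 10.49}
   {:store "A" :product "gadget" :price 24.0}])

;; Group by product, then reduce each group of rows
;; down to a store->price map.
(reduce-kv (fn [acc product rows]
             (assoc acc product (into {} (map (juxt :store :price)) rows)))
           {}
           (group-by :product inventory))
;; => {"widget" {"A" 9.99, "B" 10.49}, "gadget" {"A" 24.0}}
```

The nested loops disappear because `group-by` does the bucketing and `reduce-kv` walks the buckets; each row is reshaped into a `[store price]` pair by `(juxt :store :price)`.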

And the other thing that makes you very productive is the REPL, the read-eval-print loop. People talk about REPLs and consoles in a lot of different languages, but in general those are a separate program you run that you can type code into; it compiles or just interprets the code and gives you the answers. But Clojure inherently works that way. Clojure takes a single top-level form, compiles it into JVM bytecode, and then executes it. That's how the REPL works, and it's how the compiler works: one form at a time. One of the upshots is that if you have an application running, and you have a REPL running in that application, you can connect to it from the outside, inspect the application, and manipulate and modify it directly from, say, your editor just by attaching to the REPL. That dynamic interaction is a fundamental part of the Clojure development cycle, and it gives you a feedback loop that accelerates development.
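One concrete way to get a REPL inside a running application (not necessarily how Sean's team does it) is Clojure's built-in socket server, available since Clojure 1.8:

```clojure
;; Option 1: enable it at startup via a JVM system property:
;;   java -Dclojure.server.repl="{:port 5555 :accept clojure.core.server/repl}" -jar app.jar

;; Option 2: start one programmatically from inside the running app:
(require '[clojure.core.server :as server])

(server/start-server {:name   "repl"
                      :port   5555
                      :accept 'clojure.core.server/repl})

;; Now `nc localhost 5555`, or an editor's REPL client, attaches to the
;; live process, where you can inspect state and redefine vars without
;; restarting anything.
```

Many teams use nREPL for richer editor integration, but the socket server shown here ships with Clojure itself and needs no extra dependencies.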

So, the way that you program in Clojure is very organic. You work with expressions and data to explore the problem space and figure out possible solutions, and that all stays in your code as you live-evaluate it, so you're building up your application at the same time. The same immediacy carries over to ClojureScript, where you can iterate just as quickly on JavaScript-based web applications. In fact, I've done an online talk and demo starting from an essentially empty project, adding in libraries and building code while the system is running, to produce a working web app without restarting anything. So there's none of this "make an edit, compile the code, deploy it, does it work, run the test suite." In Clojure, that edit-compile-test cycle is right there as you type the code, so you spend your time on problem-solving rather than on setup and deployment.

So that makes you incredibly productive, and it really lets you focus on the solutions to your problems rather than on programming mechanics.

Host:

If I understand you clearly: most languages have some kind of REPL you can play with the application through, but the power of Clojure is that because you can interject at any point, your debugging cycle, your test cycle, and your development cycle are always the same, regardless of the size of the application.

Sean Corfield:

Yeah, like I say, what a lot of people think of as a REPL in other languages isn't really a REPL in the Lisp sense. It's a console that accepts lines of code and often interprets them or, at most, does a bytecode compile and then runs the bytecode. Whereas Clojure is entirely designed to compile one form at a time: whether you're using the compiler or the REPL, it's the exact same process. So there is no separate process. You're editing your code, evaluating it live in your editor, seeing the results live, and evolving your functions and your application step by step. Which is really nice, because if you run into some sort of thorny problem and you're like, "I'm not sure how I solve this," you can actually start to explore the data structures that represent the problem and how you would transform them. And Clojure has things like clojure.spec, which is a way of specifying data structures, and from that it also lets you do generative testing, like QuickCheck, so you can use it to generate conforming test data.

Random data generation is a powerful feature when working with Clojure. If your problem has a particular shape but you don't have good test data on hand, you can just tell Clojure, "This is what the data looks like; generate me massive amounts of test data," and you can work with it very quickly in the REPL. That makes the development cycle much faster, letting you test ideas and iterate rapidly. You can work entirely in memory if you want and develop the specifications for both data and functions as you go along. It's a very organic, iterative process that doesn't really have a parallel in most other languages. A lot of other languages just have a sort of console you can pass code into, but Clojure's ability to inject directly into the running application's state really saves time.
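A minimal sketch of that spec-driven data generation, assuming test.check is on the classpath (clojure.spec delegates generation to it) and using invented spec names:

```clojure
(require '[clojure.spec.alpha :as s]
         '[clojure.spec.gen.alpha :as gen])

;; Describe the shape of the data...
(s/def ::store string?)
(s/def ::price pos-int?)                       ; e.g. price in cents
(s/def ::item (s/keys :req-un [::store ::price]))

;; ...then ask spec to generate conforming test data on demand:
(gen/sample (s/gen ::item) 3)
;; => three random maps shaped like {:store "aB" :price 7}

;; The same spec doubles as a validator:
(s/valid? ::item {:store "A" :price 999})   ;; => true
(s/valid? ::item {:store "A"})              ;; => false (missing :price)
```

The point is that one declaration serves both purposes: it validates real data and it generates arbitrary amounts of fake-but-conforming data for REPL experiments and generative tests.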

When your cycle stays the same size, or at least doesn't require restarting the application or worrying about its state, you save considerable time. With generative testing, you can be pretty sure of what you've coded, within the state of the application, the moment you write it. That immediate feedback is invaluable and makes the development process much more efficient. It's one of the reasons Clojure's approach to testing and state management stands out: being able to interact dynamically with the system, without worrying about resetting state, lets you stay in the flow and tackle issues organically.

Regarding architecture, my philosophy has evolved through different experiences. One key takeaway is the importance of designing systems that are open for growth without over-engineering. One of the mantras in Clojure is that everything should be open for extension, which encourages building flexible and adaptable systems. Systems in many other languages are often designed to rigid specifications, limiting their future extensibility; Clojure's philosophy allows for more flexibility and encourages iterative improvement.

When I worked at Macromedia, for example, we rolled out an MQ messaging system with a hub-and-spoke architecture. This design was crucial because it standardized communications between departments that previously had ad hoc methods and different data formats. By standardizing, we were able to gradually integrate each department and roll the system out incrementally. The flexibility of this system was key—it allowed us to extend it to new systems without over-engineering. We just had to add new spokes and adapters, all while working with a common data format.

This open-for-growth mindset is something I've carried through to my work in Clojure. When designing systems, I focus on identifying the core components that need to communicate and the data they need to exchange. From there, I look for ways to tie these elements together in a way that doesn't block future extension. In Clojure, this architecture philosophy works well, as the language’s flexibility allows for the natural evolution of systems without the need for rigid, upfront design decisions. It’s a balance between structuring the system for current needs while leaving room for future changes, which is vital for long-term scalability and adaptability.

Host:

Okay, the antithesis of that statement is that I will not extend this interview any longer. I really appreciate the time you've taken; it was great talking to you. If you could just tell people where to find you: if they were looking for you, how would they find you?

Sean Corfield:

Sure. I'm Sean Corfield, and I'm on absolutely everything: Twitter, GitHub, I think Skype and Facebook, pretty much everywhere. And I think I'm almost the only Sean Corfield out there in software, so if you just Google my name, you'll find me.

Host:

Congrats on the monopoly.

Sean Corfield:

I think I got in there early; I was able to pick up corfield.org. I've had it for nearly 20 years now, I think.

Host:

All right, well, thank you so much.

Sean Corfield:

You're very welcome. It's been a pleasure chatting with you.

Host:

Great talking to you

Recursive House

Recursive House provides consulting and development services to companies looking to integrate AI technology deeply into their company operations. Using our expertise, we teach and build tools for companies to outcompete in marketing, sales, and operations.
