
John O’Hara shares insights on how advancements in hardware and software technology can be pivotal for founders and CTOs.

John O’Hara, who has co-founded and led fintech ventures including Finbourne, Adaptive Financial Technology, and Taskize (a company later sold to Euroclear), is well-acquainted with the tech industry’s leadership demands. Before founding technology startups, he held senior positions at Bank of America and JPMorgan, and he is the creator of AMQP, a protocol used in cloud services by major corporations including Amazon, Microsoft, and Red Hat.

The evolving software landscape has a profound impact on how teams build and run systems. QCon London aims to enrich the knowledge base and foster innovation among software developers. It is a conference targeting developers, technical leads, architects, and project managers keen on guiding their teams toward technological advancement.

O’Hara expressed his motivations and views at QCon, saying, “What drives me to stand here and speak when I have nothing to sell? It’s a mix of concern about where software is heading and immense excitement about the potential of hardware. This upcoming decade holds incredible promise for hardware innovations—much like the excitement preceding the release of the iPhone. It’s fascinating, and it’s up to you all to leverage these advancements.”

This is a talk about hardware. The hardware won’t feature till the end. Let’s talk about software. Let’s talk about business. Because I went from being a software architect and developer to someone who sits on the boards of companies and engages in venture capital activities, which completely changed my perspective on how software works. That journey was interesting as well. I want to share the perspective that comes with that.

I’ve developed a viewpoint on complexity that gives me an almost physical, illness-like reaction to it. It originated from a story from when I was starting out as an engineer in my 20s. Back in 1997, I worked for a bank on a very large system. I was part of a 100-engineer project, a rare experience in its own right. We wrote 1 million lines of C++, a huge amount for 1997; imagine how long that took to compile. It was a service-based architecture with 10 services, each supported by its own team and database.

Does this sound familiar? The services communicated through a Kafka-like persistent messaging system with protobuf-like RPC marshaling and data representations. It featured a web user interface with TLS in 1997. It had a Docker-like management and install system. We built all of this ourselves. We ran the 300 programs that comprised this system on two supercomputers located 10 miles apart. This setup marked the first time synchronous disk replication had been accomplished over fiber optics at that distance. The business requirement was that the system should survive a nuclear bomb—not kidding. They paid accordingly. It processed $1 trillion notional of financial derivatives and significantly propelled the company ahead of its competition. It was an extraordinary system.

It was an amazing team. Everyone involved still can’t believe how fortunate they were to be a part of it, and they still wonder how the system actually worked in the end. Ten years later, it was completely rewritten, much more simply, and ran on three smaller servers. This experience provided me, at the early age of 27, with a tremendous insight into the nature of complexity.

If you have encountered the CNCF projects diagram, you’ve likely noticed how every project is neatly compartmentalized into little boxes, each funded at around $3 million and dedicated to cloud technologies that aim to facilitate distributed operations. They are not designed to directly solve business application problems, so why do we keep developing and distributing such projects? Even after 27 years of technology evolution, there has been little consolidation or convergence in this field, unlike with Linux.

Consider Linux, a system that dominates many devices globally and has clearly won the operating system race against Windows—my favorite example of its success. Originating as a small-scale hobbyist system, Linux has grown into a powerhouse capable of supporting supercomputers and scaling far beyond initial expectations, all while remaining free.

My name is John O’Hara. Currently, I’m a board member for three fintech companies: Adaptive, focusing on low latency for exchange connectivity; FINBOURNE, a data management firm helping banks and financial institutions tackle the notorious data challenges; and I’m also a mentor at Accenture’s FinTech Innovation Lab, helping guide upcoming technologies in the financial sector. Additionally, I am a venture partner at Fidelity International Strategic Ventures, and after leaving the investment banking industry I founded a startup that I later sold to Euroclear. Among my technical contributions, I invented AMQP and turned it into an ISO-standardized messaging system. My commitment to public speaking in the industry is driven by a desire to see collective progress, which led me to my work with AMQP and beyond.

Back in 2009, the concept of cloud computing was still immature, illustrated by an image of a Google server setup using simplistic means such as Velcro and cardboard. This rudimentary setup, aimed at reducing costs, included servers stacked on cardboard and batteries attached with Velcro serving as backup power supplies. It highlighted the inherent vulnerabilities: the cloud was perceived as unstable, insecure, and highly susceptible to failures.

Fast forward to today, cloud technology has undergone an immense transformation. Modern cloud infrastructure represents the pinnacle of hardware technology, with major players like Intel and AMD aligning their roadmaps to meet the demands of cloud service providers. Present cloud servers boast upwards of 200 processor cores, enhanced with robust fiber optic networks and optimized thermal controls, achieving unprecedented operational efficiency and reliability.

A significant indicator of the reliability of modern cloud infrastructure is the shift by tech giants like Microsoft and Amazon, extending the depreciation period of their hardware from three to five years. This change underscores the enhanced durability and reliability of today’s cloud technology. Contrary to past fears, current cloud infrastructure supports advanced cryptographic measures more effectively than ever, debunking the myth that cloud-based systems are inherently insecure or unreliable.

In a broader business context, understanding the evolution of technology, particularly in the realm of startups, sheds light on the driving forces behind innovation. Defining what constitutes a startup varies widely, but at its core, it relates to new ventures poised to leverage technology to disrupt existing markets or create new ones. This exploration not only illuminates the technological advancements but also aligns them with entrepreneurial dynamics in the tech industry.

One enduring perspective is that entrepreneurship is essentially an experiment. It’s about testing a new venture under conditions of extreme uncertainty. The very nature of this trial implies a high likelihood of failure; if success were assured, it wouldn’t truly be an experiment. Entrepreneurs must gather the requisite resources to unearth viable new business models and validate these models before the funding dries up. Within this dynamic, the interplay between a CEO and a CTO is crucial. It’s like a tandem race against time and financial constraints, with each role bringing distinct, pivotal capabilities to the forefront.

The venture’s success pivots significantly on the technical acumen and leadership of the CTO. This individual must not only develop effective products but also assemble a competent team, forge robust connections with customers, and navigate internal dynamics effectively. The strength of the product and the team’s cohesion under this leadership often make the difference between a company’s success and its failure. In conversations especially common in venture capital contexts, the question of whether a company has the “right” CTO emerges frequently and poignantly highlights the indispensable role of technology leadership within a startup.

What investors, CEOs, and clients alike are banking on is the CTO’s vision. People drawn to startups often join out of admiration for the founders and the promise of the technology leadership, sometimes underestimating the challenges ahead. As the company scales, the demand for quick and effective solutions grows. Here, the role of the CTO evolves from merely providing answers to setting the parameters within which solutions can be found, shaping the path forward for the enterprise. It’s about delivering enough client satisfaction to sustain business growth, particularly when financial resources are waning.

This delicate balance of delivering satisfaction swiftly and sustainably becomes the crux of the CTO’s mandate. The disconnect between the technological aspects of product development and the commercial imperatives of business sustainability presents an ongoing challenge that technology leaders must navigate. The urgency imposed by limited funding sharpens these demands, underscoring the critical importance of the CTO in not just supporting, but actively driving forward, the business’s growth and stability.

They’re unaware of their economic impact, completely unaware. They think technology is why they were hired. They should be stepping back and looking at the business, looking at the client’s problem, looking at the context around it. It’s not just the technology. It’s not about the shiny new thing. It’s about how long you can make the money last. It’s about how productively you can ship the product to delight the client. It’s so easy to be distracted from the core mission.

The interesting thing is, at the start of a startup, because you want to be a unicorn, every startup wants to be a unicorn, they want to be Airbnb, they want to be Uber. I’ll talk about how you build a unicorn in the next few slides. It’s absolutely fascinating. It’s terrifying. Every technical action this guy takes that doesn’t advance the business will be killing the business. It’s that stark. You’ve got no space to run someone else’s beta test.

In the early days of a startup, you have no customers. You call them design partners. They’re customers you want to sell to but haven’t sold to yet, plus VCs applying pressure. They’re going, “Yes, that kind of sounds cool. I might buy that. Yes, I’ll tell you what the problem is.” You’re frantically going: would you buy this? Would you write me a check right now? That’s what’s happening. In the meantime, you know you’re going to be a unicorn, so let’s solve the scalability problem right now.

There’s a whole bunch of stuff from the CNCF I can use to build into my architecture to be ready for scaling, so whenever this thing takes off, I’m going to be ready. You start to do premature optimization. You start to build your organization as if it were Google or Amazon, as if it were 2009. As if you were worried your system was going to fail at any point in time. As if you were worried you couldn’t scale past the customer numbers that you need to get. You’re actually hurting your business. I’ll come back to exactly why.

This meme comes up frequently in the realm of modern, complex systems. The same terms recur in business cases, pitch decks, websites, and cloud computing discussions. To maintain anonymity, I’ve omitted specific vendor details from a captured website image. The real question is: what do these systems cost? Consider a growing company earning £6.5 million annually, a respectable figure. Yet its cloud expenses amount to £2 million a year, eating into gross margin before a single salary is paid. This echoes Stability AI’s recent financial troubles, where it couldn’t cover AWS expenses amounting to $7 million in a single month. It raises a crucial question: are the profits your business earns actually feeding your growth, or merely covering cloud vendor charges? Are you inadvertently becoming a cloud reseller?
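
To make the margin impact concrete, here is the arithmetic behind that example as a quick sketch, using only the figures quoted above.

```python
# Gross-margin arithmetic for the example above: £6.5M annual revenue
# against a £2M annual cloud bill.
revenue = 6_500_000
cloud_bill = 2_000_000

print(f"Cloud spend as a share of revenue: {cloud_bill / revenue:.0%}")        # ~31%
print(f"Left before salaries and everything else: £{revenue - cloud_bill:,}")  # £4,500,000
```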

This highlights the importance of examining cloud vendor relationships closely, especially to understand the correlation between increased product sales and the associated rise in compute, storage, and network usage. Such insights can be incredibly beneficial for business. For example, upon reviewing one company’s architecture, there was an immediate potential to cut their annual cloud expenditure by £400,000. The control over such costs often lies in the hands of the engineering teams, and at times, as I’ve observed, even the CEO’s credit card is managed by developers, which is quite alarming. Despite these observations, my stance isn’t entirely against cloud usage, but it’s crucial to acknowledge that the industry does promote increased consumption through various strategies. Developer advocates and specific messaging strongly influence this consumption, especially after reaching a certain market penetration level. Discussing or critiquing this prevalent industry narrative might not earn favors in most circles. Additionally, there is often fearmongering about needing scalable solutions immediately to accommodate potential, yet uncertain, future growth, akin to the scenarios illustrated in ambitious business plans.

Think back to the business plan, famous for its hockey stick graph. If you’re familiar with the TV show “Silicon Valley,” you’ll recognize the concept. It involves over-preparing for scalability from the start, plus exaggerated fear about needing extreme reliability. Why not start smaller when cloud availability is quite reliable at 99.5%? While not bank-standard, it’s often sufficient. Even if a system takes four hours to come back, most contractual commitments are still met. Users may attribute minor disruptions to their internet or mobile service. Speaking of mobile services, a family member in that industry noted that many dropped calls occur simply because a server rebooted.
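
For a sense of what that availability figure means in practice, a quick back-of-the-envelope conversion of 99.5% into downtime:

```python
# What 99.5% availability works out to in downtime terms.
availability = 0.995
hours_per_year = 365 * 24  # 8,760

downtime_hours_per_year = (1 - availability) * hours_per_year
print(f"~{downtime_hours_per_year:.1f} hours of downtime per year")       # ~43.8
print(f"~{downtime_hours_per_year / 12:.2f} hours per month on average")  # ~3.65
```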

This prompted me to almost fall off my chair because it contradicts common beliefs. There’s a widespread fear in big data that no data should be deleted as it might prove valuable. We’ll revisit this topic, but consider this: about half of the data stored is never revisited. Yet, the costs for storing, transferring, backing up, and accessing this data continue to accumulate.

The trend of distributed systems by default has led to a rise in middleware solutions, often focusing more on technical capabilities than on actual needs. While cloud services offer flexibility, they aren’t necessarily inexpensive. A key financial guideline for startups is to allocate about 10% to 15% of initial funds toward infrastructure. Exceeding this range could spell financial trouble.
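
As a simple illustration of that guideline, here is a sketch of the check a founding team might run; the funding and spend figures are hypothetical placeholders, not numbers from the talk.

```python
# Hypothetical sanity check against the ~10-15% infrastructure guideline.
initial_funding = 3_000_000      # placeholder raise
planned_infra_spend = 600_000    # placeholder first-year cloud + infrastructure budget

share = planned_infra_spend / initial_funding
print(f"Infrastructure share of funding: {share:.0%}")  # 20%
if share > 0.15:
    print("Above the ~10-15% guideline: a warning sign for the runway")
```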

Let me drift a little toward our experience building software for clients, which involves preparing it for the cloud. Modern software development is characterized as complex, agile, iterative, and continuous: essentially never-ending and constantly in demand. In fact, software development often continues until the funding depletes. I once tried to explain this to my wife during our home extension project, comparing it to a builder who never leaves after completing the work.

This is my wife in the background, and the well-paid builder up front who has done a great job. Now we’ve got to maintain it and release frequently. Another time, one of my investors, the guy from the accounting department, asked me: so when you’ve built the software, you can fire all the developers, yes? Back then I was 15 years younger. I said, no, that’s not how it works. You need to keep these guys forever. Would you accept this anywhere else? Really, you wouldn’t.

Back to the unicorn plan, because we’re building a unicorn. The unicorn plan is basically how to get to a unicorn in 7 to 10 years. This makes me laugh as well, because people will say, I’ve joined a startup, we’ll have exited in 3 years. No, the fact that VC funds have a 10-year horizon should give you a clue. Most people don’t know that either. To get to unicorn status, you need to have reached $2 million revenue per year by the end of the second year.

That means your product must be sufficiently valuable, hitting enough right notes for enough people that they’re handing over $2 million of their hard-earned money because they anticipate receiving $6 million or $20 million in value from it. Achieve this in the first 2 years, having already sold it to the customers. You’re genuinely progressing. There’s only space for one experiment in your startup, and that’s your business. Your role is never to act as a beta tester for anyone else. You can’t afford such a risk.

You must instill this mindset across your entire organization, through all engineering staff: smaller teams, better leverage, enhanced capability, increased reliability, adequate architecture. How can we create a system that can be adapted exceptionally fast? It’s the speed of adaptation, in a Darwinian sense, that propels you forward. This aligns with OODA, the concept of creative destruction, and the U.S. defense strategy approach to the world. Your approach is to build, measure, gather data, learn, and repeat as swiftly as possible. That’s your startup. Only space for one experiment.

This is where the numbers come from. Battery Ventures did a bit of analysis and worked out, for all the unicorns they could see, what they had in common. They noticed that they all followed a similar track. Two years in, they were making about $2 million a year. By year 4, they were making $50 million. Then $100 million by year 7, or higher. That’s usually where you get. When you get to $100 million, you’ve got a comfortable billion-dollar valuation. You’re there. You’re a unicorn. ChatGPT, thank you.

This is actually called the triple, triple, double, double, double pattern, because you have to triple your revenue in each of the first two years, and then double your revenue every year after that. Then you can be a unicorn in 7 to 10 years. That’s a really interesting thing to know. Because then you can actually ask yourself, of what shape of system would I need to do this? These days, people advocate selling business to business, because business to consumer is way too competitive, is what the current mantra is. B2B, easier sale, you can find someone with a really deep problem, you can deliver some real value.

You hopefully find a few thousand customers. You get a decent amount of money for it, because that was a valuable problem. Say you’re doing B2B SaaS growth and you’re at that year-4 revenue. In a perfect world, by year 4, you’re charging an average of six digits for your product. You’re charging your enterprise customers hundreds of thousands of dollars for their subscription. That’s what you need to be doing. Let’s imagine you’ve fallen off the curve a bit and you’re only charging them $50,000 on average; that’s still a lot of money. Your software can replace an FTE and more.

Here’s the math: you’ve got 1000 clients paying $50k per annum, and that takes me to my $50 million. Imagine each of them has 100 staff using my system. This is Salesforce, it’s Confluence, it’s Jira, it’s an HR product, something like that; 100 people are probably using it. Maybe they’re using it each day, intensively. Maybe they do 10 interactions per hour. That’s pretty heavy. If you’ve designed your app right, it’s not too chatty. These are 10 meaningful, hefty interactions per hour. You’re getting 10 system calls per interaction: building the screen took 10 calls, because you built your app to be not too chatty, because you’re sensible. That gives you some mathematics.

Initially, it is essential to recognize that usage patterns do not scale uniformly. At particular peaks like the beginning and end of the day or on Monday mornings, the system experiences the majority of its usage. Picture every user trying to log in simultaneously, which results in the majority of the server calls being crammed into just a few hours. For instance, if there are 10 million calls to the server every day, the true demand could reach about 1500 calls per second during these peak times.
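
A rough sketch of that peak-load effect, using the 10-million-calls-per-day figure above and assuming, for illustration, that most traffic lands in about a two-hour window:

```python
# Back-of-the-envelope peak rate: 10 million calls per day, mostly compressed
# into a short busy window (start of day, Monday mornings). The two-hour
# window is an assumption, not a figure from the talk.
daily_calls = 10_000_000
peak_window_seconds = 2 * 3600

peak_calls_per_second = daily_calls / peak_window_seconds
print(f"~{peak_calls_per_second:,.0f} calls/sec at peak")  # ~1,389, in line with the ~1500 above
```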

Imagine this: you have 1000 relevant server requests hitting your database systems each second, distributed over numerous clients. Perhaps you’re operating 15 servers, but is that warranted by today’s standards? Consider roughly 5 million requests per hour: that might have been challenging in 2010, but today it is not considered a high load. I often refer to pgbench, a benchmarking tool I prefer due to my affinity for Postgres, a robust, open-source database system. The benchmark effectively simulates real-world operations such as ATM transactions: retrieving and updating records.

Executing such a benchmark on a modern server with 96 CPUs and 384 GB of RAM, which would cost around $50,000 yearly to lease from a provider like Amazon, returns impressive results. Such hardware can handle 68,000 transactions per second, which accumulates to around 200 million per hour. This rate could easily handle a transaction from every internet-connected person on the planet each day. Generally, this indicates that many companies, especially in their initial years, do not require excessively complex architectures. For most, a standard moderate-sized server is sufficient to handle significant loads, enough to support nationwide user bases for the first several years.
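
Putting that quoted throughput next to the earlier peak estimate gives a feel for the headroom; a minimal sketch using only the figures above:

```python
# Headroom check: quoted single-server pgbench-style throughput versus the
# ~1,500 calls/sec peak estimated earlier.
server_tps = 68_000
peak_calls_per_second = 1_500

print(f"One server covers the peak roughly {server_tps / peak_calls_per_second:.0f}x over")  # ~45x
```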

Unless your operations include intensive tasks such as AI training or extensive telemetry processing, or unless your enterprise scales to the extent of industry giants like Google or Airbnb, extravagant infrastructure is often unnecessary. This realization underscores the rapid advancements in technology and system capabilities, a topic I will delve deeper into during the hardware-focused section of this presentation.

Moore’s Law remains pertinent, frequently outliving the predictions of its demise. Notably, Jensen Huang from NVIDIA, who declared it obsolete two years back, has recently revised his opinion, suggesting it could persist until 2030. Their approach involves transitioning to 3D configurations from the traditional flat chip design, increasing transistor density significantly. Intel anticipates achieving a trillion transistors by 2030, illustrating the immense potential. The supercomputers of the past, like the Starfire E25000, boasted 72 processors and 172 gigabytes per second of memory bandwidth, costing $4 million in the year 2000. In comparison, my current workstation, although several years old, supports 64 cores and 200 gigabytes per second, and cost just $10,000, efficiently running Call of Duty at three percent CPU usage and 160 frames per second, highlighting the advancements in modern hardware.

Many corporations involved in selling server space and storage prefer customers unaware of these advancements. However, AI developers, needing high performance, have exposed this reality. Despite the shared manufacturing facilities, both Intel and AMD stand on the verge of delivering transformative performance levels. My own experiences, reinforced by discussions at a recent conference with tech leaders, confirm the significant developments awaiting us in the hardware sector. Moreover, the software environment has evolved; C++, Solaris, and databases like Sybase and Postgres have significantly improved, with SQL maintaining its relevance effectively. Advances in middleware technologies, like AMQP and Kafka, further enhance the capabilities of modern computing infrastructures.

There’s Aeron, for example; there are various middleware options tailored for specific purposes. Importantly, you don’t need to run web services over HTTPS inside your own data center; that protection is for traffic that extends beyond your direct control. Web technologies have evolved to a point where extensive browser support is unnecessary, thanks to the efficiency of JavaScript and its associated ecosystems. Languages like Rust and Zig also show promise, but the longstanding ecosystems of Python, Java, C++, C#, and SQL are incredibly robust and versatile, allowing integration with virtually anything.

This versatility is crucial in technology, where there is often a fascination with the latest and most complex solutions. Yet, it’s vital to remember your value to your organization. Your role impacts the sales team’s ability to market your products effectively, influencing company revenue and the eventual worth of stock options. Every choice you make has significant implications. As leaders in technology, it is your responsibility to align everyone with the primary goal of addressing the customer issues your company aims to resolve.

The architecture of applications can be quite straightforward. Take Ruby on Rails as an example, chosen here as a representative model due to its simplicity. Notably, DHH of Ruby on Rails has demonstrated significant cost savings by moving away from cloud services. Simple architectures underpin even large platforms like Shopify and GitHub which utilize Ruby on Rails. The tendency to complicate is often unnecessary. Interestingly, the world’s fastest trading systems are typically housed on single-chip systems, such as FPGAs. When chip manufacturers considered upgrades, the primary request from financial technologists was for unmatched single-threaded performance. This response underscores the critical need for high-speed processing in ordered market systems.

Exploring the concept of a modern chip reveals it to be more akin to a sophisticated network, a mini data center confined within a single chip. With the right middleware, you can seamlessly choose whether you’re operating over an InfiniBand network or an internal on-chip network, and whether processes communicate in-memory or through shared-memory IPC. This opens up the possibility of crafting an integrated system from traditional components that communicate and interact dynamically, speeding things up remarkably. This approach builds on Martin Thompson’s pioneering work: he created the Disruptor, was part of the architecture team at LMAX, and later developed the Aeron messaging system, now used by the Chicago Mercantile Exchange and still freely available as open-source software.

Engaging with this technology, the fundamental idea is to dedicate one task to each CPU core, optimizing all operations to remain within the L1 cache, and to use Java for execution. This might surprise several tech enthusiasts, especially those familiar with FPGAs, as the JVM adapts to the host machine’s capabilities, for example recompiling critical loops to use AVX-512, potentially outperforming C++ in operational speed. By keeping each CPU core fully busy and using lock-free ring buffers in memory for data transmission, the system can process millions of requests every second, achieving latencies as low as 10 nanoseconds, roughly the time it takes light to travel three metres. That is a remarkable feat achievable within a solitary box, without external dependencies.
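
To make the ring-buffer idea concrete, here is a minimal single-producer/single-consumer sketch in Python. It only illustrates the data structure; the real Disruptor and Aeron implementations rely on cache-line padding, memory ordering, and busy-spinning on pinned cores, none of which this toy version attempts.

```python
# A toy single-producer/single-consumer ring buffer: a power-of-two array
# indexed by ever-increasing head/tail sequences masked into slots.
class SpscRingBuffer:
    def __init__(self, capacity_pow2: int = 1024):
        assert capacity_pow2 & (capacity_pow2 - 1) == 0, "capacity must be a power of two"
        self._mask = capacity_pow2 - 1
        self._slots = [None] * capacity_pow2
        self._head = 0  # next slot the producer will write
        self._tail = 0  # next slot the consumer will read

    def offer(self, item) -> bool:
        """Producer side: returns False instead of blocking when the buffer is full."""
        if self._head - self._tail > self._mask:
            return False
        self._slots[self._head & self._mask] = item
        self._head += 1
        return True

    def poll(self):
        """Consumer side: returns None when the buffer is empty."""
        if self._tail == self._head:
            return None
        item = self._slots[self._tail & self._mask]
        self._tail += 1
        return item

rb = SpscRingBuffer(8)
for i in range(5):
    rb.offer(i)
print([rb.poll() for _ in range(5)])  # [0, 1, 2, 3, 4]
```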

Diving deeper into the significance of the L1 cache, and interpreting CPU time in human-relatable terms: an L1 cache reference is roughly equivalent to half a second, a single heartbeat. Comparatively, a main memory reference might feel as long as brushing your teeth, accessing a local server compares to a full working day, while retrieving data from a disk equates to taking a vacation. Extrapolating to more extreme analogies, reading from a spinning disk is akin to starting a family, and reaching a server as distant as San Francisco compares to earning a master’s degree. These analogies help in grasping the otherwise incomprehensible differences in time scale that current technologies manage. Placing an entire program into the L1 cache can make it run a thousand times faster than other processes on the machine. Smaller distances between operations maximize efficiency, boost speed, cut costs, and promote energy efficiency, making it an environmentally favorable choice in today’s energy-conscious world.
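
The analogy amounts to rescaling the commonly cited, approximate latency figures so that one L1 cache reference lasts about half a second; a small sketch of that rescaling (the figures are rough orders of magnitude, not measurements, and may differ from the exact ones behind the talk’s analogies):

```python
# Approximate latency orders of magnitude, stretched a billionfold so that an
# L1 cache reference (~0.5 ns) reads as ~0.5 s, one heartbeat.
LATENCIES_NS = {
    "L1 cache reference": 0.5,
    "Main memory reference": 100,
    "Local network round trip (same rack)": 30_000,
    "SSD random read": 150_000,
    "Spinning disk seek": 10_000_000,
    "Network round trip across the Atlantic": 150_000_000,
}

SCALE = 1_000_000_000  # stretch factor: 0.5 ns becomes 0.5 s

for op, ns in LATENCIES_NS.items():
    human_seconds = ns * 1e-9 * SCALE  # real seconds, stretched a billionfold
    print(f"{op:42s} ~{human_seconds:>13,.1f} seconds at human scale")
```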

There are several intriguing developments in the database arena. Among the products gaining attention is DuckDB, an open-source database. What sets it apart is its PostgreSQL-flavored SQL and its capability to import any flat file into memory and promptly execute complex relational queries on it. The process is impressively quick, even with files as large as 10 gigabytes.
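
A minimal sketch of that workflow, assuming the DuckDB Python package is installed and a hypothetical local file named events.csv with user_id and duration_ms columns:

```python
# Query a flat file directly with DuckDB's in-memory engine.
import duckdb

con = duckdb.connect()  # in-memory database

result = con.execute("""
    SELECT user_id, COUNT(*) AS events, AVG(duration_ms) AS avg_duration
    FROM read_csv_auto('events.csv')   -- DuckDB infers the schema from the file
    GROUP BY user_id
    ORDER BY events DESC
    LIMIT 10
""").fetchall()

for row in result:
    print(row)
```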

The efficiency of DuckDB transforms data analysis into a profoundly enjoyable experience, especially on high-performance machines. Jordan Tigani, who leads the company commercializing DuckDB and was a founding engineer on Google BigQuery, argues that traditional big data solutions have become obsolete as hardware has caught up. Tigani highlights a benchmark where a task that previously required 3000 machines now needs just a single CPU. Today, most organizations, even substantial ones, maintain less than a terabyte of live data, easily manageable on modest hardware available for approximately $30,000, an indicator of how the financial dynamics of computing have evolved.

This shift towards powerful, compact computing is evident. For example, my workstation incorporates 128 CPU cores and houses 82 billion transistors, yet remains remarkably compact. AMD and Intel are competing in this space, with Intel planning a 380-core processor for 2025. Such advancements indicate exponential growth in processing power, with some chips integrating 64 gigabytes of RAM directly, allowing systems to operate without additional memory. This integration is part of a broader trend towards high-bandwidth memory (HBM), which places memory closer to the processing cores, enhancing speed and efficiency.

The benefits of such integration are evident in popular products like Apple’s M1 and M3 Macs, which feature 16 CPU cores, 40 GPU cores, and 128 gigabytes of RAM, all packaged together, although not using HBM. This evolution in hardware design, characterized by bringing key components ever closer together, is pivotal to the rapid advancement of computing capabilities.

Imagine your iPhone, but on a grander scale. The RAM sits right on the package, a diminutive, specialized silicon PCB, with an incredibly fast connection to the primary chip and its 92 billion transistors. Leading the pack, this chip integrates a staggering number of GPU cores, with figures possibly between 16,384 and 32,000 in a dual-chip configuration. Another fascinating example is the Apple M2 Ultra, which essentially fuses two M2 Max chips into one, like folding a piece of cardboard to create a powerful symmetrical setup.

Similarly, NVIDIA has adopted this approach by aligning and fusing two processors, achieving an astounding 208 billion transistors. These processors come equipped with 192 gigabytes of memory, including 64 gigabytes of LPDDR5. Memory bandwidth peaks at 5 terabytes per second; if earlier generations of data transfer were like throwing CDs or Blu-ray discs around, this is like throwing whole hard drives. NVIDIA’s primary application of this massive power is AI processing on their GPUs, operating as Single Instruction Multiple Data systems.

This technological prowess isn’t exclusive; it’s expected to extend to x86, Arm, and other platforms. All these examples highlight real, purchasable technology, some even dating back a year and a half. We’re ushering in a new wave of exceedingly accessible hardware where, for the cost of current rentals, one might soon purchase outright. Enhancements aren’t just confined to processing speeds; networking advancements now reach speeds up to 100 gigabits per second, or 10 gigabytes per second, while the peak is at 800 gigabits per second.

Alongside this, there have been strides in storage technology comparable to tossing SSD cards. Historically, hard disks could manage 150 writes per second for running traditional relational databases. The core practice involved keeping the entire database in RAM for swift access, and sequentially logging transactions to the disk. The focal point has always been improving the speed of transaction log writes once your data resides in RAM, emphasizing efficiency in modern computing infrastructures.

How fast can you write a transaction log? It comes down to IOPS; that’s why IOPS matter. A high-end enterprise SSD gives you 4 terabytes of space, 2.5 million read IOPS, and 400,000 write IOPS. That is a stark contrast to the 150 IOPS on offer two decades ago; reaching 400,000 IOPS on a single drive is an enormous advance in persistent storage. Does this change how you approach software development? It should.
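
One way to read those figures: if every commit costs one synchronous write to the transaction log, write IOPS puts a floor under the commit rate (real databases batch and group commits, so this is deliberately conservative). A small sketch:

```python
# Conservative commit-rate floor implied by write IOPS, assuming one
# synchronous log write per commit (no batching or group commit).
write_iops_then = 150        # spinning disk, ~two decades ago
write_iops_now = 400_000     # quoted enterprise SSD figure

print(f"Then: ~{write_iops_then:,} commits/sec")
print(f"Now:  ~{write_iops_now:,} commits/sec "
      f"(~{write_iops_now // write_iops_then:,}x improvement)")
```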

Data centers are becoming more integrated, resembling chips. Yet, the world remains complex and divided into what are often referred to as autonomous entities or political domains by IETF experts. Within these domains, it’s possible to manage and simplify complexity, allowing teams to thrive and overcome challenges. When collaborating across different teams, each with its unique methodologies for managing complexity, the necessity for effective communication and interfacing becomes apparent.

This interaction is facilitated through what we call APIs, serving as treaties, while gateways and adapters act as special envoys. The goal for any agile organization is to align its system architecture with its political domain, optimizing control without reaching a point of fragility. This is often achieved in small, efficient teams, much like the successful models of WhatsApp and Instagram, which scaled tremendously with minimal engineering staff before being acquired for billions.

The current availability of open-source technology and advancements provided by silicon technology resembles what might be considered magic. These tools allow for detailed system insights and efficient problem-solving directly from the operating system, negating the need for extraneous solutions. With such capabilities, there is no need to design systems as if we are still in 2009.

You definitely still need the cloud. For startups, it’s almost a validation of credibility: big enterprises expect you to be on familiar platforms like AWS, Microsoft Azure, or GCP, and these services are now widely trusted. Being on them implies a level of security and reliability. But think of the cloud as nothing more than a tool, follow a strict budget, and remember that money dictates your operations. The harsh reality is that without sufficient funds everything halts; reflect on companies that crashed because the money ran dry. It’s sudden and intense. Treating the cloud as though you were constructing an actual data center out of cloud components helps you leverage its strengths: global reach, scalability, backup, and security.

Opting for serverless should be strategic, since it resembles having unlimited resources: appealing but tricky, like unlimited leave policies. The cloud model truly suits entities with massive scalability needs, such as governments tracking vast datasets, genome projects, or financial exchanges where geographical distribution and cost efficiency matter more than raw performance. Such scenarios benefit immensely from the cloud, especially for running AI workloads at scale. But remember, not everyone’s budget allows for this. It’s essential to manage resources wisely and design systems that are simple, modern, and unburdened by unnecessary complexity.

Participant 1: You seem to criticize the reliance on cloud, deeming it expensive and often unnecessary. Is your contention with the overarching use of distributed systems, with recent shifts in tech practices, or with things like Infrastructure as Code?

O’Hara: The industry has arguably become bloated. There’s a saying about ‘the Kubernetes industrial complex’. Needing ever more equipment just to expand server capability illustrates a misalignment in resource allocation. That question was posed at the start: how many here are part of a company with a tech budget of over $100 million? It’s a different reality in such environments.

Participant 2: We’re moving to the cloud because of compliance and future features, not to replace existing features.

O’Hara: Pretend it’s a data center, make it small. Keep your costs low. Make your runway long. Make your people productive, so you can please your clients as quickly as possible. Some people have very simple technologies that perform miracles.

 
