Towards a Middleware for Large Language Models (2024)

Narcisa Guran (guran@vss.uni-hannover.de), Florian Knauf (knauf@vss.uni-hannover.de), Man Ngo (ngo@vss.uni-hannover.de), Stefan Petrescu (petrescu@vss.uni-hannover.de), and Jan S. Rellermeyer (rellermeyer@vss.uni-hannover.de), Leibniz University Hannover, Hannover, Germany


Abstract.

Large language models have gained widespread popularity for their ability to process natural language inputs and generate insights derived from their training data, nearing the qualities of true artificial intelligence. This advancement has prompted enterprises worldwide to integrate LLMs into their services. So far, this effort is dominated by commercial cloud-based solutions like OpenAI’s ChatGPT and Microsoft Azure. As the technology matures, however, there is a strong incentive for independence from major cloud providers through self-hosting “LLM as a Service”, driven by privacy, cost, and customization needs. In practice, hosting LLMs independently presents significant challenges due to their complexity and integration issues with existing systems. In this paper, we discuss our vision for a forward-looking middleware system architecture that facilitates the deployment and adoption of LLMs in enterprises, even for advanced use cases in which we foresee LLMs to serve as gateways to a complete application ecosystem and, to some degree, absorb functionality traditionally attributed to the middleware.

Authors contributed equally and are listed in alphabetical order.

1. Introduction

Large language models (LLMs) have found mainstream success with end users due to their ability to accept natural language as input and return insights gained from massive amounts of training data. In this regard, they have closed the gap towards what we currently consider the properties of truly intelligent systems. This success has inspired companies worldwide to augment their existing services with these new capabilities, or at least strongly consider the adoption of LLMs for this purpose. So far, the landscape is dominated by commercial cloud-based LLMs such as OpenAI’s ChatGPT or Microsoft Azure’s equivalent offering explicitly geared towards enterprise adoption.

As the technology matures and finds further adoption, however, there are strong incentives to reach independence from the hyperscalers and AI cloud services and host one’s own solutions, either on-premise or in commodity clouds. We call the simplest form of adoption LLM as a Service. The main reasons for taking this step are privacy concerns (yao2024survey), cost (chen2023frugalgpt), and the ability to fine-tune the model for specific domains (hu2023llm; xu2021raise). Unfortunately, self-hosting LLMs is not trivial, as many of the established methods for hosting code do not directly apply to LLMs (Section 3). Parts of this can be attributed to the hidden complexity of LLM services, which go beyond the raw model and accumulate significant, often distributed session state, and also to the non-trivial integration of LLMs into an existing (micro-)service ecosystem. We consider both genuine and traditional middleware concerns.

In the mid-term perspective, we see further-reaching potential for adopting LLMs in the enterprise, as we see similarities with the shift experienced in the past decade when web technology and mobile device adoption led to a proliferation of enterprise portals which federated existing, often initially fragmented services behind a common user interface. LLMs have the potential to become the next step in this evolution, adding another modality of interaction to enterprise applications by enabling the interaction through natural language prompts or, in more advanced applications, through complex multi-step conversations. In such scenarios, the LLM effectively becomes the gateway but, as we show in Section 5, also serves as the enterprise application integrator, another traditional middleware domain. One can argue that this could lead to parts of the middleware being absorbed by the language model in that service discovery, binding, and even protocol adaptations can be handled by the model, as we further describe in Section 6.

In many cases, relying solely on the LLM’s response is insufficient, even when enhanced by prompt engineering or fine-tuning. This is because external services are often required to handle tasks the LLM cannot manage alone, such as accessing up-to-date information, verifying data, or executing actions. So, while prompt engineering or similar techniques can augment the LLM’s capabilities, two key scenarios emerge: one where the LLM generates responses entirely independently, and another where it needs to collaborate with external services. Compared to the first scenario (where the LLM operates autonomously), the second is significantly more complex, as external applications play a critical role in reducing computational load, improving reliability, and ensuring the trustworthiness of responses, particularly in high-stakes business contexts.

In the long-term perspective, we see the need for ensuring deterministic guarantees when interacting with an LLM system. For instance, relying solely on LLMs, even as they improve, does not fully address issues like data freshness, domain-specific knowledge, or real-time decision-making. The path of LLMs to maturity, namely being applied directly in the wild, thus critically hinges on the ability to provide reliable and accurate responses. We argue that achieving this level of assurance increasingly requires LLMs to collaborate with external services, as this coordination is the most effective way to provide deterministic guarantees about the responses. Consequently, middleware is essential to facilitate these interactions, thus helping improve reliability and accuracy. We believe this integration layer is key to unlocking the full potential of LLMs, providing a scalable and reliable framework for their deployment.

In this paper, we outline our vision and present the architecture of a middleware system that could ease the deployment and adoption of LLMs in the enterprise. We describe and functionally evaluate a proof-of-concept implementation that uses the LLM as a facility for service discovery and as a protocol adapter to integrate an external microservice. We further identify research challenges in making a comprehensive middleware for LLMs become a reality.

2. Background: LLMs

[Figure 1: Retrieval-augmented LLM deployment with multiple model variants and session cache.]

Only a decade ago, the promises of Deep Learning (DL) were a distant future to many. Machine Learning (ML), the more sober term used in the context of artificial intelligence, was the established discipline, with significant research efforts in areas such as Computer Vision (stockman2001computer), Natural Language Processing (NLP) (Chowdhary2020), and Bioinformatics (larranaga2006machine), among others. At the time, conventional wisdom held that while DL showed promise, many of its claimed benefits were to be treated with caution, as achieving them in practice remained a significant challenge (hinton2007learning).

However, the advent of AlexNet (NIPS2012_c399862d) showed that the promises of DL were within reach. It became evident that the way forward was training large neural networks on vast datasets. This breakthrough in Computer Vision rapidly inspired similar advancements in different domains, such as NLP (RNNs and LSTMs), Speech (Deep Belief Nets), Translation (Seq2Seq), Reinforcement Learning (Deep Q-Learning), and others. As researchers utilized increasing amounts of compute and data, new architectures were developed, with the Transformer (DBLP:journals/corr/VaswaniSPUJGKP17) emerging as the dominant model. Since its inception in 2017, it has replaced many earlier model architectures, becoming the foundational building block for LLMs. Given the potential of this technology, it appears that more and more tasks can now be automated, and we might see an even greater shift of workloads toward ML. However, although GPT-like models are making LLMs increasingly interesting, the understanding of the technology itself is still in its infancy, and little work has gone into connecting its intricacies to show how it works in real-world systems. To some degree, this can be attributed to a knowledge gap between the often vastly simplified user perspective of LLMs as universal smart agents and the operational complexity behind such popular services.

The services perceived as smart LLM agents, however, are complex systems that combine the Transformer model with various additional components such as the conversational state, vector databases for information retrieval, prompt-engineering modules, or even standalone applications like a Python interpreter or calculator.

A common deployment setting is retrieval-augmented generation (RAG), where user-supplied prompts are augmented with context from a knowledge base (NEURIPS2020_6b493230). This is especially useful in deployments where the LLM is supposed to handle domain-specific tasks that require domain-specific knowledge and terminology.

To motivate how LLM applications could benefit from a middleware tailored to their specific needs, Figure 1 shows the main components of a conceivable retrieval-augmented LLM deployment. The user is having a conversation with an LLM instance loaded from a model repository into one or more (in the case of very large models) units of compute available for this purpose. The model is processing several user sessions simultaneously.

The user supplies a prompt to a batch collector, which collects prompts from several users for simultaneous processing. Once a full batch is collected, the prompt batch is provided to the main LLM and to a context retrieval subsystem consisting of a vector database of context snippets. The context subsystem embeds the prompts in the context vector database’s key vector space to retrieve relevant context from that database. The original prompts are augmented with this context and appended to the ongoing conversations, which are provided to the main LLM. The main LLM minimizes computation time through the use of a KV cache that contains the model activation state for the previous state of the conversations, such that computation is only required for newly appended tokens. Finally, the LLM processes the prompt and returns an answer to the user(s).
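To make this dataflow concrete, the following minimal sketch (in Python) outlines the batch-collect, retrieve, augment, and generate steps; the embedding function, vector database, and model interfaces are placeholders rather than components of any specific framework.

from dataclasses import dataclass, field

@dataclass
class Session:
    history: list = field(default_factory=list)  # conversation so far

def rag_step(prompts, sessions, embed, vector_db, llm, k=3):
    """One batched retrieval-augmented generation step.

    Assumed interfaces: embed(texts) -> list of vectors,
    vector_db.search(vec, k) -> list of context snippets,
    llm.generate(conversations) -> list of answers.
    """
    # 1. Embed the collected prompt batch into the context key vector space.
    query_vectors = embed(prompts)

    augmented = []
    for prompt, vec, session in zip(prompts, query_vectors, sessions):
        # 2. Retrieve the most relevant context snippets for each prompt.
        snippets = vector_db.search(vec, k=k)
        # 3. Augment the prompt with the context and append it to the conversation.
        session.history.append("Context:\n" + "\n".join(snippets) + "\n\nUser: " + prompt)
        augmented.append("\n".join(session.history))

    # 4. A single batched forward pass over all augmented conversations.
    answers = llm.generate(augmented)
    for session, answer in zip(sessions, answers):
        session.history.append("Assistant: " + answer)
    return answers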

In practice, the full deployment consists of many more components than just the language model proper. The management and interconnection of these components raise typical middleware challenges and require decisions about colocation, scaling, and many more (see Section 3).

Although the top performance for LLMs is still attributed to commercial closed-source models like GPT-4 (openai2024gpt4) or Claude (Anthropic_Claude), increasingly more efforts are being made to make LLM technology available as open source, with Meta’s Llama series of models (Meta_LLAMA) currently being the most capable.

Key differences between closed-source and open-source models are that the former currently perform better and have a lower barrier to adoption (as all of the infrastructure for hosting and serving the LLM is managed by the provider). The advantages of open-source LLMs nevertheless stand strong, among them privacy, access to model weights for fine-tuning, and long-term cost benefits. Unfortunately, companies are reluctant to make closed-source models publicly available (Open-2024-05-28), as that would give away competitive advantages and also pose risks for the organization, such as potentially leaking valuable information through the model weights. Nevertheless, open-source models are a viable alternative, able to achieve accuracy within the same order of magnitude as closed-source models.

As the value derived from conversational LLMs increases for enterprises, it becomes economically viable to invest in the training or fine-tuning of custom-tailored models and host these models on owned or rented (cloud) infrastructure rather than relying on pre-packaged services.

3. Challenges in Deploying LLMs as a Service

The current trend in the industry is to augment existing enterprise application ecosystems with LLMs to add smart(er) text-based user interfaces (liao2023proactive), advanced search and information retrieval functions (ziems2023large; zhu2023large), and complex analytics tasks (nasseri2023applications; li2024can). This is often implemented by conceptually adding an LLM as a service to complement the existing services. From the presentation and discussion of the dataflow view of a retrieval-augmented LLM service (Figure 1), we can derive the following challenges:

3.1. Complexity

Packaging and hosting the LLM as a service is significantly more involved than traditional software, for which convenient frameworks and toolchains exist (humble2010continuous; shahin2017continuous). The model needs to be containerized together with a model server in order to be integrated into traditional software ecosystems that rely on RPC-style coupling. Any additional components, e.g., for managing session state, need to be tightly integrated into the dataflow, which is insufficiently covered by conventional middleware and scale-out systems that often rely on state being fully contained in an external database.

3.2. Integration with Existing Services

To utilize LLMs as part of a company’s software ecosystem, they need to be adapted to integrate seamlessly. This involves bridging a significant semantic gap between the world of natural language, which is the primary interface of the LLM, and the world of network protocols, which are the interfaces of the existing microservices. As part of a microservice chain, the LLM needs to be able to determine which service it needs to invoke next and how to invoke it, effectively requiring service discovery and the ability to speak the protocol(s) of the discovered and selected service.

[Figure: LLM as a microservice.]

3.3. Resource Allocation and Multi-Tenancy

Systems like Docker (Swarm) and Kubernetes were originally designed around managing CPU resources effectively. In contrast, LLMs are typically deployed on GPUs due to the much higher performance enabled by the higher degree of hardware parallelism. While workarounds have been introduced into the underlying container runtimes and better matching of workloads to accelerators has been implemented, the efficient exploitation of multi-tenancy on GPUs is still significantly more complex (9407125).

At this point, GPU memory is still the limiting factor for LLMs and often mandates that a single model instance fully utilize the GPU. However, this is likely to change with a tighter integration of the GPU into the system, e.g., through coherent attachment (280792), and with the development of smaller, more customized models. Furthermore, when running different customizations of the same base model, efficiently paging model revisions in and out is challenging if a cold-start problem (234835) is to be avoided.

3.4. Model Parallelism

In contrast to multi-tenancy, many larger LLMs are too large to run inference on a single GPU. In these cases, there exist approaches to split the model across several GPUs and run inference in parallel (shoeybi2020megatronlm). This presents new challenges for resource allocation and introduces additional challenges for communication between model parts: e.g., when the model is split layer-wise across multiple GPUs, layer activations need to be propagated from the GPU where they are calculated to the GPU where they are required for the next calculation. For the largest LLMs, even individual layers are too large to run on a single GPU (large-scale-llm-training), compounding these challenges.

3.5. Scalability and Elasticity

The high degree of statefulness of the conversation makes elastic scaling of LLM conversational services difficult since disruptions of this state due to scaling activities would be perceived by users as a failure of the service.

The LLM has an autoregressive nature, which means that each generated token depends on the previous ones. This ultimately means that each request made to the LLM is processed sequentially, so in order to increase throughput, requests can be batched. Memory management becomes a critical factor at this stage, as the computational state of each request needs to be stored in memory. The intermediate states of each prompt and generation are stored in a key-value (KV) cache, and reusing these states for requests that share the same prompt has proven to increase performance. Each token can take up to 800 KB in the KV cache for a 13B model (liu2024optimizing). If we scale up to a 70B model (Meta_LLAMA), each token takes up to 4300 KB of memory; consequently, a request of 8192 tokens would need about 35 GB of space. In addition, the model itself must fit into the VRAM of the GPU: for instance, a Llama 3.1 8B model (Meta_LLAMA) in bfloat16 precision requires at least 16 GB of VRAM for inference, which is within the capacity of a single A6000 GPU with 48 GB of VRAM.
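The memory figures above follow from simple back-of-the-envelope arithmetic; the short sketch below reproduces them under the stated per-token assumptions and is only an estimate, since the exact footprint depends on model architecture, precision, and serving framework.

KB = 1000  # decimal kilobyte

def kv_cache_bytes(num_tokens, bytes_per_token):
    """KV-cache footprint of a single request for a given per-token cost."""
    return num_tokens * bytes_per_token

per_token_13b = 800 * KB    # ~800 KB per token for a 13B model
per_token_70b = 4300 * KB   # ~4.3 MB per token for a 70B model

print(kv_cache_bytes(8192, per_token_70b) / 1e9)  # ~35.2 GB for an 8192-token request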

In advanced deployments, the session state may involve components beyond the LLM itself, further increasing the session’s footprint. Consequently, GPU scaling becomes an important aspect as the system handles an increasing number of requests, which can be batched to fit onto a single GPU or distributed across a GPU cluster.

3.6. Caching

LLMs tend to scale quadratically in both cost and latency with the length of the token sequence (zhang2024nomadattention). This is especially concerning considering that conversational context is typically injected into the prompt, as discussed in the previous section. In large enterprise applications, it is instrumental to apply caching at various levels to avoid the repeated recomputation of results.

Activation Caching: The most common LLM families in use today employ the Transformer architecture in a decoder-only setup (llm-survey; palm), such that the processing of each conversation token depends only on the tokens before it in the context window. This enables the caching of internal model state (the key and value tensors for each token and layer, which are required to compute the attention values for subsequent tokens) to save compute (efficiently-scaling-transformer-inference), since the addition of a follow-up user prompt to the context window does not influence the model results for previous tokens.

However, the cacheable internal model state per conversation is quite large (on the order of gigabytes) (efficiently-scaling-transformer-inference), and so a challenge for the middleware is how and when to efficiently store and restore it, and how to minimize swapping to and from the GPU where the model is evaluated.

Response Caching: Caching responses to previous queries is a crucial mechanism to ensure performance and reduce cost in enterprise applications (app-level-caching). This is particularly true for LLM queries, where inference is expensive and often requires dedicated hardware or the use of billed-by-the-token cloud APIs (bang-2023-gptcache). In contrast to classical services that often have to provide the answer to exactly identical queries from multiple users, in the LLM scenario we expect perfectly identical prompts in perfectly identical contexts to be rare. Here, the challenge is to identify previously processed prompts and contexts that are similar enough to the current prompt and context that they can be expected to have the same (or a sufficiently similar) answer.
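As an illustration of such similarity-based response caching, the sketch below keys a cache on prompt-plus-context embeddings and only reports a hit above a cosine-similarity threshold; the embedding function and the threshold value are assumptions, and a production system would use an approximate nearest-neighbor index instead of a linear scan.

import numpy as np

class SemanticResponseCache:
    """Approximate response cache keyed by prompt+context embeddings."""

    def __init__(self, embed, threshold=0.95):
        self.embed = embed          # any sentence-embedding function (assumed)
        self.threshold = threshold  # minimum cosine similarity for a cache hit
        self.keys = []
        self.values = []

    def lookup(self, prompt, context):
        if not self.keys:
            return None
        q = self.embed(prompt + "\n" + context)
        sims = [float(np.dot(q, k) / (np.linalg.norm(q) * np.linalg.norm(k)))
                for k in self.keys]
        best = int(np.argmax(sims))
        return self.values[best] if sims[best] >= self.threshold else None

    def store(self, prompt, context, response):
        self.keys.append(self.embed(prompt + "\n" + context))
        self.values.append(response)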

Model Caching: When multiple LLM-driven services are offered in the same deployment, the available compute (GPUs) has to be shared between them. As such, one challenge for an LLM middleware is to provide access to models in a timely fashion, either by loading them quickly on demand or by pre-loading models on unused compute. An LLM middleware should take advantage of optimization techniques in this area, such as the loading of model weight deltas in scenarios where multiple LLMs are fine-tuned from the same base model (yao2023deltazip).

Additionally, when no previous responses are relevant, an effective mechanism for swapping fine-tuned model deltas (yao2023deltazip) for different users is crucial. In this scenario, caching can help to quickly switch between various versions of the LLM, improving the accuracy of the responses. Another solution could be to route requests to machines that already contain the model state, which requires specific scheduling strategies (Section 4.2).

Dialogue State Tracking intends to gain an overview of the user’s intentions. It accumulates information on user goals, represented as pre-defined slots specified by a schema. The representation of the user’s intentions is updated based on the conversation, e.g., destination=Paris (lee-etal-2021-dialogue-state-tracking). After the model has collected enough information, it takes action to satisfy the task; the user’s intention guides the model when taking action. However, current state-of-the-art research does not transfer well to real-world use cases, since the datasets do not reflect real-world conditions. Further, dialogue state tracking systems do not generalize well to domains that are not represented in the distribution of their training data (jacqmin2022_dialogue_state_tracking).

3.7. Explainability

Given that more and more companies are adamant about adopting LLMs, understanding their fundamental properties and the limitations of the returned results has never been more critical. Despite the remarkable accuracy demonstrated by LLMs in numerous tasks, such as text summarization or language translation, several challenges persist, particularly regarding anomalous behavior. With the currently dominant LLM architectures that solely rely on stochastic prediction of word sequences, such hallucinations are inherent and difficult to avoid (xu2024hallucination). While mitigation strategies exist, they need to be properly integrated into the control- and dataflow of the LLM model component (ji2023towards), which is, again, a middleware concern.


3.8. Maintenance and Updates

When LLMs augment or even replace portions of conventional business logic, they ultimately have to be treated with the same rigor as services manually developed in code. This involves the entire DevOps cycle from requirements engineering to documentation, deployment, and continuous updates. However, AI models add a new layer of complexity to this problem, and LLMs are no different. Inherently, these models are trained on hand-selected data. Once deployed, their accuracy critically depends on the production inputs coming from the same or close to the same distribution as the training data. If this is not the case, e.g., because of concept drift, accuracy issues can arise that, when undetected, can be detrimental to the reliability of the entire application ecosystem (Lu_2019_Review_Concept_Drift). In addition to this input drift, which affects all DL models, LLMs can potentially cause output drift, sometimes casually referred to as hallucinations (huang2023survey_hallucination). They therefore have to be monitored with even greater rigor and potentially be updated in case of significant deviations. In complex ecosystems, this leads to a classic observability problem, especially when an LLM is used as a microservice in multiple applications, as drift might only occur in specific uses while others do not encounter it. It is therefore crucial to have a trace-based observability approach that can distinguish between different user contexts.

4. A Middleware Architecture for LLMs as a Service

The degree and severity of these challenges depend on the degree of integration of the LLM into the existing application ecosystem. We therefore consider two different cases. As the baseline, we consider the LLM as a Service use case, where the conversational agent augments the existing services but does not directly have to interact with them. An example of this scenario is an existing web portal that is enhanced by a chat agent window. In the baseline scenario, the primary challenges are tracking user sessions and scaling LLM workloads to ensure quality of service and low-latency responses. Orthogonal challenges like explainability and model maintenance also apply.

In contrast, we consider the case of full integration in the form of LLM as a Gateway. Here, the LLM becomes the new front end and needs to interact tightly with the existing components, which can be done to a degree where the LLM absorbs parts of a traditional middleware stack. We discuss this scenario in Section 5.

With the aforementioned scenarios in mind, we propose a middleware architecture based on the following functional (F) and non-functional (NF) requirements:

  1. (F)

    Low barrier of adoption. Easy to integrate and easy to use in the cloud, while also allowing for local development

  2. (F)

    Knowledge-Base Access. Facilitate the use of techniques such as RAG to ground LLM predictions in external knowledge bases

  3. (F)

    Extensibility. Allow for extending the service registry with custom functionality

  4. (NF)

    Performance. Enable state-of-the-art response performance, i.e., a Time Per Output Token (TPOT) corresponding to 10 or more tokens per second (Databricks_LLM_Inference), or alternatively at least four or more words per second (nie2024aladdin)

  5. (NF)

    Scalability. Ensure the ability to scale out with an increasing number of users, requests, and services. As the number of users increases, the system should scale accordingly, maintaining a performance level within the same order of magnitude as stated in the previous requirement

Figure 3 shows the essential components of a middleware architecture to support LLM as a Service: (1) User Registry, (2) Scheduler, (3) Cache, (4) Observability, and (5) Explainability. While the last three components are technically optional, production systems typically include them for performance and reliability reasons (LangSmith_Observability). In the following, we describe the role of each of the aforementioned components.

4.1. User Registry

The User Registry (Figure 4) is responsible for managing user onboarding to the middleware framework. Its main responsibilities include tracking service permissions for users and storing information to facilitate access control. Since this is a prevalent practice in commoditized cloud services such as AWS Lambda, which requires manually granting permissions to a function’s execution role, we consider it relevant for ensuring a low barrier of adoption and compatibility with current cloud interfaces. This implies, however, that users need to be aware of the range of applications that are available for the LLM to use.


4.2. Scheduler

The Scheduler is a critical component of the system, responsible not only for assigning workloads to available machines but also for deciding the most suitable worker type for each workload. This includes determining whether a workload requires GPUs or whether conventional CPU worker machines suffice. Consequently, to optimize system throughput and utilization, incoming requests are scheduled with a sticky-routing policy. Using metadata about active user sessions, the Scheduler routes GPU-bound requests to machines that already have the required models in GPU memory. Alternatively, for non-GPU workloads, the Scheduler leverages information about active user sessions to route to machines that may already hold state from previous executions, depending on the exact service needed for the workload.
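A minimal sketch of such a sticky-routing policy is shown below; the worker and session bookkeeping is deliberately simplified, and the data structures are placeholders for whatever metadata store the middleware uses.

class StickyScheduler:
    """Route requests to workers that already hold the required state."""

    def __init__(self, gpu_workers, cpu_workers):
        self.gpu_workers = gpu_workers   # worker id -> set of resident model ids
        self.cpu_workers = cpu_workers   # list of CPU worker ids
        self.sessions = {}               # session id -> worker id of last placement

    def place(self, session_id, model_id=None):
        # Sticky routing: reuse the worker that served this session before.
        if session_id in self.sessions:
            return self.sessions[session_id]

        if model_id is not None:
            # GPU workload: prefer a worker with the model already in GPU memory.
            warm = [w for w, models in self.gpu_workers.items() if model_id in models]
            worker = warm[0] if warm else next(iter(self.gpu_workers))
        else:
            # Non-GPU workload: simple round-robin over CPU workers.
            worker = self.cpu_workers[len(self.sessions) % len(self.cpu_workers)]

        self.sessions[session_id] = worker
        return worker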

Furthermore, to enhance performance, state-of-the-art practices for disaggregating the inference-serving mechanism can be applied, such as dividing worker resources into two pools: one for handling the prompt-processing phase and another for the token-generation phase. Loading model weights and all of the attention keys and values at every decoding step creates a significant memory bandwidth bottleneck (ainslie2023gqa). To alleviate the issue, techniques like Multi-Query Attention (MQA) and Grouped-Query Attention (GQA), which reduce the number of key-value heads compared to standard Multi-Head Attention (MHA), can be used (shazeer2019fast; ainslie2023gqa).

As the system enables having different LLMs as a service (potentially derived from the same base model but fine-tuned for different tasks), we need to ensure that the LLM serving mechanism is efficient. As one might imagine, having different LLMs for different tasks can be prohibitively expensive: serving five LLMs of a size similar to GPT-3 (175 billion parameters) would require roughly 1.75 TB of GPU memory (about 350 GB per model at 16-bit precision). Furthermore, if the LLM is on the critical path for inference, loading the model into memory is slow and can significantly affect latency. Potential solutions to alleviate this issue are Delta Compression (yao2023deltazip) or parameter-efficient fine-tuning (PEFT) (chen2023punica).

Additionally, some models are so large that they need to be deployed to multiple GPUs in parallel. This problem arises in two distinct forms: some models are too large to run on a single GPU but their individual layers still fit on one GPU, while the very largest models require even individual layers to be split across several GPUs. Ideally, fragments of the same LLM need to be available at the same time to synchronize intermediate results without delays.

4.3. Cache

The Cache is an essential element of the middleware, responsible for storing LLM deltas with an eviction policy. For query caching, tools like GPTCache (GPT_Cache) can be used to create semantic caches. The scope of caching, i.e., whether responses are cached per user/session or globally across users, can be debated. Caching per user/session is simpler as it bounds the context to single users, but storing state for a high number of users may be prohibitive. On the other hand, caching responses across different users poses its own challenges because different users may have access to different services, and a shared cache might leak information from services that restricted users should not access. Regardless of the specific option chosen, to further improve performance, caching can be separated into two stores: prompt caches (similar prompts for a given service configuration) and session caches (conversation history).

4.4. Observability

An observability component is introduced to monitor not only the traditional operational aspects of observability, i.e., throughput, compute and memory consumption, etc., but also critical functional properties such as the incoming data and the model’s behavior on that data. Additionally, the component should detect changes in the model’s input and output distributions. Within the area of out-of-distribution (OOD) detection, the literature differentiates between three settings: OOD data is available, OOD data is not available but in-distribution (ID) labels are available, and neither OOD data nor ID labels are available. While the first two settings have been subject to extensive research, the last one has not been widely investigated within the NLP community (lang2023survey_nlp_ood). Nevertheless, considering that labelled data is expensive, having only unlabelled ID data seems to be the most realistic use case. Therefore, there is a strong need to investigate this setting within the NLP community.

4.5. Explainability

This component aims to increase the output’s robustness and to avoid hallucinations, i.e., faulty results produced by an LLM that seem illogical or do not match the originally provided input (huang2023survey_hallucination).

While the literature describes a variety of techniques, some require extensive changes to the model and can therefore not be integrated through a middleware. In our architecture, we focus on non-invasive techniques and on providing interceptors into the dataflow to enable their effective integration.

LLM reasoning is often used to remediate issues of LLMs, in particular low accuracy and hallucination. One technique to induce reasoning in a model is few-shot chain-of-thought prompting (wei2023chainofthought). With that, the model can resolve a problem by breaking it up into intermediate problems and solving each one subsequently. The model returns the step-by-step results, representing its reasoning in a human-interpretable fashion (zhao2023explainability). By decomposing complex input tasks, the LLM can pinpoint and orchestrate the services to call for solving the task efficiently. This also aids in understanding the LLM’s decision-making process. However, all reasoning steps are based on the model’s internal representation, which is not necessarily grounded in the external world. To alleviate the problem of relying only on the internal representation, ReAct prompting can be applied (yao2023react). It intertwines reasoning and action steps: after choosing which steps to take, the LLM can adjust the next steps based on the results. Additionally, the LLM is able to interact with external sources such as the Internet. By combining reasoning with step-by-step actions, ReAct can retrieve information from the Internet to provide a sensible answer.
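The following sketch shows how such a ReAct-style loop could be wired into the middleware’s interceptors; the Thought/Action/Observation format, the stop condition, and the llm and tools interfaces are simplifying assumptions rather than a prescribed protocol.

def react_loop(llm, tools, question, max_steps=5):
    """Interleave reasoning ("Thought") and tool use ("Action") until an answer.

    Assumed interfaces: llm(prompt) -> str and tools[name](argument) -> str.
    """
    transcript = "Question: " + question + "\n"
    for _ in range(max_steps):
        step = llm(transcript + "Thought:")
        transcript += "Thought:" + step + "\n"
        if "Final Answer:" in step:
            return step.split("Final Answer:", 1)[1].strip()
        if "Action:" in step:
            # Expected format inside the step: Action: <tool>[<argument>]
            action = step.split("Action:", 1)[1].strip()
            name, arg = action.split("[", 1)
            observation = tools[name.strip()](arg.rstrip("]"))
            transcript += "Observation: " + observation + "\n"
    return "No answer within the step budget."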

5. LLM as a Gateway

With the full integration of the LLM-based conversational agent into the existing service ecosystem, the LLM effectively becomes the gateway (Figure 5) by providing an alternative endpoint through which actions on a variety of services can be triggered, and whose knowledge base and exposed capabilities depend on the availability of the services around it. From a middleware perspective, this requires an extensible plugin system that tracks those capabilities and integrates them into the natural language-based interface.


Natural language is not an ideal format for data transfer between microservices, but it is a great way to communicate with humans. As such, a natural role for LLMs in a service ecosystem is to bridge the gap between human-understandability and machine-understandability. In this setting, the human operator would formulate tasks in natural language and pass them to the LLM, which would translate them into a machine-understandable format (such as requests to a REST API or a SQL statement), perform the requested action, and translate the results back into natural language. This could involve the use of other services in the ecosystem, e.g., to host result files. Figure 6 shows an imaginary, idealized version of such a conversation.

One of the challenges to overcome, however, is the need for precision when enabling this interface, particularly when the prompts involve circumstances that LLMs tend to struggle with, such as the parsing of numbers. Figure 7 shows an example. A possible workaround would be to keep the human in the loop if the model is uncertain of its understanding. Of course, this hinges on detecting uncertainty with sufficient reliability, which can be implemented in the explainability component.


Consequently, in such scenarios, the complexity of the problem increases significantly. The middleware architecture not only has to deal with concerns like connecting multiple services to an LLM, but also has to address subsequent challenges, including caching for individual applications, creating unified interfaces for service communication, and load-balancing various applications, making the design and operation of such a middleware far more complex.

5.1. Service Discovery and Routing

Service discovery (zhu2005service), i.e., finding and dynamically binding clients to available endpoints, is a classic middleware concern with various established solutions available. Typical approaches are either name-based (e.g., DNS in Kubernetes (liu2018high)), attribute-based (e.g., SLP (guttman1999service)), or based on semantic mappings (e.g., based on UDDI (paliwal2011semantics)). In most traditional applications, however, it is assumed that the client knows the interface of the service it wants to bind to, e.g., because the shared interface is explicitly defined in the form of an IDL. When elevating LLMs to become gateways, this assumption can be relaxed because of the inherent semantic gap between natural language and network protocols, which naturally requires a more fuzzy, opportunistic matching.

To bridge the semantic gap between natural language and the world of traditional middleware with service registries, network protocols, etc., we have two fundamentally different options available. The first, safer option is to use an external service registry and bridge the gap by turning the discovery and binding process into a ranking problem. The second, more ambitious approach is to use the LLM itself as the service registry and protocol adapter.

5.1.1. Service Routing as Utterance Ranking

When imagining a collection of deployed microservices to which the LLM provides a human interface, the processing of a prompt into the final answer decomposes into the following broad steps: (1) identification of the service, (2) transformation of the prompt into a query that the service can handle, (3) invocation of the service, and (4) presentation of the results. We call the first two service routing and propose that they can be tackled together. The service invocation in our vision requires the provision of a common interface layer that we model after Amazon Alexa (see Section 8 for a discussion of their architecture). In the following, we therefore assume that a service provides to the LLM gateway several procedures, information about the required parameters, and a list of example utterances that the service would be able to handle.

In this scenario, service routing is the task of identifying the utterance that is most relevant to the prompt and extracting the parameters for the associated procedure. This task is closely related to the document-ranking task performed by search engines: treating the prompt as a query and the utterances as documents, service routing is a top-1 ranking task plus parameter extraction.

Ranking techniques. In search engines, there are two main approaches: cross-attention re-rankers and two-tower models (colbert-ranking). In the cross-attention technique, the most promising n documents (typically n = 1000) are identified through a relatively cheap, classical method such as BM25; each of these is concatenated to the query (prompt), and a relevance score is inferred by a language model on the query-document pair. Alternatively, the two-tower (or dual-encoder) technique (two-tower-ranking) conceptually splits the ranker into two towers: one that embeds queries and one that embeds documents, and then defines the relevance score as the closeness (often the dot product) of the embedding vectors. This enables embedding the documents ahead of time, so that at query time language model inference only needs to be run on the query. However, two-tower models are more difficult to train and tend to achieve lower accuracy than cross-attention models (in-defense-of-dual-encoders).
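The two-tower variant can be summarized in a few lines; the sketch below assumes the prompt and the example utterances have already been embedded (by any encoder) and simply scores the utterances and selects the top-1 service.

import numpy as np

def two_tower_scores(query_vec, utterance_vecs):
    """Dual-encoder relevance: dot product between prompt and utterance embeddings.

    utterance_vecs (N x d) can be embedded and indexed ahead of time, so only
    the prompt needs to be encoded at query time.
    """
    return utterance_vecs @ query_vec

def route_top1(query_vec, utterance_vecs, services):
    """Top-1 routing: return the service whose example utterance scores highest."""
    scores = two_tower_scores(query_vec, utterance_vecs)
    return services[int(np.argmax(scores))]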


Service Routing as Ranking. As long as the service registry is reasonably small, one can consider approaching service routing in the way of cross-attention re-rankers and run inference over all utterances in the registry for every prompt. In this scenario, the ranking task and model need to be modified to predict not only a relevance score but also a list of parameters for the procedure call, as shown in Figure 8(a). This extension seems straightforward if training data can be obtained or generated.

Two-tower pre-routing. Scaling the service registry to more utterances or procedures will eventually render the pure cross-attention approach cost-prohibitive. At that point, it is desirable to pre-process the set of utterances into vector embeddings for indexing and to split the routing task into its constituent parts: service identification and parameter extraction. Here, the prompt would be embedded separately from the utterances and the most relevant utterance retrieved from the vector index, as shown in Figure 8(b). This approach could identify the most relevant service, but the extension of this method to parameter extraction is less obvious.

5.2. Service Identification and Binding through the LLM

To explore the more provocative idea of integrating the LLM as a component of the middleware itself, we employ it to identify the appropriate services for different prompts and to handle the binding of responses. Below, we showcase two examples (service discovery and binding) that demonstrate this process:

formatted_prompt = [
    # System message: instruct the LLM to select (or reject) an application
    # based on the meta-information from the service registry.
    {"role": "system", "content": f"Given the following list of applications: {meta_information_app_registry}, return only the app which you think is appropriate to help with the following prompt. If you think the app is not appropriate or not relevant to help, simply return an empty string."},
    # User message: the original, unmodified user prompt.
    {"role": "user", "content": prompt}
]

In the listing above, the LLM is used specifically for service discovery by accessing meta-information about the applications registered in a service registry. The meta_information_app_registry variable contains descriptions of registered applications that assist the LLM in the process of service discovery; this meta-information can be provided in various ways, for instance, by a RAG module or manually added by users during the application registration process.


Once a relevant application has been identified, the LLM binds the allowed operations and their arguments from the user’s prompt, as shown in the following listing. This demonstrates an alternative method of integrating the LLM within the middleware to bind specific services to the appropriate arguments and operations associated with a given prompt. By consulting the allowed_operations variable, similar to the previous example, the LLM’s capabilities are enhanced with information about the system, enabling it to identify the relevant arguments and to create a mapping that associates each discovered argument with its corresponding operation.

formatted_prompt = [
    # System message: instruct the LLM to map the elements of the prompt onto
    # the allowed operations and to return the result strictly as JSON.
    {"role": "system", "content": f"Given the following allowed operations: {allowed_operations}, identify which elements from the prompt should be associated with what operation, and only return a JSON formatted list of that (operation, and numbers). For example, the JSON should look like this: [{{\"operation\": \"add\", \"numbers\": [3, 3]}}] The response should only contain the JSON."},
    # User message: the original, unmodified user prompt.
    {"role": "user", "content": prompt}
]

Consequently, the two examples above, which can also be seen as a two-step process, showcase practical ways to include the LLM in processes typically managed either explicitly by users or by the middleware. Although this approach provides significantly fewer guarantees compared to traditional middleware, integrating the LLM in this manner, from service identification to binding, enhances the system’s ability to autonomously identify and bind services, with the main advantage of reducing direct user intervention.
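The two listings could be chained as sketched below; chat stands for an arbitrary chat-completion client, and the registry metadata, operation list, and fallback behavior are illustrative assumptions rather than a fixed interface.

import json

def discover_and_bind(chat, prompt, meta_information_app_registry, allowed_operations):
    """Two-step gateway flow: (1) discover an application, (2) bind operations and arguments.

    Assumed interface: chat(messages) -> str, wrapping any chat-completion API.
    """
    # Step 1: service discovery over the registry metadata.
    app = chat([
        {"role": "system", "content":
            f"Given the following list of applications: {meta_information_app_registry}, "
            "return only the app which you think is appropriate to help with the following "
            "prompt. If no app is appropriate, return an empty string."},
        {"role": "user", "content": prompt},
    ]).strip()
    if not app:
        return None  # fall back to answering with the LLM alone

    # Step 2: binding, i.e., extracting operations and arguments as JSON.
    raw = chat([
        {"role": "system", "content":
            f"Given the following allowed operations: {allowed_operations}, identify which "
            "elements from the prompt should be associated with what operation, and only "
            "return a JSON formatted list of (operation, numbers) objects."},
        {"role": "user", "content": prompt},
    ])
    return app, json.loads(raw)  # the parsed calls can then be sent to the selected service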

6. Middleware Support for LLMs as a Gateway

Figure 9 showcases the extended architecture that supports use cases in which the LLM acts as a gateway for traditional services. The architecture has been enhanced by adding several new components to the middleware: Service Registry, Scheduler (with enhanced functionality), Service Identifier, and Execution Graph. These additions are intended to ensure the successful integration of multiple services and LLMs, and we describe their roles below.

6.1. Service Registry

The Service Registry component maintains a complete list of available services and provides a mechanism for their discovery and invocation, inspired by dynamic service discovery in microservice orchestration frameworks like Kubernetes (erdenebat2023challenges). Additionally, the Service Registry offers a unified interface for registering new services and adheres to an RPC-like abstraction for invoking services once user permissions have been verified.
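A minimal sketch of the registry interface assumed throughout this section is given below; the entry fields and method names (including get_available_services, which Section 6.3 refers to) are illustrative rather than a finalized API.

from dataclasses import dataclass, field

@dataclass
class ServiceEntry:
    name: str
    description: str                                 # natural-language description used for routing
    operations: dict = field(default_factory=dict)   # operation name -> parameter schema
    endpoint: str = ""                               # RPC/REST endpoint used for invocation

class ServiceRegistry:
    def __init__(self):
        self._services = {}

    def register(self, entry):
        """Unified registration interface for new services."""
        self._services[entry.name] = entry

    def get_available_services(self, permissions):
        """Return only the services the calling user is permitted to see."""
        return [s for s in self._services.values() if s.name in permissions]

    def invoke(self, name, operation, **kwargs):
        """RPC-like invocation stub; the concrete transport is left to the deployment."""
        raise NotImplementedError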

6.2. Scheduler

Apart from the responsibilities of the Scheduler mentioned in Section 4.2, this component includes additional functionality related to the integration of potentially multiple services and other middleware components. It schedules incoming requests (workloads) onto available worker machines and keeps track of the resources available in the system with respect to active users. The workers can be organized in pools of resources that are accessed based on permissions; for example, we may want to restrict users’ access to specialized hardware (GPUs) by default. The scheduler prioritizes workloads on a first-in-first-out basis (FIFO queue), i.e., requests that arrive at the scheduler are served in a first-come-first-served manner. For every workload, the scheduler has information about the nature of the computation, such as which particular service it needs to call. Figure 10 shows a schematic overview of the scheduler.


6.3. Service Identifier

This component is responsible for identifying the services to be used in responding to user prompts. The Service Identifier is connected to a vector database to facilitate grounding predictions in information about available services (ServiceRegistry.get_available_services()). Since the Scheduler can access information about users’ permissions, the task of excluding out-of-scope applications is handled separately. Consequently, the vector database is queried on a subset of data relevant to the query, ensuring compliance with user permissions. As a result, the Service Identifier generates a potential execution graph to solve the query, which then needs to be checked and parsed for correctness.


6.4. Execution Graph

The Execution Graph component maintains a chain of services that is executed in a loop until the initial prompt has been resolved (bottom of Figure 11). Inspired by TensorFlow’s dataflow graph (199317), this component ensures systematic processing. If additional information is needed from the user, the component stops the execution process to obtain the required information and resumes execution once the user has provided the necessary input. During execution, caching per application is attempted for each service involved, with new computations executed only if the cache misses. Additionally, this component includes a parser for each intermediate output step to validate the calls made to the Service Registry.
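A minimal sketch of this execution loop is given below; the validation hook, the user-interaction callback, and the per-application cache layout are assumptions that stand in for the corresponding middleware components.

def run_execution_graph(steps, registry, caches, validate_call, ask_user):
    """Execute a chain of service calls until the initial prompt is resolved.

    steps: list of (service, operation, args) tuples from the Service Identifier.
    validate_call: parses and validates a call against the Service Registry and
    returns the names of any missing arguments; ask_user pauses execution to obtain them.
    """
    result = None
    for service, operation, args in steps:
        missing = validate_call(service, operation, args)
        if missing:                                # pause until the user supplies the input
            args.update(ask_user(missing))

        cache = caches.setdefault(service, {})     # caching per application
        key = (operation, tuple(sorted(args.items())))
        if key not in cache:                       # compute only on a cache miss
            cache[key] = registry.invoke(service, operation, **args)
        result = cache[key]
    return result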

7. Prototype and Evaluation

We implemented a conceptual prototype of our LLM middleware architecture in Python, focusing particularly on service binding and protocol adaptation through the LLM. While the current implementation does not do its own scheduling and scaling, it models the user registry, service registry, and service identifier components and allows new services to register with the middleware to extend the capabilities of the LLM. Additionally, we compare a baseline LLM against our prototype when using a calculator, first to showcase the benefits of augmenting LLM capabilities with conventional tools, and then to investigate how LLM responses scale with a varying number of arguments and a varying number of processes that share GPU resources. Lastly, we provide a forward-looking discussion of the scalability of LLM-based middleware, specifically regarding the Execution Graph Generator.

7.1. Proof-of-concept: LLM with a Calculator Service

We experiment with prompting an LLM to solve math questions typically handled by a calculator. To do this, we use two setups:

  1. (1)

    LLM baseline: we directly prompt the LLM to solve the math question

  2. (2)

    LLM + Calculator Application: we connect the LLM to a calculator application. The LLM here helps with identifying the operations and parameters needed to solve the question, and then it formulates the exact operations (add, subtract, etc.) and arguments (5, 5, 4, etc.) for the calculator.

Whereas for the former we simply record the LLM responses and compare them against ground-truth samples, for the latter the workflow of Figure 12 has been followed. We showcase the results in Table 1. As can be seen, the LLM + Calculator Application significantly outperforms the LLM baseline, especially as the number of arguments increases, showcasing how accuracy can be significantly improved for certain types of tasks simply by connecting to an external application.

Table 1. LLM baseline vs. LLM + Calculator accuracy for an increasing number of arguments.

Args No. | LLM baseline accuracy | LLM + Calculator accuracy
2        | 86/100                | 100/100
3        | 37/100                | 100/100
5        | 1/100                 | 93/100
10       | 0/100                 | 97/100
15       | 0/100                 | 99/100
20       | 0/100                 | 99/100

We observe that as we increase the number of arguments in the prompts, the LLM takes increasingly longer to calculate the response. For example, when the number of arguments is 2 (two numbers to be added), the LLM takes 1.031 s to generate a response. As the number of arguments increases to 5, the response time grows to 1.685 s. With 15 and 20 arguments, the response times further increase to 2.235 s and 3.108 s, respectively. This indicates that as we increase the number of arguments in the query, the complexity increases accordingly, having a significant impact on the processing time of the LLM. Additionally, we test the response time of the LLM when we vary the number of processes that query it. Specifically, we notice that the response times become significantly worse when the GPU has to be shared. For instance, with 2 arguments, the response time increases from 1.031 s to 2.058 s when two processes share the GPU. Even worse, for the same query type with two arguments, when three processes share the GPU, the response time increases from 1.031 s to 8.548 s. This further highlights the importance of resource management and the ability to scale accordingly in scenarios where multiple processes might access an LLM simultaneously.


7.2. Scaling of LLM Execution Graph Generator

As the Execution Graph is the key component for processing user prompts and returning results, its scalability is of critical importance. Specifically, in future work, we aim to explore its throughput, in scenarios where we vary the number of services available in the framework. For example, it would be interesting to determine the performance of this component when the Service Registry contains only one service, as well as when it includes a larger number of services. Additionally, to ensure the feasibility of using this component, it would be important to investigate the potential impact of varying the amount of information appended to the query, such as providing either more thorough or shorter descriptions for the services. This is especially relevant if the Service Identifier has to decide what services to use based on this information. Moreover, understanding the performance of using a vector database in this context would also be critical. For instance, as some services might rely on grounding their responses in documents (for LLM-based services), filtering the vector database before querying it might yield performance advantages. Specifically, for answering queries, different parts of the database can be ignored by design, given that different users have access to only subsets of the database space due to varying service permissions.

8. Related Work

While none of the existing approaches come close to our vision of a comprehensive middleware for LLMs, several fragmented approaches exist enabling the integration of LLMs into larger systems.

LangChain (topsakal2023creating) is a framework enabling the development of LLM-based solutions. As the name connotes, the built LLM solutions can consist of several chains of microservices. The applications can access additional data sources within the deployed environment using RAG pipelines. LangChain also utilizes agents for internal reasoning based on ReAct (Section 4.5); thus, it can access external knowledge over the Internet. Users can interact with LangChain in a chat via prompts (LangChain_Concept). LangChain also implements other functionalities, like LangSmith, to enable AI observability: LangSmith allows users to trace the LLM pipeline. The trace can contain information on the user prompt, the retrieved model output, and further metadata. All collected traces are also available in a monitoring tool allowing for visual analytics (LangSmith_Observability). In comparison, our vision enables self-hosting LLMs on-premise, ensuring data privacy while extending beyond LangChain’s functionality by integrating and orchestrating multiple LLMs. Unlike LangChain, which focuses on chaining services mainly around a single LLM, we envision a system that facilitates interactions not only between multiple LLMs and users but also among the LLMs themselves. This approach addresses a more complex problem, enabling the efficient use of multiple LLMs at scale and on-premise, integrating external applications, and potentially incorporating LLMs as part of the middleware.

PrivateGPT has been created to allow the usage of generative AI for privacy-related documents (Martinez_Toro_PrivateGPT_2023). It implements a RAG pipeline and exposes APIs for document ingestion. PrivateGPT offers three different usage modes that must be selected before interacting with the LLM via prompts. The first mode is a simple, non-contextual LLM chat; only previous chat messages, but not ingested documents, are considered in this mode. In the second mode, users can ask the LLM questions about already ingested documents; here, previous chat messages are also taken into account. The last mode offers a search within the ingested documents, returning the four most relevant text chunks including the source documents. PrivateGPT is restricted in its usage: it only interacts with the user and their ingested documents but not with other systems, and the separate modes limit its functionality.

Amazon offers a similar service with Amazon Bedrock. It enables the integration of foundation models (FMs) into the user’s application using AWS tools. Bedrock allows the customization of AI models and implements RAG. It can also interact with the enterprise system and data sources using agents. Agents bridge the gap between the FM, the user, and the user’s end-system. They are responsible for understanding the user request and breaking it down into smaller steps. Based on that, they perform API calls to the end system. Agents can re-prompt the user or query data from a knowledge base if any information is missing. They retrieve the API call results and invoke the FM to interpret the output. Afterwards, agents evaluate whether further steps or information are needed, or whether the result is sufficient. Depending on the decision, they reiterate the process of making more API calls and prompting the model, re-prompt the user, or return a result to the user (Amazon_Bedrock_Agents). Similarly to LangChain, Amazon Bedrock focuses on embedding LLMs into the ecosystem. However, compared to our vision, Bedrock is cloud-based, while our solution can be run on-premise, ensuring data privacy. Further, Bedrock does not facilitate chaining LLMs to enable LLM-to-LLM communication. Bedrock’s tracing feature captures all metadata within logs at each possible step. Nevertheless, there is currently no functionality to detect whether drift within the data or the model has happened, which is part of our vision.

Seldon and NVIDIA offer a different kind of software solution. Rather than using LLMs for service and task orchestration, they provide functionalities to integrate AI models into the environment as microservices (NVIDIA_Triton; Seldon_Core). The models are deployed into given environments: NVIDIA provides an inference server for the models, whilst Seldon employs a Kubernetes cluster. The environment manages, orchestrates, and observes the deployed microservices. Further, both platforms focus on integrating different ML models and boosting the system’s efficiency. They do not directly consider ML microservices for inter-microservice communication and thus do not consider the usage of LLMs within microservice chains.

Amazon Alexa can be seen as an early example of an extensible agent system that shows properties similar to our proposed LLM as a Gateway design. It integrates classically programmed services with a natural language user interface. In this regard, it can serve as a gateway to Internet services for home users and aggregate information and services in the manner of a personal assistant. Service developers provide a specifically shaped interface (a skill) for the natural language component to latch onto, and the natural language component identifies the skill to use and the parameters to pass into it. Each skill bundles a number of intents, each of which is effectively a function call: it has a name and an arbitrary number of typed slots, performs some actions, and returns a result that the natural language component renders into natural language. For intents that require many slots, confirmation, or other multi-turn scenarios, there is a dialogue manager (Amazon_dialogue, ) that can be classically programmed, or the Conversations framework (Alexa_Conversations, ), in which a language model is trained on utterances and example conversations to keep track of the conversation and extract intent and slot values over multiple prompts.
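
The intent-and-slot dispatch described above can be illustrated with the following plain Python sketch. It does not use the Alexa Skills Kit; the intent name, slot values, and the rendering step are hypothetical and only mirror the structure described in the text.

```python
# Hypothetical sketch of intent-and-slot dispatch in an Alexa-like gateway
# (not the Alexa Skills Kit API).

from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Intent:
    name: str
    slots: Dict[str, str]  # slot name -> typed, already-extracted value


def order_pizza(slots: Dict[str, str]) -> str:
    """Example skill handler: effectively a function call with typed slots."""
    return f"ordered a {slots['size']} {slots['topping']} pizza"


# The natural language component would select the handler and fill the slots;
# here both are hard-coded for illustration.
HANDLERS: Dict[str, Callable[[Dict[str, str]], str]] = {"OrderPizza": order_pizza}


def handle(intent: Intent) -> str:
    """Dispatch an intent to its skill handler and render the result as language."""
    result = HANDLERS[intent.name](intent.slots)
    return f"Okay, I have {result}."


if __name__ == "__main__":
    print(handle(Intent("OrderPizza", {"size": "large", "topping": "margherita"})))
```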

9. Conclusions and Future Research

The increasing interest in integrating LLMs into enterprise environments underscores their tremendous potential for enhancing user interfaces, search capabilities, and analytics tasks. While commercial cloud-based solutions like OpenAI's ChatGPT or Anthropic's Claude have led the way, there is a notable gap in the availability of on-premise LLM solutions, particularly those that provide robust middleware support.

In this work, we have outlined our vision for a comprehensive middleware, discussed where hosting and integrating LLMs introduces challenges beyond those of traditional software components, and shown ways in which LLMs can themselves become an active part of the middleware to address these challenges.

Many aspects of our vision, e.g., effective scheduling of multiple versions of large LLMs or the observability challenge, touch on problems with significant research gaps or on areas in which theoretical results have not yet been successfully put into practice. We therefore expect the design and construction of middleware for LLMs to become a long-term research effort for our community.

References

  • (1) Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, Manjunath Kudlur, Josh Levenberg, Rajat Monga, Sherry Moore, Derek G. Murray, Benoit Steiner, Paul Tucker, Vijay Vasudevan, Pete Warden, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: A system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pages 265–283, Savannah, GA, November 2016. USENIX Association.
  • (2) Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. GPT-4 technical report, 2024.
  • (3) Joshua Ainslie, James Lee-Thorp, Michiel de Jong, Yury Zemlyanskiy, Federico Lebrón, and Sumit Sanghai. GQA: Training generalized multi-query transformer models from multi-head checkpoints, 2023.
  • (4) Amazon Alexa. Define the dialog to collect and confirm required information. https://developer.amazon.com/en-US/docs/alexa/custom-skills/define-the-dialog-to-collect-and-confirm-required-information.html, 2024. Accessed: 2024-05-30.
  • (5) Amazon Alexa. Dialog management with Alexa Conversations. https://developer.amazon.com/en-US/alexa/alexa-skills-kit/dialog-management, 2024. Accessed: 2024-05-30.
  • (6) Amazon. Agents for Amazon Bedrock. https://docs.aws.amazon.com/bedrock/latest/userguide/agents.html, 2024. Accessed: 2024-04-16.
  • (7) Anthropic. Meet Claude. https://www.anthropic.com/claude, 2024. Accessed: 2024-04-16.
  • (8) Fu Bang. GPTCache: An open-source semantic cache for LLM applications enabling faster answers and cost savings. In Liling Tan, Dmitrijs Milajevs, Geeticka Chauhan, Jeremy Gwinnup, and Elijah Rippeth, editors, Proceedings of the 3rd Workshop for Natural Language Processing Open Source Software (NLP-OSS 2023), pages 212–218, Singapore, December 2023. Association for Computational Linguistics.
  • (9) Wei-Cheng Chang, Felix X. Yu, Yin-Wen Chang, Yiming Yang, and Sanjiv Kumar. Pre-training tasks for embedding-based large-scale retrieval. CoRR, abs/2002.03932, 2020.
  • (10) Lequn Chen, Zihao Ye, Yongji Wu, Danyang Zhuo, Luis Ceze, and Arvind Krishnamurthy. Punica: Multi-tenant LoRA serving, 2023.
  • (11) Lingjiao Chen, Matei Zaharia, and James Zou. FrugalGPT: How to use large language models while reducing cost and improving performance, 2023.
  • (12) K. R. Chowdhary. Natural Language Processing, pages 603–649. Springer India, New Delhi, 2020.
  • (13) Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. PaLM: Scaling language modeling with pathways. Journal of Machine Learning Research, 24(240):1–113, 2023.
  • (14) Databricks. LLM inference performance engineering: Best practices. https://www.databricks.com/blog/llm-inference-performance-engineering-best-practices, 2024. Accessed: 2024-05-30.
  • (15) Baasanjargal Erdenebat, Bayarjargal Bud, and Tamás Kozsik. Challenges in service discovery for microservices deployed in a Kubernetes cluster – a case study. Infocommunications Journal, 15(SI):69–75, 2023.
  • (16) Donghyun Gouk, Sangwon Lee, Miryeong Kwon, and Myoungsoo Jung. Direct access, high-performance memory disaggregation with DirectCXL. In 2022 USENIX Annual Technical Conference (USENIX ATC 22), pages 287–294, Carlsbad, CA, July 2022. USENIX Association.
  • (17) Erik Guttman. Service Location Protocol: Automatic discovery of IP network services. IEEE Internet Computing, 3(4):71–80, 1999.
  • (18) Geoffrey E. Hinton. Learning multiple layers of representation. Trends in Cognitive Sciences, 11(10):428–434, 2007.
  • (19) Zhiqiang Hu, Lei Wang, Yihuai Lan, Wanyu Xu, Ee-Peng Lim, Lidong Bing, Xing Xu, Soujanya Poria, and Roy Ka-Wei Lee. LLM-Adapters: An adapter family for parameter-efficient fine-tuning of large language models. In The 2023 Conference on Empirical Methods in Natural Language Processing, 2023.
  • (20) Lei Huang, Weijiang Yu, Weitao Ma, Weihong Zhong, Zhangyin Feng, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, and Ting Liu. A survey on hallucination in large language models: Principles, taxonomy, challenges, and open questions, 2023.
  • (21) Jez Humble and David Farley. Continuous delivery: Reliable software releases through build, test, and deployment automation. Pearson Education, 2010.
  • (22) Léo Jacqmin, Lina M. Rojas-Barahona, and Benoit Favre. "Do you follow me?": A survey of recent approaches in dialogue state tracking, 2022.
  • (23) Ziwei Ji, Tiezheng Yu, Yan Xu, Nayeon Lee, Etsuko Ishii, and Pascale Fung. Towards mitigating LLM hallucination via self reflection. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 1827–1843, 2023.
  • (24) Omar Khattab and Matei Zaharia. ColBERT: Efficient and effective passage search via contextualized late interaction over BERT. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '20, pages 39–48, New York, NY, USA, 2020. Association for Computing Machinery.
  • (25) Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In F. Pereira, C. J. Burges, L. Bottou, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems, volume 25. Curran Associates, Inc., 2012.
  • (26) Hao Lang, Yinhe Zheng, Yixuan Li, Jian Sun, Fei Huang, and Yongbin Li. A survey on out-of-distribution detection in NLP, 2023.
  • (27) LangChain. Conceptual guide. https://python.langchain.com/v0.2/docs/concepts/, 2024. Accessed: 2024-05-30.
  • (28) LangChain. LangSmith tracing. https://docs.smith.langchain.com/concepts/tracing, 2024. Accessed: 2024-05-28.
  • (29) Pedro Larranaga, Borja Calvo, Roberto Santana, Concha Bielza, Josu Galdiano, Inaki Inza, José A. Lozano, Rubén Armananzas, Guzmán Santafé, Aritz Pérez, et al. Machine learning in bioinformatics. Briefings in Bioinformatics, 7(1):86–112, 2006.
  • (30) Chia-Hsuan Lee, Hao Cheng, and Mari Ostendorf. Dialogue state tracking with a language model using schema-driven prompting. In Marie-Francine Moens, Xuanjing Huang, Lucia Specia, and Scott Wen-tau Yih, editors, Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 4937–4949, Online and Punta Cana, Dominican Republic, November 2021. Association for Computational Linguistics.
  • (31) Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. Retrieval-augmented generation for knowledge-intensive NLP tasks. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 9459–9474. Curran Associates, Inc., 2020.
  • (32) Jinyang Li, Binyuan Hui, Ge Qu, Jiaxi Yang, Binhua Li, Bowen Li, Bailin Wang, Bowen Qin, Ruiying Geng, Nan Huo, et al. Can LLM already serve as a database interface? A big bench for large-scale database grounded text-to-SQLs. Advances in Neural Information Processing Systems, 36, 2024.
  • (33) Lizi Liao, Grace Hui Yang, and Chirag Shah. Proactive conversational agents in the post-ChatGPT world. In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 3452–3455, 2023.
  • (34) Haifeng Liu, Shugang Chen, Yongcheng Bao, Wanli Yang, Yuan Chen, Wei Ding, and Huasong Shan. A high performance, scalable DNS service for very large scale container cloud platforms. In Proceedings of the 19th International Middleware Conference Industry, pages 39–45, 2018.
  • (35) Shu Liu, Asim Biswal, Audrey Cheng, Xiangxi Mo, Shiyi Cao, Joseph E. Gonzalez, Ion Stoica, and Matei Zaharia. Optimizing LLM queries in relational workloads, 2024.
  • (36) Jie Lu, Anjin Liu, Fan Dong, Feng Gu, João Gama, and Guangquan Zhang. Learning under concept drift: A review. IEEE Transactions on Knowledge and Data Engineering, 31(12):2346–2363, 2019.
  • (37) Iván Martínez Toro, Daniel Gallego Vico, and Pablo Orgaz. PrivateGPT. https://github.com/imartinez/privateGPT, May 2023.
  • (38) Aditya Menon, Sadeep Jayasumana, Ankit Singh Rawat, Seungyeon Kim, Sashank Reddi, and Sanjiv Kumar. In defense of dual-encoders for neural ranking. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato, editors, Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 15376–15400. PMLR, 17–23 Jul 2022.
  • (39) Jhonny Mertz and Ingrid Nunes. A qualitative study of application-level caching. IEEE Transactions on Software Engineering, 43(9):798–816, 2017.
  • (40) Meta. Introducing Meta Llama 3: The most capable openly available LLM to date. https://ai.meta.com/blog/meta-llama-3/, 2024. Accessed: 2024-05-20.
  • (41) Shervin Minaee, Tomas Mikolov, Narjes Nikzad, Meysam Chenaghlu, Richard Socher, Xavier Amatriain, and Jianfeng Gao. Large language models: A survey, 2024.
  • (42) Anup Mohan, Harshad Sane, Kshitij Doshi, Saikrishna Edupuganti, Naren Nayak, and Vadim Sukhomlinov. Agile cold starts for scalable serverless. In 11th USENIX Workshop on Hot Topics in Cloud Computing (HotCloud 19), Renton, WA, July 2019. USENIX Association.
  • (43) Deepak Narayanan, Mohammad Shoeybi, Jared Casper, Patrick LeGresley, Mostofa Patwary, Vijay Korthikanti, Dmitri Vainbrand, Prethvi Kashinkunti, Julie Bernauer, Bryan Catanzaro, Amar Phanishayee, and Matei Zaharia. Efficient large-scale language model training on GPU clusters using Megatron-LM. In Proceedings of the ACM/IEEE Supercomputing Conference, SC '21, New York, NY, USA, 2021. Association for Computing Machinery.
  • (44) Mehran Nasseri, Patrick Brandtner, Robert Zimmermann, Taha Falatouri, Farzaneh Darbanian, and Tobechi Obinwanne. Applications of large language models (LLMs) in business analytics – exemplary use cases in data preparation tasks. In International Conference on Human-Computer Interaction, pages 182–198. Springer, 2023.
  • (45) Chengyi Nie, Rodrigo Fonseca, and Zhenhua Liu. Aladdin: Joint placement and scaling for SLO-aware LLM serving, 2024.
  • (46) NVidia. Triton Inference Server. https://www.nvidia.com/de-de/ai-data-science/products/triton-inference-server, 2024. Accessed: 2024-05-29.
  • (47) Aabhas V. Paliwal, Basit Shafiq, Jaideep Vaidya, Hui Xiong, and Nabil Adam. Semantics-based automated service discovery. IEEE Transactions on Services Computing, 5(2):260–275, 2011.
  • (48) Reiner Pope, Sholto Douglas, Aakanksha Chowdhery, Jacob Devlin, James Bradbury, Jonathan Heek, Kefan Xiao, Shivani Agrawal, and Jeff Dean. Efficiently scaling transformer inference. In D. Song, M. Carbin, and T. Chen, editors, Proceedings of Machine Learning and Systems 5 pre-proceedings (MLSys 2023), 2023.
  • (49) B. Pratheek, Neha Jawalkar, and Arkaprava Basu. Improving GPU multi-tenancy with page walk stealing. In 2021 IEEE International Symposium on High-Performance Computer Architecture (HPCA), pages 626–639, 2021.
  • (50) Seldon. Overview of Seldon Core components. https://docs.seldon.io/projects/seldon-core/en/latest/workflow/overview.html, 2024. Accessed: 2024-05-29.
  • (51) Mojtaba Shahin, Muhammad Ali Babar, and Liming Zhu. Continuous integration, delivery and deployment: A systematic review on approaches, tools, challenges and practices. IEEE Access, 5:3909–3943, 2017.
  • (52) Noam Shazeer. Fast transformer decoding: One write-head is all you need, 2019.
  • (53) Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. Megatron-LM: Training multi-billion parameter language models using model parallelism, 2020.
  • (54) George Stockman and Linda G. Shapiro. Computer vision. Prentice Hall PTR, 2001.
  • (55) Zilliz Tech. GPTCache: A library for creating semantic cache for LLM queries. https://github.com/zilliztech/GPTCache, 2024. Accessed: 2024-05-30.
  • (56) Oguzhan Topsakal and Tahir Cetin Akinci. Creating large language model applications utilizing LangChain: A primer on developing LLM apps fast. In International Conference on Applied Engineering and Natural Sciences, volume 1, pages 1050–1056, 2023.
  • (57) Deloitte UK. Open vs. closed-source generative AI. https://www2.deloitte.com/uk/en/blog/ai-institute/2023/open-vs-closed-source-generative-ai.html, 2024. Accessed: 2024-05-28.
  • (58) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. CoRR, abs/1706.03762, 2017.
  • (59) Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. Chain-of-thought prompting elicits reasoning in large language models, 2023.
  • (60) Runxin Xu, Fuli Luo, Zhiyuan Zhang, Chuanqi Tan, Baobao Chang, Songfang Huang, and Fei Huang. Raise a child in large language model: Towards effective and generalizable fine-tuning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 9514–9528, 2021.
  • (61) Ziwei Xu, Sanjay Jain, and Mohan Kankanhalli. Hallucination is inevitable: An innate limitation of large language models. arXiv preprint arXiv:2401.11817, 2024.
  • (62) Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. ReAct: Synergizing reasoning and acting in language models, 2023.
  • (63) Xiaozhe Yao and Ana Klimovic. DeltaZip: Multi-tenant language model serving via delta compression, 2023.
  • (64) Yifan Yao, Jinhao Duan, Kaidi Xu, Yuanfang Cai, Zhibo Sun, and Yue Zhang. A survey on large language model (LLM) security and privacy: The good, the bad, and the ugly. High-Confidence Computing, page 100211, 2024.
  • (65) Tianyi Zhang, Jonah Wonkyu Yi, Bowen Yao, Zhaozhuo Xu, and Anshumali Shrivastava. NoMAD-Attention: Efficient LLM inference on CPUs through multiply-add-free attention, 2024.
  • (66) Haiyan Zhao, Hanjie Chen, Fan Yang, Ninghao Liu, Huiqi Deng, Hengyi Cai, Shuaiqiang Wang, Dawei Yin, and Mengnan Du. Explainability for large language models: A survey, 2023.
  • (67) Feng Zhu, Matt W. Mutka, and Lionel M. Ni. Service discovery in pervasive computing environments. IEEE Pervasive Computing, 4(4):81–90, 2005.
  • (68) Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Zhicheng Dou, and Ji-Rong Wen. Large language models for information retrieval: A survey. arXiv preprint arXiv:2308.07107, 2023.
  • (69) Noah Ziems, Wenhao Yu, Zhihan Zhang, and Meng Jiang. Large language models are built-in autoregressive search engines. arXiv preprint arXiv:2305.09612, 2023.