The proliferation of AI technologies, with machine learning at their core, has rapidly accelerated, and they have become vital to society. However, conventional computing may be approaching its technological limits in processing the continually growing volumes of data with speed and accuracy, especially as sensor technology advances. Consequently, "quantum machine learning," which merges the high-speed data processing capabilities of "quantum computing" with "machine learning," is garnering attention as a technology for the future.

Quantum machine learning (QML) involves algorithms designed for execution on quantum computers, which operate on principles distinct from those of classical computers. Consequently, machine learning algorithms written for classical computers cannot be directly implemented on quantum ones. To use QML, algorithms must be adapted to the unique operational principles of quantum computers, a process more complex than typical porting across operating systems or programming languages. Even successful adaptation does not ensure that a quantum computer will outperform its classical counterpart running the adapted algorithm. A quantum algorithm must demonstrably outperform its classical version to be considered valuable, a superiority known as quantum advantage (sometimes called quantum supremacy).
QML intertwines quantum computing and machine learning, presenting a novel approach to computational tasks and data processing. Quantum computers, built on quantum bits (qubits), operate fundamentally differently from classical computers, whose classical bits represent either "0" or "1". Qubits, by contrast, can represent both "0" and "1" simultaneously through a phenomenon known as superposition. Different physical implementations of qubits, such as "superconducting qubits" and "optical qubits," achieve superposition in different ways, which affects both the theory and the apparatus used in calculations.
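The superposition described above can be sketched with plain linear algebra. This is only a classical simulation of the state-vector math; real hardware (superconducting, photonic, etc.) realizes the same mathematics with very different apparatus.

```python
# Minimal sketch of single-qubit superposition using a plain state vector.
import numpy as np

ket0 = np.array([1.0, 0.0])          # |0>
ket1 = np.array([0.0, 1.0])          # |1>

# The Hadamard gate puts |0> into the equal superposition (|0> + |1>) / sqrt(2)
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
psi = H @ ket0

# Born rule: measurement probabilities are the squared amplitudes
probs = np.abs(psi) ** 2
print(probs)   # [0.5 0.5] -- "0" and "1" are each observed half the time
```

Until it is measured, the qubit carries both amplitudes at once; measurement collapses it to "0" or "1" with the probabilities computed above.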



Machine Learning (ML) can be categorized by whether the data and the algorithm are classical or quantum. If either the data or the algorithm (or both) is quantum, as illustrated in the preceding figure, the computation is deemed to involve quantum computing and is labeled QQ, QC, or CQ. This classification is not rigid and several hybrid algorithms exist, but we will adhere to it throughout the paper. In certain instances, for example, only the optimization task is performed by a quantum processor while the remaining processes run on a classical one.
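As a toy illustration of this labeling scheme (assuming the first letter denotes the data and the second the algorithm, a common but not universal convention):

```python
# Sketch of the CC / CQ / QC / QQ taxonomy: one letter for the nature of
# the data, one for the nature of the algorithm/device running it.
def classify(data: str, algorithm: str) -> str:
    """data, algorithm: 'classical' or 'quantum' -> a label like 'CQ'."""
    code = {"classical": "C", "quantum": "Q"}
    return code[data] + code[algorithm]

def involves_quantum_computing(label: str) -> bool:
    # Per the text, any label containing a Q (QQ, QC, CQ) involves
    # quantum computing; only CC is purely classical ML.
    return "Q" in label

print(classify("classical", "quantum"))   # a quantum algorithm on classical data
```

A hybrid pipeline where only the optimizer runs on a quantum processor would still fall under one of the Q-containing labels.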

QML is gaining traction due to its potential to process large data volumes at high speed, especially in the era of Noisy Intermediate-Scale Quantum (NISQ) computers. Although NISQ computers are susceptible to noise and errors, they have found a niche in QML because ML algorithms are generally designed assuming the presence of noise and do not require absolute precision; in some instances, qubit fluctuations due to noise can even benefit certain machine learning algorithms. Interestingly, NISQ devices, with their intrinsic noise, are relatively less prone to overfitting than classical computers: techniques such as "quantum circuit learning" (QCL) are less susceptible to overfitting and can achieve high performance with small circuits.

However, noise, while sometimes beneficial, is fundamentally a hurdle. The "quantum-classical hybrid" NISQ approach was developed to enhance versatility, delegating to a classical computer those calculations where noise is undesirable. This hybrid approach can handle classical and quantum data simultaneously: "complex information" in the quantum world can be expressed directly using qubits, while tasks requiring rigorous and "complex calculations" can be computed using classical bits.
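The hybrid loop behind such NISQ methods can be sketched in a few lines: a classical optimizer tunes the parameter of a quantum circuit, which is simulated classically here. The one-qubit RY/Z setup and the parameter names are illustrative choices, not a specific library's API; the parameter-shift rule shown is a standard way to get gradients from two circuit evaluations.

```python
# Hedged sketch of the quantum-classical hybrid loop (e.g. quantum
# circuit learning): classical gradient descent over a circuit parameter.
import math

def expectation(theta: float) -> float:
    # <Z> after RY(theta)|0>; on hardware this would be estimated from shots.
    return math.cos(theta)

def parameter_shift_grad(theta: float) -> float:
    # Parameter-shift rule: exact gradient from two shifted evaluations.
    return 0.5 * (expectation(theta + math.pi / 2) - expectation(theta - math.pi / 2))

theta, lr = 0.1, 0.4
for _ in range(100):                      # classical outer loop
    theta -= lr * parameter_shift_grad(theta)

print(expectation(theta))                 # approaches the minimum value -1.0
```

The quantum device only ever evaluates expectations; all bookkeeping and parameter updates stay on the classical side, which is exactly the division of labor the hybrid approach exploits.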
In one study, the authors devised the quantum kernel method and the projected quantum kernel method as quantum counterparts of the classical kernel method and compared their prediction performance. They showed that the quantum algorithms outperform the classical algorithm under certain conditions (large model size and large geometric difference).
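The paper's exact construction is not reproduced here, but the core quantum-kernel idea can be sketched under a simple assumption: encode each scalar feature as a one-qubit rotation and take the squared overlap of the resulting states, K(x, x') = |⟨φ(x)|φ(x')⟩|².

```python
# Minimal quantum-kernel sketch with one-qubit angle encoding (an
# illustrative feature map, not the paper's specific circuit).
import numpy as np

def feature_state(x: float) -> np.ndarray:
    # Angle encoding: RY(x)|0> = [cos(x/2), sin(x/2)]
    return np.array([np.cos(x / 2), np.sin(x / 2)])

def quantum_kernel(x: float, y: float) -> float:
    # Squared overlap of the two encoded states
    return float(np.abs(feature_state(x) @ feature_state(y)) ** 2)

# The Gram matrix can then feed a classical kernel machine (e.g. an SVM).
X = [0.0, 0.5, 1.0]
gram = np.array([[quantum_kernel(a, b) for b in X] for a in X])
print(gram)
```

On real hardware the overlap would be estimated from measurement statistics rather than computed exactly, and richer multi-qubit feature maps are where a quantum kernel could differ meaningfully from classical ones.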

Quantum deep learning techniques include quantum CNNs, hybrid CNNs, and quantum RNNs. The report also presents research in which quantum techniques are applied to natural language processing problems, including the classification of hate speech. Because superposition allows multiple states to be processed simultaneously, quantum machine learning has the potential to significantly shorten both the learning over large amounts of data and the trial-and-error within a given environment that characterize classical machine learning. Furthermore, quantum machine learning is expected to extract features from complex events that have been difficult to capture with classical machine learning.

Despite the promising prospects, QML is still in its nascent stages, with current systems being resource-intensive and often showing subpar performance compared to classical ML systems. The development of QML systems that are deployable in real-life scenarios necessitates substantial effort and research. The theoretical prospects of QML are numerous, but practical implementation is contingent upon overcoming challenges related to quantum computing hardware, algorithm development, and ensuring that quantum versions of algorithms demonstrably outperform their classical counterparts.






The "Company" is a privately held Quantum Computing services company with a mission to bring to market a Quantum Machine Learning SaaS (Software-as-a-Service) product. To achieve this mission, the company must develop best-in-class Quantum Computing hardware via a hybrid of off-the-shelf purchases and in-house development. This hardware will then power our cutting-edge Quantum Machine Learning algorithms for a variety of B2B (business-to-business) use cases. The table below shows our main strategic drivers and the company's alignment with them so far:

The table below summarizes the various Quantum Computing Hardware off-the-shelf options we either have to compete with or have at our disposal for purchase:

The table above is a subset of the table at: https://en.wikipedia.org/wiki/List_of_quantum_processors#Circuit-based_quantum_processors. We are only considering commercially available hardware and/or SaaS platforms that we can build our QML service on.
The H1-1 from Quantinuum is currently our front-runner quantum computing hardware solution, offering the best balance between fidelity and number of physical qubits (reflected in its high Quantum Volume). Pricing data is uncertain, which presents a challenge in developing a financial model for our SaaS offerings. It is therefore somewhat uncertain what we can charge for our QML algorithms as a service; this will depend heavily on the use cases and on how big a problem we are solving for our clients.
To better understand our decision for the H1-1 device, we have designed a morphological matrix to more clearly delineate why we chose the device (highest Quantum Volume of the options from Quantinuum). See below:
[Figure missing: morphological matrix for quantum hardware selection]
The best-developed mathematical model for the performance of a quantum computer is the Quantum Volume (QV) metric defined by IBM. This equation relates two key Figures of Merit (FOMs) for a quantum computer, the number of qubits and the error rate, both of which are essential for characterizing how performant a quantum computer is. The equation is:
[Equation image missing: Quantum Volume formula]
QV is a unitless measure of performance meant to compare disparate quantum computing architectures and devices using FOMs common to all of them in this nascent industry. Generally, the larger the QV, the more complex the problems a quantum computer can solve. Unlike classical bits, qubits tend to decohere, which results in a loss of performance for the computer. There is therefore a tradeoff between the number of qubits in the computer and how often they decohere, and so a few fault-tolerant qubits tend to be more performant than a larger number of noisy, error-prone qubits.
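The exact form of the equation used in this report is not recoverable from this copy. For context only, IBM's published definition expresses the quantum volume as

```latex
\log_2 \mathrm{QV} \;=\; \max_{m \le n} \, \min\!\big(m,\; d(m)\big),
\qquad d(m) \,\approx\, \frac{1}{m\,\epsilon_{\mathrm{eff}}}
```

where $m$ ranges over subsets of the $n$ available qubits, $d(m)$ is the largest depth at which random $m$-qubit model circuits still pass the heavy-output test, and $\epsilon_{\mathrm{eff}}$ is the effective per-gate error rate. This ties QV to exactly the two FOMs analyzed in this section: qubit count and error rate.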
To characterize the sensitivity of the quantum volume to each of the FOMs, we take the partial derivative of QV with respect to each FOM, which yields:

Taking a nominal design point of n=128 Qubits and p=5%, we calculate that:

Therefore, for the chosen nominal design point, we see that the QV is roughly 3x more sensitive to an incremental change in error rate than to an incremental change in the number of qubits. This is a substantial factor, but the differences are within the same order of magnitude. To gain more confidence in this sensitivity analysis, we normalize the values of dQV/dn and dQV/dp by the corresponding value of QV for a few design points, producing the following table:

What we see in the data follows directly from the normalization equations, i.e.

This produces a flame chart as follows:

We can therefore conclude that the error rate has a more pronounced effect on the QV of the quantum computer than the number of qubits, which supports the earlier assertion that a few fault-tolerant qubits tend to be more performant than a larger number of noisy, error-prone qubits. As the field of Quantum Computing and Quantum Machine Learning grows, it will not be surprising if error rate plays an even more crucial role in a future governing equation for the computational efficiency of a quantum computing system or device.
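Since the governing equation image is missing from this copy, the sketch below demonstrates only the normalization procedure itself, using a hypothetical stand-in model QV(n, p) = 2^(n(1−p)); this is not the report's equation, so its outputs will not match the figures quoted above.

```python
# Generic normalized-sensitivity computation via central finite differences.
# NOTE: qv_model is a hypothetical stand-in, not the report's QV equation.
import math

def qv_model(n: float, p: float) -> float:
    return 2.0 ** (n * (1.0 - p))

def normalized_sensitivities(f, n: float, p: float, h: float = 1e-6):
    # (1/f) * df/dn and (1/f) * df/dp, mirroring the normalization step.
    f0 = f(n, p)
    d_dn = (f(n + h, p) - f(n - h, p)) / (2.0 * h) / f0
    d_dp = (f(n, p + h) - f(n, p - h)) / (2.0 * h) / f0
    return d_dn, d_dp

d_dn, d_dp = normalized_sensitivities(qv_model, 128.0, 0.05)
print(d_dn, d_dp)   # for this model: ln(2)*(1-p) and -ln(2)*n
```

Finite differences make the procedure reusable for any candidate QV model, so the same routine can populate the design-point table once the true equation is restored.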
For our second FOM, we look at the less complex Stable Qubits (SQ) metric. This is simply a linear relationship between the number of qubits and the error rate, as follows:
SQ = n × (1 − p)
The overall effect of a non-zero error rate is to reduce the number of qubits usable for computation below the nominal number of qubits in the quantum computer. Sensitivity analysis for SQ is thus more straightforward, as follows:
dSQ/dn = 1 − p and dSQ/dp = −n
Normalizing each of these sensitivities as before yields:
(1/SQ)·dSQ/dn = 1/n and (1/SQ)·dSQ/dp = −1/(1 − p)
Taking a nominal design point of n=128 Qubits and p=5% as before, we calculate that:
(1/SQ)·dSQ/dn = 1/128 ≈ 0.0078 and (1/SQ)·dSQ/dp = −1/0.95 ≈ −1.05
Again, we see that the error rate has a more pronounced effect on the SQ of the quantum computer than the number of qubits, upholding the assertion that a few fault-tolerant qubits tend to be more performant than a larger number of noisy, error-prone qubits.
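The SQ sensitivities can be checked numerically; a minimal sketch, assuming the linear model SQ = n(1 − p) implied by the description of usable qubits:

```python
# Stable Qubits FOM sensitivity check, assuming SQ = n * (1 - p):
# a non-zero error rate p reduces the n nominal qubits to n*(1-p) usable ones.
n, p = 128, 0.05
sq = n * (1 - p)              # usable qubits at the nominal design point
dsq_dn = 1 - p                # dSQ/dn
dsq_dp = -n                   # dSQ/dp
norm_dn = dsq_dn / sq         # normalized sensitivity: 1/n
norm_dp = dsq_dp / sq         # normalized sensitivity: -1/(1-p)
print(sq, norm_dn, norm_dp)
```

Because the normalized sensitivities reduce to 1/n and −1/(1 − p), the error-rate term dominates for any realistic design point with many qubits and a small error rate.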
Our team estimated the NPV (Net Present Value) of Quantum Machine Learning development projects. Because Quantum Machine Learning is an emerging market, our analysis focuses on investment trends derived from the financial statements of major ML engine developers that currently use classical computing, using their average values as a proxy for the investment patterns of quantum companies. The data for these leading ML engine developers is sourced from NVIDIA, IBM, Google (Alphabet), and Microsoft. Amazon is excluded from this analysis because its significant retail segment revenues do not reflect pure development investment.
[Figure missing: investment trends of major ML engine developers]
Although the Quantum Machine Learning market is currently highly competitive and witnessing an influx of new entrants, we anticipate that over time many of these players will be phased out, leaving a few companies to dominate the market. The target companies in our calculations are projected to adopt investment trends similar to those of the current ML giants, positioning themselves as major players in the Quantum Machine Learning market with an estimated 34.8% market share by 2035. The results of this simulation are presented below, showing an NPV of $32,211 million. To calculate sales, we used the current market size and its Compound Annual Growth Rate (CAGR) forecast provided by Virtue Market Research. The formula is based on the S-curve theory, which assumes that the market's growth rate will gradually decline. Other parameters, as previously mentioned, are derived from the analyzed investment trends and have been incorporated into the formula.
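The S-curve NPV approach described above can be sketched as follows. Every number here (market cap, midpoint, steepness, share, margin, discount rate) is an illustrative placeholder rather than one of the report's parameters, so the output will not reproduce the $32,211M figure.

```python
# Hedged sketch: NPV of projected QML revenues under a logistic (S-curve)
# market-growth model. All parameter values are placeholders.
import math

def logistic_market(year: int, cap: float = 60.0, midpoint: int = 2030,
                    steepness: float = 0.45) -> float:
    # S-curve: growth tapers off as the market approaches `cap` ($B/year).
    return cap / (1.0 + math.exp(-steepness * (year - midpoint)))

def npv(cashflows, rate: float = 0.10) -> float:
    # Discount yearly cash flows (year 1, 2, ...) back to present value.
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows, start=1))

years = range(2025, 2036)
share, margin = 0.348, 0.25       # 34.8% target share; margin is a placeholder
cashflows = [logistic_market(y) * share * margin for y in years]
print(round(npv(cashflows), 2))
```

Swapping in the CAGR-derived market sizes and the investment-trend parameters from the analysis above would turn this skeleton into the simulation the report describes.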

This section documents Quantum Processor projects by manufacturer and commissioning date. List of projects:
Our strategic objective is to pioneer the integration of Quantum Computing in the field of Artificial Intelligence and Machine Learning, aiming to achieve transformative computational capabilities by 2035. Our focus is on harnessing the unparalleled processing power of quantum computers to solve complex AI/ML problems that are currently intractable with classical computing methods. To realize this vision, we will invest in several key R&D initiatives:
