
Faster expectation values #491

Open
1 task done
JacobHast opened this issue Sep 13, 2024 · 3 comments
Labels
enhancement New feature or request

Comments

@JacobHast
Contributor

Before posting a feature request

  • I have searched existing GitHub issues to make sure the feature request does not already exist.

Feature details

When calculating displacement-operator expectation values of pure states in the Fock representation, the current implementation is ~10x slower than extracting the state vector and operator matrix and multiplying them together manually. Something therefore seems to be limiting the speed of the current implementation.

Implementation

If the current method cannot be sped up, one could use bare matrix multiplication:

import mrmustard.lab_dev as mm
from mrmustard import settings

def expectation_matmul(state: mm.State, operator: mm.Operation) -> complex:
    # contract the Fock-representation arrays directly: <psi| O |psi>
    state_fock = state.fock(settings.AUTOSHAPE_MAX)
    operator_fock = operator.fock(settings.AUTOSHAPE_MAX)
    return state_fock.T.conj() @ operator_fock @ state_fock
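As a sanity check of the bare-matmul approach, here is a hypothetical pure-NumPy/SciPy sketch (not the MrMustard API) that builds a truncated displacement matrix and contracts it the same way, verified against the known vacuum expectation value ⟨0|D(α)|0⟩ = exp(−|α|²/2):

```python
import numpy as np
from scipy.linalg import expm

def displacement_fock(alpha: complex, cutoff: int) -> np.ndarray:
    """Truncated Fock matrix of D(alpha) = exp(alpha a† - conj(alpha) a)."""
    a = np.diag(np.sqrt(np.arange(1, cutoff)), k=1)  # annihilation operator
    return expm(alpha * a.conj().T - np.conj(alpha) * a)

def expectation_matmul(psi: np.ndarray, op: np.ndarray) -> complex:
    # bare matrix contraction <psi| O |psi>
    return psi.conj() @ op @ psi

cutoff = 30
psi0 = np.zeros(cutoff)
psi0[0] = 1.0  # vacuum state |0>
alpha = 0.7 + 0.2j
val = expectation_matmul(psi0, displacement_fock(alpha, cutoff))
print(np.isclose(val, np.exp(-abs(alpha) ** 2 / 2)))  # True at this cutoff
```

The cutoff of 30 is generous for |α| < 1; the truncation error is far below the default `np.isclose` tolerance.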


How important would you say this feature is?

2: Somewhat important. Needed this quarter.

Additional information

No response

@JacobHast JacobHast added the enhancement New feature or request label Sep 13, 2024
@elib20
Contributor

elib20 commented Sep 13, 2024

I think this is related to the fact that Fock objects get converted to Bargmann, which might not be the optimal thing to do here (i.e. it's faster to convert the Dgate to fock).

@JacobHast
Contributor Author

Here's a modified version of the above code, which generates new states on every call to rule out speed improvements from caching:

import mrmustard.lab_dev as mm
import numpy as np
from mrmustard import settings

def make_state_and_operator():
    # fresh random state on each call so caching cannot skew the timings
    state = mm.Ket.from_fock([0], np.random.random(10)).normalize()
    operator = mm.Dgate([0], x=1)
    return state, operator

def expectation_matmul() -> complex:
    state, operator = make_state_and_operator()
    state_fock = state.fock(settings.AUTOSHAPE_MAX)
    operator_fock = operator.fock(settings.AUTOSHAPE_MAX)
    return state_fock.T.conj() @ operator_fock @ state_fock


def expectation_builtin() -> complex:
    state, operator = make_state_and_operator()
    return state.expectation(operator)

# IPython magics; subtract the make_state_and_operator() time from the others
%timeit expectation_matmul()
%timeit expectation_builtin()
%timeit make_state_and_operator()

@JacobHast
Copy link
Contributor Author

Here I also test calling .to_fock() on the operator. It is faster, but still slower than the direct multiplication:

import mrmustard.lab_dev as mm
import numpy as np
from mrmustard import settings

def make_state_and_operator():
    # fresh random state on each call so caching cannot skew the timings
    state = mm.Ket.from_fock([0], np.random.random(10)).normalize()
    operator = mm.Dgate([0], x=1)
    return state, operator

def expectation_matmul() -> complex:
    state, operator = make_state_and_operator()
    state_fock = state.fock(settings.AUTOSHAPE_MAX)
    operator_fock = operator.fock(settings.AUTOSHAPE_MAX)
    return state_fock.T.conj() @ operator_fock @ state_fock


def expectation_builtin() -> complex:
    state, operator = make_state_and_operator()
    return state.expectation(operator)

def expectation_builtin_to_fock() -> complex:
    # convert the operator to the Fock representation before contracting
    state, operator = make_state_and_operator()
    return state.expectation(operator.to_fock())


# IPython magics; subtract the make_state_and_operator() time from the others
%timeit expectation_matmul()
%timeit expectation_builtin()
%timeit expectation_builtin_to_fock()
%timeit make_state_and_operator()
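A side note on the benchmarks: `%timeit` is an IPython magic, so the `import timeit` in the snippets above is unused. Outside a notebook, the stdlib `timeit` module can do the same job; here is a sketch using trivial stand-in functions (the MrMustard objects are not redefined here) that also shows subtracting the setup time to isolate the expectation-value cost:

```python
import timeit

def make_state_and_operator():
    # trivial stand-in for the MrMustard setup in the snippets above
    return list(range(10)), None

def expectation_stub():
    # stand-in for any of the expectation functions being benchmarked
    state, _ = make_state_and_operator()
    return sum(state)

n = 10_000
setup = timeit.timeit(make_state_and_operator, number=n) / n
total = timeit.timeit(expectation_stub, number=n) / n
# total - setup estimates the cost of the expectation value alone
print(f"setup: {setup:.2e} s/call, total: {total:.2e} s/call")
```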
