
Ran into a small error whose root lies in argmaxtools/test_utils.py lines 311, 377 #2

Open
adhulipa opened this issue Mar 16, 2024 · 0 comments

Summary

I ran the bench command recommended in the README to test out MLX and hit an error in test_utils.py, possibly caused by the way the sw_vers result is being unpacked.

Solution Proposal

Please consider changing the unpacking in test_utils.py to

        os_type, os_version, os_build_number, *spurious = [
            line.rsplit("\t\t")[1]
            for line in sw_vers.rsplit("\n")
        ]

instead of what it is currently

        os_type, os_version, os_build_number = [.... 
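
To illustrate the failure mode and why the starred target helps, here is a minimal standalone sketch (not the actual argmaxtools code); the sw_vers text and the extra SomeExtraKey line are fabricated for illustration only:

    # Minimal repro sketch; the sw_vers string below is fabricated and
    # "SomeExtraKey" stands in for a hypothetical extra line of output.
    sw_vers = (
        "ProductName:\t\tmacOS\n"
        "ProductVersion:\t\t14.3\n"
        "BuildVersion:\t\t23D56\n"
        "SomeExtraKey:\t\textra"
    )

    values = [line.rsplit("\t\t")[1] for line in sw_vers.rsplit("\n")]

    # Current code: three targets, four parsed values -> ValueError.
    try:
        os_type, os_version, os_build_number = values
    except ValueError as err:
        print(err)  # too many values to unpack (expected 3)

    # Proposed code: the starred target absorbs the surplus.
    os_type, os_version, os_build_number, *spurious = values
    print(os_type, os_version, os_build_number, spurious)
    # macOS 14.3 23D56 ['extra']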

Verbose Details:

Logs:

↪ python bench_mistral.py \
          --repo-a ml-explore/mlx --commit-a cbcf44a4caf3fb504ed29ef78091126134e197a3 \
          --repo-b ml-explore/mlx --commit-b 14b4e51a7c6455a61a74d24da9f47dfeb161023f \
          --output-dir external --hub-model-name mlx-community/Mistral-7B-Instruct-v0.2-4-bit \
          --max-context-length 800 \
          --fail-for-mismatch-before-n-tokens 800

WARNING:coremltools:scikit-learn version 1.3.2 is not supported. Minimum required version: 0.17. Maximum required version: 1.1.2. Disabling scikit-learn conversion API.
WARNING:coremltools:Torch version 2.1.2 has not been tested with coremltools. You may run into unexpected errors. Torch 2.1.0 is the most recent version that has been tested.
INFO:__main__:Test configuration: Namespace(commit_a='cbcf44a4caf3fb504ed29ef78091126134e197a3', commit_b='14b4e51a7c6455a61a74d24da9f47dfeb161023f', fail_for_mismatch_before_n_tokens=800, hub_model_name='mlx-community/Mistral-7B-Instruct-v0.2-4-bit', hub_url='github.com', max_context_length=800, measure_every_n_tokens=100, output_dir='external', repo_a='ml-explore/mlx', repo_b='ml-explore/mlx')
INFO:__main__:Cloning repo A: ml-explore/mlx@cbcf44a4caf3fb504ed29ef78091126134e197a3
Cloning into 'mlx'...
remote: Enumerating objects: 11239, done.
remote: Counting objects: 100% (1202/1202), done.
remote: Compressing objects: 100% (339/339), done.
remote: Total 11239 (delta 933), reused 1078 (delta 855), pack-reused 10037
Receiving objects: 100% (11239/11239), 11.98 MiB | 19.86 MiB/s, done.
Resolving deltas: 100% (8794/8794), done.
INFO:argmaxtools.utils:Successfuly cloned mlx repo
HEAD is now at cbcf44a Some fixes in cache / thread safety (#777)
INFO:argmaxtools.utils:Successfuly checked out cbcf44a4caf3fb504ed29ef78091126134e197a3 in mlx
INFO:__main__:Cloning repo B: ml-explore/mlx@14b4e51a7c6455a61a74d24da9f47dfeb161023f
Cloning into 'mlx'...
remote: Enumerating objects: 11239, done.
remote: Counting objects: 100% (1202/1202), done.
remote: Compressing objects: 100% (338/338), done.
remote: Total 11239 (delta 933), reused 1079 (delta 856), pack-reused 10037
Receiving objects: 100% (11239/11239), 11.98 MiB | 28.60 MiB/s, done.
Resolving deltas: 100% (8794/8794), done.
INFO:argmaxtools.utils:Successfuly cloned mlx repo
HEAD is now at 14b4e51 Improved quantized matrix vector product (#786)
INFO:argmaxtools.utils:Successfuly checked out 14b4e51a7c6455a61a74d24da9f47dfeb161023f in mlx
README.md: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4.48k/4.48k [00:00<00:00, 1.20MB/s]
config.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 304/304 [00:00<00:00, 1.33MB/s]
.gitattributes: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.52k/1.52k [00:00<00:00, 3.59MB/s]
tokenizer.model: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 493k/493k [00:00<00:00, 6.81MB/s]
weights.npz: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4.26G/4.26G [01:43<00:00, 41.2MB/s]
Fetching 5 files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [01:44<00:00, 20.82s/it]
E
======================================================================
ERROR: setUpClass (__main__.MLXMistral7bRegressionTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "bench_mistral.py", line 40, in setUpClass
    cls.inference_ctx = BenchContext().spec_dict()
  File "/Users/somename/miniconda3/envs/mlx-src/lib/python3.8/site-packages/argmaxtools/test_utils.py", line 311, in spec_dict
    "os_spec": self.os_spec(),
  File "/Users/somename/miniconda3/envs/mlx-src/lib/python3.8/site-packages/argmaxtools/test_utils.py", line 370, in os_spec
    os_type, os_version, os_build_number = [
ValueError: too many values to unpack (expected 3)

----------------------------------------------------------------------
Ran 0 tests in 108.467s

Perhaps consider updating argmaxtools/test_utils.py:370 to be

    def os_spec(self):
        sw_vers = _term_exec("sw_vers")
        """
        % sw_vers
        ProductName:        xOS
        ProductVersion:     d.d
        BuildVersion:       dXd

        - d.  -> digit(s)
        - x,X -> letter(s)
        """
        # *spurious absorbs any lines beyond the expected three, so extra
        # sw_vers output no longer breaks the unpacking
        os_type, os_version, os_build_number, *spurious = [
            line.rsplit("\t\t")[1]
            for line in sw_vers.rsplit("\n")
        ]

        return {
            "os_version": os_version,
            "os_type": os_type,
            "os_build_number": os_build_number,
        }
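
As an alternative design (just a sketch, not part of the proposal above), the parsing could key off the sw_vers field names instead of line positions, so extra or reordered lines cannot break it. The helper name parse_sw_vers is made up here; the field names ProductName, ProductVersion, and BuildVersion are taken from the docstring above:

    # Alternative sketch: key the parse off field names rather than line order.
    # parse_sw_vers is a hypothetical helper, not existing argmaxtools code.
    def parse_sw_vers(sw_vers: str) -> dict:
        fields = {}
        for line in sw_vers.splitlines():
            if "\t\t" not in line:
                continue  # skip blank or unexpected lines
            key, value = line.split("\t\t", 1)
            fields[key.rstrip(":")] = value.strip()
        return {
            "os_type": fields.get("ProductName"),
            "os_version": fields.get("ProductVersion"),
            "os_build_number": fields.get("BuildVersion"),
        }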
