
AutoModelForCausalLM error with accelerate and bitsandbytes

    24 March 2024 at 15:47:46

    Hi,
    I was running this code:

    from transformers import AutoModelForCausalLM  # import implied by the snippet

    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        device_map='auto',
        quantization_config=nf4_config,
        use_cache=False,
        attn_implementation="flash_attention_2"
    )
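
    (nf4_config is not defined in the snippet above; a typical 4-bit NF4 BitsAndBytesConfig looks roughly like the sketch below — an assumption, since the original definition was not posted.)

    from transformers import BitsAndBytesConfig
    import torch

    # Assumed NF4 quantization config -- the real one is not shown in the post.
    nf4_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_use_double_quant=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
    )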

    when this error occurred:

    ImportError                               Traceback (most recent call last)
    
    Cell In[27], line 1
    ----> 1 model = AutoModelForCausalLM.from_pretrained(
          2     model_id,
          3     device_map='auto',
          4     quantization_config=nf4_config,
          5     use_cache=False,
          6     attn_implementation="flash_attention_2"
          7 
          8 )
    
    File ~\anacondaNewV\envs\tensorflow\lib\site-packages\transformers\models\auto\auto_factory.py:563, in _BaseAutoModelClass.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
        561 elif type(config) in cls._model_mapping.keys():
        562     model_class = _get_model_class(config, cls._model_mapping)
    --> 563     return model_class.from_pretrained(
        564         pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, **kwargs
        565     )
        566 raise ValueError(
        567     f"Unrecognized configuration class {config.__class__} for this kind of AutoModel: {cls.__name__}.\n"
        568     f"Model type should be one of {', '.join(c.__name__ for c in cls._model_mapping.keys())}."
        569 )
    
    File ~\anacondaNewV\envs\tensorflow\lib\site-packages\transformers\modeling_utils.py:3049, in PreTrainedModel.from_pretrained(cls, pretrained_model_name_or_path, config, cache_dir, ignore_mismatched_sizes, force_download, local_files_only, token, revision, use_safetensors, *model_args, **kwargs)
       3046     hf_quantizer = None
       3048 if hf_quantizer is not None:
    -> 3049     hf_quantizer.validate_environment(
       3050         torch_dtype=torch_dtype, from_tf=from_tf, from_flax=from_flax, device_map=device_map
       3051     )
       3052     torch_dtype = hf_quantizer.update_torch_dtype(torch_dtype)
       3053     device_map = hf_quantizer.update_device_map(device_map)
    
    File ~\anacondaNewV\envs\tensorflow\lib\site-packages\transformers\quantizers\quantizer_bnb_4bit.py:62, in Bnb4BitHfQuantizer.validate_environment(self, *args, **kwargs)
         60 def validate_environment(self, *args, **kwargs):
         61     if not (is_accelerate_available() and is_bitsandbytes_available()):
    ---> 62         raise ImportError(
         63             "Using `bitsandbytes` 8-bit quantization requires Accelerate: `pip install accelerate` "
         64             "and the latest version of bitsandbytes: `pip install -i https://pypi.org/simple/ bitsandbytes`"
         65         )
         67     if kwargs.get("from_tf", False) or kwargs.get("from_flax", False):
         68         raise ValueError(
         69             "Converting into 4-bit or 8-bit weights from tf/flax weights is currently not supported, please make"
         70             " sure the weights are in PyTorch format."
         71         )
    
    ImportError: Using `bitsandbytes` 8-bit quantization requires Accelerate: `pip install accelerate` and the latest version of bitsandbytes: `pip install -i https://pypi.org/simple/ bitsandbytes`
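
    The check that raises this error is visible in the traceback: is_accelerate_available() and is_bitsandbytes_available() from transformers.utils. Calling them directly shows which dependency the running kernel actually fails to see (a quick diagnostic, not a fix):

    # Reproduce the failing check from quantizer_bnb_4bit.py in the notebook
    # to find out which of the two availability tests returns False.
    from transformers.utils import is_accelerate_available, is_bitsandbytes_available
    print(is_accelerate_available())    # False -> accelerate not visible to the kernel
    print(is_bitsandbytes_available())  # False -> bitsandbytes (or its GPU backend) not visible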
    
    

    The error still occurred after running:

    !pip install accelerate
    !pip install -i https://pypi.org/simple/ bitsandbytes
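
    In a notebook, !pip can target a different Python than the running kernel, so it is worth using %pip (which installs into the kernel's own environment) and restarting the kernel afterwards — a suggestion, not a confirmed fix:

    # Install into the kernel's own environment, then restart the kernel
    # before re-running the from_pretrained cell.
    %pip install -U accelerate
    %pip install -U bitsandbytes

    # After the restart, both imports should succeed in the same kernel:
    import accelerate, bitsandbytes
    print(accelerate.__version__, bitsandbytes.__version__)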



    I first thought it was because TensorFlow was using my CPU instead of my GPU. This issue was very helpful: tensorflow/tensorflow#63362. I got TensorFlow to use my GPU by running pip install tensorflow-gpu==2.10.0 in a conda environment, but the error still remains.
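
    Note that the bitsandbytes quantization path in transformers runs on PyTorch, not TensorFlow, and in many transformers versions is_bitsandbytes_available() also returns False when torch cannot see a GPU. So the check that matters here is torch's CUDA build — an assumption about the root cause, but cheap to verify:

    # bitsandbytes 4-bit loading is PyTorch-based: check torch, not TensorFlow.
    import torch
    print(torch.__version__)          # should be a CUDA build of torch
    print(torch.cuda.is_available())  # in many versions, False here makes
                                      # is_bitsandbytes_available() fail too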

    Does anyone have any idea? (Excuse my English, it's not very good.)

    Thank you very much

    -
    Edited by altrastorique, 24 March 2024 at 15:51:07
