Hands-On Guide: Locally Deploying DeepSeek's Open-Source Multimodal Model Janus-Pro-7B
About Janus-Pro-7B
Janus-Pro-7B is a multimodal AI model developed by DeepSeek that delivers notable advances in both understanding and generation. It can process not only text but also other modalities such as images.
Key features of the model:
Unified architecture: Janus-Pro uses a single transformer architecture to process both text and image information, enabling true multimodal understanding and generation.
Decoupled visual encoding: to better balance understanding and generation tasks, Janus-Pro decouples visual encoding into separate pathways, which improves the model's flexibility and performance.
Strong performance: Janus-Pro surpasses previous unified models on multiple benchmarks and is competitive with task-specific models.
Open source: Janus-Pro-7B is open source, so researchers and developers are free to access and use it, driving innovation in the AI field.
Specifically, Janus-Pro-7B offers the following strengths (a minimal usage sketch follows this list):
Image understanding: accurately recognizes and interprets objects, scenes, and relationships in images.
Image generation: produces high-quality images from text descriptions, and can even perform image editing and transformation.
Text generation: produces fluent, coherent text such as stories, poems, and code.
Multimodal reasoning: combines text and image information to reason, for example answering questions about an image or generating an image from a text description.
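To make the image-understanding capability concrete, here is a rough Python sketch of asking the model a question about an image. It is loosely adapted from the inference.py script shipped in the Janus repository; the class names (MultiModalityCausalLM, VLChatProcessor), the load_pil_images helper, the chat role tags, and the trust_remote_code flag are assumptions based on that script and may differ in the version you clone, so treat this as an illustration rather than a drop-in replacement.

# Minimal multimodal-understanding sketch (API names assumed from the repo's inference.py).
import torch
from transformers import AutoModelForCausalLM
from janus.models import MultiModalityCausalLM, VLChatProcessor
from janus.utils.io import load_pil_images

model_path = "deepseek-ai/Janus-Pro-7B"
vl_chat_processor = VLChatProcessor.from_pretrained(model_path)
tokenizer = vl_chat_processor.tokenizer

vl_gpt = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True)
vl_gpt = vl_gpt.to(torch.bfloat16).cuda().eval()

conversation = [
    {
        "role": "<|User|>",
        "content": "<image_placeholder>\nDescribe this image.",
        "images": ["images/example.png"],   # hypothetical local image path
    },
    {"role": "<|Assistant|>", "content": ""},
]

pil_images = load_pil_images(conversation)
prepare_inputs = vl_chat_processor(
    conversations=conversation, images=pil_images, force_batchify=True
).to(vl_gpt.device)

# Embed image + text tokens, then let the language model answer.
inputs_embeds = vl_gpt.prepare_inputs_embeds(**prepare_inputs)
outputs = vl_gpt.language_model.generate(
    inputs_embeds=inputs_embeds,
    attention_mask=prepare_inputs.attention_mask,
    pad_token_id=tokenizer.eos_token_id,
    eos_token_id=tokenizer.eos_token_id,
    max_new_tokens=512,
    do_sample=False,
    use_cache=True,
)
print(tokenizer.decode(outputs[0].cpu().tolist(), skip_special_tokens=True))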
Comparison with other models:
Outperforms DALL-E 3 and Stable Diffusion: on benchmarks such as GenEval and DPG-Bench, Janus-Pro-7B scores higher than OpenAI's DALL-E 3 and Stability AI's Stable Diffusion.
Built on DeepSeek-LLM: Janus-Pro is built on top of DeepSeek-LLM-1.5b-base / DeepSeek-LLM-7b-base and extends it to multimodal input and output.
Application scenarios:
Janus-Pro-7B can be applied in a wide range of scenarios, for example:
Content creation: helps users produce high-quality images, text, and other multimedia content.
Education: can power interactive learning experiences, such as generating images from text descriptions or answering questions about images.
Customer service: can drive smarter chatbots that understand and respond to users' multimodal queries.
Design assistance: helps designers generate creative concepts and turn them into visual prototypes.
1 Start the Anaconda environment
2 Open a command prompt and run the setup commands
conda create -n myenv python=3.10 -y
conda activate myenv
git clone https://github.com/deepseek-ai/Janus.git
cd Janus
pip install -e .
pip install webencodings beautifulsoup4 tinycss2
pip install -e .[gradio]
pip install "pexpect>4.3"
python demo/app_januspro.py

Note: on Windows cmd, quote pexpect>4.3 with double quotes; with single quotes pip receives the quote character literally and rejects the requirement (see the log in section 5). The environment in the logs below happens to be named myenvp.
3 Not enough disk space on drive C with the default configuration
(myenvp) C:\Users\Administrator>python demo/app_januspro.py
python: can't open file 'C:\\Users\\Administrator\\demo\\app_januspro.py': [Errno 2] No such file or directory

(myenvp) C:\Users\Administrator>e:

(myenvp) E:\>cd ai

(myenvp) E:\AI>cd Janus

(myenvp) E:\AI\Janus>dir
 Volume in drive E is chia-12T-1
 Volume Serial Number is 0AF0-159B

 Directory of E:\AI\Janus

2025/01/31  12:26    <DIR>          .
2025/01/30  00:53    <DIR>          ..
2025/01/30  00:53               115 .gitattributes
2025/01/30  00:53             7,301 .gitignore
2025/01/30  01:47    <DIR>          .gradio
2025/01/30  01:18    <DIR>          .locks
2025/01/31  12:26                 0 4.3'
2025/01/30  00:53    <DIR>          demo
2025/01/30  00:53             4,515 generation_inference.py
2025/01/30  00:53    <DIR>          images
2025/01/30  00:53             2,642 inference.py
2025/01/30  00:53             5,188 interactivechat.py
2025/01/30  01:04    <DIR>          janus
2025/01/31  12:25    <DIR>          janus.egg-info
2025/01/30  00:53         2,846,268 janus_pro_tech_report.pdf
2025/01/30  00:53             1,065 LICENSE-CODE
2025/01/30  00:53            13,718 LICENSE-MODEL
2025/01/30  00:53             3,069 Makefile
2025/01/30  01:47    <DIR>          models--deepseek-ai--Janus-Pro-7B
2025/01/30  00:53             1,111 pyproject.toml
2025/01/30  00:53            26,742 README.md
2025/01/30  00:53               278 requirements.txt
2025/01/30  01:18                 1 version.txt
              14 File(s)      2,912,013 bytes
               9 Dir(s)  9,387,683,614,720 bytes free
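Instead of letting the demo pull the two large weight shards into the default cache under C:\Users\Administrator\.cache\huggingface, you can pre-download them onto the E: drive. A small sketch using huggingface_hub's snapshot_download; the target path E:\AI\Janus\hub is an assumption that matches the cache layout used later in this guide.

# Pre-download the Janus-Pro-7B weights into a hub-style cache on drive E
# so that nothing large lands on drive C. (The target path is an assumption.)
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="deepseek-ai/Janus-Pro-7B",
    cache_dir=r"E:\AI\Janus\hub",  # creates models--deepseek-ai--Janus-Pro-7B under this directory
)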
3.1 Setting the HF_DATASETS_CACHE environment variable did not solve the problem
(myenvp) E:\AI\Janus>set HF_DATASETS_CACHE="E:\AI\Janus"

(myenvp) E:\AI\Janus>python demo/app_januspro.py
Python version is above 3.10, patching the collections module.
D:\anaconda3\envs\myenvp\lib\site-packages\transformers\models\auto\image_processing_auto.py:590: FutureWarning: The image_processor_class argument is deprecated and will be removed in v4.42. Please use `slow_image_processor_class`, or `fast_image_processor_class` instead
  warnings.warn(
Downloading shards:   0%|          | 0/2 [00:00<?, ?it/s]
D:\anaconda3\envs\myenvp\lib\site-packages\huggingface_hub\file_download.py:651: UserWarning: Not enough free disk space to download the file. The expected file size is: 9988.18 MB. The target location C:\Users\Administrator\.cache\huggingface\hub\models--deepseek-ai--Janus-Pro-7B\blobs only has 8154.37 MB free disk space.
  warnings.warn(
pytorch_model-00001-of-00002.bin:  37%|███████████████▉   | 3.71G/9.99G [00:05<02:38, 39.5MB/s]
Downloading shards:   0%|          | 0/2 [00:06<?, ?it/s]
Traceback (most recent call last):
  File "E:\AI\Janus\demo\app_januspro.py", line 19, in <module>
    vl_gpt = AutoModelForCausalLM.from_pretrained(model_path,
  File "D:\anaconda3\envs\myenvp\lib\site-packages\transformers\models\auto\auto_factory.py", line 564, in from_pretrained
    return model_class.from_pretrained(
  File "D:\anaconda3\envs\myenvp\lib\site-packages\transformers\modeling_utils.py", line 3944, in from_pretrained
    resolved_archive_file, sharded_metadata = get_checkpoint_shard_files(
  File "D:\anaconda3\envs\myenvp\lib\site-packages\transformers\utils\hub.py", line 1098, in get_checkpoint_shard_files
    cached_filename = cached_file(
  File "D:\anaconda3\envs\myenvp\lib\site-packages\transformers\utils\hub.py", line 403, in cached_file
    resolved_file = hf_hub_download(
  File "D:\anaconda3\envs\myenvp\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "D:\anaconda3\envs\myenvp\lib\site-packages\huggingface_hub\file_download.py", line 860, in hf_hub_download
    return _hf_hub_download_to_cache_dir(
  File "D:\anaconda3\envs\myenvp\lib\site-packages\huggingface_hub\file_download.py", line 1009, in _hf_hub_download_to_cache_dir
    _download_to_tmp_and_move(
  File "D:\anaconda3\envs\myenvp\lib\site-packages\huggingface_hub\file_download.py", line 1543, in _download_to_tmp_and_move
    http_get(
  File "D:\anaconda3\envs\myenvp\lib\site-packages\huggingface_hub\file_download.py", line 452, in http_get
    for chunk in r.iter_content(chunk_size=constants.DOWNLOAD_CHUNK_SIZE):
  File "D:\anaconda3\envs\myenvp\lib\site-packages\requests\models.py", line 820, in generate
    yield from self.raw.stream(chunk_size, decode_content=True)
  File "D:\anaconda3\envs\myenvp\lib\site-packages\urllib3\response.py", line 1066, in stream
    data = self.read(amt=amt, decode_content=decode_content)
  File "D:\anaconda3\envs\myenvp\lib\site-packages\urllib3\response.py", line 955, in read
    data = self._raw_read(amt)
  File "D:\anaconda3\envs\myenvp\lib\site-packages\urllib3\response.py", line 879, in _raw_read
    data = self._fp_read(amt, read1=read1) if not fp_closed else b""
  File "D:\anaconda3\envs\myenvp\lib\site-packages\urllib3\response.py", line 862, in _fp_read
    return self._fp.read(amt) if amt is not None else self._fp.read()
  File "D:\anaconda3\envs\myenvp\lib\http\client.py", line 466, in read
    s = self.fp.read(amt)
  File "D:\anaconda3\envs\myenvp\lib\socket.py", line 717, in readinto
    return self._sock.recv_into(b)
  File "D:\anaconda3\envs\myenvp\lib\ssl.py", line 1307, in recv_into
    return self.read(nbytes, buffer)
  File "D:\anaconda3\envs\myenvp\lib\ssl.py", line 1163, in read
    return self._sslobj.read(len, buffer)
KeyboardInterrupt
^C
3.2 Setting the HF_HOME environment variable solved the problem
With HF_HOME pointed at E:\AI\Janus, huggingface_hub moves its hub cache to E:\AI\Janus\hub, so the weight shards now download to drive E.
(myenvp) E:\AI\Janus>set HF_HOME=E:\AI\Janus

(myenvp) E:\AI\Janus>python demo/app_januspro.py
Python version is above 3.10, patching the collections module.
D:\anaconda3\envs\myenvp\lib\site-packages\transformers\models\auto\image_processing_auto.py:590: FutureWarning: The image_processor_class argument is deprecated and will be removed in v4.42. Please use `slow_image_processor_class`, or `fast_image_processor_class` instead
  warnings.warn(
config.json: 100%|████████████████████████████████████████████████████████████████████████| 1.28k/1.28k [00:00<?, ?B/s]
pytorch_model.bin.index.json: 100%|███████████████████████████████████████████████| 89.0k/89.0k [00:00<00:00, 1.67MB/s]
model.safetensors.index.json: 100%|███████████████████████████████████████████████| 92.8k/92.8k [00:00<00:00, 2.99MB/s]
pytorch_model-00001-of-00002.bin:  15%|██████▌   | 1.53G/9.99G [00:37<03:26, 41.0MB/s]
Downloading shards:   0%|          | 0/2 [00:37<?, ?it/s]
Traceback (most recent call last):
  File "E:\AI\Janus\demo\app_januspro.py", line 19, in <module>
    vl_gpt = AutoModelForCausalLM.from_pretrained(model_path,
  File "D:\anaconda3\envs\myenvp\lib\site-packages\transformers\models\auto\auto_factory.py", line 564, in from_pretrained
    return model_class.from_pretrained(
  File "D:\anaconda3\envs\myenvp\lib\site-packages\transformers\modeling_utils.py", line 3944, in from_pretrained
    resolved_archive_file, sharded_metadata = get_checkpoint_shard_files(
  File "D:\anaconda3\envs\myenvp\lib\site-packages\transformers\utils\hub.py", line 1098, in get_checkpoint_shard_files
    cached_filename = cached_file(
  File "D:\anaconda3\envs\myenvp\lib\site-packages\transformers\utils\hub.py", line 403, in cached_file
    resolved_file = hf_hub_download(
  File "D:\anaconda3\envs\myenvp\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "D:\anaconda3\envs\myenvp\lib\site-packages\huggingface_hub\file_download.py", line 860, in hf_hub_download
    return _hf_hub_download_to_cache_dir(
  File "D:\anaconda3\envs\myenvp\lib\site-packages\huggingface_hub\file_download.py", line 1009, in _hf_hub_download_to_cache_dir
    _download_to_tmp_and_move(
  File "D:\anaconda3\envs\myenvp\lib\site-packages\huggingface_hub\file_download.py", line 1543, in _download_to_tmp_and_move
    http_get(
  File "D:\anaconda3\envs\myenvp\lib\site-packages\huggingface_hub\file_download.py", line 452, in http_get
    for chunk in r.iter_content(chunk_size=constants.DOWNLOAD_CHUNK_SIZE):
  File "D:\anaconda3\envs\myenvp\lib\site-packages\requests\models.py", line 820, in generate
    yield from self.raw.stream(chunk_size, decode_content=True)
  File "D:\anaconda3\envs\myenvp\lib\site-packages\urllib3\response.py", line 1066, in stream
    data = self.read(amt=amt, decode_content=decode_content)
  File "D:\anaconda3\envs\myenvp\lib\site-packages\urllib3\response.py", line 955, in read
    data = self._raw_read(amt)
  File "D:\anaconda3\envs\myenvp\lib\site-packages\urllib3\response.py", line 879, in _raw_read
    data = self._fp_read(amt, read1=read1) if not fp_closed else b""
  File "D:\anaconda3\envs\myenvp\lib\site-packages\urllib3\response.py", line 862, in _fp_read
    return self._fp.read(amt) if amt is not None else self._fp.read()
  File "D:\anaconda3\envs\myenvp\lib\http\client.py", line 466, in read
    s = self.fp.read(amt)
  File "D:\anaconda3\envs\myenvp\lib\socket.py", line 717, in readinto
    return self._sock.recv_into(b)
  File "D:\anaconda3\envs\myenvp\lib\ssl.py", line 1307, in recv_into
    return self.read(nbytes, buffer)
  File "D:\anaconda3\envs\myenvp\lib\ssl.py", line 1163, in read
    return self._sslobj.read(len, buffer)
KeyboardInterrupt
^C
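If you would rather not depend on environment variables, the cache location can also be pinned in code. A hedged sketch of the loading call; app_januspro.py loads the model with AutoModelForCausalLM.from_pretrained at line 19 as shown in the traceback above, but whether it passes trust_remote_code is an assumption about that script.

# Alternative to setting HF_HOME: pass cache_dir explicitly when loading.
from transformers import AutoModelForCausalLM

vl_gpt = AutoModelForCausalLM.from_pretrained(
    "deepseek-ai/Janus-Pro-7B",
    cache_dir=r"E:\AI\Janus\hub",   # download/cache the shards on drive E
    trust_remote_code=True,         # assumption: the Janus model code is loaded this way
)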
3.3 Skip this step if you have not already downloaded the model files
If the model files were already downloaded earlier, copy the models--deepseek-ai--Janus-Pro-7B directory into E:\AI\Janus\hub.
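To confirm that the copied files sit where huggingface_hub expects them, you can scan the cache directory. A small check, using the same path as the HF_HOME setting above:

# Verify that the copied Janus-Pro-7B snapshot is visible in the hub cache.
from huggingface_hub import scan_cache_dir

cache_info = scan_cache_dir(r"E:\AI\Janus\hub")
for repo in cache_info.repos:
    print(repo.repo_id, repo.size_on_disk_str)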
(myenvp) E:\AI\Janus>python demo/app_januspro.py
Python version is above 3.10, patching the collections module.
D:\anaconda3\envs\myenvp\lib\site-packages\transformers\models\auto\image_processing_auto.py:590: FutureWarning: The image_processor_class argument is deprecated and will be removed in v4.42. Please use `slow_image_processor_class`, or `fast_image_processor_class` instead
  warnings.warn(
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 2/2 [00:44<00:00, 22.13s/it]
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama_fast.LlamaTokenizerFast'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565 - if you loaded a llama tokenizer from a GGUF file you can ignore this message.
Some kwargs in processor config are unused and will not have any effect: ignore_id, num_image_tokens, add_special_token, mask_prompt, image_tag, sft_format.
Running on local URL:  http://127.0.0.1:7860
IMPORTANT: You are using gradio version 3.48.0, however version 4.44.1 is available, please upgrade.
--------
Running on public URL: https://cf6180260c7448cc2b.gradio.live

This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run `gradio deploy` from Terminal to deploy to Spaces (https://huggingface.co/spaces)
Keyboard interruption in main thread... closing server.
Traceback (most recent call last):
  File "D:\anaconda3\envs\myenvp\lib\site-packages\gradio\blocks.py", line 2361, in block_thread
    time.sleep(0.1)
KeyboardInterrupt

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "E:\AI\Janus\demo\app_januspro.py", line 244, in <module>
    demo.launch(share=True)
  File "D:\anaconda3\envs\myenvp\lib\site-packages\gradio\blocks.py", line 2266, in launch
    self.block_thread()
  File "D:\anaconda3\envs\myenvp\lib\site-packages\gradio\blocks.py", line 2365, in block_thread
    self.server.close()
  File "D:\anaconda3\envs\myenvp\lib\site-packages\gradio\networking.py", line 75, in close
    self.thread.join()
  File "D:\anaconda3\envs\myenvp\lib\threading.py", line 1096, in join
    self._wait_for_tstate_lock()
  File "D:\anaconda3\envs\myenvp\lib\threading.py", line 1116, in _wait_for_tstate_lock
    if lock.acquire(block, timeout):
KeyboardInterrupt
Killing tunnel 127.0.0.1:7860 <> https://cf6180260c7448cc2b.gradio.live
^C
4 Forcing the demo to use the GPU
(myenvp) E:\AI\Janus>python demo/app_januspro.py --device cuda
Python version is above 3.10, patching the collections module.
D:\anaconda3\envs\myenvp\lib\site-packages\transformers\models\auto\image_processing_auto.py:590: FutureWarning: The image_processor_class argument is deprecated and will be removed in v4.42. Please use `slow_image_processor_class`, or `fast_image_processor_class` instead
  warnings.warn(
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 2/2 [00:06<00:00,  3.29s/it]
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama_fast.LlamaTokenizerFast'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565 - if you loaded a llama tokenizer from a GGUF file you can ignore this message.
Some kwargs in processor config are unused and will not have any effect: num_image_tokens, image_tag, ignore_id, mask_prompt, sft_format, add_special_token.
Running on local URL:  http://127.0.0.1:7860
IMPORTANT: You are using gradio version 3.48.0, however version 4.44.1 is available, please upgrade.
--------
Running on public URL: https://342ecb20d5120e7d8c.gradio.live

This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run `gradio deploy` from Terminal to deploy to Spaces (https://huggingface.co/spaces)
Keyboard interruption in main thread... closing server.
Traceback (most recent call last):
  File "D:\anaconda3\envs\myenvp\lib\site-packages\gradio\blocks.py", line 2361, in block_thread
    time.sleep(0.1)
KeyboardInterrupt

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "E:\AI\Janus\demo\app_januspro.py", line 244, in <module>
    demo.launch(share=True)
  File "D:\anaconda3\envs\myenvp\lib\site-packages\gradio\blocks.py", line 2266, in launch
    self.block_thread()
  File "D:\anaconda3\envs\myenvp\lib\site-packages\gradio\blocks.py", line 2365, in block_thread
    self.server.close()
  File "D:\anaconda3\envs\myenvp\lib\site-packages\gradio\networking.py", line 75, in close
    self.thread.join()
  File "D:\anaconda3\envs\myenvp\lib\threading.py", line 1096, in join
    self._wait_for_tstate_lock()
  File "D:\anaconda3\envs\myenvp\lib\threading.py", line 1116, in _wait_for_tstate_lock
    if lock.acquire(block, timeout):
KeyboardInterrupt
Killing tunnel 127.0.0.1:7860 <> https://342ecb20d5120e7d8c.gradio.live
^C
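Before forcing --device cuda, it is worth confirming that PyTorch can actually see the GPU and roughly how much VRAM it offers, since the 7B checkpoint needs a large card. A quick check:

# Sanity-check CUDA visibility and available VRAM before launching the demo on GPU.
import torch

print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"{props.name}: {props.total_memory / 1024**3:.1f} GiB VRAM")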
5 Partial deployment log
(myenvp) E:\AI\Janus>pip install -e .
Obtaining file:///E:/AI/Janus
  Installing build dependencies ... done
  Checking if build backend supports build_editable ... done
  Getting requirements to build editable ... done
  Preparing editable metadata (pyproject.toml) ... done
Requirement already satisfied: torch>=2.0.1 in d:\anaconda3\envs\myenvp\lib\site-packages (from janus==1.0.0) (2.5.1+cu121)
Requirement already satisfied: transformers>=4.38.2 in d:\anaconda3\envs\myenvp\lib\site-packages (from janus==1.0.0) (4.48.1)
Requirement already satisfied: timm>=0.9.16 in d:\anaconda3\envs\myenvp\lib\site-packages (from janus==1.0.0) (1.0.14)
Requirement already satisfied: accelerate in d:\anaconda3\envs\myenvp\lib\site-packages (from janus==1.0.0) (1.3.0)
Requirement already satisfied: sentencepiece in d:\anaconda3\envs\myenvp\lib\site-packages (from janus==1.0.0) (0.1.96)
Requirement already satisfied: attrdict in d:\anaconda3\envs\myenvp\lib\site-packages (from janus==1.0.0) (2.0.1)
Requirement already satisfied: einops in d:\anaconda3\envs\myenvp\lib\site-packages (from janus==1.0.0) (0.8.0)
Requirement already satisfied: torchvision in d:\anaconda3\envs\myenvp\lib\site-packages (from timm>=0.9.16->janus==1.0.0) (0.20.1+cu121)
Requirement already satisfied: pyyaml in d:\anaconda3\envs\myenvp\lib\site-packages (from timm>=0.9.16->janus==1.0.0) (6.0.2)
Requirement already satisfied: huggingface_hub in d:\anaconda3\envs\myenvp\lib\site-packages (from timm>=0.9.16->janus==1.0.0) (0.28.0)
Requirement already satisfied: safetensors in d:\anaconda3\envs\myenvp\lib\site-packages (from timm>=0.9.16->janus==1.0.0) (0.5.2)
Requirement already satisfied: filelock in d:\anaconda3\envs\myenvp\lib\site-packages (from torch>=2.0.1->janus==1.0.0) (3.17.0)
Requirement already satisfied: typing-extensions>=4.8.0 in d:\anaconda3\envs\myenvp\lib\site-packages (from torch>=2.0.1->janus==1.0.0) (4.12.2)
Requirement already satisfied: networkx in d:\anaconda3\envs\myenvp\lib\site-packages (from torch>=2.0.1->janus==1.0.0) (3.4.2)
Requirement already satisfied: jinja2 in d:\anaconda3\envs\myenvp\lib\site-packages (from torch>=2.0.1->janus==1.0.0) (3.1.5)
Requirement already satisfied: fsspec in d:\anaconda3\envs\myenvp\lib\site-packages (from torch>=2.0.1->janus==1.0.0) (2024.12.0)
Requirement already satisfied: sympy==1.13.1 in d:\anaconda3\envs\myenvp\lib\site-packages (from torch>=2.0.1->janus==1.0.0) (1.13.1)
Requirement already satisfied: mpmath<1.4,>=1.1.0 in d:\anaconda3\envs\myenvp\lib\site-packages (from sympy==1.13.1->torch>=2.0.1->janus==1.0.0) (1.3.0)
Requirement already satisfied: numpy>=1.17 in d:\anaconda3\envs\myenvp\lib\site-packages (from transformers>=4.38.2->janus==1.0.0) (1.26.4)
Requirement already satisfied: packaging>=20.0 in d:\anaconda3\envs\myenvp\lib\site-packages (from transformers>=4.38.2->janus==1.0.0) (24.2)
Requirement already satisfied: regex!=2019.12.17 in d:\anaconda3\envs\myenvp\lib\site-packages (from transformers>=4.38.2->janus==1.0.0) (2024.11.6)
Requirement already satisfied: requests in d:\anaconda3\envs\myenvp\lib\site-packages (from transformers>=4.38.2->janus==1.0.0) (2.32.3)
Requirement already satisfied: tokenizers<0.22,>=0.21 in d:\anaconda3\envs\myenvp\lib\site-packages (from transformers>=4.38.2->janus==1.0.0) (0.21.0)
Requirement already satisfied: tqdm>=4.27 in d:\anaconda3\envs\myenvp\lib\site-packages (from transformers>=4.38.2->janus==1.0.0) (4.64.0)
Requirement already satisfied: psutil in d:\anaconda3\envs\myenvp\lib\site-packages (from accelerate->janus==1.0.0) (6.1.1)
Requirement already satisfied: six in d:\anaconda3\envs\myenvp\lib\site-packages (from attrdict->janus==1.0.0) (1.17.0)
Requirement already satisfied: colorama in d:\anaconda3\envs\myenvp\lib\site-packages (from tqdm>=4.27->transformers>=4.38.2->janus==1.0.0) (0.4.5)
Requirement already satisfied: MarkupSafe>=2.0 in d:\anaconda3\envs\myenvp\lib\site-packages (from jinja2->torch>=2.0.1->janus==1.0.0) (2.1.5)
Requirement already satisfied: charset-normalizer<4,>=2 in d:\anaconda3\envs\myenvp\lib\site-packages (from requests->transformers>=4.38.2->janus==1.0.0) (3.4.1)
Requirement already satisfied: idna<4,>=2.5 in d:\anaconda3\envs\myenvp\lib\site-packages (from requests->transformers>=4.38.2->janus==1.0.0) (3.10)
Requirement already satisfied: urllib3<3,>=1.21.1 in d:\anaconda3\envs\myenvp\lib\site-packages (from requests->transformers>=4.38.2->janus==1.0.0) (2.3.0)
Requirement already satisfied: certifi>=2017.4.17 in d:\anaconda3\envs\myenvp\lib\site-packages (from requests->transformers>=4.38.2->janus==1.0.0) (2024.12.14)
Requirement already satisfied: pillow!=8.3.*,>=5.3.0 in d:\anaconda3\envs\myenvp\lib\site-packages (from torchvision->timm>=0.9.16->janus==1.0.0) (10.4.0)
Building wheels for collected packages: janus
  Building editable for janus (pyproject.toml) ... done
  Created wheel for janus: filename=janus-1.0.0-0.editable-py3-none-any.whl size=16196 sha256=cdb0ebb0c36046bf768a84cbf9208824eadb31fadea888f3b6ff102de576f743
  Stored in directory: C:\Users\Administrator\AppData\Local\Temp\pip-ephem-wheel-cache-dhnej7iy\wheels\e4\87\ba\dd6e5c70086c786d25bcd3e6bddaeb7c46f5ae69dc25ea8be0
Successfully built janus
Installing collected packages: janus
  Attempting uninstall: janus
    Found existing installation: janus 1.0.0
    Uninstalling janus-1.0.0:
      Successfully uninstalled janus-1.0.0
Successfully installed janus-1.0.0

(myenvp) E:\AI\Janus>pip install webencodings beautifulsoup4 tinycss2
Requirement already satisfied: webencodings in d:\anaconda3\envs\myenvp\lib\site-packages (0.5.1)
Requirement already satisfied: beautifulsoup4 in d:\anaconda3\envs\myenvp\lib\site-packages (4.12.3)
Requirement already satisfied: tinycss2 in d:\anaconda3\envs\myenvp\lib\site-packages (1.4.0)
Requirement already satisfied: soupsieve>1.2 in d:\anaconda3\envs\myenvp\lib\site-packages (from beautifulsoup4) (2.6)

(myenvp) E:\AI\Janus>pip install 'pexpect>4.3'
ERROR: Invalid requirement: "'pexpect": Expected package name at the start of dependency specifier
    'pexpect
    ^

(myenvp) E:\AI\Janus>pip install 'pexpect>4.3'
ERROR: Invalid requirement: "'pexpect": Expected package name at the start of dependency specifier
    'pexpect
    ^

(myenvp) E:\AI\Janus>pip install "pexpect>4.3"
Requirement already satisfied: pexpect>4.3 in d:\anaconda3\envs\myenvp\lib\site-packages (4.9.0)
Requirement already satisfied: ptyprocess>=0.5 in d:\anaconda3\envs\myenvp\lib\site-packages (from pexpect>4.3) (0.7.0)

(myenvp) E:\AI\Janus>python demo/app_januspro.py
Python version is above 3.10, patching the collections module.
D:\anaconda3\envs\myenvp\lib\site-packages\transformers\models\auto\image_processing_auto.py:590: FutureWarning: The image_processor_class argument is deprecated and will be removed in v4.42. Please use `slow_image_processor_class`, or `fast_image_processor_class` instead
  warnings.warn(
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 2/2 [00:06<00:00,  3.25s/it]
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama_fast.LlamaTokenizerFast'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565 - if you loaded a llama tokenizer from a GGUF file you can ignore this message.
Some kwargs in processor config are unused and will not have any effect: ignore_id, sft_format, image_tag, num_image_tokens, mask_prompt, add_special_token.
Running on local URL:  http://127.0.0.1:7860
IMPORTANT: You are using gradio version 3.48.0, however version 4.44.1 is available, please upgrade.
--------
Running on public URL: https://b0590adff3d54b2255.gradio.live

This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run `gradio deploy` from Terminal to deploy to Spaces (https://huggingface.co/spaces)
Keyboard interruption in main thread... closing server.
Killing tunnel 127.0.0.1:7860 <> https://b0590adff3d54b2255.gradio.live

(myenvp) E:\AI\Janus>python demo/app_januspro.py --device cuda
Python version is above 3.10, patching the collections module.
D:\anaconda3\envs\myenvp\lib\site-packages\transformers\models\auto\image_processing_auto.py:590: FutureWarning: The image_processor_class argument is deprecated and will be removed in v4.42. Please use `slow_image_processor_class`, or `fast_image_processor_class` instead
  warnings.warn(
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 2/2 [00:06<00:00,  3.05s/it]
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama_fast.LlamaTokenizerFast'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565 - if you loaded a llama tokenizer from a GGUF file you can ignore this message.
Some kwargs in processor config are unused and will not have any effect: image_tag, sft_format, ignore_id, add_special_token, num_image_tokens, mask_prompt.
Running on local URL:  http://127.0.0.1:7860
IMPORTANT: You are using gradio version 3.48.0, however version 4.44.1 is available, please upgrade.
--------
Running on public URL: https://72d4294c2d37f91dc8.gradio.live

This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run `gradio deploy` from Terminal to deploy to Spaces (https://huggingface.co/spaces)
6 Results
6.1 Image recognition
6.2 Text-to-image generation
6.2.1 A raccoon Master Shifu wearing drip attire as a street gangster
Master shifu racoon wearing drip attire as a street gangster.
6.2.2 The face of a beautiful girl
The face of a beautiful girl
6.2.3 An astronaut in a jungle, cold color palette, muted colors, detailed, 8k
Astronaut in a jungle, cold color palette, muted colors, detailed, 8k
6.2.4 A glass of red wine on a reflective surface
A glass of red wine on a reflective surface.
6.2.5 A cute and adorable baby fox with big brown eyes, enchanting autumn leaves in the background; timeless, fluffy, shiny mane, petals, fairy-tale atmosphere, Unreal Engine 5 and Octane Render, highly detailed, photorealistic, cinematic, natural colors
A cute and adorable baby fox with big brown eyes, autumn leaves in the background enchanting,immortal,fluffy, shiny mane,Petals,fairyism,unreal engine 5 and Octane Render,highly detailed, photorealistic, cinematic, natural colors.
6.2.6 An intricately designed eye against an ornate circular backdrop, blending realism and surrealism (full prompt below)
The image features an intricately designed eye set against a circular backdrop adorned with ornate swirl patterns that evoke both realism and surrealism. At the center of attention is a strikingly vivid blue iris surrounded by delicate veins radiating outward from the pupil to create depth and intensity. The eyelashes are long and dark, casting subtle shadows on the skin around them which appears smooth yet slightly textured as if aged or weathered over time.
Above the eye, there’s a stone-like structure resembling part of classical architecture, adding layers of mystery and timeless elegance to the composition. This architectural element contrasts sharply but harmoniously with the organic curves surrounding it. Below the eye lies another decorative motif reminiscent of baroque artistry, further enhancing the overall sense of eternity encapsulated within each meticulously crafted detail.
Overall, the atmosphere exudes a mysterious aura intertwined seamlessly with elements suggesting timelessness, achieved through the juxtaposition of realistic textures and surreal artistic flourishes. Each component—from the intricate designs framing the eye to the ancient-looking stone piece above—contributes uniquely towards creating a visually captivating tableau imbued with enigmatic allure.
This concludes the hands-on guide to locally deploying DeepSeek's open-source multimodal model Janus-Pro-7B. For more on Janus-Pro-7B, search 腳本之家 for earlier articles, and thank you for your continued support of 腳本之家!