Apple M2 Max MacBook Pro vs. Dell XPS 15 with a 13th-generation Intel CPU and an NVIDIA RTX 4070 compete in a Python, C, and machine-learning race. The Ryzen 9 joins in as well.
Monitor fans and temp on Mac: (affiliate)
More videos:
▶️ How to dual boot Linux and Windows – https://youtu.be/uqZIp4ay-3s
▶️ Whisper setup – https://youtu.be/n_2Lws2htBY
▶️ M1 Max vs Intel Core i9 Python Race | XPS 15 2022 – https://youtu.be/i_O8r1YQJVo
▶️ M1 Ultra vs Intel Core i9 Python Test DESKTOPs – https://youtu.be/b6l7aEIlLgw
▶️ M1 MacBook vs Intel i9 MacBook Python Test – https://youtu.be/H06R4tXXWZ0
▶️ C++ Sorting 1000000 Items – https://youtu.be/viRMHT6T6fo
Benchmarks Game Python code: https://benchmarksgame-team.pages.debian.net/benchmarksgame/program/mandelbrot-python3-7.html
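For anyone who wants to try a similar workload without pulling down the full program, here is a minimal pure-Python Mandelbrot sketch in the spirit of the linked Benchmarks Game code. It is simplified (it only counts bounded points rather than writing a PBM bitmap), and the function name and parameters are illustrative:

```python
# Minimal Mandelbrot sketch: sample a width x height grid over the complex
# plane and count how many points stay bounded (|z| <= 2) after max_iter steps.

def mandelbrot(width: int, height: int, max_iter: int = 50) -> int:
    inside = 0
    for y in range(height):
        for x in range(width):
            # Map the pixel grid onto roughly [-1.5, 0.5] x [-1.0, 1.0]
            c = complex(2.0 * x / width - 1.5, 2.0 * y / height - 1.0)
            z = 0j
            for _ in range(max_iter):
                z = z * z + c
                if abs(z) > 2.0:
                    break
            else:
                inside += 1  # never escaped: treat as inside the set
    return inside

print(mandelbrot(80, 80))  # larger sizes make a decent single-core benchmark
```

Bumping the grid size and iteration count turns this into a CPU-bound single-core test similar in spirit to the one used in the video.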
#python #macbook #xps15
💻NativeScript training courses — https://nativescripting.com
(Take 15% off any premium NativeScript course by using the coupon code YT2020)
👕👚iScriptNative Gear – https://nuvio.us/isn
— — — — — — — — —
❤️ SUBSCRIBE TO MY YOUTUBE CHANNEL 📺
Click here to subscribe: https://www.youtube.com/c/alexanderziskind
— — — — — — — — —
🏫 FREE COURSES
NativeScript Core Getting Started Guide (Free Course) – https://nativescripting.com/course/nativescript-core-getting-started-guide
NativeScript with Angular Getting Started Guide (Free Course) – https://nativescripting.com/course/nativescript-with-angular-getting-started-guide
Upgrading Cordova Applications to NativeScript (Free Course) – https://nativescripting.com/course/upgrading-cordova-applications-to-nativescript
— — — — — — — — —
📱LET’S CONNECT ON SOCIAL MEDIA
ALEX ON TWITTER: https://twitter.com/digitalix
NATIVESCRIPTING ON TWITTER: https://twitter.com/nativescripting
It would be interesting to compare the power draw of the different chips.
I'm curious: is the Mac running in automatic mode or high-power mode? I'm not sure if it's the 14-inch or 16-inch, but the 16-inch M2 Max has a high-power mode.
Also, I'm not a Mac user, so I don't know, but the M2 is an SoC with a 16-core Neural Engine. Can't you use it for AI/ML tasks instead of the GPU?
Brother, can you do a video editing review? Thanks.
"Linux is slower?". Alex: "No no no 😂"
I've been using Windows + WSL and I'm very happy with the speed on the 13th-gen Intel. But it seems native Linux is quite a bit faster. Thanks for the comparison.
So what you're basically testing here is pure core speed; the programs are only active in the CPU/SoC, with no I/O to the SSD. IMHO this is a very limited test; it doesn't say a lot about real-world performance when you are, for instance, compiling a big stack, editing video or photos, doing data mining, etc.
Did you run the tests on Windows too? I mean directly, not on WSL?
Subbed! What a showdown!
Given the amount of RAM your Studio has, is it capable of loading the larger language models, since even the 3090/4090 has at most 24GB of RAM?
Considering all the noise AMD has been making about PyTorch, it would be interesting if you could compare one of their GPUs.
Hi there from Portugal!
Recommendation: could you provide a grid with the times/results side by side in the next videos? 😀
Obrigado (Thanks)
Why did you bring the old ASUS into this test?
For me personally, it comes down to whether I can use the real Visual Studio 2022 or not. And since the Apple version of VS 2022 is just a faint copy of the real thing, I have to stay on Windows a while longer. Yes, yes, I know I can use VS Code, but as a 100% MS developer, VS 2022 offers a lot more than Code; for instance, there is no scaffolding in Code. I really did think MS would get Visual Studio 100% on the Mac this time, but alas, no! Maybe Rider would work, but I have an MSDN license through my employer, so Rider I'd have to pay for myself.
What a finger system
I'm willing to bet that if the Whisper implementation supported the Neural Engine on the M2, the M2 would have had much closer times, if not beaten the 4070.
Alex, do you have the right drivers for the XPS? I fear that you are (a) using an old version of Linux or (b) using the integrated graphics on the Windows laptops due to not having the right drivers.
NVIDIA drivers have been particularly bad on Linux lately.
Sir, I have a problem with my VS Code. Every time I write #include <conio.h>, VS Code tells me that I should update my include path. What should I do? Please reply to me. I am from Bangladesh, and I am a MacBook Pro M1 Max user.
In my tests, I can't use my computer's GPU to accelerate the training process for classical machine learning like classification or clustering, but when I'm doing deep learning like image classification or image segmentation, the GPU does accelerate training. So what if you tested these devices, the Apple M2 chip and the Intel 13th gen, by training on 5,000 images? Which one would finish the training and testing process first?
Hey, I need your help, sir, fast.
I'm learning Python right now and I want to work hard on Web 3.0 and AI. For that I'm looking for a laptop and I don't know what I should buy. Consider me a rich person, because this is my future, so it's not important how much it costs; just tell me what you think the best option is.
Thank you, sir 🤝
Schwarzenegger V3.0 needs definitely thinner pipes. 😂😂😁😁❤️❤️
See if you can adjust Whisper to use --device mps instead of cuda.
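A minimal sketch of how that device selection could look in Python, assuming the openai-whisper package and a recent PyTorch build. The helper name pick_device is mine, and MPS support in Whisper may still be incomplete, so the model-loading line is left commented:

```python
# Hedged sketch: pick the best available torch device for Whisper inference,
# preferring CUDA, then Apple-silicon MPS, then CPU.

def pick_device(cuda_ok: bool, mps_ok: bool) -> str:
    """Return the device string to pass to Whisper/PyTorch."""
    if cuda_ok:
        return "cuda"
    if mps_ok:
        return "mps"
    return "cpu"

try:
    import torch
    device = pick_device(torch.cuda.is_available(),
                         torch.backends.mps.is_available())
    # import whisper
    # model = whisper.load_model("base", device=device)  # then transcribe as usual
except ImportError:
    device = "cpu"  # torch not installed; the sketch still runs

print(device)
```

The same string works with the Whisper CLI's --device flag, with the caveat above about MPS coverage.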
Arnold can satisfy 3 women at once
thanks for proving that "Performance" mode on Ubuntu really works!
I'm buying my first Mac; should I buy the MBA 15?
Alex, would you recommend a like-new 16-inch M1 Max 64GB 1TB for 2800 USD, or an unopened 16-inch M2 Max 32GB 1TB that costs 3250 USD?
I'm not sure the CPU really makes a difference in the real world, especially considering that I could get the M1 Max, which has 64GB and only 6 cycles, basically new and for less.
Does the M2 Max generally feel any faster, or is it a waste of money and better to just save the extra for next year's M3 Max?
Should I buy an HP EliteBook 845 G8 (Ryzen 3, 16GB RAM, 256GB SSD) for programming? I want to run Docker, Android Studio, and other day-to-day tasks. Or go with a Mac mini?
Is asdf too difficult to use on an Apple chip? Does installing an SDK work, and does it download as x86 or ARM?
It seems the M2 was running CPU-only, right? If one manages to employ the GPU for ML, it gets a lot better. I did some training with PyTorch, and using the GPU is 10-11 times faster than the CPU. Of course, it might not be trivial to run Whisper on the M2 GPU…
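A rough way to sanity-check that 10-11x figure on your own machine, assuming PyTorch 2.x (the torch.mps.synchronize call needs it). The helper names are illustrative, and the guard lets the script run even where torch or MPS is missing:

```python
# Hedged sketch: time a matrix multiply on the CPU vs the Apple-silicon GPU
# (PyTorch "mps" backend) and report the speedup.
import time

def speedup(cpu_seconds: float, gpu_seconds: float) -> float:
    """How many times faster the GPU run was."""
    return cpu_seconds / gpu_seconds

def time_matmul(device: str, n: int = 1024, reps: int = 20) -> float:
    import torch
    x = torch.rand(n, n, device=device)
    _ = x @ x                      # warm-up so kernel setup isn't timed
    if device == "mps":
        torch.mps.synchronize()
    start = time.perf_counter()
    for _ in range(reps):
        _ = x @ x
    if device == "mps":
        torch.mps.synchronize()    # MPS ops are async; flush before stopping the clock
    return time.perf_counter() - start

try:
    import torch
    cpu_t = time_matmul("cpu")
    print(f"cpu: {cpu_t:.3f}s")
    if torch.backends.mps.is_available():
        mps_t = time_matmul("mps")
        print(f"mps: {mps_t:.3f}s ({speedup(cpu_t, mps_t):.1f}x)")
except ImportError:
    print("PyTorch not installed; sketch only")
```

A matmul is far more GPU-friendly than a full training loop, so treat the number as an upper bound rather than a prediction of Whisper or training performance.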
That is the kind of benchmark test I like to see. Not some !@#$ about games or Cinebench R23.
Compiler test code shows us the real power of computers. Can you do the 7940HS?