If you’ve moved your development environment to Docker, you might have noticed that your web application stack is slower than the native environment you were used to. There are things we can do to bring your response times back down to where they were (or thereabouts).
Overview
1. Volume optimisations
Modify your bind-mounted volume consistency. Consistency tuning in Docker is user-guided; we prefer delegated for most use cases.
2. Use shared caches
Make sure common resources are shared between projects to reduce unnecessary downloads and compilation.
3. Increase system resources
The default RAM limit is 2GB; raising it to 4GB won’t affect overall system performance. Consider increasing CPU limits too.
4. Further considerations
A few final tips and tricks!
Introduction
Most of our web projects revolve around a common Linux, Nginx, MySQL, PHP (LEMP) stack. Historically, these components were installed on our machines using Homebrew, a virtual machine, or some other application like MAMP.
At Engage, all our developers use Docker for their local environments. We’ve also moved most of our pre-existing projects to a Dockerised setup too, meaning a developer can begin working on a project without having to install any prerequisites.
When we first started using Docker, it was incredibly slow compared to what we were used to: sharp, snappy response times similar to those of our production environments. The development quality of life wasn’t the best.
Why is it slower on Mac?
In Docker, we can bind-mount a volume on the host (your Mac) into a Docker container. This gives the container a view of the host’s file system; in literal terms, it points a particular directory in the container at a directory on your Mac. Any writes on either the host or the container are then reflected in the other.
On Linux, keeping a consistent, guaranteed view between the host and container has very little overhead. In contrast, on macOS and other platforms there is a much bigger overhead in keeping the file system consistent, which leads to degraded performance.
Docker containers run on top of a Linux kernel, meaning Docker on Linux can use the native kernel, and the underlying virtual file system is shared between host and container.
On Mac, we’re using Docker Desktop. This is a native macOS application, bundled with an embedded hypervisor (HyperKit). HyperKit provides the kernel capabilities of Linux. However, unlike Docker on Linux, any file system changes need to be passed between the host and container via Docker for Mac, which soon adds a lot of additional computational overhead.
1. Volume optimisations
We’ve identified bind-mounts can be slow on Mac (see above).
One of the biggest performance optimisations you can make is altering the guarantee that file system data is perfectly replicated between the host and container. Docker defaults to a consistent guarantee that the host’s and container’s file systems reflect each other.
For the majority of our use cases at Engage, we don’t actually need perfect consistency between container and host. We can accept slight delays and temporary discrepancies in exchange for greatly increased performance.
The options Docker provides are:

| Option | Behaviour |
| --- | --- |
| `consistent` | The host and container are perfectly consistent. Every time a write happens, the data is flushed to all participants of the mount’s view. |
| `cached` | The host is authoritative. There may be delays before writes on the host are visible in the container. |
| `delegated` | The container is authoritative. There may be delays before writes in the container appear on the host. |
The file system delays between the host and the container aren’t perceptible to humans, though certain workloads may require stronger consistency. I personally default to delegated, as our bind-mounted volumes generally contain source code. Data only changes when I hit save, and under delegated it’s already been replicated by the time I’ve had a chance to react.
Some other processes, such as our shared Composer and Yarn caches, could benefit from Docker’s cached option: these programs are persisting data, so in that case it might be more important that writes are perfectly replicated to the host.
See an example of a docker-compose.yml configuration below:
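Something along these lines (the service name, image and paths here are illustrative placeholders, not taken from a real project):

```yaml
version: "3.7"

services:
  app:
    image: php:7.4-fpm   # illustrative image
    volumes:
      # Source code: the host may briefly lag behind the container's
      # view, which is fine for files that only change when you hit save.
      - ./src:/var/www/html:delegated
      # Host-authoritative data: the container's view may briefly lag
      # behind writes made on the host.
      - ./config:/usr/local/etc/php/conf.d:cached
```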
Docker doesn’t do this by default, for good reason: a system that was not consistent by default would behave in ways that are unpredictable and surprising. Full, perfect consistency is sometimes essential.
Further reading: https://docs.docker.com/docker-for-mac/osxfs-caching/
2. Using shared caches
Most of our projects use Composer for PHP and Yarn for front-end builds. Every time we start a Docker container, it’s a fresh instance of itself. HTTP requests and downloading payloads over the web add a lot of latency, bringing the initial builds of projects to a snail’s pace; Composer and Yarn would have to re-download all their packages each time.
Another great optimisation is to bind-mount a ‘docker cache’ volume into the container and share it across similar projects. Composer and Yarn then pull packages from a local cache instead of the web.
See an example below of bind-mounting a docker cache into the container; we do this in the docker-compose configuration:
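A sketch of what that can look like. The host cache paths are assumptions on my part; `COMPOSER_CACHE_DIR` and `YARN_CACHE_FOLDER` are the standard environment variables Composer and Yarn 1.x read for their cache locations:

```yaml
version: "3.7"

services:
  app:
    image: php:7.4-fpm   # illustrative image
    volumes:
      - ./src:/var/www/html:delegated
      # One shared cache directory on the host, reused by every project.
      - ~/.docker-cache/composer:/tmp/cache/composer:cached
      - ~/.docker-cache/yarn:/tmp/cache/yarn:cached
    environment:
      # Point Composer and Yarn at the bind-mounted caches.
      COMPOSER_CACHE_DIR: /tmp/cache/composer
      YARN_CACHE_FOLDER: /tmp/cache/yarn
```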
3. Increasing system resources
If you’re using a Mac, chances are you have a decent amount of RAM available to you. Docker uses 2GB of RAM by default, so quite a simple performance tweak is to increase the RAM limit available to Docker. It won’t hurt anything to give Docker Desktop an extra 2GB of RAM, and it will greatly improve memory-intensive operations.
You can also tweak the number of CPUs available, particularly for times of increased I/O load, e.g. running yarn install. Docker synchronises a lot of file system events and actions between host and container, which is particularly CPU intensive. By default, Docker Desktop for Mac is set to use half the number of processors available on the host machine. Increasing this limit can help alleviate I/O load.
4. Further considerations
This post isn’t exhaustive, as I’m sure there are other optimisations that can be made based on the context of each kind of setup. In our use cases though, we’ve found these tweaks can greatly improve performance.
Some final things to consider are:
- Ensure the Docker app is running the latest version of Docker for Mac.
- Ensure your primary drive, Macintosh HD, is formatted as APFS. This is Apple’s latest proprietary file system format and comes with a few performance optimisations versus historical formats. You can verify this with a quick command (see the snippet below).
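A quick way to check the format from a terminal:

```sh
# Shows the file system of the boot volume; look for "APFS" in the output
diskutil info / | grep "File System Personality"
```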
Final notes
Docker are always working on improving the performance of Docker for Mac, so it’s a good idea to keep your Docker app up to date in order to benefit from these performance optimisations. Most of the file system I/O performance can be improved within the hypervisor/VM layers. Reducing I/O latency requires shortening the data path from a Linux system call to macOS and back again. Each component in the data path requires tuning and, in some cases, a significant amount of development effort from the Docker team.
Matthew primarily works on web application development, with a focus on high-traffic environments. He is also responsible for much of the infrastructure that powers our various apps and systems.
You should also read…
Since publishing this post, I’ve learned from commenter Elio Struyf that there is an issue being tracked with Yarn on Windows and some nested dependencies. One of the affected dependencies is fsevents (a package intended only for macOS). It is an optional dependency of something the SPFx build tools require. When using Yarn on macOS, the dependency is ignored, but on Windows Yarn apparently tries to install it, which causes an error. It’s a popular issue that’s being tracked (#2116 & #2142), so I’m sure it will get resolved soon… in fact, it looks resolved in Yarn v0.18.0, which is currently in pre-release.
The SharePoint Framework (SPFx) uses a different style of development than what most traditional SharePoint developers are familiar with. Traditional SharePoint developers are used to .NET and the package manager NuGet. Microsoft has elected to use the more Node.js friendly approach for the toolchain and for package acquisition with SPFx.
You create a new web part project using the Yeoman generator. I blogged about using Yeoman a while ago to create Office Add-ins… check that post out if you want to learn more about what Yeoman is. The SPFx Yeoman generator first scaffolds out the project folders and files, then runs `npm install` to get all the packages needed for the development and build process.
NPM is the tool most commonly used to acquire packages from https://www.npmjs.com. In the context of SPFx, it’s used to download the build tools, SharePoint workbench, gulp tasks, type definitions and other dependencies you need when building your client web part. While not unlike NuGet package restore, you end up getting a lot more dependencies because in Node.js things aren’t compiled to binary DLL files, among a few other things. While NPM has a concept of global packages, SPFx and many of its dependencies are designed to run with local dependencies. This means that with each new client web part you create, you’ll have anywhere from 300MB - 375MB in your `node_modules` folder… and this takes time to download.
In a recent test I ran on a connection that measured around 40Mbps download & 5Mbps upload, after creating the web part and deleting the `node_modules` folder, it took NPM about 91 seconds to download all dependencies… see for yourself:
Enter Yarn - NPM Replacement
There’s a long backstory to the challenges Node.js developers have had with dependencies, reliability and working offline with NPM, which I’ll spare you in this post. Earlier this year, Yarn was announced (blog post here with more details on what Yarn is & how it works). Yarn is a collaborative effort by Facebook, Google, Tilde & Exponent. The idea was to replace NPM with a more scalable, faster & more reliable package manager. From the user’s perspective, it works much the same way. It adopted commands similar to NPM’s, so you can replace a simple `npm install` with `yarn install`.
How does it stack up? See for yourself! I ran the same test above using the same connection with Yarn instead of NPM. This time, downloading the entire `node_modules` folder took just 53 seconds… a 41% improvement in speed!
Yarn Really Shines After the First Run
But that’s just the first run… now things get interesting. See, when Yarn downloads packages, it caches them locally. The next time a project… any project… needs that package, it will pull from cache rather than re-download the package like NPM does.
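You can poke at this cache yourself with Yarn’s `yarn cache` subcommands (as in Yarn 1.x):

```sh
# Print the directory Yarn uses for its global package cache
yarn cache dir

# Clear it out if you ever want to reproduce a cold first run
yarn cache clean
```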
How does this impact SPFx client web part projects? After using Yarn to get the packages for my web part, I deleted the `node_modules` folder and ran it again. This time it pulled packages from my local cache and finished in 26 seconds, a 71% improvement in speed!
What if I create a second web part on my laptop? Does it help there? You bet! Here is Yarn downloading dependencies for a totally different web part project. It too took only about 26 seconds.
Replacing NPM with Yarn for New SPFx Client Web Part Projects
So how can you leverage this in your SharePoint Framework client web part projects? Easy… first go get Yarn: https://yarnpkg.com/en/docs/install
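On macOS, for instance, Homebrew is one common route (assuming you already have Homebrew installed):

```sh
brew install yarn
```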
Then, anywhere you would normally use NPM, just use Yarn. If you would type `npm install jquery --save` to get jQuery, type `yarn add jquery` instead (Yarn saves to package.json by default).
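For reference, a rough mapping of everyday commands between the two (Yarn 1.x syntax):

```sh
npm install                 # yarn install
npm install jquery --save   # yarn add jquery
npm uninstall jquery        # yarn remove jquery
npm install -g gulp         # yarn global add gulp
```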
What about Yeoman SharePoint Generator?
So we have a challenge. Today the Yeoman SharePoint generator always runs `npm install` after scaffolding a project. What I do is hit CTRL+C to kill the generator when I see NPM start downloading packages. Then I type `yarn install` to get the packages.
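In practice the workaround looks something like this (the generator invocation is per the SPFx docs; the CTRL+C timing is manual):

```sh
# Scaffold the project; the generator kicks off npm install at the end
yo @microsoft/sharepoint

# ...press CTRL+C as soon as npm starts downloading packages...

# ...then pull everything down with Yarn instead
yarn install
```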
Using this process, you can get a complete web part scaffolded up with all dependencies in about 45 seconds:
We Need Your Help! Tell Microsoft to Support a --skip-install Flag!
There is an open issue asking Microsoft to support a `--skip-install` or `--no-install` flag on the generator so you could do something like the following sketch:
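```sh
# Hypothetical invocation with the requested flag
yo @microsoft/sharepoint --skip-install
```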
It seems like they are going to do it, but make your voice heard. Click the thumbs up / upvote on the issue opened by @gavinbarron to show you want this too! It’s a little annoying as they sort of have this flag already in the generator, but they aren’t respecting it when it comes to the last step.
Updated March 30, 2017: The flag `--skip-install` was added, so you can pass it to the Yeoman generator to avoid NPM at the end of scaffolding the project.
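With the flag in place, the whole flow becomes something like:

```sh
# Scaffold without the automatic npm install step...
yo @microsoft/sharepoint --skip-install

# ...then let Yarn fetch the dependencies
yarn install
```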