Channel: KWizCom SharePoint Blog

Heads up - SP2016 developers


Upgrading full trust solutions from SharePoint 2013 to SharePoint 2016 (still in beta)

Background

During preparations for all our full trust products to support SharePoint 2016, I came across a few gotcha moments.

I must say, the vast majority of things just worked as is on SP2016 without any need for change in our code or packaging from SharePoint 2013.

Even the fact that the layouts URL is still _layouts/15 and not _layouts/16 didn't trip up our code, since we had already noticed that O365 runs version 16 but still uses /15 in its URL path.

That said, we did have a couple of issues and these are things you can do today in your code to make sure you are ready for SP2016 upgrade.

Also, although it is a bit early to know for sure, SharePoint 2016 really feels more like an update than an upgrade. Packing a lot of usability and UX enhancements, at its very core it looks and feels like SharePoint 2013 with many improvements, which in my book is way better than re-writing the entire UI from scratch every time, don't you agree?

Anyway, here is a short list of things that we did need to change in our package:

Changes needed

Beware of non-API JavaScript functions

Like many other good SharePoint developers, we too are lazy at our core. Being a lazy developer is a good thing, since you always try to use, reuse, and recycle what the environment gives you before trying to write it yourself.
However, I have noticed that many functions that were available as part of SharePoint core, init, or other JavaScript files are now missing in SP2016. Probably due to some sort of cleanup process done by the SharePoint team on functions that were no longer being used by the product itself.
For instance, many SharePoint examples on how to get the selected items in a SharePoint list view had the following code:
var selection = SP.ListOperation.Selection.getSelectedItems(context);
if (CountDictionary(selection) > 0)
CountDictionary was a function defined in inplview.js; however, this function is no longer available in SP2016.
As a general rule of thumb, MS does not recommend you rely on any JavaScript function that is not under the "SP" namespace, as they are not intended to be a part of the product's public API. In other words, if it does not start with "SP.*" don't use it.
For this specific example, it is safe to just check the selection.length since getSelectedItems seems to return an array.
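Putting that together, a defensive check works whether or not CountDictionary still exists. This helper is my own sketch, not a SharePoint API:

```javascript
// Hypothetical helper: count selected items on both SP2013 and SP2016.
function getSelectedCount(selection) {
  // getSelectedItems returns an array, so length is the safe check going forward
  if (selection && typeof selection.length === "number") {
    return selection.length;
  }
  // fall back to the legacy helper only if it still exists (SP2013 and earlier)
  if (typeof CountDictionary === "function") {
    return CountDictionary(selection);
  }
  return 0;
}
```

You would call it with the result of SP.ListOperation.Selection.getSelectedItems(context) instead of calling CountDictionary directly.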

SharePoint Versioned Paths

This one is tricky. Make sure you get your code logic right on this one.
When using any path under _layouts, SharePoint 2016 (v16) uses _layouts/15, the same path as SharePoint 2013 (v15).
But when probing for files in the SharePoint root folder, SharePoint 2016 uses the 16 subfolder, not the 15.
In our logic, if the SharePoint version is 15 or greater we use _layouts/15, and for the file system we recommend you use the Microsoft.SharePoint.Utilities.SPUtility.GetVersionedGenericSetupPath API to get the correct folder.
If your code needs to be compatible with SharePoint 2010, where the above method is not available in the API, use reflection to call it. Here is a code example (note that the fallback must run both when the method is missing and when the call fails):
try
{
    // GetVersionedGenericSetupPath does not exist in SP2010
    var method = typeof(Microsoft.SharePoint.Utilities.SPUtility).GetMethod("GetVersionedGenericSetupPath");
    if (method != null)
    {
        var result = method.Invoke(null, new object[] { folderPath, Constants.SharePointConstantsBase.SP_Version }) as string;
        if (!string.IsNullOrEmpty(result)) return result;
    }
}
catch { /* the method is not available; fall through to the SP2010 API */ }
return SPUtility.GetGenericSetupPath(folderPath);

Top bar changed its name

If you are planning to hide or manipulate the top bar area, this is perhaps one of the most noticeable changes in SharePoint 2016. If your code references it using document.getElementById("suiteBar"), you will now need to check whether that element exists and, if it does not, look for it by its new ID: document.getElementById("suiteBarDelta").
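A small sketch of that fallback; the helper name is my own, only the two element IDs come from SharePoint:

```javascript
// Hypothetical helper: find the top bar on both SP2013 ("suiteBar")
// and SP2016 ("suiteBarDelta"). Pass in document (or any DOM-like object).
function getSuiteBar(doc) {
  return doc.getElementById("suiteBar") || doc.getElementById("suiteBarDelta");
}

// usage: hide the top bar if it was found
// var bar = getSuiteBar(document);
// if (bar) { bar.style.display = "none"; }
```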

Our custom master page crashed

This one, I am not sure why it happened or what technically changed. But our custom master page, which we use for some settings pages, crashed on SharePoint 2016. The fix was hard to find but simple to implement. For some reason, it would not work unless we added this code to the top of the master page:
<%@Master language="C#"%>
You will see this is added to the SharePoint OOB master pages as well.

User Profiles and other services

Needless to say, if your product relies on some other SharePoint farm service, like the user profile service, you will probably need to redo these parts and fit them to SharePoint 2016. I noticed a few differences in the user profile service API, as well as a change in the database schema, in case you were probing that as well.

Client side OM

As for CSOM, it is pretty safe to say everything from SP2013 works on SP2016, with more things supported on 2016. I will write another post about taking O365 apps and making them work on SP2013, which will cover some CSOM gaps you might notice going the other way around. Moving forward (upgrading), I didn't encounter any code that stopped working, except for heavier throttling being done on O365.

I hope this post helps; feel free to leave comments below.




CarFlix (Or: netflix for the car)

Ok, so this post is not really about SharePoint at all... but a day-to-day challenge I recently had to face in my personal life.

Last week I replaced my car with a new one that didn't have a DVD player.

I knew the kids wouldn't like it, and I was determined to find a better solution than an aftermarket DVD player for the car.

The problem with those, you see, is that the kids constantly drive me crazy: which DVD to put in, change it, press play, stop, skip, volume, or find the movie they want to watch.

That had to stop.

Then I got thinking: you know where we don't have that problem? At home. There the kids can use their iPads to watch Netflix or shomi (in Canada) and switch between movies and shows as they please, with no need for those pesky DVDs, which we buy only for the car.

But I didn't want the kids eating up my phone's entire data plan while we were on the road...

So, I was looking for this setup:
1. Some storage that I can rip my DVDs into movie files and store on it
2. Some sort of wifi hotspot router that I can put in the car to create a personal wifi network
3. Make the storage available on the wifi network
4. Install some sort of media server that would serve the content from the storage device

Now, I said to myself there is no way someone has built all that, so I started looking at the Raspberry Pi as a way to build such a setup myself. I figured out that, with a lot of hard work and about $400, I could get it up and running. Only one problem: it would shut off abnormally when the power was cut, and the OS might not like that and fail, which would require a keyboard and screen to fix.

Well, the answer came from an unexpected search. I looked up Plex media server, and in one of the docs I found that some NAS storage devices can run a Plex server right on them (#1+#4).
I started reading up on that and found that a WD NAS can run Plex and also has WiFi built in.

I started reading on "WD My Passport Wireless Pro" and found this amazing model:
"WD My Passport Wireless Pro WDBSMT0030BBK - Network drive - 3 TB"

Now, here is what it has:
1. 3TB of storage
2. Built in WiFi that creates a hotspot
3. Plex server installed
4. All day battery
5. USB charger

Basically, everything I was looking for and much more! The battery was a huge plus.

I bought it, copied some of the DVDs to it, set up the Plex server and configured the WiFi hotspot - all using my iPhone without hooking it up to the PC even once.
It was a breeze, easy, simple, and things just worked (a reboot after installing Plex was needed to sort it out).

Now, when we are in the car, the kids' iPads can connect to the "CarFlix" WiFi, open the Plex app (or use the browser), and watch whatever movie they want.

So, this is my "CarFlix" setup. What do you think?

P.S. I found the cheapest in-stock deal on Dell Canada's web site. Everywhere else it was out of stock for months: http://accessories.dell.com/sna/productdetail.aspx?c=ca&l=en&s=dhs&cs=cadhs1&sku=A8995570

Speaking Engagements 2016

Just FYI, my speaking engagements for 2016 are posted here. If you are around – come see me!

If you were in one of my sessions, you can find links to the session code and presentation below.

Also – if you have any comments on my session – feel free to post it here!

April 7, 2016 - collab365 - Building .NET Client Tools for SharePoint Online

September 26-29, 2016 - Microsoft Ignite - Offering MVP 1 on 1 to attendees, book times with me prior to the event. The rest of the time, I'll be at our booth.

October 20, 2016 - collab365 - Substituting speaker: Nimrod Geva in 4 sessions

Speaking Engagements 2017

Just FYI, my speaking engagements for 2017 are posted here. If you are around – come see me!

If you were in one of my sessions, you can find links to the session code and presentation below.

Also – if you have any comments on my session – feel free to post it here!

February 10, 2017 - aOS Canadian Tour - Who Said You Have to Be a Power-User to Create Dynamic Forms in SharePoint/O365?

May 30-Jun 1, 2017 - SharePoint Fest Denver - Who Said You Have to Be a Power-User to Create Dynamic Forms in SharePoint/O365?

July 29, 2017 - SharePoint Saturday New York - SPFx: An ISV Insight to Microsoft’s latest customization model

August 19, 2017 - SharePoint Saturday Toronto - SPFx: An ISV Insight to Microsoft’s latest customization model


October 28, 2017 - SharePoint Saturday Ottawa - SPFx: An ISV Insight to Microsoft’s latest customization model

November 2, 2017 - Collab365 - SPFx: An ISV Insight to Microsoft’s latest customization model

Using Office UI Fabric JS in SPFx


SPFx and Office UI Fabric JS not playing nice

I recently started digging into SPFx web parts in a more serious way than the "hello-world" level web parts we had built so far while learning this great new platform for SharePoint add-ins.

My web part had some requirements that prevented me from using React in any way, but at the same time I had a requirement to use Office UI Fabric components.

I was using Knockout to render my UI, and naturally I turned to Office UI Fabric JS and its great components.

Only then did I discover that fabric.components.css had a version conflict with SPFx and its built-in React version of the components, and they would mess up each other's layout.

Buttons would not align properly, React dropdowns would not open at all, popups and panels looked strange, and almost every control I used had some deformity.

CSS conflict issues

I reported it to the SPFx team, and they published a great article explaining the issue in detail. But basically, there is not much that can be done, since CSS is mostly global and there is no good way to scope it without the possibility of conflicts (I've had a very long discussion on this topic with our product dev manager, the great Kevin Vieira, as well; I was not convinced by any of the solutions we came up with).

Solution: renamed prefix

The only way I could think of that would allow us to use Office UI Fabric JS, without worrying that other versions or other vendors could interfere, was to rename the fabric prefix to our own unique name.

Meaning, changing the ms- class prefix into something else, like kw-, and also changing the global JS fabric object to something like kwfabric.

Now, that seemed like a lot of work, plus you would have to host your own version somewhere and wouldn't be able to use the Microsoft CDNs directly.

So, I've decided we will host these files for everyone to use on our own apps.kwizcom.com server, and will generate the files for you with your own prefix on the fly, so you won't have to do it manually.

You are all welcome to use this to generate your own CSS/JS files, or use them directly from our CDN. Our CDN is hosted in Azure and geo-replicated, so it should have pretty good response times, but KWizCom takes no responsibility if you want to use it.

By the way, since I am hosting the files on our servers, I was able to also fix quite a few bugs in the office ui fabric js script file, so our version might be a bit different than the one hosted by Microsoft.

So, here is how you use our version of fabric, assuming your chosen prefix is "kw" (you should use your own unique prefix):
* note: I limited the prefix to 4 letters max.

In SPFx project:

SPComponentLoader.loadCss(`https://apps.kwizcom.com/libs/office-ui-fabric-js/1.4.0/css/fabric.css?prefix=kw`);

SPComponentLoader.loadCss(`https://apps.kwizcom.com/libs/office-ui-fabric-js/1.4.0/css/fabric.components.css?prefix=kw`);

SPComponentLoader.loadScript(`https://apps.kwizcom.com/libs/office-ui-fabric-js/1.4.0/js/fabric.js?prefix=kw`, {globalExportsName:'kwfabric'});

In HTML:

<link href="https://apps.kwizcom.com/libs/office-ui-fabric-js/1.4.0/css/fabric.css?prefix=kw" rel="stylesheet" type="text/css"></link>

<link href="https://apps.kwizcom.com/libs/office-ui-fabric-js/1.4.0/css/fabric.components.css?prefix=kw" rel="stylesheet" type="text/css"></link>

<script src="https://apps.kwizcom.com/libs/office-ui-fabric-js/1.4.0/js/fabric.js?prefix=kw" type="text/javascript"></script>

Use them with your chosen prefix. For example: instead of ms-Button, use kw-Button, and in JavaScript, instead of the fabric object, use kwfabric.
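If you already have markup written against the default prefix, a tiny helper can rewrite the class names at runtime. This is my own sketch, not part of the generated files:

```javascript
// Hypothetical helper: rewrite the default "ms-" Fabric class prefix
// to a custom one, e.g. "ms-Button ms-Button--primary" with prefix "kw"
// becomes "kw-Button kw-Button--primary".
function applyPrefix(className, prefix) {
  return className.replace(/\bms-/g, prefix + "-");
}
```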


Using KnockoutJS in the new SPFx - containerless control flow removed

Many of you probably know how excited and happy I am with the SharePoint Framework (aka SPFx). Finally, a worthy client side framework for SharePoint extensibility.

SPFx promises a lot, and delivers even more with a very powerful engine that drives it.

One of the things it promises is giving developers the freedom to choose their client-side development story: which platforms, frameworks, and libraries they want to use, and how.

However, it is clear that ReactJS is Microsoft's framework of choice, having the most complete Office Fabric components library (and probably the only one maintained by Microsoft regularly). So by all means: whenever you can go React - don't look back.

While building our DataViewPlus web part (DVP), I had a requirement that prevented me from using react.
You see, React uses controls that are compiled into JavaScript objects, thus limiting the rendered HTML to what the developer defined when building the web part.

In our DVP web part, we wanted to give users the ability to customize, change, or extend the HTML of the rendering templates, or even provide their own HTML template for the rendered web part. Similar to what you could do with the SharePoint Designer data view, only without the nasty XSL language.

For that reason, I think KnockoutJS (KO) was the perfect fit for my project.

One of the features in KO that I love using is containerless control flow. Unlike other MVVM frameworks, which have to output a tag to the page in order to do bindings, KO allows you to use HTML comments to do the bindings.
For example:
<!-- ko if: someExpressionGoesHere -->

I want to make this item present/absent dynamically

<!-- /ko -->

See, this is very useful in KO and I use it a lot. You can't really do much with KO without using at least some containerless statements. Once you grow beyond your hello world project, you will pretty quickly find yourself using a containerless if or foreach statement.

During my development stages with SPFx, using the workbench, I had no problems with it at all and my web part worked perfectly.

However, as soon as I built for production (gulp --ship), I noticed none of my KO templates rendered to the page.

After a bit of digging (and a lot of console.log...) I noticed all my KO HTML templates were loaded correctly with one small difference: All my HTML comments were removed!

It seems that during a production build, the html-loader plugin kicks in and calls UglifyjsWebpackPlugin.
By default, this plugin removes all comments from the output, except important comments that contain:
/*!, /**!, @preserve or @license

Luckily, SPFx allows us to intervene with its build steps and make some changes in the way webpack works. (More on webpack in a future post)
There are several ways we can go about fixing this issue with the comments.
One is to replace the html-loader that SPFx uses with a KO friendly version.
Another would be to edit the configuration of the UglifyjsWebpackPlugin in the node_modules folder, which I don't recommend since you will have to re-do it every time you do npm install on a new machine.

So I dug deep into its code and found that the loader is checking for parameters in a query string format provided to it, and one of these parameters that we can pass is called "removeComments"!

So, by editing my gulpfile.js I was able to tell SPFx not to remove any comments from HTML template files I was loading.

Here is my new gulpfile.js:

'use strict';
const gulp = require('gulp');
const gutil = require('gulp-util');
const build = require('@microsoft/sp-build-web');

build.configureWebpack.mergeConfig({
  additionalConfiguration: (config) => {
    config.module.loaders.forEach((loader) => {
      if (loader.loader === "html-loader") {
        gutil.log("Got html loader " + JSON.stringify(loader));
        // keep HTML comments so KO containerless bindings survive the production build
        loader.loader += "?removeComments=false";
      }
    });
    return config;
  }
});

build.initialize(gulp);

I must note that it should also be possible to configure this to keep only KO comments, but that would require changing the way the loader is configured, which might be more tricky (if not impossible) via gulpfile.js.

So, my conclusion is: although the guys @MSFT did an amazing job with SPFx and opening it to other frameworks, and while KO is actually included in the yeoman generator, there is still an advantage to working with ReactJS over KO or other frameworks, simply because that is what most developers are using, so it makes sense that it is more stable and better tested.
However, as you just saw, this framework is powerful enough that even when you need to use other, less popular frameworks, you can still make do and apply some tweaks and fixes yourself as needed.

Hope this helps you guys if you happen to choose KO for your SPFx web parts.

Please leave a comment and let me know which framework you are using with SPFx, and what is your experience with these platforms.

SPFx workbench certificate error in Chrome after v58

Anyone using the workbench in Chrome has either already noticed, or will very soon, that the workbench no longer works as of Chrome update 58.

If you use the local workbench - no problem. You will see a clear certificate error, and you will be able to proceed with an "unsafe" certificate.

But, if you try to use the online workbench (at /_layouts/15/workbench.aspx ) things will be a bit more tricky...

See, you won't get a certificate error right away. Instead, your online workbench (which is served under your SharePoint site's certificate, which is perfectly valid) will load correctly and show the message as if you are not running gulp serve:


Why? If you open the dev tools, you will see under Network that all the requests to localhost were blocked due to the same bad-certificate issue.

There are 2 things you can do:

Temporary fix:

Browse to the local workbench in Chrome and choose to proceed with the unsafe certificate. This allows Chrome to access your localhost.
Now, your online workbench should also start working as expected.
This should last until you close your browser.

Permanent fix:

It seems a new certificate was created that works with Chrome, as a part of the "sp-build-web" npm package v1.0.1
(Thanks Ian, @iclanton read more here)

SPFx GA points to v1.0.0 which doesn't include the new certificate.

To make the fix permanent, here are the exact steps I took (after a lot of tries and other variations that just led to more trouble...):
  1. Edit package.json and change ONLY the sp-build-web version to 1.0.1 (not to the latest, as gulp serve will stop working if you do)
  2. delete node_modules folder (npm install will fail if you don't delete the entire folder)
  3. run npm install
  4. run gulp untrust-dev-cert (you should be prompted to delete a certificate)
  5. run gulp trust-dev-cert (you should be prompted to install a certificate)
  6. run gulp serve
Now, your local or online workbench should work as expected.

This took way longer than I expected, since every different thing I tried resulted in either a broken project or an npm install failure...

Good luck!


Debugging SPFx using VSCode - multiple projects in sub folders

I've been quite happy with Chrome's TS debugging capabilities, honestly, so I never really cared to try VSCode debugging until now.
Thing is, I've been working on my "SPFx from an ISV point of view" session and wanted to cover all options in my presentation, so I thought I should probably set it up and write about it.
Finding the instructions was simple enough, and the guys @msft did a good job detailing how it's done here.
However, in my situation (for reasons I will share later on) we have decided to create a root SPFx folder and place our projects as sub folders under that root folder. So, when I open my VSCode, the root folder is not my SPFx project root.
That folder structure broke the debugging configuration file and it couldn't find any maps files.
I could not find any documentation on the different properties in the JSON config file, but it was simple enough to understand.
After playing around with it for a bit, I figured the debugger used the "webRoot" property to find the source files inside VSCode.
All I needed to do was change:
"webRoot": "${workspaceRoot}"
to:
"webRoot": "${workspaceRoot}/DVP"
and my debugger connected correctly and started working.
Why do I use sub folders? That - I will explain in a later post, or in my talk at the end of the month @SPSNYC.

SPFx: Sharing Modules between projects

This is something that is not covered all that much in SPFx talks I've been to, and is definitely something everyone should know how to do.

Imagine you have several SPFx projects, and they all need to connect to the same LOB system. Or maybe you are an ISV that needs to check for product licenses. Perhaps you have your own UI library that your company built and you need to use, or even just a few helper functions that you always wished were a part of the JavaScript language and hate that you have to re-write them every time.

In all these cases, you will end up with a bunch of code that you wish you could put in one place and reuse in several SPFx projects.

For the .Net developers out there - that would probably go into a DLL that you would reference from your other projects. Right?

But, how do you do it in SPFx?

Well, it turns out there are MANY ways to do it; they are all different, and not all of them lead to good practices or comfortable code management.

I won't list them all, of course, but will show you what I chose to do and share some of the considerations behind it.

Relative folder

I guess the easiest way to deal with it is to just place your code in a separate folder.
You will create a folder with your module name, say "shared-code".
Now, when you want to use it from different projects, all you have to do is import it from the relative folder path, for example:
import Utilities from '../../../../shared-code/Utilities';
Now, say you want to use TypeScript in your shared module. You will need to create a tsconfig.json file and compile the TS into JS files before you can use them.
Example of tsconfig.json I use:
{
  "compilerOptions": {
    "target": "es5",
    "forceConsistentCasingInFileNames": true,
    "module": "amd",
    "moduleResolution": "node",
    "declaration": true,
    "sourceMap": true,
    "experimentalDecorators": true,
    "types": [
      "es6-promise",
      "es6-collections"
    ]
  },
  "exclude": [
    "node_modules"
  ]
}

* Notice I use module "amd". This is important if you later plan to load it as an external from a CDN and don't want it to register a global object.


And if you are using TypeScript, you probably want to import some SPFx types and objects to work with, right? Since we don't have a package, you will have to manually add the npm packages for these using these commands:
    npm install @microsoft/sp-core-library@~1.1.0

    npm install @microsoft/sp-webpart-base@~1.1.1


Now, when your code is ready, simply run "tsc" in that folder to compile it to JavaScript and you are ready to go.

Advantages

Quick and easy. You can automate the tsc compiler task so it will be completely transparent to you.

Disadvantages


  1. Since we don't have a package, there is no way to trace versions.
  2. You cannot set package dependencies and restore them automatically with npm install later on.
  3. Also, you cannot mark it as external and load it from a CDN. It must be bundled with your web parts.

Separate Package

If you want to be able to use your shared module as a package, you will need to create a package definition file.
Luckily, NPM created a helper that does it for you.
Go back to your "shared-code" folder, and run "npm init" command. Follow the instructions to create a package definition file. While running npm init, a couple of notes to keep in mind:

  • Only lower case and - are supported.
  • Author needs to be in this format: name <email> (web site)
  • To specify a license file, set it to: "SEE LICENSE IN <filename>"
Once this is done you will have a new package.json file created in your folder. Open it.
You will see you have a version number here. Every time you make changes to your code, you should update the version number. This is what tells the "npm update" command when to update your package.
Also, if you are using TypeScript, you should add "typings" to your package definition file.
Now, we can add the 2 SPFx node modules as proper dependencies instead of installing them manually. Add "dependencies" to the end of your package.json file with the 2 packages. Your file should look like this:
{
  "name": "shared-code",
  "version": "1.0.0",
  "description": "my shared code",
  "main": "utilities.js",
  "typings": "utilities.d.ts",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "KWizCom (http://www.kwizcom.com)",
  "license": "SEE LICENSE IN license.md",
  "dependencies": {
    "@microsoft/sp-core-library": "~1.1.0",
    "@microsoft/sp-webpart-base": "~1.1.1"
  }
}

Once you are done, save the file. You can now install dependencies using "npm install", and get updates using "npm update".



* You might also want to create a license file if you specified one during the npm init.

Last, you need to publish your package. Whether you commit it to GitHub or publish it to npm using "npm publish", you will be able to reference your package by name from that source.

When you want to use it in your SPFx projects, you can add it as a dependency to your package.json file:
"dependencies": {
  "@microsoft/sp-core-library": "~1.1.0",
  "@microsoft/sp-webpart-base": "~1.1.1",
  ...
  "shared-code": "*"
}

Advantages


  1. Having a package helps keep track of the dependencies inside your package.
  2. A package file keeps track of versioning.
  3. When using a package, it is added as a dependency just like any other library you are using.
  4. You can mark packages as external in config/config.json file, to load it from a CDN instead of bundling it into your JS file:
"externals": {
  "shared-code": "https://apps.kwizcom.com/shared-code/utilities.js"
},

Disadvantages

You have to publish this package to a different project/source (npm or GitHub or both).
This makes tracking and managing your code a bit more complex, if you are not already using that system for your source control.

Separate local package

Much like with the published npm package, you can follow all the steps, except: don't publish it to npm/GitHub separately.
I keep all my projects including the shared packages in the same source control system, in the same folder structure.
When I set up the dependencies for my packages, since I store them all in the same source control and they will be available locally together - I can specify a local folder relative path to that package.
In the package.json file, I set it up as:
   "shared-code": "file:../shared-code"

Advantages

I find this to be the easiest way in my environment, since I don't have to keep track of two different systems or projects. When I sync changes from the server, I get updates for all projects and for my shared module at the same time.

Disadvantages

In my case, that works best. But if your projects or source control system are not set up to keep the folder structure and get updates from all projects at the same time, you might want to get your package from npm or GitHub. That's because if you ever get a new PC and run "npm install", it might not be able to find this shared package in a local folder you didn't sync previously.

Type mismatch

There is one issue you might notice when working with TypeScript.
You might get an error reporting a type mismatch between two instances of the same object. For example, when you pass your web part context into a function in shared-code, it might say the objects do not match.
This happens because the SPFx libraries are not installed globally, so each project has its own definition of these objects. A simple solution is to pass it as type any; I will discuss a better resolution for this conflict in a later post.
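A minimal sketch of that workaround; the function name and the shape of the context I access are illustrative only:

```typescript
// In the shared module: accept the web part context as `any` to sidestep
// the duplicate type definitions each project carries for the SPFx objects.
export function getWebUrl(context: any): string {
  // use only the members you actually need from the context
  return context.pageContext.web.absoluteUrl;
}
```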

Closing

I know this sounds complicated, and honestly, it is much more complicated than it should be. In C#, building a DLL library was a simple, common task.
But, like everything else, once you get used to it - it is not that bad.
(If you enjoy typing console commands and editing json files...)

Hope this helped you make better sense of the procedure; I haven't found simple, clear instructions on this topic yet, so I thought it was worth sharing.

SPFx project breaking after moving to a new computer

So, as I was finishing my demo for #SPSNYC about SPFx, I moved my demo from my PC to my Mac laptop.

Running npm install everywhere, it all went smoothly, except I didn't have TypeScript installed.

So, naturally - I installed TypeScript:
npm install -g typescript

Next, I went to verify all demos were working.

Most were, one project refused to build.

Trying to build it yielded an error:

xxx:sharedcode xxx$ tsc
node_modules/@microsoft/sp-http/lib/httpClient/HttpClientResponse.d.ts(15,22): error TS2420: Class 'HttpClientResponse' incorrectly implements interface 'Response'.
  Types of property 'type' are incompatible.
    Type 'string' is not assignable to type 'ResponseType'.
node_modules/@types/react/index.d.ts(165,11): error TS2559: Type 'Component' has no properties in common with type 'ComponentLifecycle'.

Wow.

That probably means one or more packages changed between the time I created this project and the time I rebuilt it.

Luckily, finding out where the error comes from was rather simple.

See, the error complained that an SPFx package was incorrectly implementing an interface.
That means it is either missing a member, or has a member of the wrong type.
It even gave me the member name: type, which was set to string instead of ResponseType.

So, I opened the HttpClientResponse.d.ts that reported the error, and noticed its "type" member was of type "string".

Right-clicking the "Response" interface and going to its definition indeed showed that type was defined as "ResponseType".

I knew that my SPFx dependency versions hadn't changed, so I quickly checked the TypeScript version using "tsc -v".

On my laptop, it was 2.4.2
On my PC it was 2.1.5

So, the fix was simple:
1. npm uninstall -g typescript
2. npm install -g typescript@2.1.5

And voilà! Everything was back to normal.

Working with SPFx, you should know this is not the first time something like this has happened, and unless steps are taken, it won't be the last either.

I will talk about this dependencies "hell" and what can be done about it during my session at #SPSNYC, so come see me if you can!

How to avoid breaking your SPFx project when dependencies change

Your SPFx project has a short list of dependencies.
They are listed in package.json

Thing is, these dependencies have a looong list of dependencies themselves, so that you end up with over 300MB folder of node_modules by the time you finish running npm install on a new SPFx project.

As most of you probably noticed, SPFx by default has a .gitignore file that excludes node_modules from ever going into your source control.

Which is a good thing - these packages are monitored and controlled by their authors and you don't need to manage all that code in your project, do you?

That means whenever you move to a new computer, or a new developer starts working on a project, after cloning that project they must also run npm install before they can start working.

Now, npm install will start loading packages and dependencies as of TODAY, not as of when the project or the SPFx framework were compiled.

Some dependencies are specified by a version number, so no worries there (as long as the author did not re-publish new code with the same version number).

Others are specified as latest, or "*", or just with a major version, allowing minor version updates. Minor version updates are supposed to be fully backward compatible and contain no breaking changes - but sometimes that is not the case.
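To illustrate the difference, here is how those version specs look in package.json (the package names are made up):

```json
{
  "dependencies": {
    "pinned-package": "2.1.5",
    "patch-floating-package": "~2.1.5",
    "minor-floating-package": "^2.1.5",
    "always-latest-package": "*"
  }
}
```

"2.1.5" installs exactly that version; "~2.1.5" allows patch updates (2.1.x); "^2.1.5" allows minor updates (2.x.x); "*" grabs whatever is latest on the day you run npm install.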

So, you go to your new PC, run npm install, and your project does not build.
You get a ton of errors, version mismatch, and have no clue why.

The reason is probably that a dependency you had was updated during that time (or, god forbid, removed from npm!), which now breaks your project.

That is something I talked about during the first DevKitchen I attended, when I first saw SPFx. Having my project rely on a huge list of dependencies that I don't control and don't have a copy of scared me.

And sure enough, between that day and today I have had at least 4 episodes of dependency hell.

What can you do today

Without going into what Microsoft can and should do to avoid this in the future, here is what I plan to do and encourage you guys to do as well:

1. Build a test SPFx project, call it "Sanity"
2. Run npm install, and make sure the demo web part works by running gulp serve
3. Remove the .gitignore file
4. Store the entire huge project in your source control

Every so often, run npm update on this project.
Test if it is working - and if yes, commit the changes.

However, if your project fails to build, you know you got a breaking change. Do not commit, it will probably get fixed within a few hours.

Note that some breaking changes may come from a globally installed package, such as typescript.
So, it is a good practice to log these versions as well somewhere in your project. You can get that list by running "npm ls".

What to do if your project breaks

Now, once one of your projects fails to build, look at where the errors are coming from.
They will probably tell you the packages and their dependencies.
Compare those packages with the ones in your "Sanity" project, and once you identify the ones causing the problem you can manually revert each of them by running:
npm uninstall {package}
npm install {package}@{last working version}

If you don't have that "Sanity" project, but do have a working version on a different computer, try to compare the versions between the two.

A quicker fix might be to simply copy the node_modules folder from the working project into your new, broken project.

It can be very frustrating to spend valuable dev hours on maintenance instead of productive work, but hopefully these steps will help you cut down those hours.

SPFx: An ISV Insight to Microsoft's latest customization model

As promised, here is my blog post about this session from @spsnyc

Thanks for all the great reviews, the comment I got the most was that it had too much great info.

My greatest challenge writing this session was focusing on a few advanced topics that were not widely covered by other SPFx sessions.

There is an influx of SPFx sessions at every conference these days, and I find that the majority of them (maybe rightly so) focus on beginner-level introductions to SPFx.

What I wanted to do with this session was something different:

Focus on a few important, more advanced topics that were not widely covered by other speakers.

This led me to build this level 300-400 session that focuses on these key points:

  • Publishing versions
  • Pushing updates through CDN
  • Sharing code between projects
  • Building a custom PropertyPane property
I cover these points at a deeper level, and accompany them with a complete source code example of my solution.

Here is the slide show:


* I highly recommend downloading and viewing the slideshow in PowerPoint to view my speaker notes and comments on each slide.

Here are the code samples:

https://github.com/KWizCom/SPFxDemo

Please feel free to give me more comments and suggestions on what you would like to hear about, so I can update my session accordingly.

Also consider following the GitHub project, since I will publish updates to the source examples there.

Thanks.

Updates and related posts:

Setting up automatic deployment to azure web app

This is something we have been working on for a long time, far too long. There are so many samples and options out there it was very difficult for us to set up, and now that we finally have our environment up and running I thought I'd share our experience.

Our set up

First, let me describe our current set up.

See, with SharePoint-hosted add-ins, provider-hosted add-ins, ClickOnce applications and other services (like licensing, captcha, etc.), we had to set up a web app on Azure.

That web app serves as a CDN for our JS/CSS/HTML/Images and a few asp.net apps, and we had to set it up to be globally available, so using a traffic manager we have a few servers around the world handling local traffic.

Now, we set up one extra server for what we call a "fast ring" release cycle.
That's where we publish our beta versions for testing, and after a short cooling period we push them to the production servers.

So, basically each app we release requires us to publish to 1 fast ring server, and later to a few production servers in what you can imagine is a very time consuming and error prone publishing process.

Also, every server we added to the mix required the developer to publish to that server as well.

So automating this was a high priority for us from the beginning.

Automation options

I've repeatedly discussed and read about Azure automated deployments, and was very happy to see there were a lot of options to publish to Azure, but none of them answered all of our needs.

Synced library (DropBox, OneDrive, etc)

This option was by far the easiest to set up. You basically register one of these services to be the source of your web app.
Surprisingly, there is no OneDrive for Business support - only personal OneDrive. Actually, this is more than a surprise, it is flat out ridiculous, since the target audience here is clearly a business user with a very high likelihood of being an O365 subscriber!

But that wasn't what threw me off.

You would get a sub folder in your service, and map it to a server.
So, I could build a "production" folder and "fast ring" folder and map them to their servers. Easy enough.
Then, I would have to sync that client to my dev machine, and "publish" my web apps into that local folder.
That would in turn sync the changes into the cloud, and at that point it gets tricky.
There isn't a way for azure to detect changes and start the update automatically.
You would have to visit each and every web app and click the "sync" button to have them sync their content with the storage service.
I'm sure I could write a script or some code that would automate this, but I was trying to avoid writing any code for something like this - so moving on.

Azure continuous deployment from visual studio online

In azure, under the web app there is a way to set up a connection to a project in Visual Studio online.
Since all our source code is stored in visual studio online anyways in Git repositories - I really had my hopes up.
Setting this up was just impossible.
All our services are set up under an MSDN Azure subscription, while our O365 tenant is on a different subscription, and our Visual Studio Online account was associated with the O365 tenant so that we could use those logins.
That apparently made it impossible to associate the Visual Studio Online account with our Azure MSDN subscription; I tried and tried, and eventually gave up.

Visual studio online "Build & Release"

Finally, I went to visual studio online and noticed there is a way to configure it to build your project on their servers and set it up to publish and release to many remote services, including azure.

I honestly struggled with setting up the build on the server. I got it to work on some projects; others threw errors, and I was about to give up. I'm sure it's possible, and I will definitely work on it again until I get it working like I want, but even so, getting it to publish a ClickOnce project, for example, would be complex - with our code signing tools and our many pre-/post-build custom events, it might be too complex to do.
But, I came up with a simpler way of doing it.
We would build our projects locally, and publish them locally.
Web applications were published as a deployment package file. This produced a zip file that we would check into source control.
Next, we publish that zip package to Azure, with a trigger on the publish task so that it runs automatically whenever the zip file changes.

For ClickOnce, we basically checked in the "publish" folder and did the same thing: put a trigger on it, and set up a task to publish that entire folder every time it changes.

This allows us to still build and publish using our dev machines, and have visual studio online job only handle publishing the content to the web servers in azure.

Next, we decided our "master" branch would be our production. Created a "fast ring" branch that would be published to the fast ring and we were done.

That was by far the easiest to set up (connecting to azure subscription took 2-3 minutes, but at least it worked), and was automatically triggering whenever we pushed a new version.

The only downside is, that I couldn't set it up to build the package on the server, but like I said - still working on it.

So, if you are working in Visual Studio Online, I would obviously recommend this approach. But if you are not, you should check it out anyway: the build & release tasks support getting your source code from many sources (GitHub, a remote Git repo, Subversion), have a ton of build and publish tasks you can add, and support publishing to many services (not just Azure).

Also cool: I now get an email saying whether a publish went through OK or failed. So I do have some monitoring on what's going on moving forward, which is great.

Definitely the easiest one to set up, and I wish we had done this a year ago.

If you have experience with something like that, please share your thoughts in the comments below, also questions are always welcome.

Controlling SPFx bundle file name in production builds

As promised, I'm sharing the code discussed in my SPFx advanced development session, in this blog post and in this GitHub discussion.


As you might know already, when building an SPFx component for production (gulp --ship) you cannot control the file name of the generated bundle JS file.

The file would be generated as:
{product}.bundle-{hash}.js

That caused a lot of grief for our caching mechanism, our version management and our product release strategy: it prevented us from taking control over our cache strategy, from using other JavaScript optimization tools (due to the bundle file name) and from publishing minor fixes without requiring our clients to install a new package.

It seems there is currently no supported way to control that; the production file name is hard-coded in the copyAssets task.

After a lot of trials, I finally managed to find a way to control it, but it involves hacking into the build process in a way that I am pretty sure will not be future-upgrade proof.

So, until the good people at Microsoft expose this to us in a supported way, here is what you can do today.

Remove the hash from SPFx bundle file name

1. Load package-solution.json so that you can get the version number from it:
const packageJson = require('./config/package-solution.json');

2. Add this code in your gulp file, just before build.initialize(gulp);
var original_renameWithHash = build.copyAssets._renameWithHash.bind(build.copyAssets);
build.copyAssets._renameWithHash = function (gulpStream, getFilename, filenameCallback) {
  var original_GetFileName = getFilename;
  getFilename = function (hash) {
    return original_GetFileName(hash).replace('_' + hash, '.' + packageJson.solution.version);
  };
  return original_renameWithHash(gulpStream, getFilename, filenameCallback);
};

Remove the .bundle from SPFx bundle file name

This one was surprisingly simple to do.
Thanks to @iclanton for pointing out that all you need to do is edit the config/config.json file and change the outputPath property.

Adding a post build task

As part of my trial and error, I tried adding a post-build task. It didn't help in this case, since it runs before the bundle process begins, but here is how you do it in case you ever need to:

// Add a post-build task. Note: this still runs before the bundle task
let renameFilesProductionTask = build.subTask('rename-files-production',
  function (gulp, buildOptions, done) {
    var production = (process.argv.indexOf('--ship') !== -1);
    if (production) {
      this.log('Built in production!');
    }
    done();
  });
build.rig.addPostBuildTask(renameFilesProductionTask);


I hope this helps. I will update this post once I learn if there is a better way to achieve this.

Thanks.

Office UI Fabric JS multi select support

Following up on my earlier post, where I shared our own version of Office Fabric JS that resolved a few major issues with versioning, isolation, and custom-prefix.

 As promised we will continue to deliver enhancements and bug fixes to our hosted version of Office Fabric JS library.

 In the near future, we hope to launch this as an official fork with open source in GitHub, but for now we still host it and you are welcome to use it.

 A reminder, here is the original post explaining how to use our version.

So, to the matter at hand.

Today, I am happy to announce we have released a new build that added a much needed feature to the office UI fabric JS library: support for multi-select.

As a design decision, Microsoft decided not to support a multi-select control, for usability reasons - fearing it won't be compatible with touch devices.

While you may or may not agree with their decision, there are times when a multi-select control makes sense, whether as a requirement from your customer or your users, or just because it fits that particular situation better.

(GitHub Discussion here)

So far, select controls that were enhanced with office UI fabric JS would just render as a regular single select.

Today we released an update to the library: once you set your select control to allow multiple selection, enhancing it with Office UI Fabric JS will render a multiple-selection control.

The main differences are:
1. After selecting a value, the drop down will remain open allowing you to select another one.
2. Clicking on a selected item will remove it from the selection.
3. The text box will now show all selected values, separated by commas.

Here it is opened:
And closed:

Here is the working JSFiddle:




Updating SPFx package still showing the older web part

Something I noticed while working with SPFx: sometimes I would release a new package to my app catalog by dragging it over an existing older version.

SharePoint would prompt me to overwrite, I would say yes.

Now, on every page refresh I would randomly get either the older file or the latest one. It was completely unpredictable and didn't seem to depend on anything specific I did.

What happens?

It seems that when you overwrite a package, you don't get the dialog asking you to trust the package. That dialog is important: while you visit it, it plants a cookie in your browser that tells SharePoint to drop its package cache and reload the manifest.
If that cookie is missing... you get a cached version of the manifest.
Since SharePoint Online is a beast with many WFE servers in a load-balanced setup, you sometimes get a server that has cleared its cache (maybe the one that handled the replace action), and other times a server that still has the older version cached.

This situation is not a big deal in production, since in 20 minutes or so the cache clears itself.

But for dev - here is what you have to do to avoid it:
Simply delete the package before you upload a new one, and don't use the overwrite option.

Simple, efficient but very confusing...

Hope this helps!

Versioning strategy for SPFx


Background


As an ISV, one of the most important aspects of our products is - you guessed it - product versioning.

Versioning is important for a lot of things:

  • It tells users if they are running the latest bits of their software
  • It helps support reproduce issues and report them more accurately
  • It helps developers track bugs and issues, when they occur and when they were fixed
  • It tells the product manager how the product is moving along its roadmap
... and more!

One thing I am not a big fan of is how SPFx handles product releases and versions.

See, as an ISV in the cloud-world, we find it important from time to time to push updates to our apps to all our customers at once, without having to ask them to download and install a new version of the application.

We do that by updating the resources in our CDN, expiring our cache and that makes sure all clients get the latest and greatest builds automatically.

Imagine you have a spelling mistake, typo, or that an update is rolling out that causes a critical failure of your product. In these situations, we usually push updates from our servers to all our customers, and they don't have to worry about updating anything themselves.

(Remember the days you had to go to the app store on your phone and hit "update" on every app? Yuck!)

So, for that reason, I find the way SPFx produces builds a bit lacking.
See, whenever you are ready to release a build, you type gulp --ship to build your scripts in release mode.
That produces a bundled JS file that contains a hash in the file name.
Meaning - every little change you do will produce a different hash code.


data-view-plus.bundle_d41aab51ecdd17f15f7f921d87534405.js

Now, the manifest points to that specific file with that specific hash, and requires you to produce a new app package, and install that new package on your app catalog.

Although the bundle JS file is most likely hosted in your CDN, each new version you release points to a new different file, so no one gets updates automatically.

Moreover, you have no way of controlling when your clients upgrade, so you can never remove older files from your CDN, making it a complete mess and nightmare to maintain in an ever-growing pile of files, most of which no one will ever use.

This is why we chose to go a different route when releasing our SPFx versions.

Instead of using the hash - we use a package version in the JS file name.
As long as we make small changes that we want to push automatically, we keep the same file name and push the changes to our CDN.
We track the build number using a JS variable inside our code, so we can keep track of which build the client is currently running, since we do not change the package version on every release.
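A minimal sketch of what that looks like in practice (the file and constant names here are illustrative, not a required convention): the constant ships inside the bundle, so support can always ask a client which build they are on:

```javascript
// buildInfo.js - bumped manually (or by a release script) on every CDN push.
// The package version stays the same; only this constant advances.
const BUILD_NUMBER = '1.0.0.5';

function logBuild(productName) {
  // Writing it to the console makes it trivial to check from a client's browser.
  console.log(productName + ' build ' + BUILD_NUMBER);
  return BUILD_NUMBER;
}

module.exports = { BUILD_NUMBER, logBuild };
```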


data-view-plus.1.0.0.5.js

Every time we have a bigger change that alters our package manifest (added a web part, added or changed a dependency library), a new app package must be released. Only in these circumstances do we advance the package version (config\package-solution.json) and release a new package with a new bundle file name to match it.

This significantly reduces the number of script files in our CDN, makes it easy to tell which build each file belongs to (instead of deciphering hash codes), and allows us to push minor updates without requiring our users to install a new package in their catalog.


So, how do we do it?

I'm afraid we have yet to automate this part... but the steps are not too complicated to follow, I promise.
From this point I assume you have your SPFx project all ready and set up, configured with a CDN of your choice.

Releasing a new package version

  1. Change the version in your config\package-solution.json file
  2. Advance your "BuildNumber" variable in your code (optional)
  3. Run 'gulp --ship'
  4. Visit the new files outputted to temp\deploy
  5. Edit the bundle JS file name, replace the hash with the version you put in step 1
  6. Edit the [guid].json file; it should have the bundle JS file name in it - replace it with the new file name you came up with in step 5
  7. Save all changes
  8. Run 'gulp package-solution --ship'
  9. Now, your package will be ready under SharePoint/Solution, and it will use the file name you set in step 5
  10. Don't forget to copy your bundle JS file to your CDN, do not remove older versions from the CDN if you have them.

Releasing patches or minor updates

  1. Advance your "BuildNumber" variable in your code (optional)
  2. Run 'gulp --ship'
  3. Copy the content of the bundle JS file from either temp\deploy or from the dist folder into the CDN, overwrite the content of the latest version file you got there
Hint: how can you tell if your change requires a new package or not? Simple. Add the web part to a SharePoint page (classic or modern) - not the workbench. It will use the old web part definition. If it runs - it means your changes did not break the signature of the bundle. If you get a nasty error message, time to release a new version.

Do you manage versions in SPFx? Do it differently? I would love to hear, please leave a comment!
Also, know how to automate my process? Please share!

Thanks for reading,
Shai.

SharePoint Online Search not returning all results

When building the data source for our SPFx list aggregator web part, we noticed that in some cases when we make a request to the SharePoint Online Rest Search API, some lists or content were not returned by the search.

I cannot confirm exactly what happens or why, but in some cases lists that had their title changed were not picked up by the search service even days or weeks after the change.
In other cases, lists created a few months ago were not returned at all unless their list name was specifically mentioned in the query.

After investigating this for a long time, we could not identify a specific common cause for all these lists that would cause this to happen. We did however find many people experiencing the same issue with the SharePoint Online search.

One workaround we found that fixes the issue within a reasonable time (usually a few minutes, up to a couple of hours) was simply to visit those lists, go to List Settings, and under Advanced Settings click the "Reindex List" button.

This seems to fix the issue, which makes me think the problem is the incremental search crawl logic not picking up all changes properly.



Hope that helps you guys.

My Interview with Collab365
