July 27, 2024

Parsing URLs on the client side has been a common practice for two decades. The early days included using illegible regular expressions, but the JavaScript specification eventually evolved to include a proper `URL` constructor for parsing URLs. While `URL` is incredibly useful when a valid URL is provided, an invalid string will throw an error. A newer static method, `URL.canParse`, is now available to validate URLs!
Providing a malformed URL to `new URL` will throw an error, so every use of `new URL` would need to be within a `try/catch` block:
```
// The correct, safest way
try {
  const url = new URL('https://davidwalsh.name');
} catch (e) {
  console.log("Bad URL provided!");
}

// Oops, these are problematic (mostly relative URLs)
new URL('/');
new URL('../');
new URL('/pornhub-interview');
new URL('?q=search+term');
new URL('davidwalsh.name');

// Also works
new URL('javascript:;');
```
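Worth noting: those same relative strings parse fine when you pass a base URL as the second argument to `new URL` (the base below reuses the blog's domain purely as an example):

```javascript
// Relative URLs become valid once a base URL is supplied
const url = new URL('/pornhub-interview', 'https://davidwalsh.name');
console.log(url.href); // "https://davidwalsh.name/pornhub-interview"

// Query-only strings resolve against the base as well
const search = new URL('?q=search+term', 'https://davidwalsh.name');
console.log(search.search); // "?q=search+term"
```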

As you can see, strings that would work properly with an `<a>` tag sometimes won’t with `new URL`. With `URL.canParse`, you can avoid the `try/catch` mess to determine URL validity:
```
// Detect problematic URLs
URL.canParse('/'); // false
URL.canParse('/pornhub-interview'); // false
URL.canParse('davidwalsh.name'); // false

// Proper usage
if (URL.canParse('https://davidwalsh.name')) {
  const parsed = new URL('https://davidwalsh.name');
}
```
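`URL.canParse` also accepts the same optional base argument as the `URL` constructor, so you can validate relative URLs against a base (the domain below is only an illustration):

```javascript
// Relative URLs are valid when checked against a base
URL.canParse('/pornhub-interview', 'https://davidwalsh.name'); // true
URL.canParse('?q=search+term', 'https://davidwalsh.name'); // true

// An invalid base still fails the check
URL.canParse('/path', 'not a base'); // false
```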

We’ve come a long way from cryptic regexes and burner `<a>` elements to the `URL` and `URL.canParse` APIs. URLs represent so much more than location these days, so having a reliable API has helped web developers immensely!
