little foxes at the keyboards little foxes making clicky-clacky little foxes on the servers little foxes all untame there's a black hat and a white hat and a grey one and fun for everyone! and they're all making clicky-clacky and they're all in your mainframe
When using Windows I have been a reliable PuTTY user for as long as I can remember. While deploying some new workstations for personal use I felt compelled to stretch my legs and see what other options Windows admins are using, in case I'm missing out on any killer features and the like. The following is a list of Windows SSH clients I will be trying out as daily drivers over the coming weeks; while I will likely follow this article up at the end with some thoughts on those that kept my attention, it makes sense to compile this list in advance so I can use it during my system builds as an addendum to my Favourite Windows Software shortlist.
If I'm missing a client you think I should know about please pop by the new foxpa.ws General Discussion Telegram Group and let me know! While I am interested in bonus features like SFTP support, tunneling and X11 forwarding, only free (as in free beer) or non-crippling evaluation Windows-native clients that implement a PTY and support modern protocols, modern crypto and at least both password AND public key authentication will be considered. Naturally I'll be skipping the PowerShell implementation of OpenSSH since that's what I use all day on *nix machines anyway.
Modern machines boot (god willing) like greased lightning. Sometimes it helps to know what key to hammer on for dear life in advance of POST because the configuration keys particular to a given machine aren't displayed for your convenience - and even if they were they might not display long enough for our puny mammalian brains to register them. The following tables have been compiled from various sources and will be updated as I encounter noteworthy additions.
Since we are dealing with the pre-OS environment, software-configured niceties such as swapping the symbolic functions ("media keys" and the like) and the Function Keys proper (F1-F12+) on keyboards such as the Logitech K400 will not have been applied; as such, consider hammering an alternating pattern of [Function Key], [Function Key] + [Fn Key] to ensure the correct signal gets through at least some of the time...
Windows could not create a partition on disk 0. The error occurred while applying the unattended answer file's <DiskConfiguration> setting. Error code: 0x80042565
This usually means you are using an installation medium with an answer file configured to create a GPT partition table, compatible with a UEFI boot environment, but the machine - while it may be UEFI capable - has booted in Legacy Mode (in which only an MBR partitioning scheme may be installed).
To fix this, reboot and reconfigure your boot settings such that the machine will boot into UEFI or "UEFI first, then fallback to Legacy" Mode. Alternatively, if your BIOS is capable of supporting both modes simultaneously your installation medium may have two boot device entries: one for UEFI mode and one for Legacy. Either ensure the UEFI version of the device is attempted first or use your system's one-off Boot Device Menu (where available) to select the appropriate entry.
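For reference, the kind of answer-file fragment that trips this error is a GPT/UEFI-style <DiskConfiguration> being applied on a Legacy/MBR boot. A minimal sketch of such a layout (partition sizes and ordering follow the usual Windows Setup conventions, but the exact values here are illustrative):

```xml
<!-- autounattend.xml (windowsPE pass) - GPT layout for UEFI boot.
     Applied on a machine booted in Legacy/MBR mode, a layout like
     this fails with error 0x80042565. -->
<DiskConfiguration>
  <Disk wcm:action="add">
    <DiskID>0</DiskID>
    <WillWipeDisk>true</WillWipeDisk>
    <CreatePartitions>
      <!-- EFI System Partition - only meaningful under UEFI boot -->
      <CreatePartition wcm:action="add">
        <Order>1</Order>
        <Type>EFI</Type>
        <Size>100</Size>
      </CreatePartition>
      <!-- Microsoft Reserved (MSR) - a GPT-only partition type -->
      <CreatePartition wcm:action="add">
        <Order>2</Order>
        <Type>MSR</Type>
        <Size>16</Size>
      </CreatePartition>
      <!-- Windows partition fills the rest of the disk -->
      <CreatePartition wcm:action="add">
        <Order>3</Order>
        <Type>Primary</Type>
        <Extend>true</Extend>
      </CreatePartition>
    </CreatePartitions>
  </Disk>
</DiskConfiguration>
```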
mod_security / ModSecurity / ModSec / whatever the kids are calling it today is a battle-tested Web Application Firewall that plugs into the Apache HTTP daemon's modular framework and has been the main mechanism for implementing Intrusion Prevention and DoS mitigation in the LAMP stack for even slightly longer than I've been doing this - way back in the Apache 1.3.x era 19 years ago. If you've never used mod_security but have implemented any sort of IPS/IDS or DoS mitigation technology before, your mind's gears have already sputtered "gee, I bet that takes a hella lotta resources" - buddy, you better believe it. While I would never leave my house and venture into The Wild without smothering myself in a thick lather of mod_security, it is prudent to apply it intelligently in situations where available resources, cluster nodes or funding in general are not unlimited. That's why on high-traffic installations you might be enabling and disabling it - or at least running radically different configurations with content-appropriate rulesets - on a per-vhost basis, separating things like static content away from interpreted scripts and other attack surfaces more susceptible to, say, getting tricked into running Bob's shellcode zero-du-jour.
These days mod_security comes with a lot of rules and that's great (other than the false positives). But as anyone who admins a Snort or Suricata instance knows, the more rules, the more resources are demanded. PCREs in particular are powerful tools for, well, pattern matching - and that's most of what we're doing in this type of situation. mod_security essentially acts like a proxy, sitting between the end user making a request and the server-side endpoint that will process it, keeping track of vital statistics so it can judge the intent behind and risk of each request, and running pattern match query after query after query on the data going in and coming out. Unfortunately powerful tools are also power hogs, so to prevent mod_security from itself instigating a resource starvation Denial of Service condition, default limits are imposed on the total number of PCRE operations that can be run on any given transaction. The default behavior when that limit is reached is to err on the side of caution: instead of processing the transaction, mod_security will call the destination up yet provide no input, meanwhile throwing a 500 Internal Server Error to the client. Depending on your configuration it may be very difficult to detect when this particular condition is at fault for your seemingly aborted transaction; often you will have to resort to prying open your Apache error logs, either those configured globally or those configured for the vhost at hand.
As you can see, I recently ran into such a condition with the upload processing script for our imageboard, Ychan (NSFW). I'm already doing a lot of my own security testing inside that script (including several PCREs, in fact), for obvious reasons, so I should be free to do any number of things:
I can change what mod_security does when it encounters a match by changing the SecRuleEngine Apache configuration directive's value to DetectionOnly: SecRuleEngine DetectionOnly
This will take the teeth out of mod_security's response and only log the match instead of zapping the transaction. Obviously I would want to think long and hard about using this on a production system and then only apply it in a very limited perimeter - for example a specific <Directory>, <Files> or <Location>:

<VirtualHost *:80>
    # Illustrative: relax enforcement only for the upload handler,
    # leaving the rest of the vhost fully protected
    <Location /upload>
        SecRuleEngine DetectionOnly
    </Location>
</VirtualHost>
I can raise the limits themselves with the SecPcreMatchLimit and SecPcreMatchLimitRecursion directives:

SecPcreMatchLimit 150000
SecPcreMatchLimitRecursion 150000
Going by Google, it seems I could add these lines to my php.ini file: pcre.backtrack_limit = 10000000
pcre.recursion_limit = 10000000
or, where supported, to the relevant .htaccess file (note that php_value takes the same dotted setting name as php.ini, with no equals sign): php_value pcre.backtrack_limit 10000000
php_value pcre.recursion_limit 10000000
But there's a problem... as with SecRuleEngine above, I'd like to be able to make these changes to SecPcreMatchLimit and SecPcreMatchLimitRecursion with some specificity, i.e.:
In my given vhost's configuration block (i.e. /etc/apache2/vhosts.d/mysub_domain_com.conf)
In the .htaccess file of the relevant directory, where AllowOverride All is in effect
This renders about 90% of the articles and answers I encountered while writing this essentially useless, or misleading at best. What's more, these directives don't even apply in version 3 and up - but it's not likely that you found this article if that's your situation. Good thing we read the manual around here, right? :D
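Worth spelling out, then, where the change actually belongs: in ModSecurity 2.x these two directives carry a scope of Main, so the raise has to go in the global configuration. A sketch - the file path and the <IfModule> guard are distro-dependent assumptions, not gospel:

```apache
# Main ModSecurity configuration - path varies by distro, e.g.
# /etc/apache2/modules.d/79_modsecurity.conf or /etc/modsecurity/modsecurity.conf
<IfModule security2_module>
    # One hundred times the 1500 default; raise cautiously - these
    # limits exist to keep runaway patterns from starving the server
    SecPcreMatchLimit 150000
    SecPcreMatchLimitRecursion 150000
</IfModule>
```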
Back to the solution - which raises the question: what do these values do, exactly?
These appear to be settings internal to the PCRE engine in order to limit the maximum amount of memory/time spent on trying to match some text to a pattern. The pcreapi manpage does little to explain it in layman's terms:
The match_limit field provides a means of preventing PCRE from using up a vast amount of resources when running patterns that are not going to match, but which have a very large number of possibilities in their search trees. The classic example is the use of nested unlimited repeats.
Internally, PCRE uses a function called match() which it calls repeatedly (sometimes recursively). The limit set by match_limit is imposed on the number of times this function is called during a match, which has the effect of limiting the amount of backtracking that can take place. For patterns that are not anchored, the count restarts from zero for each position in the subject string.
The default value for the limit can be set when PCRE is built; the default default is 10 million, which handles all but the most extreme cases. You can override the default by supplying pcre_exec() with a pcre_extra block in which match_limit is set, and PCRE_EXTRA_MATCH_LIMIT is set in the flags field. If the limit is exceeded, pcre_exec() returns PCRE_ERROR_MATCHLIMIT.
The match_limit_recursion field is similar to match_limit, but instead of limiting the total number of times that match() is called, it limits the depth of recursion. The recursion depth is a smaller number than the total number of calls, because not all calls to match() are recursive. This limit is of use only if it is set smaller than match_limit.
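The "nested unlimited repeats" the manpage warns about are easy to provoke. Python's re engine has no configurable match limit, but it backtracks in the same fashion, so it makes a convenient (if approximate) demonstration of why these limits exist in the first place:

```python
import re
import time

def time_failed_match(n):
    """Time how long the regex engine spends failing to match.

    (a+)+$ nests two unlimited repeats - the manpage's classic example.
    Against n a's followed by a b the match can never succeed, but the
    engine still tries every way of splitting the a's between the two
    repeats: roughly 2^n attempts before it gives up.
    """
    subject = "a" * n + "b"
    start = time.perf_counter()
    result = re.match(r"(a+)+$", subject)
    elapsed = time.perf_counter() - start
    assert result is None  # it never matches - it just takes forever to say so
    return elapsed

# Each extra character roughly doubles the work:
print(f"n=12: {time_failed_match(12):.5f}s")
print(f"n=20: {time_failed_match(20):.5f}s")
```

A match limit like ModSecurity's simply caps those attempts and bails out early - which is exactly the 500-throwing behaviour described earlier.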
Since the PCRE library built-in default is 10000000, my guess is that the lower setting is suggested for mod_security in order to prevent requests from being held up for a long time.
In fact, according to the documentation the default value for mod_security is 1500 - very low indeed, until we consider that it is expected to process up to several thousand transactions per second.
Not so fast. HTML5 is a language. But it's a language in sort of the same way design languages are languages (...what a terribly apt analogy, no?) It's not exactly a metaphor... it's better to think of it as somewhat of an abstraction: it's really a conceptualization of objects and abilities - things you can have and things you can do with them. It's decoupled from the implementation, that is to say how it is written. The W3C draws this distinction by saying HTML5 is a vocabulary that can be implemented in one of two syntaxes: the HTML Syntax or the XML Syntax. The most important distinction for you to understand as a web developer is that the XML Syntax is valid XML and valid XML is well-formed. Or it's not XML. (It's crap. (And it's going in the bin.))
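To make the two syntaxes concrete, here is the same trivial document twice - first in the forgiving HTML syntax (unquoted attributes and implied html/head/body elements are perfectly legal), then in the XML syntax, where every element must be explicitly closed or the parser rejects the whole document outright. The second form would typically be served as application/xhtml+xml:

```html
<!-- HTML syntax: omitted </p>, unquoted attribute, implied structure -->
<!DOCTYPE html>
<meta charset=utf-8>
<title>Hello</title>
<p>Still valid HTML5

<!-- XML syntax: well-formed or bust -->
<?xml version="1.0" encoding="UTF-8"?>
<html xmlns="http://www.w3.org/1999/xhtml">
  <head><title>Hello</title></head>
  <body><p>Valid XHTML5</p></body>
</html>
```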
You may be an outlier but I am willing to bet Euros to Yen that my experience here is virtually universal: I have, to this day, never encountered HTML5 implemented in XML Syntax in the wild. It's a simple matter of using the right tools for a given job: XML is generally used for exchanging information between machines (think AJAX, .docx documents, XMPP (Jabber, Skype), SVG graphics...). And it makes sense: there shouldn't be any blemishes in these formats - if there's an error something is very wrong and it is correct that an entire document or message or packet should be discarded. It is not meant to be pretty, nor made so - it is meant to accurately and precisely describe information itself. Those of us who tried to switch from HTML to strict XHTML in the late 90s and early 00s out of an idealistic and good-intentioned but naive belief that a well-formed, machine-readable and human-friendly regime would dominate the future understand only too well today the very good reasons why a forgiving, error-tolerant, human-oriented markup language makes sense for creating and delivering documents intended for consumption by humans.
All of that is to say nothing of the main ideological argument that prevailed until only rather recently. Virtually from the beginning, HTML was criticized for having its feet in two ponds: it looks like a markup language and markup languages are meant for describing data. But here we're talking about data that is intended - especially in the early days, and still very certainly nominally - to be rendered in a web browser and presented to humans. If it were meant for machines to process and then do god only knows what, possibly including presenting it to humans also, it would probably be better accomplished in a different format. Like XML - which is exactly what XML is for. But we started off with tags clearly only relevant to displaying documents to humans, like <center>. Which, come to think of it, is pretty limited. Would the next iteration include tags for right-aligning content? What about justified text? With word-wrapping? Without?
Wouldn't it make so much more sense if a so-called Printer Friendly version was already a part of the document? After all, we're just changing a couple of visual features - often the same ones thousands of times per website. And then there's the problem of just how visual those few formatting controls available were. People with disabilities use the web too - in fact if we put their needs and use-cases at the centre of our focus for a moment it rapidly becomes intoxicating to contemplate how the web could be a revolutionary tool to reach them - for them to reach you - and connect us all together in ways never even possible to conceive of before. Finally the stresses gave way and the continents shifted: into this ever-transitional, never-quite-done ecosystem Cascading Style Sheets' first wails into the dark night were heard, and we could finally cleanly separate form from function, appearance from structure and get to the Good Work™ of making a markup language that describes how data should be organized and inter- and intra-related for presentation to a human being, complemented by a styling language that defines how that data should look, feel, sound, print... (..taste... ..smell... ..?)
...OR the W3C could dick around for a decade stubbornly buggering with XHTML 2.0 instead, which it would later abandon anyway. In so doing they issued a summary slap in the c*ck to every man, woman and child whose life or livelihood is impacted by the unhindered evolution of a vibrant and economically crucial world wide web (YUP, THAT'S YOU! :D). As they slept peacefully atop their wheel (Netscape Navigator?) the real world was passing them by, ever-more desperate for the self-declared arbiters of the most fundamental technology underpinning the web - namely the standardized language its participants must speak to be heard (and speak well to not waste everybody's goddamned time and money) - to act. Verily, by virtue of all that makes the Internet good and holy, the Web Hypertext Application Technology Working Group (WHATWG) self-assembled to meet the needs of the time:
The Web Hypertext Application Technology Working Group (WHATWG) is a community of people interested in evolving the web through standards and tests.
The WHATWG was founded by individuals of Apple, the Mozilla Foundation, and Opera Software in 2004, after a W3C workshop. Apple, Mozilla and Opera were becoming increasingly concerned about the W3C’s direction with XHTML, lack of interest in HTML, and apparent disregard for the needs of real-world web developers. So, in response, these organisations set out with a mission to address these concerns and the Web Hypertext Application Technology Working Group was born.
In 2017, Apple, Google, Microsoft, and Mozilla helped develop an IPR policy and governance structure for the WHATWG, together forming a Steering Group to oversee relevant policies.
Hell yeah! That's how it's done, Internet Style! Imagine: the people behind the development of the major competing browser engines - the engineers who actually have to design the components that communicate, transact and render at the most fundamental level - came together in opposition to indifference, stagnation and wanton disregard for the needs of the public writ large! Even today they still coordinate the implementation and future of the language of the Web itself. This was utterly unimaginable in the era of the Microsoft Antitrust Case. The benefits extend far beyond the introduction of HTML5, and every hobbyist and professional web developer active in both eras feels them whenever their work loads consistently in a second browser. That's not to say the W3C's work on XHTML 2.0 and its later pivot to "modularization" was entirely for naught: its fruits included SVG, MathML and XForms, which would come to influence the inception of HTML5.
By 2007 the W3C saw the writing on the wall: get with the program or resign itself to irrelevance and eventual disbandment. A new working group was created to onboard the WHATWG specification and a tentative cooperation was established. It didn't take long for the W3C to get too big for its britches. In 2011 it decided that HTML5 needed a finalization in the same stuffy tradition as the broken, inflexible and inorganic versions prior. In lock-step with this action a new working group was chartered to begin work on a so-called HTML 6 specification, leaving some wondering if the whole affair weren't a cynical, surprise ploy by the W3C to gain the upper hand and secure the future against the WHATWG by simply painting a number on it that nobody asked for or needed. The WHATWG disagreed: headed by pragmatic industry players with skin in the game and pieces on the board, it preferred a Living Standard that could respond to new challenges and ideas. An understandable decision, given that the active cooperation and participation of the major browser vendors fostered a venue uniquely capable of ensuring interoperability and consistency, and of both deploying and actually realizing changes to the specification in time frames that would otherwise be impossible. Adhering to the old convention of monolithic versioning was not only outdated, unnecessary and - looking back, it could be argued - demonstrably broken; it would effectively handicap an industry that fancies itself, not without merit, to be a bedfellow of innovation.
Finally, in May 2019, the two camps were sleeping together once more and all was well in the world. WHATWG, obviously, being the boy of the relationship and the W3C being the girl. Today both groups perform important functions in maintaining the HTML5 specification and its siblings and the W3C has come to accept the sensibility of the Living Standard. Indeed, one could say the Internet has been graced by a new era aligned under a Live and let live Standard.