I wanted a simple setup in Symfony where the programmer could define their IDE in the parameters file. Sounds simple, right? Just add something like ide_url: 'phpstorm' to parameters.yml->parameters and ide: '%ide_url%' to config.yml->framework. And it worked great; however, my problem was much more convoluted.
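In file form, that basic setup is just:
#parameters.yml
parameters:
    ide_url: 'phpstorm'
#config.yml
framework:
    ide: '%ide_url%'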
I am actually running the Symfony server on another machine and am accessing the files via NFS on Windows. So, it would try to open PHPStorm with the incorrect (remote) path. Symfony suggests the solution to this is writing your own custom URL handler, with %f and %l to fill in the filename and line, and using some weird formatting to do string replaces. So I put 'idea://%%f:%%l&/PROJECT_PATH_ON_SERVER/>DRIVE_LETTER:/PATH_ON_WINDOWS/' (note the double percent signs for escaping) directly in the config.yml, and that worked, kind of. The URL was perfect, but IntelliJ does not seem to register the idea:// protocol handler the way PHPStorm theoretically does with phpstorm:// (according to some online threads). So I had to write my own solution.
OK, so we’re almost there; I just had to paste the string I came up with back into the parameters.yml, right? I wish. While this was now working properly on a Symfony error page, a new problem arose. The Symfony bin/console debug:config framework command was failing with “You have requested a non-existent parameter "f:"”. The darn thing was reading the once-unescaped string as 'idea://%f:%l&...' and thought %f:% was supposed to be a parameter reference. Sigh.
So the final part was to double-escape the string with 4 percent signs: 'idea://%%%%f:%%%%l&...'. Except now the URL on the error pages gave me idea://%THE_PATH:%THE_LINE_NUMBER. It was adding an extra percent sign before both values. This was simple to resolve in the script I wrote, so I was finally able to open scripts directly from the error page. Yay.
So here is the final set of data that has to be added to make this work:
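The config, using the quadruple-escaped template worked out above (with the same placeholder paths):
#parameters.yml
parameters:
    ide_url: 'idea://%%%%f:%%%%l&/PROJECT_PATH_ON_SERVER/>DRIVE_LETTER:/PATH_ON_WINDOWS/'
#config.yml
framework:
    ide: '%ide_url%'
And the protocol-handler script itself: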
<?php
function DoOutput($S)
{
	//You might want to do something like output the error to a file or do an alert here
	print $S;
}

if(!isset($argv[1])) //The idea:// URL is expected as the first command line argument
	return DoOutput('File not given');
if(!preg_match('~^idea://(?:%25|%)?([a-z]:[/\\\\][^:]+):%?(\d+)/?$~i', $argv[1], $MatchData)) //Pull the file path and line number out of the URL, allowing for the stray percent signs mentioned above
	return DoOutput('Invalid format: '.$argv[1]);
$FilePath=$MatchData[1];
if(!file_exists($FilePath))
	return DoOutput('Cannot find file: '.$FilePath);
$String='"C:\Program Files\JetBrains\IntelliJ IDEA 2018.1.6\bin\idea64.exe" --line '.$MatchData[2].' '.escapeshellarg($FilePath);
DoOutput($String); //Echo the command for debugging purposes
shell_exec($String); //Open the file in IntelliJ at the requested line
?>
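For reference, a custom idea:// protocol handler can be registered on Windows with registry entries along these lines, so that the script above receives the URL as its first argument (the php.exe and script paths here are placeholders, not the originals):
Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\idea]
@="URL:IDEA Protocol"
"URL Protocol"=""

[HKEY_CLASSES_ROOT\idea\shell\open\command]
@="\"C:\\PHP\\php.exe\" \"C:\\Scripts\\OpenInIdea.php\" \"%1\""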
I was surprised that I couldn’t find a script online to download all of an author’s stories from FictionPress or FanFiction.Net, so I threw together the one below.
If you go to an author’s page in a browser (only tested in Chrome), it should have all of their stories, and you can run the following script in the console (F12) to grab them all. The files are saved in the format STORY_NAME_LINK_FORMAT - CHAPTER_NUMBER.html. It works as follows:
Gathers all of the names, chapter 1 links, and chapter counts for each story.
Converts this information into a list of links it needs to download. The links are formed by using the chapter 1 link, and just replacing the chapter number.
It then downloads all of the links to your current browser’s download folder.
Do note that Chrome should prompt you with “This site is attempting to download multiple files”; so of course, say yes. The script is also designed to detect problems, which would happen if FictionPress changes their HTML formatting.
//Gather the story information
const Stories=[];
$('.mystories .stitle').each((Index, El) =>
	Stories[Index]={Link:$(El).attr('href'), Name:$(El).text()}
);
$('.mystories .xgray').each((Index, El) => {
	//Pull the chapter count out of the story's info line; leave it unset if the format is unexpected
	const Match=/ - Chapters: (\d+) - /.exec($(El).text());
	if(Match)
		Stories[Index].NumChapters=Match[1];
});

//Get the links to every chapter of every story
const LinkStart=document.location.protocol+'//'+document.location.host;
const AllLinks=[];
$.each(Stories, (_, Story) => {
	if(typeof(Story.NumChapters)!=='string' || !/^\d+$/.test(Story.NumChapters))
		return console.log('Bad number of chapters for: '+Story.Name);
	const StoryParts=/^\/s\/(\d+)\/1\/(.*)$/.exec(Story.Link); //The chapter 1 link, split into story ID and name
	if(!StoryParts)
		return console.log('Bad link format for story: '+Story.Name);
	for(let i=1; i<=Story.NumChapters; i++)
		AllLinks.push([LinkStart+'/s/'+StoryParts[1]+'/'+i+'/'+StoryParts[2], StoryParts[2]+' - '+i+'.html']);
});

//Download all the links, by borrowing an existing anchor on the page and clicking it
$.each(AllLinks, (_, LinkInfo) =>
	$('a').attr('download', LinkInfo[1]).attr('href', LinkInfo[0])[0].click()
);
As a related bonus, this one-liner grabs the list of an author’s work titles from what appears to be an Archive of Our Own listing page (it matches AO3’s “/works” link markup):
jQuery('.blurb.group .heading a[href^="/works"]').map((_, El) => jQuery(El).text()).toArray().join('\n');
After a little over a year of waiting, Let’s Encrypt has finally opened its doors to the public! Let’s Encrypt is a free https certificate authority, with the goal of getting the entire web off of http (unencrypted) and on to https. I consider this a very important undertaking, as encryption is one of the best ways we can fight illegal government surveillance. The more out there that is encrypted, the harder it will be to spy on people.
I went ahead and got it up and running on 2 servers today, which was a bit of a pain in the butt. It [no longer] supports Python 2.6, and was also very unhappy with my CentOS 6.4 cPanel install. Also, when you first run the letsencrypt-auto executable script as instructed by the site, it opens up your package manager and immediately starts downloading LOTS of packages. I found this to be quite antisocial, especially as nothing had warned me it would do this before I started the install, but oh well; it is convenient. The problem on cPanel was that a specific library, libffi, was causing problems during the install.
To fix the Python problem on all of my servers, I had to install Python 2.7 as an alternate Python install so it wouldn’t mess with any existing infrastructure using Python 2.6. After that, I also aliased “python” to “python2.7” so the local shell would pick up the correct version of Python.
As root in a clean directory:
wget https://www.python.org/ftp/python/2.7.8/Python-2.7.8.tgz
tar -xzvf Python-2.7.8.tgz
cd Python-2.7.8
./configure --prefix=/usr/local
make
make altinstall
alias python=python2.7
The cPanel lib problem was caused by libffi already being installed as 3.0.9-1.el5.rf, but yum wanted to install its devel package as version 3.0.5-3.2.el6.x86_64 (an older version). It did not like running conflicting versions. All that was needed to fix the problem was to manually download and install the same devel version as the current live version.
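That fix went something along these lines (the mirror URL is a placeholder; match whatever version “rpm -q libffi” reports):
rpm -q libffi
#Reported 3.0.9-1.el5.rf, so manually install the matching devel package
wget http://SOME_MIRROR/libffi-devel-3.0.9-1.el5.rf.x86_64.rpm
rpm -ivh libffi-devel-3.0.9-1.el5.rf.x86_64.rpm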
Unfortunately, the apache plugin was also not working, so I had to do a manual install with “certonly” and “--webroot”.
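The manual invocation looks something like this (the webroot path and domains are placeholders):
./letsencrypt-auto certonly --webroot -w /home/USER/public_html -d example.com -d www.example.com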
And that was it; letsencrypt was ready to go and start signing my domains! You can check out my current certificate, issued today, that currently has 13 domains tied to it!
I’ve recently been having problems using the Google Reader widget in iGoogle. Normally, when I clicked on an RSS title, a “bubble” popped up with the post’s content. Recently, however, clicking on the titles opened the original post’s source in a new tab. I confirmed the settings for the widget were correct, so I tried to remember the last change I made in Firefox that could have triggered this problem, as it seemed the problem was not widespread and had only occurred to a few other people, with no solution found. I realized that a little while back I had installed the HTTPS Everywhere Firefox plugin. As described on the EFF’s site, “HTTPS Everywhere is a Firefox extension ... [that] encrypts your communications with a number of major websites”.
Once I disabled the plugin and confirmed the problem went away, I started digging through Google’s JavaScript code with FireBug. It turns out the root of the problem was that the widgets in iGoogle are run in their own IFrames (which is a very secure way of doing a widget system like this). However, the Google Reader content was being pulled in through HTTPS secure channels (as it should be, thanks to HTTPS Everywhere), while the iGoogle page itself was pulled in through a normal HTTP channel! Separate windows/frames/tabs cannot interact with each other through JavaScript if they are not part of the same domain and protocol (HTTP/HTTPS), to prevent cross-site scripting attacks.
I was wondering why HTTPS Everywhere was not running iGoogle through an HTTPS channel, so I tried it myself and found out Google automatically redirects HTTPS iGoogle requests to non secure HTTP channels! So much for having a proper security model in place...
So I did a lot more digging and modifying of Google’s code to see if I couldn’t find out exactly where the problem was occurring and if it couldn’t be fixed with a hack. It seems the code to handle the RSS Title clicking is injected during the “onload” event of the widget’s IFrame. I believe this was the code that was hitting the security privilege error to make things not work. I attempted to hijack the Google Reader widget’s onload function and add special privileges using “netscape.security.PrivilegeManager.enablePrivilege”, but it didn’t seem to help the problem. I think with some more prodding I could have gotten it working, but I didn’t want to waste any more time than I already had on the problem.
The code that would normally be loaded into the widget’s IFrame window hooks the “onclick” event of all RSS Title links to both perform the bubble action and cancel the normal “click” action. Since the normal click action for the anchor links was not being canceled, the browser action of following the link occurred. In this case, the links also had a “target” set to open a new window/tab.
There is however a “fix” for this problem, though I don’t find it ideal. If you edit the “extensions\https-everywhere@eff.org\chrome\content\rules\GoogleServices.xml” file in your Firefox profile directory (most likely at “C:\Users\USERNAME\AppData\Roaming\Mozilla\Firefox\Profiles\PROFILENAME\” if running Windows 7), you can comment out or delete the following rule so Google Reader is no longer run through secure HTTPS channels:
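It looked something like this (reconstructed from the ruleset format; the exact expressions may have differed):
<rule from="^http://(www\.)?google\.com/reader/" to="https://www.google.com/reader/"/>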
That being said, I’ve been having a plethora of problems with Facebook and HTTPS Everywhere too :-\ (which it actually mentions might happen in its options dialog). You’d think the largest sites on the Internet could figure out how to get their security right, but either they don’t care (the more likely option), or they don’t want the encryption overhead. Alas.
Since I just released my AJAX Library, I thought I’d post a useful script that uses it. The function CrossDomainGetURL below uses the AJAX Library to make requests across domains in Firefox. It takes one more parameter than the AJAX Library's GetURL function (not in the same position): an array of domains to pull cookies from for the AJAX request.
function GetCookiesFromURL(Domains) //Return all the cookies for the domains specified in the Domains array
{
	var cookieManager=Components.classes["@mozilla.org/cookiemanager;1"].getService(Components.interfaces.nsICookieManager); //Requires privileges, which are granted in CrossDomainGetURL
	var iter=cookieManager.enumerator, CookieList=[], cookie; //The object used to find all cookies, the final list of cookies, and a temporary object
	while(iter.hasMoreElements()) //Loop through all cookies
		if(((cookie=iter.getNext()) instanceof Components.interfaces.nsICookie) && Domains.indexOf(cookie.host)!=-1) //If a cookie's host matches one of our domains
			CookieList.push(cookie.name+'='+cookie.value); //Add it to our final list
	return CookieList.join("; "); //Return the cookie list for the specified domains
}

function CrossDomainGetURL(URL, Data, CookieDomains, ExtraOptions) //See the AJAX Library GetURL function. CookieDomains is an array specifying which domains cookies are pulled from for the AJAX call.
{
	//Access universal privileges in Firefox (required to get cookies for other domains, and to use AJAX with other domains). This functionality is lost as soon as this function loses scope.
	try { netscape.security.PrivilegeManager.enablePrivilege("UniversalXPConnect"); }
	catch(e) { return alert('Cannot access browser privileges'); }

	if(CookieDomains instanceof Array) //If an array of domains is passed to get cookies from...
	{
		ExtraOptions=((ExtraOptions instanceof Object) ? ExtraOptions : {}); //Make sure extra options is an object
		ExtraOptions.AdditionalHeaders=((ExtraOptions.AdditionalHeaders instanceof Object) ? ExtraOptions.AdditionalHeaders : {}); //Make sure extra options has an additional headers object
		ExtraOptions.AdditionalHeaders.Cookie=GetCookiesFromURL(CookieDomains); //Get cookies for the domains
	}

	return GetURL(URL, Data, ExtraOptions); //Do the AJAX call
}
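Usage would look something like this (the URL, data, and domain list are made up for illustration):
//Pull example.com's cookies and request one of its pages cross-domain
CrossDomainGetURL('http://example.com/data.php', {ID: 5}, ['example.com', '.example.com']);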
The Google Search API returns a vastly stripped-down result set compared to actual Google Search. I have checked and done a bit of research, and have not found a good reason for this. And, no, it has nothing to do with local or personalized searches, which I confirmed by running searches without any kind of cookies or localization.
My guess is that the Google Search API and normal Google Search itself are just tapping into different result sets from the start :-(.
An example of this problem is as follows: Searching for “Fractal” in the Projects section returns the following results:
Has Google been removing search options to make things run faster?
I’ve been meaning to get searching working on my site for what seems like forever, and I decided to finally get around to getting some manner of search working via the temporary “use Google” solution. Unfortunately, it seems Google no longer does boolean searches completely properly as advertised. I am sure Google Search still supports boolean logic (as opposed to the assumed “and” between each word) because the Advanced Search, linked to from their front page, still has it, and it returns a few of the results it should.
As an example:
If I wanted to search the Projects and Updates sections of my sites for either the keywords fractal or font I would use the following search:
(site:www.castledragmire.com/Projects OR site:www.castledragmire.com/Updates) AND (Fractal OR Font)
This currently only returns 3 results, when it should return 11 different results, as enumerated by running the 4 searches separately.
Because of this, I need to go ahead and get real searching up via MySQL (or possibly another solution), as originally planned, sooner than later, since Google will not work as a temporary solution for what I want.
I wrote up a paper on what could be done through Google Search over 5 years ago as a job request [to be posted soon], which I believe is very informative. I’m sure it’s a little outdated, but it shows how much Google can [could] do for you.
To start off, sorry I haven’t been posting much the last couple of months. First, I got kind of burnt out from all the posting in August. More recently however, I’ve been looking for a J-O-B which has been taking a lot of my time. Now that I’ve found some work, I’m more in the mood to post again, yay. Hopefully, this coming month will be a bit more productive in the web site :-). Now on to the content.
Browser rendering [and other] bugs have been a bane of the web industry for years, particularly in the old days when IE was especially non-standards-compliant, so people had to add hacks to their pages to make them display properly. IE has gotten much better since then, but there are still lots of bugs in it, especially because Microsoft wants to not break old web sites that had to add hacks to make them work in the outdated versions of IE. Other modern browsers still have rendering problems too [see the acid tests], but again, these days it’s not so bad.
I just ran into one of these problems in a very unexpected place: Google Chrome. I kind of ignored the browser’s launch, as I’m mostly happy with Firefox (there’s a few major bugs that have popped up in Firefox 3.0 that are a super annoyance, but I try to ignore them), but needed to install Chrome recently. When I went to my web page in it, I noticed a major glitch in the primary layout, so I immediately researched it.
[Screenshots compared the two renderings: what it currently looks like, rendered in Firefox v3.0.3, versus what it looks like in Chrome v0.2.149.30, which is apparently correct according to the CSS guidelines.]
So I researched what was causing the layout glitch, assuming it was my code, and discovered it is actually a rendering bug in Firefox and IE, not Chrome (I think)! Basically, DIV’s with top margins transfer their margins to their parent DIVs, as is explained here:
Note that adjoining vertical margins are collapsed to use the maximum of the margin values. Horizontal margins are not collapsed.
The text there isn’t exactly clear cut, but it seems to support my suggestion that Chrome has it right. Here is an example, which renders properly in Chrome, but not IE and Firefox.
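The original demo file isn’t reproduced here, but it was along these lines (a reconstruction based on the description below, with hypothetical sizes):
<div style="background:red; width:200px; height:200px;">
	<div style="background:blue; width:150px; height:150px;">
		<div style="background:green; width:100px; height:100px; margin-top:25px;"></div>
	</div>
</div>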
In the above example, the green box’s top should be directly against the blue box, and the blue box inherits the margin and is pushed away from the top of the red box.
Honestly, I think this little margin-top caveat is quite silly and doesn’t make sense. Why collapse the margins when it would make more sense to just use the box model so the child has a margin against its parent? Go figure.
So to fix the problem, I ended up using “padding-top” on the parent instead of “margin-top” on the child. Blargh.
This isn’t the first bug I’ve discovered in Firefox either (which I usually submit to Firefox’s bugzilla).
At least one of the worst bugs I’ve submitted (which, I found out, had already been submitted in the past) has been fixed. “Address bar should show path/query %-unescaped so non-ASCII URLs are readable” was a major internationalization problem, which I believe was a deal breaker for anyone using Firefox in any language that isn’t English. Basically, any non-ASCII character in the address bar was escaped with %HEXVALUE instead of showing the actual character. Before Firefox got out an official bug fix, I had been fixing this with a nifty Firefox add-on, Locationbar2, which I still use as it has a lot of other nifty options.
One bug that has not yet been fixed, which I submitted almost 2 years ago (it has been around for almost 4 years), is “overflow:auto gets scrollbar on focused link outline”. I wrote up the following document on this when I submitted it to Mozilla:
I put this in an IFRAME because for some reason the bug didn’t show up when I inlined this HTML, go figure. The font size on the anchor link also seems to matter now... I do not recall it mattering before.
At least Firefox (and Chrome) are still way WAY more on the ball than IE.
Edit on 2009-7-26: The margin-top bug has been fixed on Firefox (not sure which version it happened on, but I’m running version 3.0.12 currently).
How can this be financially feasible for anyone?!?
I’ve apparently had an incorrect view on the exact schema and costs of how domain registration works. I had always assumed that to become a registrar (the companies that normal people register domains through) of any .COM domain, you just had to get accredited by ICANN, and then pay $0.20 per domain. However, accreditation is an expensive and tough process, including (taken verbatim from the link):
US$2,500 non-refundable application fee, to be submitted with application.
US$4,000 yearly accreditation fee due upon approval and each year thereafter.
Variable fee (quarterly) billed once you begin registering domain names or the first full quarter following your accreditation approval, whichever occurs first. This fee represents a portion of ICANN’s operating costs and, because it is divided among all registrars, the amount varies from quarter to quarter. Recently this fee has ranged from US$1,200 to US$2,000 per quarter.
Transaction-based gTLD fee (quarterly). This fee is a flat fee (currently $0.20) charged for each new registration, renewal or transfer. This fee can be billed by the registrar separately on its invoice to the registrant, but is paid by the registrar to ICANN.
So I had thought that becoming an accredited .COM registrar would pay itself off in the first year if you had ~1,177 domains registered...
BASE FIRST YEAR FEE=$2500 application + $4000 yearly + ~$1500 ICANN operation fee = $8000
PER DOMAIN DIFFERENCE= $7.00 to register a domain at a good registrar - $0.20 ICANN FEE = $6.80 savings per domain
TO BREAK EVEN= BASE FIRST YEAR FEE / PER DOMAIN DIFFERENCE = $8000 / $6.80 = ~1,177 domains
but unfortunately, I was incorrect: you ALSO have to pay Verisign (who runs the .COM TLD) a hefty fee per domain.
So once you become an accredited ICANN registrar, you have to hook your system up to Verisign, who charges an additional $6.42 per domain (on top of the $0.20 ICANN fee). Even worse is that they require you to pay all of their fees up front for the number of domains you plan to register on a yearly basis!!!!
Taking into account these new findings, it would actually take ~21,053 domains (with PER DOMAIN DIFFERENCE being adjusted to $7.00-$0.20-$6.42=$0.38) to break even the first year when becoming your own registrar (as opposed to going through another registrar), YIKES!
I’ve always personally recommended gkg.net as a registrar, but their registration prices recently took a major hike, like most registrars’, due to Verisign raising its per-domain fee. I may have to reevaluate registrars at some point because of this.
The day that truly tests if your servers can handle the load
I’m not going to get into politics in this post (nor will I generally ever); I just wanted to point out a few things that I saw on the Internet on election night that made me smile :-) .
Wikipedia actually errored out when I tried to get to Barack Obama’s page soon after it was announced he won.
I was also really surprised that Fox News had up the following picture/announcement mere minutes after all the other stations reported Obama had won (when it was still only projections). I would have thought Fox News would hold out on announcing it to their viewers until it was more sure...
Never rely solely on information you receive from untrusted sources
One of the most laughable aspects of client/server* systems is client side based security access restrictions. What I mean by this is when credentials and actions are not checked and restricted on the server side of the equation, only on the client side, which can ALWAYS be bypassed.
To briefly explain why it is basically insane to trust a client computer: ANY multimedia, software, data, etc. that has touched a person’s computer is essentially now their property. Once something has been on or through a person’s computer, the user can make copies, modify it, and do whatever the heck they want with it. This is how the digital world works. There are ways to help stop copying and modification, like hashes and encryption, but most of the ways in which things are implemented nowadays are quite fallible. There may be, for example, safeguards in place to only allow a user to use a piece of software on one certain computer or for a certain amount of time (DRM [Digital Rights Management]), but these methods are ALWAYS bypassable.
A long time ago at an IGDA [International Game Developers Association] meeting (I only ever went to the one, unfortunately :-\), I learned an interesting truth from the lecturer that hadn’t occurred to me before: companies that make games and other software [usually] know it will sooner or later be pirated/cracked**. The true intention of software DRM is to make the software hard enough to crack that the crackers give up, and to make cracking take long enough that people stop waiting for a free copy and go ahead and buy it. By the time a piece of software is cracked (if it takes as long as they hope), the companies know the majority of the remaining holdouts usually wouldn’t have bought it anyways. Now I’m done with the basic explanation of client side insecurities; back to the real reason for this post.
While it is actually proper to program safeguards into client side software, you can never rely on them for true security. Security measures should always be duplicated in both client and server software. There are two reasons off the top of my head for implementing security access restrictions on the client side. The first is to help remove strain on servers: there is no point in asking a server if something is valid when the client can immediately confirm that it isn’t. The second reason is speed: it’s MUCH quicker if a client can detect a problem and instantly inform the user than having to wait for a server to answer. Though each round trip is usually imperceptible to the user, they can really add up.
So I thought I’d give a couple of examples of this to help you understand more where I’m coming from. This is a very big problem in the software industry. I find exploitable instances of this kind of thing on a very regular basis. However, I generally don’t take advantage of such holes, and try to inform the companies/programmers if they’ll listen. The term for this is white hat hacking, as opposed to black hat.
First, a very basic example. Let’s say you have a folder on your website “/PersonalPictures” that you wanted to restrict access to with a password. The proper way to do it would be to restrict access to the whole folder and all files in it on the server side, requiring a password be sent to the server to view the contents of each file. This is normally done through Apache httpd (the most utilized web server software) with an “.htaccess” file and the mod_auth (authentication) module. The improper way to do it would be a page that forwarded to the “hidden” section with a JavaScript script like the following.
if(prompt('Please enter the password')=='SecretPassword')
	document.location.href='/PersonalPictures';
The problem with this code is twofold (besides the fact it pops up a request window :-) ). First, the password is exposed in plain text to the user. Fortunately, passwords are usually not as easy to find as this, but I have found passwords in web pages and Flash code before with some digging (yes, Flash files (and Java!) are 100% decompilable to their original source code, sans comments). The second problem is that once the person goes to the URL “/PersonalPictures”, they can get back there and to all files inside it without the password, and also give it freely to others (no need to mention the fact that the URL is written in plain text here, as it’s the same as with the password). This specific problem with JavaScript was much more prevalent in the old days when people ran their web pages through free hosting sites like Geocities (now owned and operated by Yahoo) which didn’t allow for proper password protection.
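For contrast, the proper .htaccess approach mentioned above is only a few lines (a minimal sketch; the password file path is a placeholder, and the file itself is created with the htpasswd utility):
AuthType Basic
AuthName "Personal Pictures"
AuthUserFile /home/USER/.htpasswd
Require valid-user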
This kind of problem is still around on the web, though it morphed with the times into a new form. Many server side scripts I have found across the Internet assume their client side web pages can take care of security and ignore the necessary checks in the server scripts. For example, very recently I was on a website that only allowed me to add a few items to a list. The way it was done is that there was a form with a textbox that you submitted every time you wanted to add an entry to the list. After submitting, the page was reloaded with the updated list. After you added the maximum allowed number of items to the list, when the page refreshed, the form to add more was gone. This is incredibly easy to bypass however. The normal way to do this would be to just send the modified packets directly to the server with whatever information you want in it. The easier method would be to make your own form submission page and just submit to the proper URL all you want. The Firebug extension for Firefox however makes this kind of thing INCREDIBLY easy. All that needs to be done is to add an attribute to the form to send the requests to a new window “<form action=... method=... target=_blank>”, so the form is never erased/overwritten and you can keep sending requests all you want. Using Firebug, you can also edit the values of hidden input boxes for this kind of thing.
AJAX (Asynchronous JavaScript and XML - A tool used in web programming to send and receive data from a server without having to refresh a page) has often been lampooned as insecure for this kind of reason. In reality, the medium itself is not insecure at all; it’s just how people use it.
As a matter of fact, the majority of my best and most fun Ragnarok hacking was done with these methods. I just monitored the packets that came in and out of the system, reverse engineered how they were all structured, then made modifications and resent them myself to see what I could do. With this, I was able to do things like (These should be most of the exploits; listed in descending order of usefulness & severity):
Duplicate items
Crash the server (It was never fixed AFAIK, but I stopped playing 5+ years ago. I just put that it was fixed on my site so people wouldn’t look for it ^_^; )
Warp to any map from any warp location (warp locations are only supposed to link to 1 other map)
Spoof your name during chats (so you could pretend someone else was saying something - Ender’s game, anyone? ^_^)
Use certain skills of other classes (I have pictures up of my swordsman using merchant skills to run a selling shop)
Add skills points to an item on your skill tree that is not yet available (and use it immediately)
Warp back to save point without dying
Talk to NPCs on a map from any location on that map, and sometimes from other maps (great for selling items when in a dungeon)
Attack with weapons much quicker than was supposed to be allowed
Use certain skills on creatures from any location on a map no matter how far they are
Equip any item in any spot (so you could equip body armor on your head slot and get much more free armor defense points)
Run commands on your party/guild and in chat rooms as if you were the leader/admin
Roll back a character’s stats to when you logged on that session (part of the dupe hack)
Bypass text repetition, length, and curse filters
Find out user account names
The original list is here; it should contain most of what I found. I took it down very soon after putting it up (replacement here) because I didn’t want to explicitly screw the game over with people finding out about these hacks (I had a lot of bad encounters with the company that ran the game; they refused to acknowledge or fix existing bugs when I reported them). There were so many things the server didn’t check just because the client wasn’t allowed to do them naturally.
Here are some very old news stories I saved up for when I wrote about this subject:
Just because you don’t give someone a way to do something doesn’t mean they won’t find a way.
*A server is a computer you connect to and a client is the connecting computer. So all you people connecting to this website are clients connecting to my web server.
**“Cracked” usually means to make a piece of software usable when it is not supposed to be, bypassing the DRM
Bad Programming: Only using file extensions as an indicator
According to a Microsoft KB article titled “Virtual directory names with executable extensions are not used correctly”, using a virtual folder ending in an executable extension (like .com, .exe, .dll, or .sh) under an IIS web server [IIS is Microsoft’s Internet Information Services server suite] makes the contents inside the folder unviewable. This behavior itself is kind of silly, as you’d assume a web server would always check to see whether something is a file or a folder first.
Unfortunately, this doesn’t apply to just virtual folders, but all folders under an IIS web server, as I found out a few years ago when I backed up a site that I knew would be taken down very soon (ironically, because the company [SysInternals] was being taken over by Microsoft) and mirrored it on my Home Server, which runs IIS.
The solution I used was to add a character (in my case an underscore “_”) to the end of all the directory names ending in “.com”, and then do a global regular-expression replace through all files in the mirror to update any references to those directories.
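A sketch of that process from a shell (the directory name is illustrative, and in practice the sed pattern would need to be scoped to the actual mirrored directory names):
#Rename the directory, then rewrite references to it in the mirrored files (repeat per ".com" directory)
mv DIRNAME.com DIRNAME.com_
grep -rl 'DIRNAME\.com/' . | xargs sed -i 's|DIRNAME\.com/|DIRNAME.com_/|g'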
Yesterday I wrote a bit about the DNS system being rather fussy, so I thought today I’d go a bit more into how DNS works, and some good tools for problem solving in this area.
First, some technical background on the subject is required.
A network is simply a group of computers hooked together to communicate with each other. In the old days, all networking was done through physical wires (called the medium), but nowadays much of it is done through wireless connections. Wired networking is still required for the fastest communications, and is especially important for major backbones (the super highly utilized lines that connect networks together across the world).
A LAN is a local network of all computers connected together in one physical location, whether it be a single room, a building, or a city. Technically, a LAN doesn’t have to be localized in one area, but it is preferred, and we will just assume it is so for argument’s sake :-).
A WAN is a Wide (Area) Network that connects multiple LANs together. This is what the Internet is.
The way one computer finds another computer on a network is through its IP Address [hereby referred to as IPs in this post only]. There are other protocols, but this (TCP/IP) is by far the most widely utilized and is the true backbone of the Internet. IPs are like a house’s address (123 Fake Street, Theoretical City, Made Up Country). To explain it in a very simplified manner (this isn’t even remotely accurate, as networking is a complicated topic, but this is a good generalization), IPs have 4 sections of numbers ranging from 0-255 (1 byte each). For example, 67.45.32.28 is an (IPv4) IP. Each number in that address is a broader location, so the “28” is like a street address, “32” is the street, “45” is the city, and “67” is the country. When you send a packet from your computer, it goes to your local (street) router, which then passes it to the city router, and so on, until it reaches its destination. If you are in the same city as the final destination of the packet, then it wouldn’t have to go to the country level.
The final important part of networking (for this post) is the domain name system (DNS) itself. A domain is a label for an IP address, like calling “1600 Pennsylvania Avenue” “The White House”. As an example, “www.castledragmire.com” just maps to my web server at “209.85.115.128” (this is the current IP; it will change if the site is ever moved to a new server).
Next is a brief lesson on how DNS itself works:
The root DNS servers (a.root-servers.net through m.root-servers.net) point to the servers that hold top-level-domain information (.com, .org, .net, .jp, etc.)
Examples of these servers are as follows:
au: ns1.audns.net.au
biz: E.GTLD.biz
ca: CA04.CIRA.ca
cn: A.DNS.cn
com & net: A.GTLD-SERVERS.NET
de: Z.NIC.de
eu: U.NIC.eu
info: B9.INFO.AFILIAS-NST.ORG
org: TLD1.ULTRADNS.NET
tv: C5.NSTLD.COM
Next, these top-level-domain name servers (like A.GTLD-SERVERS.NET through M.GTLD-SERVERS.NET for .com) hold two main pieces of information for ALL domains under their jurisdiction:
The registrar where the domain was registered
The name server(s) that are responsible for the domain
Only registrars can talk to these TLD servers, so you have to go through your registrar to change the name server information.
The final lowest rung in the DNS hierarchy is name servers. Name servers hold all the actual addressing information for a domain and can be run by anyone. The 2 most important (or maybe relevant is a better word...) types of DNS records are:
A: There should be many of these, each pointing a domain or subdomain (castledragmire.com, www.castledragmire.com, info.castledragmire.com, ...) to a specific IP address (version 4)
SOA: Start of Authority - There is only one of these records per domain, and it specifies authoritative information including the primary name server, the domain administrator’s email, the domain serial number, and several timeout values relating to refreshing domain information.
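As an illustration, here is roughly what those records look like in a BIND-style zone file, using this site’s values from the dig output further below (the serial, timeout values, and admin email are made up):
$TTL 14400
castledragmire.com.	IN	SOA	ns3.deltaarc.com. hostmaster.castledragmire.com. (
		2008082301	; serial
		86400	; refresh
		7200	; retry
		3600000	; expire
		14400 )	; negative-answer TTL
castledragmire.com.	IN	NS	ns3.deltaarc.com.
castledragmire.com.	IN	NS	ns4.deltaarc.com.
castledragmire.com.	IN	A	209.85.115.128
www.castledragmire.com.	IN	A	209.85.115.128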
Now that we have all the basics down, on to the actual reason for this post. It’s really a nuisance trying to explain to people why their domain isn’t working, or is pointing to the wrong place. So here’s why it happens!
Back in the old days, it often took days for DNS propagation to happen after you made changes at your registrar or elsewhere, but fortunately, this problem is mostly a thing of the past. The reason it took so long is that ISPs and/or routers cached domain lookups and only refreshed them according to the metrics in the SOA record mentioned above, as they were supposed to. This was done for network speed reasons, as I believe older OSs might not have cached domains (wild speculation), and ISPs didn’t want to look up the address for a domain every time it was requested. Now, though, I rarely see caching on any level except at the local computer: not only does the OS cache lookups, but some programs, like Firefox, cache domains themselves.
So the answer for when a person is getting the wrong address for a domain, and you know it is set correctly, is usually to just reboot. Clearing the DNS cache works too (for the OS level), but explaining how to do that is harder than saying “just reboot” ^_^;.
To clear the DNS cache in XP, enter the following into your “run” menu or in the command prompt: “ipconfig /flushdns”. This does not ALWAYS work, but it should work.
If your domain is still resolving to the wrong address when you ping it after your DNS cache is cleared, the next step is to see what name servers are being used for the information. You can do a whois on your domain to get the information directly from the registrar who controls the domain, but be careful where you do this, as you never know what people are doing with the information. For a quick and secure whois, you can use “whois” from your Linux command line, which I have patched through to a web script here. This script gives both normal and extended information, FYI.
Whois just tells you the name servers that you SHOULD be contacting, it doesn’t mean these are the ones you are asking, as the root DNS servers may not have updated the information yet. This is where our command line programs come into play.
In XP, you can use “nslookup -query=hinfo DOMAINNAME” and “nslookup -query=soa DOMAINNAME” to get a domain’s name servers, and then “nslookup NAMESERVERDOMAINNAME” to get the IP the name server points to.
Nslookup is also available in Linux, but Linux has a better tool for this, as nslookup itself doesn’t always seem to give the correct answers, for some reason. So I recommend you use dig if you have it or Linux available to you. So with dig, we just start at the root name servers and work our way up to the SOA name server to get the real information of where the domain is resolving to and why.
root@www [~]# dig @a.root-servers.net castledragmire.com
; <<>> DiG 9.2.4 <<>> @a.root-servers.net castledragmire.com
; (2 servers found)
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 5587
;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 13, ADDITIONAL: 14
;; QUESTION SECTION:
;castledragmire.com. IN A
;; AUTHORITY SECTION:
com. 172800 IN NS H.GTLD-SERVERS.NET.
com. 172800 IN NS I.GTLD-SERVERS.NET.
com. 172800 IN NS J.GTLD-SERVERS.NET.
com. 172800 IN NS K.GTLD-SERVERS.NET.
com. 172800 IN NS L.GTLD-SERVERS.NET.
com. 172800 IN NS M.GTLD-SERVERS.NET.
com. 172800 IN NS A.GTLD-SERVERS.NET.
com. 172800 IN NS B.GTLD-SERVERS.NET.
com. 172800 IN NS C.GTLD-SERVERS.NET.
com. 172800 IN NS D.GTLD-SERVERS.NET.
com. 172800 IN NS E.GTLD-SERVERS.NET.
com. 172800 IN NS F.GTLD-SERVERS.NET.
com. 172800 IN NS G.GTLD-SERVERS.NET.
;; ADDITIONAL SECTION:
A.GTLD-SERVERS.NET. 172800 IN A 192.5.6.30
A.GTLD-SERVERS.NET. 172800 IN AAAA 2001:503:a83e::2:30
B.GTLD-SERVERS.NET. 172800 IN A 192.33.14.30
B.GTLD-SERVERS.NET. 172800 IN AAAA 2001:503:231d::2:30
C.GTLD-SERVERS.NET. 172800 IN A 192.26.92.30
D.GTLD-SERVERS.NET. 172800 IN A 192.31.80.30
E.GTLD-SERVERS.NET. 172800 IN A 192.12.94.30
F.GTLD-SERVERS.NET. 172800 IN A 192.35.51.30
G.GTLD-SERVERS.NET. 172800 IN A 192.42.93.30
H.GTLD-SERVERS.NET. 172800 IN A 192.54.112.30
I.GTLD-SERVERS.NET. 172800 IN A 192.43.172.30
J.GTLD-SERVERS.NET. 172800 IN A 192.48.79.30
K.GTLD-SERVERS.NET. 172800 IN A 192.52.178.30
L.GTLD-SERVERS.NET. 172800 IN A 192.41.162.30
;; Query time: 240 msec
;; SERVER: 198.41.0.4#53(198.41.0.4)
;; WHEN: Sat Aug 23 04:15:28 2008
;; MSG SIZE rcvd: 508
root@www [~]# dig @a.gtld-servers.net castledragmire.com
; <<>> DiG 9.2.4 <<>> @a.gtld-servers.net castledragmire.com
; (2 servers found)
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 35586
;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 2, ADDITIONAL: 2
;; QUESTION SECTION:
;castledragmire.com. IN A
;; AUTHORITY SECTION:
castledragmire.com. 172800 IN NS ns3.deltaarc.com.
castledragmire.com. 172800 IN NS ns4.deltaarc.com.
;; ADDITIONAL SECTION:
ns3.deltaarc.com. 172800 IN A 216.127.92.71
ns4.deltaarc.com. 172800 IN A 209.85.115.181
;; Query time: 58 msec
;; SERVER: 192.5.6.30#53(192.5.6.30)
;; WHEN: Sat Aug 23 04:15:42 2008
;; MSG SIZE rcvd: 113
root@www [~]# dig @ns3.deltaarc.com castledragmire.com
; <<>> DiG 9.2.4 <<>> @ns3.deltaarc.com castledragmire.com
; (1 server found)
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 26198
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 0
;; QUESTION SECTION:
;castledragmire.com. IN A
;; ANSWER SECTION:
castledragmire.com. 14400 IN A 209.85.115.128
;; AUTHORITY SECTION:
castledragmire.com. 14400 IN NS ns4.deltaarc.com.
castledragmire.com. 14400 IN NS ns3.deltaarc.com.
;; Query time: 1 msec
;; SERVER: 216.127.92.71#53(216.127.92.71)
;; WHEN: Sat Aug 23 04:15:52 2008
;; MSG SIZE rcvd: 97
Linux also has the “host” command, but I prefer and recommend “dig”.
And that’s how you diagnose DNS problems! :-). For reference, two common DNS configuration problems are not having your SOA and NS records properly set for the domain on your name server.
Another of my favorite XP hacks is modifying domain addresses through XP’s Hosts file. You can remap where a domain points on your local computer by adding an IP address followed by a domain in the “c:\windows\system32\drivers\etc\hosts” file.
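A hosts entry is just an IP followed by a domain, for example (using this site’s address from earlier in the post):
209.85.115.128	www.castledragmire.com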
Domain names are locally controlled, looked up, and cached on your computer at the OS level, so there are simple hacks like this for other OSs too.
I often utilize this solution as a server admin who controls a lot of domains (over 100, and I control most of them at the registrar level too ^_^). The domain system across the web is incredibly finicky and prone to problems if not perfectly configured, so this hack is a wonderful time saver and diagnostic tool until things resolve and work properly.
Alas at insecure systems that are important to us all
I’ve been waiting to hear this kind of news for years: Domains May Disappear After Search. I’ve often told people, for this kind of reason, to watch where they register domains, as I believe some registrars like register.com are not very scrupulous and would do this exact kind of thing. I personally use GKG for all my domain registration needs, though they recently ticked me off with a policy I ran into in which you can’t modify any information on a domain for 6 months after renewing an expired domain with a credit card. Their tech support also isn’t very good, but they have fair prices and excellent domain management interfaces.
Another huge domain problem is “domain tasting”, in which domains can be registered and then refunded within a five day grace period. Unethical people will use this to register expired domains and keep them if they get enough hits. After all, domains only really cost 25 cents to register if you are an accredited ICANN (Internet Corporation for Assigned Names and Numbers) registrar, which costs something like $3,000 to obtain. This is a big problem if anyone lets their domain expire. Fortunately, some services, like GKG, give you a grace period to reregister your domain after it expires before others can try to claim it.
It’s great to have standards so everything can play together nicely. I’ve even heard IE8 should pass the Acid2 test with “Web Standard Compatibility” mode turned on, and it has been confirmed for a long time that FireFox3 will (finally) pass it. Microsoft, of course, has a bit of a problem with backwards compatibility, since everyone had to use hacks in the past to “conform” to their old IE software, which was, and still is, filled with bugs and errors; with IE version upgrades, they need to not break those old websites. This technically shouldn’t be a problem if people properly mark their web pages with compatible versions of HTML, XHTML, etc., but who wants to deal with that? Compatibility testing and marking, especially in the web world, is a serious pain in the ass, which I can attest to after working with web site creation for many years, something I am not very proud of :-). I am a C++ advocate, and Java/.NET hater, and yes, I’ve worked heavily in all of them.
Anyways, some new web standards even break old ones, for example:
<font><center></font></center>
is no longer allowed. Improperly nested tags (closing an outer element before the elements inside it are closed) are no longer possible in certain circumstances in HTML4, and definitely not allowed in XHTML, as that would be specifically against what XML was designed for. This was one of my favorite parts of original HTML too, in that you could easily combine formatting elements in different sections and orders without having to redefine all previous formats each time. Though CSS does help with this, it has its own quirks too, which I consider to be a rather large failing in its design. I should be expanding more on that later on.
And then there’s this one other oddity that has always bugged me. Two standard HTML colors are “gray” and “lightgrey”... if that’s not a little confusing... and for the record, “grey” and “lightgray” do not work in IE.
Further, XML, while it has its place and reasons, really, really bugs me: it slows things down and is overused where it’s not needed just because it is the “popular” thing to do. Come on people, is it that hard to create and use interfaces for binary compiled data? Or even ini-type files, for crying out loud... Until we have specific hardware designed and implemented to parse XML, or better text parsing in general, I will continue to consider XML a step backwards, a very unfortunate recurring reality in the software world.