Thursday, January 29, 2015

A Ubee nightmare: Maybe your cable modem is the problem after all

For months, I had been involved with a frustrating struggle to get my Time Warner internet service to work properly.

During this time, I made quite a few phone calls to Time Warner, patiently listened to customer service reps explain, yet again, how to unplug my cable modem, stopped by the local office a number of times to swap out the modem, downgraded my service to the point where I could justify paying for maybe a third of a month of reliable service, and had service technicians tell me the problem was the cables running through the walls, and that everything had to be ripped out, and so on and so forth.

And, I sort of believed them: After all, they showed me that my modem was transmitting at upstream power levels around 55.2 dBmV to 58.5 dBmV, or, in one particular case, close to 60 dBmV. Clearly, this was above the recommended range.

Every time I tried to point out that my problems started when my cable modem was "upgraded" to a Ubee model, and that every time they replaced my modem, it was another Ubee, I was told that the problem had nothing to do with the modem.

Of course, I thought, it makes sense. After all, everyone around here is getting the same modems. And, obviously, their connections are fine. It must be the cabling.

This time it got so bad that I even considered getting a prepaid 4G hotspot.

After yet another Ubee replacement from Time Warner failed to even establish a connection after hours of trying, I decided maybe I had been right to be suspicious of the crappily constructed, ugly thing that cannot even sit straight.

I stopped by the local Walmart, and got myself the most reasonable looking cable modem among the three that were on the shelves: A ZOOM 5341J.

I do not know anything about cable modems. I did not do any research about this purchase. It was a desperation move.

I came home, and called Time Warner one more time, and, for a change, got someone who just went along with me instead of forcing me to follow the script (of course, it took three attempts to actually navigate the voice prompt system to finally get connected, but, that's a whole different matter).

He told me the modem had to be powered up and connected to the cable line for him to be able to activate it. With potential for embarrassment always on my mind, I hopelessly went through the motions of connecting the cables, and saw first the DS light turn solid, then the US light, then the Online light, which stayed that way. I gave the rep the MAC address of the cable modem, he went away for a few minutes, the modem's lights flashed a few times, and it got its new IP address.

Suddenly, I could once again navigate to any site I wanted.

A quick look at the modem's outdated and buggy management screen revealed downstream power levels ranging between -0.2 dBmV and 0.6 dBmV (zero is optimal), and an SNR around 42 dB (what a great coincidence ;-). I used to consider myself lucky if I got a downstream SNR of about 20 with the Ubees.

Upstream was the real shocker though. The power levels ranged between 38.8 dBmV and 41.3 dBmV.

Yes, that is down from not being able to acquire or maintain a connection at 55.2 dBmV and higher.

The moral of this story is not that the ZOOM 5341J is necessarily a great cable modem: Maybe it is, maybe it isn't. I do not know enough about these things to be able to judge that.

If you are struggling to keep a decent internet connection with Ubee cable modems, consider stopping by a local store and grabbing a cable modem compatible with your service provider, and giving it a shot. All you have to lose is the monthly equipment fee, and the frustration that kills you from the inside as you watch that stupid "Ready" light blink for hours on end.

Saturday, December 27, 2014

UTF-8 output in cmd.exe: perl 5 vs perl 6

I have mentioned before that there are some problems with UTF-8 output in cmd.exe with perl 5.

When I successfully built a perl 6 binary, I was excited to see it produce perfectly good UTF-8 output.

To recap, here is output from perl 5:

versus output from the perl 6 I built:

I am assuming the problem is obvious: With perl 5, you get some trailing characters repeated.

At first, I thought perl 6 produced correct output because it was doing "something" "right", for various values of "something" and "right".

Then, David Farrell reminded me that piping perl 6 output through more results in garbled output:

Of course, piping the output through Cygwin's less works as expected:

You can guess the source of the difference between perl 6 and perl 5 now:

Yes, perl 6 uses LF instead of CRLF on Windows.

Let's see what happens if I tack on extra CRs:

Yup, more now sees a CRLF following each digit, and doesn't cause garbled output.
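The experiment can be approximated from perl 5 with in-memory handles (a sketch; the digits stand in for whatever the original one-liners printed):

```perl
use strict;
use warnings;

# Simulate both behaviors with in-memory handles: a :raw handle
# emits bare LFs (what this perl 6 build does on Windows), while
# tacking on a CR by hand produces the CRLF that more expects.
my $bare = '';
open my $fh, '>:raw', \$bare or die $!;
print {$fh} "$_\n" for 1 .. 3;
close $fh;

die "expected bare LF line endings" unless $bare eq "1\n2\n3\n";

my $crlf = join '', map { "$_\r\n" } 1 .. 3;
die "expected CRLF line endings" unless $crlf eq "1\r\n2\r\n3\r\n";
```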

Of course, we knew that giving up CRLF translation fixed perl 5 UTF-8 output in cmd.exe.
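For reference, that perl 5 workaround boils down to replacing the default :crlf layer on STDOUT with a raw UTF-8 layer (a sketch; whether this is appropriate depends on what else the script writes to STDOUT):

```perl
use strict;
use warnings;

# Push a raw UTF-8 layer onto STDOUT, dropping the default CRLF
# translation that interacts badly with UTF-8 output in cmd.exe.
binmode STDOUT, ':raw:encoding(UTF-8)'
    or die "Cannot set STDOUT layers: $!";

print "\x{263A}\n";   # emits raw UTF-8 bytes followed by a bare LF
```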

I am just happy to have figured out the difference between perl 6 and perl 5.

I still have no idea what should be fixed, or where, except that I am uncomfortable with what seems to be an "everything is Unix" assumption in perl 6.

Wednesday, December 17, 2014

When bits don't stick: More portability bugs in Perl modules

First off, a heartfelt thank you to NY.pm and MongoDB for giving me the opportunity to talk a little about my adventures in compiling perl with Microsoft Visual Studio 2013 Community Edition.

The process of putting together the talk led me to discover two more of those pesky bugs caused by platform differences in handling files. These are more interesting than the directory separator character style issues that I have been ranting about, mostly because of how hard they were to see just by looking at the code. I will submit patches, obviously, but my hope with these blog posts is to re-emphasize that these platform-specific differences do exist, whether we like it or not, and that if more people are aware of them, fewer such bugs will make it into CPAN modules in the first place.

Bug 1: Corrupt images with App::revealup

I decided to put together the presentation using App::revealup. As I was reviewing my presentation, I noticed various screenshots would display improperly, or not at all. PNG images, in particular, were not displaying at all. It didn't matter which browser I used, nor whether I was using Strawberry's perl or the perl I had built. I could not figure out the problem, but I did notice that JPGs were mostly displayed intact, if with occasional corruption, and sometimes clearly visible artifacts.

The screenshots below show a PNG that does not display, a JPG that does not display, and the same image saved with a different compression level seemingly displaying perfectly fine:

There was one image, a screenshot of the Visual Studio's GUI IDE window, which, no matter what I tried, did not display properly. There was always some visible problem with it. I tried several compression levels, and decided to stick with the one that looked the least bad.

I just could not figure out what was wrong, but I also wanted to finish the presentation.

About an hour or so before the talk, however, something occurred to me.

I decided to check what would happen if I used a Windows BMP. Compare the screenshots below, showing a JPG and a BMP, respectively:

By now, knowing what I know about file and compression formats, I was pretty certain I knew what was going on. As one final test, I decided to temporarily serve the exact same presentation from one of my Linode VPSs.

Absolutely no problems there.

Mystery solved. However, I had to go give my talk, and I did not have a chance to locate the source of the problem in the module. I'll come back to this at the end of the post.

For now, see if you can tell what's going on.

Bug 2: Failing tests with Complete::Util

At the end of the talk, I decided to install a recently uploaded module from CPAN. After looking briefly at the list of recent uploads, I settled on App::wordlist, which looked innocent enough. Unfortunately, cpanm App::wordlist failed, due to a failing test in Complete::Util.

The test script complete_program.t tests the routine Complete::Util::complete_program. A brief look at both the module and the test script reveals a lot of opportunities for making sure platform specific paths don't cause problems, but that is not where the bug lies. After all, passing Unix style paths to Windows APIs works with no problems.

The test fails because of the last check in this line:

push @res, $_ if $_ =~ $word_re && !(-d "$dir/$_") && (-x _);

The test script creates the following files:

mkexe("$dir/dir1/prog1");
mkexe("$dir/dir1/prog2");
mkexe("$dir/dir2/prog3");

mkexe is defined above:

sub mkexe { write_file($_[0], ""); chmod 0755, $_[0] }

Here is what perldoc perlport has to say about that:

chmod

  • Only good for changing "owner" read-write access; "group" and "other" bits are meaningless. (Win32)
  • Only good for changing "owner" and "other" read-write access. (RISC OS)
  • Access permissions are mapped onto VOS access-control list changes. (VOS)
  • The actual permissions set depend on the value of the CYGWIN variable in the SYSTEM environment settings. (Cygwin)
  • Setting the exec bit on some locations (generally /sdcard) will return true but not actually set the bit. (Android)

On Windows, if you want to create a zero-byte executable, you might want to give it the .bat extension. As far as I can see, that won't cause any more portability problems than the module already might have, but at least, it won't cause a spurious test failure on Windows.
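A portable version of the helper might look like this (a sketch; the .bat extension and the $^O check are my additions, not code from the actual test script):

```perl
use strict;
use warnings;

# Create an empty file that passes the -x test on the current
# platform: on Windows, executability is decided by the file
# extension, so use .bat there; elsewhere, set the exec bits.
sub mkexe {
    my ($path) = @_;
    $path .= '.bat' if $^O eq 'MSWin32';
    open my $fh, '>', $path or die "Cannot create '$path': $!";
    close $fh;
    chmod 0755, $path;   # meaningful on Unix, ignored for exec bits on Win32
    return $path;
}
```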

Again, I will, of course, submit a patch in due course, but I also find it helpful to document the process of exploration. After all, I would like to have something to show for the effort of diving into someone else's code, trying to figure out what is failing and why.

Back to App::revealup

I am assuming by now you have already realized that the images being served by App::revealup were being corrupted through CRLF-translation. That is why the bitmap file displays with just a color shift in one region, why some JPEG files display with varying degrees of corruption, why some JPEG files don't display at all, and why PNG files uniformly fail to display.

Here is why that happens.

When App::revealup responds to your request, it does a few checks, and if the resource exists on the local filesystem, it returns the contents of said resource:

return App::revealup::util::path_to_res($path) if $path->exists;

The $path variable here is an instance of Path::Tiny, which provides several slurp methods.

App::revealup uses the plain slurp:

sub path_to_res {
    my $path = shift;
    if( $path && $path->exists ) {
        my $c = $path->slurp();
        my $meta = ['Content-Length' => length $c ];
        if( my $mime = MIME::Types->new->mimeTypeOf($path->basename) ){
            push @$meta, ('Content-Type' => $mime->type );
        }
        return [200, $meta , [$c]];
    }
    return [404, [], ['not found.']];
}

Path::Tiny documentation explains the slurp method really well.

Basically, the plain slurp call will do a:

local $/;
return scalar <$fh>;

which, on Windows, reads the file through the default :crlf layer, silently rewriting bytes that happen to look like line endings in what is really binary data.

You can see that clearly when you compare the local file with the copy retrieved from the App::revealup server on Windows:

Checking RFC2616, we note the following:

3.7.1 Canonicalization and Text Defaults

When in canonical form, media subtypes of the "text" type use CRLF as the text line break. HTTP relaxes this requirement and allows the transport of text media with plain CR or LF alone representing a line break when it is done consistently for an entire entity-body. HTTP applications MUST accept CRLF, bare CR, and bare LF as being representative of a line break in text media received via HTTP.

So, even for text files, I see no reason App::revealup ought to provide line-ending translation. The correct fix is to replace line 23 in App::revealup::util:

my $c = $path->slurp();

with

my $c = $path->slurp_raw();
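The difference between the two reads is easy to demonstrate with in-memory handles, since :crlf is exactly the layer a text-mode slurp goes through on Windows (a sketch; the byte string is illustrative):

```perl
use strict;
use warnings;

# Binary data that happens to contain a CRLF pair:
my $bytes = "\x89PNG\r\nIHDR";

# A read through the :crlf layer (what a plain slurp does on
# Windows) silently drops the CR ...
open my $text, '<:crlf', \$bytes or die $!;
my $mangled = do { local $/; <$text> };

# ... while a :raw read (what slurp_raw does) is byte-faithful.
open my $raw, '<:raw', \$bytes or die $!;
my $intact = do { local $/; <$raw> };

die "raw read should preserve every byte" unless $intact eq $bytes;
die "crlf layer should have dropped the CR"
    unless $mangled eq "\x89PNG\nIHDR";
```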


Sunday, December 14, 2014

Fix for Windows 8 fax API released by Microsoft

Earlier this year, I wrote about my problems sending faxes using Windows 8.1 Pro.

Apparently, there is a fix for that now.

I haven't tested it yet. It was included in the November 2014 update rollup for Windows RT 8.1, Windows 8.1, and Windows Server 2012 R2, and I have the updated FXST30.DLL on my system. Hope it works ;-)

Friday, December 12, 2014

HTTPS everywhere and H.R.4681 - Intelligence Authorization Act

The House quietly passed H.R. 4681. Section 309 of the act contains a gem:

SEC. 309. PROCEDURES FOR THE RETENTION OF INCIDENTALLY ACQUIRED COMMUNICATIONS.
(a) Definitions.--In this section:
(1) Covered communication.--The term ``covered communication'' means any nonpublic telephone or electronic communication acquired without the consent of a person who is a party to the communication, including communications in electronic storage.

(B) Limitation on retention.--A covered communication shall not be retained in excess of 5 years, unless--

(iii) the communication is enciphered or reasonably believed to have a secret meaning;

So, first of all, this section empowers government agencies to retain communications they have captured, without a court order, for five years.

On top of that, if the communication is encrypted, it can be retained indefinitely (the "reasonably believed to have a secret meaning" language is so ambiguous as not to constitute any kind of constraint on government).

Like many others, I thought it made sense to move to HTTPS everywhere. I have taken baby steps toward that end, and hope to make more progress early next year.

However, in light of the language of this statute, it is entirely possible that moving to HTTPS everywhere is going to give the U.S. government and assorted agencies the right to retain all communications they capture until such time as the encryption can be broken, or, more significantly, a case needs to be made against you.

While perfect forward secrecy may protect the content of the communication from being revealed, it is a double-edged sword: it also prevents an accused from voluntarily revealing such communication to show that its contents exonerate her from national security related offenses.

Therefore, if a government agency decides to target an individual, the mere existence of encrypted traffic between the browser of the accused and a "bad" web site can doom her fate. Since encrypted communications can be kept indefinitely, accusations can be made any number of years in the future, in a very different environment.

Alice: "But, but, I was only doing web searches about women's rights in Muslim countries when I clicked on that link to the al-Qaida recruitment web site."

Guvmnt: "Well, we captured 37,538,499 bytes of communication between you, and that web site. All of it is encrypted with perfect forward secrecy, and we can't tell what information you exchanged with the al-Qaida recruitment web site. But, if you were innocent, why would you engage in secret communications with the al-Qaida recruitment web site?!"

Your computer's IP address, and the IP addresses of the sites it communicates with are "meta information" which we know government agencies presume they have a right to collect without a court order. So, that is not new.

What seems to me to be new is the legal authority granted to government agencies to retain communications indefinitely when those communications are encrypted.

Tuesday, December 9, 2014

int main in C and hard coded directory separators in Perl

When I see a C program which features things like void main() or struct x *p = (struct x *) malloc (sizeof(struct x*)), alarm bells go off in my head, regardless of the stature of the person who wrote it.

I have decided that the Perl equivalent of those alarm bells is the unwarranted assumption that Unix style directory separators must work everywhere.

Looking at TestML::Runtime, I was horrified to find:

$self->{base} ||= $0 =~ m!(.*)/! ? $1 : ".";

This tries to extract the directory portion from the name of the executable by matching all characters up to a Unix style directory separator. If there is no Unix style directory separator in $0, it defaults to using the current directory, "." as the base.

So, what happens when this code is running via a test script invoked as perl t\some-test.t? Ooops! The test script cannot find its input files because it tries to look for them relative to the directory portion extracted above.
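Here is the failure in miniature (a sketch; $script stands in for $0 as cmd.exe would set it):

```perl
use strict;
use warnings;

# How $0 looks when the test is invoked as: perl t\some-test.t
my $script = 't\some-test.t';

# The pattern from TestML::Runtime only knows about forward
# slashes, so it falls through to the '.' default ...
my $base = $script =~ m!(.*)/! ? $1 : '.';
die "expected the fallback" unless $base eq '.';

# ... and input files are then sought relative to the current
# directory instead of the t\ directory.
```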

This wouldn't be so painful if people, more than two decades ago, had not already figured out how to parse a path into directory and file name components:

use File::Basename qw( dirname );
$self->{base} ||= dirname($0);

ought to work, no?!

Of course, it is probably more correct to use File::Spec->splitpath, but I assume the author does want that dot in case $0 does not have a directory portion.
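For completeness, a File::Spec based version that keeps the dot fallback might look like this (a sketch, not a patch I have tested against TestML):

```perl
use strict;
use warnings;
use File::Spec;

# splitpath understands the native separator style on each
# platform; $dir comes back empty for a bare script name.
my ($vol, $dir, $file) = File::Spec->splitpath($0);
my $base = length($dir) ? File::Spec->catpath($vol, $dir, '') : '.';
```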

I did not even know about the existence of TestML. It was a dependency for another module I did not know about, Pegex. All I did was try to install Inline::C.

My goal is not to fix individual modules.

My goal is to raise enough awareness of the interconnected issues that 1) the assumption that Unix style directory separators work everywhere is unwarranted; and 2) Perl has provided the necessary infrastructure to make this a non-issue for the longest time.

This way, if anyone other than me also tries to use Microsoft tools to build their own perl and modules, the experience will not be undermined by unforced errors.

I still can't get a smoke tester going on my Azure VM, and the free trial is about to run out. And, nine out of ten issues I have encountered have been related to some assumption about paths. Each issue is small, trivial even, but they are still problems that ought not to exist.

I prefer to submit pull requests instead of bug reports, but the prospect of figuring out Zilla::Dist was not enticing.

Monday, December 1, 2014

Yeah, you put me in my place real good!

On Reddit, I was admonished for trying to install dzil, and for my tardiness in filing a bug report against Moose. Frankly, I felt a little guilty. After all, why couldn't I have dedicated days to cloning, checking out, and familiarizing myself with umpteen distros, patching things, testing with my perl built with Visual Studio 2013 Community Edition as well as with various Strawberry, Cygwin, ArchLinux, and OSX perls, instead of making a list with the goal of getting to it when I had some spare time?

With that twinge of guilt, I fired up a ConEmu instance, and cloned the Moose repo containing Karen's latest bugfix in response to my blog posts. Yes, I had meant to get to it, but Moose wasn't the only module which used filenames in ways that directly contradict the advice in perldoc perlport.

So, I run perl Makefile.PL. This is what I get:

OK, I follow the advice. Here is what I get:

Translation: You suck you loser!

That is simply obnoxious.

Undeterred, I run nmake test:

OK, that I can understand. So, Moose developers assume the only compiler in the world is gcc, and therefore, use gcc options regardless of the actual compiler. Fine.

That happens in inc/MMHelper.pm:

sub ccflags_dyn {
    my $is_dev = shift;

    my $ccflags = q<( $Config::Config{ccflags} || '' ) . ' -I.'>;
    $ccflags .= q< . ' -Wall -Wdeclaration-after-statement'>
        if $is_dev;

    return $ccflags;
}

Yes, another patch I should submit when I get a chance. Right now, I just want to check if Karen's commit makes everything else go smoothly with the Microsoft toolchain. So, just replace if $is_dev with if 0.
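The eventual patch would presumably check what the compiler actually is instead of assuming gcc; something along these lines (a sketch; using $Config{gccversion} to detect gcc-compatible compilers is my assumption, not what any actual Moose patch does):

```perl
use strict;
use warnings;
use Config;

sub ccflags_dyn {
    my $is_dev = shift;

    my $ccflags = q<( $Config::Config{ccflags} || '' ) . ' -I.'>;

    # Only add the gcc-specific warning flags when the compiler
    # reports a gcc version; MSVC's cl.exe would reject them.
    $ccflags .= q< . ' -Wall -Wdeclaration-after-statement'>
        if $is_dev && $Config{gccversion};

    return $ccflags;
}
```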

Do another perl Makefile.PL.

you are using MSVC... my condolences. at (eval 67) line 11.
you are using MSVC... my condolences. at (eval 67) line 11.

OK … OK

Here is the result:

All tests pass.

And, that is the sad part.

Except for some compiler flags that could easily be special-cased, everything works!

But, why slap me in the face, and put me in my place for simply daring to use a different tool than the one you like?