
Tuesday, March 1, 2022

Radiosonde iGates are quite a mess, and they seriously need some fixing

UPDATE: The radiosonde data feeds have almost completely been moved to sondehub.org and the APRS-IS is no longer involved or affected by it. Thank you!

Saturday, February 19, 2022

Baofeng & BTech APRS-K1 & iPhone problems

TL;DR: It seems like many Baofeng radio units (UV-5R, UV-82, etc.) do not work with the aprs.fi iPhone app using the BTech APRS-K1 cable. They do not work with any iPhone app - I tried all the other APRS apps I have, and other non-APRS apps recording or playing back audio. It seems a bit like a build quality issue, as it does not depend on the radio model but rather on the individual unit: it works for some people, but for many it doesn't. Other radios I have work fine. The Android phone I have works with the UV-5R, so it must have something to do with the circuitry on that side as well.

There's a 1-capacitor fix/workaround at the end!


Many users of the aprs.fi iPhone app have purchased an affordable Baofeng radio, and the BTech APRS-K1 cable, and have then tried to use those for APRS. It has worked for some users, but for many it has failed miserably. This setup does not have a PTT line to key the transmitter – instead, VOX is enabled so that the radio will transmit whenever there's audio coming from the iPhone or iPad.

Users have reported that as soon as the aprs.fi DSP modem is enabled ("Connect TNC" is pressed in the iPhone app), the Baofeng transmitter goes on, and never goes off. When "Disconnect TNC" is pressed in the app to stop the modem, the transmitter goes off a second or two later.

Naturally, at that point, the first guess is that the modem is emitting some noise which triggers the VOX in the Baofeng. But it isn't - nothing shows up if you attach earphones and listen, or if you look at the modem output with an oscilloscope.

It doesn't depend on the radio model - UV-5R or UV-82. Some units work, some don't. I had only tested the app with a Puxing PX-777, a Kenwood TH-D72 and a Kenwood TH-D74, all using the same BTech APRS-K1 cable, as they all are using the same mic/headphone connector ensemble. All of them work fine. Well, as well as a VOX-driven APRS transmitter can work, i.e. not great, as the transmitter keys up slowly and goes off very slowly due to the VOX delays.

So a few weeks ago I finally ordered a UV-5R so that I could figure out what exactly is going on. I have a Siglent digital storage oscilloscope which allows me to look at the three audio channels between the iPhone and the radio: left & right headphone outputs, and the microphone input.

I also bought a 4-wire 1-meter TRRS extension cable from eBay, cut it up and made a 10 cm extension cable with exposed wires where I could attach the oscilloscope. The JAMEGA GmbH cable is awful - no shielding/braid at all on the cable, and they even sell a 10-meter version. Not good for low-level audio signals. But it'll work for 10 cm.


The first thing to test: plug in just the APRS-K1 cable, without a radio at the other end, and see that the inputs and outputs on the adapters and cables are fine, and the scope sees the right things. In this oscilloscope screen shot you see the L/R headphone outputs on the bottom, when a packet starts to go out. The blue line on top is the microphone line, which has a little high-frequency noise on it, as it is a low-level high-impedance wire which is not attached to anything on the other end.


Ok, that seems fine. The lines are quiet when the app is not transmitting a packet, and the waveform seems alright when it is transmitting. But what happens when I attach the APRS-K1 cable to a Baofeng without touching oscilloscope settings? This! All the time! It does not matter whether the Baofeng is powered on or off. Attach a Kenwood or a Puxing, this does not happen. There's a high-frequency signal on the cable, on all 3 pins.


If we zoom in on it, we find a 473 kHz signal on all 3 wires. The frequency is drifting, so it's probably not sourced from a good oscillator / clock reference. Click on the image to see a higher-resolution screen shot - the frequency counter is in the top right corner, level measurements on the panel below the plots. The level is pretty high, so if the Baofeng is powered on, VOX probably keys up the transmitter. The frequency is so high that you won't hear any audio being transmitted if you listen to the transmission with another radio. This is how it looks with the iPhone and a Belkin Lightning-3.5mm adapter.


With the Belkin lightning-3.5mm adapter, levels are 150 & 330 mV L/R, frequency is about 470 kHz. 

If I turn on the Baofeng, the VOX triggers, the transmitter goes on, and on the oscilloscope the only difference is that there's some high-frequency noise on the wires - which is to be expected when there's a 5W transmitter on the table, even though it's transmitting into a dummy load instead of an antenna.


This is the same thing, without the Belkin adapter, when attaching directly to the 3.5mm connector of my iPad - levels are lower at 20 & 110 mV L/R, frequency is 555 kHz. Baofeng is off, so no high-frequency noise from the transmitter.


Now, the interesting detail is that this only happens when the iPhone audio interface (A/D and D/A converter chip & amplifiers) is powered on and capturing audio. Could it be a feedback loop of some kind between the A/D and D/A converters and the circuitry on the Baofeng, with the converter leaking the 500 kHz signal back and amplifying it? Or is an oscillator formed together with the Baofeng?

The iPhone has a power management system, where unnecessary components are powered off to conserve battery power. If any application (Voice Memos, Camera app in video capture mode, or the aprs.fi app) starts to record audio from the microphone input, the electronics of the A/D and D/A converters are powered on. This explains why the transmitter goes on when you start recording audio on the iPhone, or when you turn on the modem in the aprs.fi app. If you're wearing headphones attached to the iPhone and start audio playback from any app, or start recording in one of those apps, you can even hear a small "pop" sound in the headphones - it's the D/A converter waking up! It looks like this on the oscilloscope:


Because someone reported the same Baofeng + APRS-K1 cable setup works with APRSDroid on an Android phone, I tried it with my Nokia here. Yes, it does work, so there must be some difference in the audio circuitry or how it is used!

On Android, the A/D and D/A circuits seem to be powered separately. Georg DO1GL says that APRSDroid only outputs audio samples when it is transmitting a packet, and captures audio samples continuously for the receive to work. When looking at it on an oscilloscope, there is a 260 mV DC offset on one headphone audio channel when it's not transmitting. The DC offset goes away during transmit, and some time after the packet has ended, the DC offset comes back. There is also some additional noise on the output of the Android phone for some time after the packet has gone out, and the noise stops when the 260 mV DC offset appears. It would seem to me that the D/A is powered on only when audio goes out, and it can be powered off even when A/D conversion is taking place.

On the iPhone it seems to me that the D/A cannot be powered off independently. All audio-recording applications which I tried cause the D/A to be powered up and the strong 500 kHz signal to be generated if attached to the Baofeng. I tried all the APRS apps with DSP modems, and several other audio-capturing applications, and they all trigger the noise, if my Baofeng is attached.

Oh well. If it's feedback or oscillation, can we fix it by adding capacitance? Turns out we can!


I'm not much of an electrical engineer, but sometimes you can work around issues like this by just adding a bit of inductance or capacitance somewhere in a very experimental fashion, without applying too much science or math.

If you peel off the sticker on top of the APRS-K1 dongle, and undo the two small Phillips screws, you'll find a small circuit board with a few capacitors and resistors. In this picture, the radio is attached to the left side, and the iPhone is attached to the right side. The red wire is the microphone wire on each side, white is the earphone wire, and black is ground. No isolation transformers.

The iPhone-to-radio-mic connection has just a single 100 nF capacitor for blocking DC, on the bottom of the circuit board. The radio speaker -> iPhone mic path, on the upper side of the board, from left to right, has a series capacitor (10 nF), a 2-resistor voltage divider (10 kOhm on top, 2.2 kOhm on the bottom), a second series capacitor (probably 10 nF, but I couldn't measure it), and a 1.8 kOhm resistor to ground, which tells the iPhone that a microphone is connected.

I checked with the oscilloscope that the 500 kHz signal was strongest on the bottom right corner, the wire labeled Phone SP, i.e. audio output from the phone. I grabbed a box of capacitors which were small enough to likely fit within the enclosure, tried the smallest and largest values (10 nF and 470 nF) by just pressing the capacitor wires on the SP and GND pins (black and white wire), and found that the largest one stopped the oscillation completely!

Most importantly, the Baofeng UV-5R is no longer stuck transmitting!

I then tried different values, and found that when attached to the iPad, where the 500 kHz signal levels were lower, a smaller value of 220 nF was enough. With the iPhone and the Belkin 3.5 mm adapter, only 470 nF cured it.
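
For a rough feel of why a capacitor in this range kills a ~500 kHz signal without audibly hurting the 1200/2200 Hz AFSK tones, here's a back-of-the-envelope calculation in Python. The source impedance of the phone's headphone output is just an assumption (a few tens of ohms is typical) - I didn't measure the actual circuit, so treat this as a sketch of the reasoning, not a design.

import math

def corner_frequency_hz(r_ohms, c_farads):
    """First-order RC low-pass corner frequency: f_c = 1 / (2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

R_SOURCE = 33.0    # assumed headphone-output source impedance in ohms, not measured

for c in (220e-9, 470e-9):    # the capacitor values tried above
    fc = corner_frequency_hz(R_SOURCE, c)
    print(f"C = {c*1e9:.0f} nF -> corner at about {fc/1000:.0f} kHz")

# With these assumptions the corner lands around 10-20 kHz: far below ~500 kHz
# (so the oscillation is attenuated heavily) and above the 1200/2200 Hz AFSK
# tones (so the transmitted audio is barely touched).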

In hindsight, I could have tried putting the cap on the other pins as well - a lower value might have worked in another location. But this worked well enough and I didn't want to spend any more time on it, so I just soldered the capacitor in place, measured the results and closed the package!

This is how it looks with Baofeng UV-5R attached and powered on:


And this is how it looks when a packet is being transmitted. Note how the left and right audio output channels look the same, even though the parallel capacitance is present on one of those channels. Seems like it doesn't affect the transmitted audio too badly. Levels are different from the very first oscilloscope screen shot, because I've reduced the volume on the iPhone a bit.


Now, when I got it transmitting, I found out that the Baofeng is deaf. I have it attached to a 2*5/8 vertical antenna on the roof, which I use with other radios normally, and the transmissions go out just fine, but the squelch on the Baofeng does not open up when the nearby digipeaters transmit. On the other hand, squelch needs to be used, otherwise VOX will not key the transmitter at all, as VOX refuses to transmit when the squelch is open. I will not investigate that issue any further. :)

Thursday, July 22, 2021

Manual stopping of the aprs.fi iPhone app is unnecessary

Quite often someone says the aprs.fi app is starting up with the map showing Helsinki instead of the previous map view and position, and requests an improvement to the app to save the last location on the map. Well, it turns out that the app certainly does save the last location, and in fact a fairly complete state of many other views, every time the app leaves the screen. It is saved to a state restoration file. It will return to that view, based on the restoration file, even if the operating system needs to free up memory for other apps and removes the aprs.fi app from memory. Even after the whole iPhone is rebooted.

I invite you to make a quick test:

  1. Position the map to your location (using the GPS centering button in the lower right corner), or to any other preferred position other than Helsinki.

  2. Leave the app: Go to home screen by pressing the home button or swiping up on a device without a home button. Do not go to the most-recently-used apps list and swipe the aprs.fi app up to manually kill it. Just don't.

    When the app goes out of view the app saves the current view and state to a small state restoration file. Within some 5 to 20 seconds iOS will completely suspend the app from running, unless beaconing is enabled or the software TNC is running and streaming audio. It will not run in the background unless there is a necessary background task; the iOS operating system will not allow it to execute on the CPU.

    iOS also takes a screen shot of the app on the screen at this point, and saves it to persistent storage (SSD/flash memory) along with the restoration file.

  3. Turn off the power of the iPhone (Settings -> General -> Shut down). Just to prove the point. This will, for sure, terminate all apps and remove them from memory.

  4. Turn on phone, and open up the aprs.fi app. At this point it is started again, and it reads the state restoration file.

  5. Observe the app magically returning to your last view. It can even return to most views other than the map view after a complete reboot cycle. If you were tracking stations, it will continue to track the same stations at their current locations. If you were at the Help screen before the reboot, it'll go back to Help!

    To make the "cold start" of the app look faster, iOS will initially display the screen shot of the app it took in step 2, until the app is actually running and producing stuff on the screen. Even after a reboot, it looks very much like the app would have been running all the time. Sufficiently advanced technology looking like magic, again.

  6. If you manually terminate the app by swiping it up from the recent apps list, iOS will delete the state restoration file and the screen shot, so that the next startup of the app will happen from a clean state. It will not be able to go back to the previous view. On the next cold start the app will show a splash screen with the aprs.fi logo instead of the screen shot.

Quite a few people have a habit of killing apps and removing the state files by swiping them up from the recent apps list. This is probably because of the misconception that those apps are running on the CPU and consuming memory, and that killing them would free up resources and save energy. This would be quite logical and is rooted in the history of traditional computers. The Internet has a thousand sites describing this procedure and claiming it'd do something good. Unfortunately not everything written on the Internet is true.

In fact the iPhone/iPad iOS operating system is already doing all that needs to be done! When an app goes out of view, iOS normally suspends all execution of the app after about 5 seconds. The suspended app will remain in RAM, though, until that memory is needed for something else. If there's enough free memory and you return to that app soon, iOS can simply wake the app up from memory and it will resume running very quickly; it may remain suspended in memory for a very long time if you use it frequently and other apps don't need a lot of memory. The power usage of that memory is very small, and the amount of power used does not change based on how much stuff is currently stored.

When the active app currently running in foreground (i.e. displayed on screen) needs more memory and there isn't any memory available, iOS will quietly free up memory by fully terminating some of the other apps which are currently suspended. When a terminated app is started again, it will need to initialise itself and read all resource files from the permanent flash storage. All of this takes much more CPU, wall-clock time and electrical energy than waking up from RAM memory.

If apps are unnecessarily terminated manually, they will use more energy when they are used again, as opposed to the situation where they are simply woken up from suspended state.

The attached image is from the Apple developer documentation. The app transitions through the Inactive states very quickly when moving between the Active state and other states.

That said, some apps can also run in the background, but only while performing one of a few specific tasks: playing music, VoIP calls (Skype, WhatsApp calls, other telephony), receiving GPS location updates for navigation, and a few other things. Each of these Background Modes needs to be specifically permitted by an Apple employee during the app review process. For example, an app without actual user-visible mapping, navigation or location-related features is not allowed to obtain GPS positions in the background.

The aprs.fi app plays and records audio in the background when the software DSP modem is running. A red "recording" symbol will show up at the top of the screen whenever this happens in the background, and tapping that symbol will bring up the app that is doing it.

The aprs.fi app can also receive GPS location updates when beaconing is enabled and the app is given permission to obtain location data in the background. The "Allow location access" setting in iOS Privacy settings must be set to "Always" - the "While Using the App" setting only gives location data when the application is in the Active state, i.e. in the foreground, displayed on the screen. The app will naturally only request and receive location updates in the background when beaconing is enabled - requesting frequent location updates uses a significant amount of energy since it powers up the GPS circuitry, so the battery drains much faster if beaconing is enabled! If the software modem is not running, and beaconing is off, the app will be properly suspended within seconds after it leaves the screen.

After saying all of this: There are a few cases where manual termination of the app may be necessary. If the state restoration file is corrupted, and the app crashes on startup while reading it, manual termination will delete the file and work around the issue.

Once I had a bug in the app, where the user could navigate to a view which had no working "go back" button, and no way to switch tabs. The app was running but the user was stuck there on that single screen. To make things worse, state restoration worked perfectly, so even after a full iPhone reboot the user would be automatically brought back to this view! Again, a manual termination of the app removed the state restoration file and the app would again start up in the map view, and all was fine as long as the user did not go to that same view again.

A buggy app could accidentally also continue recording audio, or receiving GPS location coordinates, after it no longer needs them. But iOS will tell you if an app does this, and you can then terminate it if necessary.

A former Apple Genius Bar technician, Scotty Loveless, wrote:
"By closing the app, you take the app out of the phone's RAM . While you think this may be what you want to do, it's not. When you open that same app again the next time you need it, your device has to load it back into memory all over again. All of that loading and unloading puts more stress on your device than just leaving it alone. Plus, iOS closes apps automatically as it needs more memory, so you're doing something your device is already doing for you. You are meant to be the user of your device, not the janitor.

The truth is, those apps in your multitasking menu are not running in the background at all: iOS freezes them where you last left the app so that it's ready to go if you go back. Unless you have enabled Background App Refresh, your apps are not allowed to run in the background unless they are playing music, using location services, recording audio, or the sneakiest of them all: checking for incoming VOIP calls, like Skype. All of these exceptions, besides the latter, will put an icon next to your battery icon to alert you it is running in the background."
In 2016, an iPhone user decided to email Apple CEO Tim Cook and ask whether manual killing of apps would extend battery life. The reply came from Apple's senior VP of Software Engineering, Craig Federighi:


The recommendation of Kendall Baker is golden:
"As for the multitasking menu, think of that as a “Recently Used” section, as opposed to a “Currently Open” one."

Saturday, February 1, 2020

How APRS paths work

The APRS packet path is used to control the distribution and retransmission of a packet in the APRS network.

Every now and then someone asks how paths actually work. Having spent quite some time staring at packets that have been digipeated, and decoding them at my little aprs.fi web site, I suspect I might have begun to understand it, so here goes. I'll explain it the long way and begin with a packet without a path, continue with the traditional AX.25 digipeating path, and then go to the APRS world.

No path


Let's assume my call is N0CALL, and I have a generic APRS device which uses the generic APRS tocall (destination callsign) of APRS. The packet, without any digipeaters or path, in the usual text format, as used on the APRS-IS, would look like:

N0CALL>APRS:!1234.56ND01037.50E&

Some of my APRS gear on top of my old bass gear, from left:
Mobilinkd TNC3 for iPhone (prototype unit), Kenwood
TH-D72 (digipeats), an unassembled TNC-X (can digipeat with
a daughtercard added), Argent Data Systems Tracker2 OT2m
(digipeats), Coastal Chipworks TNC-pi on top of a Raspberry
Pi (digipeats with aprx), Byonics TinyTrak4 (digipeats),
Kenwood TH-D74 (can digipeat). Moomin for size reference
only. Hartke Kickback 15" 120W. If your APRS hardware
is not shown here yet, my address is on qrz.com. :)
There are two separating characters: the '>' separates the source and destination callsigns, and the ':' separates the packet header from the actual data being transmitted (the APRS formatted position, in this case). A packet like this would not be repeated through any digipeaters, but it could still be directly heard, picked up by an iGate and passed to the APRS-IS on the Internet. Sometimes you might want to do this and configure your transmitter with an empty path.
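
To make the format concrete, here's a minimal Python sketch (purely for illustration, not code from aprs.fi) which splits a text-format packet into its parts using those two separators:

def split_packet(line):
    """Split a TNC2/APRS-IS text-format packet into source, destination, path and payload."""
    header, _, payload = line.partition(':')            # ':' separates header from data
    addresses, *path = header.split(',')                 # anything after the first ',' is the path
    source, _, destination = addresses.partition('>')    # '>' separates source and destination
    return source, destination, path, payload

print(split_packet("N0CALL>APRS:!1234.56ND01037.50E&"))
# ('N0CALL', 'APRS', [], '!1234.56ND01037.50E&')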

Classic AX.25 digipeating


The digipeater path may then optionally appear after the destination callsign. Here's a packet that I might transmit, which requests digipeating via two specific digipeaters, in a particular, specified order:

N0CALL>APRS,OH7RDA,OH7RDB:!1234.56ND01037.50E&

Here, the digipeater path is OH7RDA,OH7RDB - the packet should be digipeated by OH7RDA and then OH7RDB. This is how classic AX.25 packet radio digipeating works. APRS is transmitted using AX.25 packet radio packets, so they will follow the old AX.25 rules too. You can certainly do this and most APRS digipeaters will digipeat your packet if you just put the callsign in the path.

When OH7RDA retransmits the packet, it will flip the "has been repeated" bit on for that callsign. It really is a single bit called the "H bit". The packet, in text format, will look like this – note the '*' which indicates the "has been repeated by this digipeater" bit:

N0CALL>APRS,OH7RDA*,OH7RDB:!1234.56ND01037.50E&

Each digipeater will look for the first digipeater callsign in the path which has not been used yet (i.e. the "has been repeated" bit is not on - there's no "*" in there), and then figure out based on that if it should be retransmitted. So, after OH7RDA has retransmitted the packet, and OH7RDB has heard the above packet with OH7RDA*,OH7RDB in it, it will go "oh that's my callsign, I'll retransmit". The resulting packet transmitted by OH7RDB will have OH7RDA*,OH7RDB* as the path! Note that OH7RDB will not retransmit the original packet before OH7RDA has done its part.

To add confusion, and to save a few bytes, usually only the last "*" is printed, although the "has been repeated" bit is set on all previous digipeater calls. OH7RDA,OH7RDB,OH7RDC* actually means OH7RDA*,OH7RDB*,OH7RDC* !
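
A program reading these packets has to expand that shorthand. Here's a small illustrative Python helper (not from any particular APRS package) which marks every digipeater call up to and including the last '*' as used:

def expand_used_flags(path):
    """Expand the 'only the last * is printed' shorthand into per-digi used flags."""
    last_used = max((i for i, call in enumerate(path) if call.endswith('*')), default=-1)
    return [(call.rstrip('*'), i <= last_used) for i, call in enumerate(path)]

print(expand_used_flags(['OH7RDA', 'OH7RDB', 'OH7RDC*']))
# [('OH7RDA', True), ('OH7RDB', True), ('OH7RDC', True)]
print(expand_used_flags(['OH7RDA*', 'OH7RDB']))
# [('OH7RDA', True), ('OH7RDB', False)]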

Aliases of digipeaters, classic AX.25 packet approach


Digipeaters can also be configured to respond to alias callsigns in addition to their own call. For example, OH7RDA could be configured to repeat a packet which requests digipeating by ALIAS. It may then do one of two things: it can either substitute ALIAS with its own callsign, or prepend its own callsign before ALIAS. This varies by digipeater software and its configuration.

N0CALL>APRS,ALIAS:data

When retransmitted by OH7RDA, will become one of:

N0CALL>APRS,OH7RDA*:data
N0CALL>APRS,OH7RDA,ALIAS*:data

The benefit of the second format (digi call prepended) is that, upon receiving such a packet, we know that the originator used ALIAS as the path.

In the classic/old APRS digipeating world, people used aliases like WIDE to ask for any digipeaters to digipeat. Currently, this method is used to digipeat through satellites and the ISS - just use a simple alias of ARISS as the path and it will be digipeated through the RS0ISS digipeater on the ISS or any of the other APRS-supporting satellites on 145.825 MHz! Yes, this means the PATH setting should be just "ARISS". Do not put any of the WIDE or RS0ISS stuff in there, it won't help.

WIDEn-N paths on APRS


This is where it gets interesting and more specific to APRS, as these paths only work on APRS digipeaters. A WIDEn-N path has two integers, n and N. WIDE3-1 would have an n of 3 and an N of 1. The first integer theoretically means "I'd like to have this packet digipeated by this many digipeater hops" (3 in the example of WIDE3-1). The second integer means "there are this many hops left before digipeating stops". When it becomes 0, the packet won't be digipeated any more, and the "has been repeated" bit will be set (a "*" will appear).

A packet with a path of WIDE3-3 will become WIDE3-2 after the first retransmission, then WIDE3-1, and after the third retransmission it'll be WIDE3*. The SSID of 0 is not printed – it's really WIDE3-0 under the hood of the packet, but the -0 will not be shown.

The digipeaters supporting this kind of alias usually also prepend their callsigns (all the good ones do, the bad ones don't). And the packet will be often picked up by many digipeaters! So you may see something like this, when both OH7RDA and OH7RDB hear N0CALL, and OH7RDC hears OH7RDB:

N0CALL>APRS,WIDE2-2:data
N0CALL>APRS,OH7RDA*,WIDE2-1:data
N0CALL>APRS,OH7RDB*,WIDE2-1:data
N0CALL>APRS,OH7RDB,OH7RDC,WIDE2*:data
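
The behaviour shown in the example above can be boiled down to a short sketch of what a well-behaved WIDEn-N digipeater does to the path. This is only a Python illustration of the rules described here - not the code of any real digipeater - and it skips things like fill-in (WIDE1-only) configuration and duplicate suppression:

import re

WIDE_RE = re.compile(r'^(WIDE[1-7])-([1-7])$')

def digipeat(path, mycall):
    """Apply one WIDEn-N hop: prepend our callsign, decrement N, star what has been used."""
    for i, element in enumerate(path):
        if element.endswith('*'):
            continue                          # already used, look further along the path
        m = WIDE_RE.match(element)
        if not m:
            return None                       # first unused element is not a WIDEn-N alias
        alias, hops_left = m.group(1), int(m.group(2)) - 1
        if hops_left > 0:
            # More hops remain: our call is marked used, the alias keeps counting down.
            inserted = [mycall + '*', f'{alias}-{hops_left}']
        else:
            # Last hop: with the "only the last * is printed" shorthand, the star ends
            # up on the exhausted alias and our prepended call is implicitly used.
            inserted = [mycall, alias + '*']
        return path[:i] + inserted + path[i + 1:]
    return None                               # nothing left to digipeat

print(digipeat(['WIDE2-2'], 'OH7RDA'))             # ['OH7RDA*', 'WIDE2-1']
print(digipeat(['OH7RDA*', 'WIDE2-1'], 'OH7RDC'))  # ['OH7RDA*', 'OH7RDC', 'WIDE2*']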

You can also send a packet which originally has a path of WIDE2-1, and it'll be digipeated by digipeaters which are configured to respond to the WIDE2 alias - but only once, since the "remaining hops" number is 1 to begin with. If three digipeaters can hear you, this would result in:

N0CALL>APRS,WIDE2-1:data
N0CALL>APRS,OH7RDA,WIDE2*:data
N0CALL>APRS,OH7RDB,WIDE2*:data
N0CALL>APRS,OH7RDC,WIDE2*:data

A packet of WIDE2-2 is often retransmitted by many more than 2 digipeaters, since it will be digipeated twice in each direction. Each high-level digipeater on a tower will often be heard by a few digipeaters which are actually quite far away. This is how a WIDE3-3 path might be distributed through a whole country, and a bit into the neighbouring countries as well (rough example, but not far from the truth):


This is why we don't usually use paths with more than 2 hops on APRS, and practically never more than 3 (except in some specific cases in rural areas with quiet channels having little traffic). A packet like this can light up the whole APRS network in almost the whole country (or a few states, for you in the larger nations). If a lot of people did this, the channel would become too congested and APRS would stop working as everyone would be transmitting on top of each other. It has happened and we've learned it the hard way.

Fill-in digipeaters


In some areas people set up low-level "fill-in" digipeaters at their homes. They are only configured to respond to the WIDE1 alias, but not WIDE2. Digipeaters higher up at towers and such, serving a wide area, are configured to respond to both WIDE1 and WIDE2 aliases.

The idea is that the fill-in WIDE1 digipeater may well hear packets from nearby transmitters, which might be missed by the higher-up digipeater – due to distance, or because it hears a lot of stations and gets a lot of collisions, or because there are "urban canyon" obstructions. The wide-area-coverage WIDE2 digipeater will be heard by everyone, and there's no need for the fill-in WIDE1 digipeaters to retransmit packets which have already been transmitted by it!

This is accomplished by transmitting a packet with a WIDE1-1,WIDE2-1 path. This is the usual recommended default APRS path, as you might have noticed. It would be first digipeated by any digipeater (fill-in or wide-area), and then by a wide-area WIDE2 digipeater. But only for a total range of 2 hops! This is how it might look, if you could receive the original packet and all the digipeated packets - N1FILL is the fill-in home digipeater, OH7RDA and OH7RDB are higher-up digipeaters which can hear N1FILL but not N0CALL:

N0CALL>APRS,WIDE1-1,WIDE2-1:data
N0CALL>APRS,N1FILL,WIDE1*,WIDE2-1:data
N0CALL>APRS,N1FILL,WIDE1,OH7RDA,WIDE2*:data
N0CALL>APRS,N1FILL,WIDE1,OH7RDB,WIDE2*:data

In reality, many of these fill-in digipeaters are old and dumb TNCs, and cannot do callsign prepending. Instead, they just do callsign substitution and replace WIDE1-1 with their own callsign, like this:

N0CALL>APRS,WIDE1-1,WIDE2-1:data
N0CALL>APRS,N1FILL*,WIDE2-1:data

Alternatively, if N1FILL does not receive this packet, but the higher-up digipeaters hear it directly, it might go a bit further. Here OH7RDA hears N0CALL, and subsequently OH6RDA and OH8RDA hear OH7RDA:

N0CALL>APRS,WIDE1-1,WIDE2-1:data
N0CALL>APRS,OH7RDA,WIDE1*,WIDE2-1:data
N0CALL>APRS,OH7RDA,WIDE1,OH6RDA,WIDE2*:data
N0CALL>APRS,OH7RDA,WIDE1,OH8RDA,WIDE2*:data

To send a packet for one hop of digipeating you can choose to use either WIDE1-1 or WIDE2-1. With WIDE1-1 you may have it digipeated through fill-ins and wide-area digis, while WIDE2-1 should only trigger the wide-area ones.

There are also some buggy digipeaters out there which do not add their callsign at all - they just retransmit, and decrement the hop counter of a path element (WIDE2-2 to WIDE2-1 and so on). These are particularly annoying when trying to figure out what happened to a packet.

What about the ,qAR,IGATECALL stuff happening on the APRS-IS?


The APRS-IS servers on the Internet, or alternatively, the iGate itself, append something called the q construct to the packet. After the q construct you'll find the callsign of the iGate which received the packet from a radio and passed it to the Internet:

N0CALL>APRS,WIDE1-1,WIDE2-1,qAR,IGATECALL:data

The q construct does not appear on the radio side - it is only used for tagging additional information onto the packet on the Internet.

Limits and policies


Digipeaters often have policies or filters configured, to prevent digipeating of outright silly paths like WIDE6-6, or paths with a big combined hop length ("WIDE1-1,WIDE2-2,WIDE3-3,WIDE3-3" for a total of 9 hops), or paths where the second integer is larger than the first one (WIDE1-7). Paths like this are considered abusive and cause excess congestion to the APRS network.

Having said that, I think it's alright to have some fun sometimes!

One fun trick is to send a packet, manually, by hand, towards the west, have it travel around the country and then come back to you from the east. You can accomplish this by drawing a map of live digipeaters (aprs.fi can do this for you), and then putting the callsigns of the digipeaters in the path (up to 8 digis may work): DIGI1,DIGI2,DIGI3,DIGI4,DIGI5. Even though there are many elements in the path, it won't consume much more radio channel time than what a WIDE1-1,WIDE2-2 packet would. Just don't do this with the regularly-beaconing APRS tracker in your car, it'd be silly.

Thursday, August 15, 2019

RX-only igates considered beneficial to the network

As probably most of you know by now, besides running the aprs.fi web site, I'm also one of the two main authors of the aprsc server software. aprsc runs on most of the APRS-IS servers where iGates usually connect.

There have been recent and strong claims saying that receive-only (rx-only) iGates destroy the two-way messaging feature of APRS. This has been claimed in blog posts and Facebook threads. Some people ask me if this is true.

No, it is not true. Receive-only iGates do not break messaging if there are transmit-capable igates nearby, and those transmit-capable iGates are connected to an APRS-IS server which has a full APRS feed. All aprs.net and aprs2.net servers, where iGates normally connect, do have a full feed. No problem!

Messaging would work from 1650m / 5400 ft above Vihti, even in the presence of
RX-only iGates, as long as there is at least one TX-capable iGate.

If there are no transmit-capable (TX) iGates around, two-way messaging will not work, of course. Having a transmit-capable iGate would therefore be better, but receive-only iGates are easier to set up technically, they are cheaper (receivers are practically free now), and licensing for automatic transmitters is difficult in many countries. Where transmit-capable (TX) iGates are present, receive-only iGates do not break the TX iGates, they just improve reception coverage. For messages, too.

Just to make it perfectly clear: If there are TX iGates present, additional RX-only iGates improve messaging performance. For the RF-to-IS direction (and ACKs for the IS-to-RF messages).

The common incorrect claim is that the APRS-IS server sends the message only to the latest iGate which heard a station. In fact, the APRS-IS servers (both javaprssrvr and aprsc) send the messages to all iGates which heard the station recently. In aprsc, "recently" means "within 3 hours", and I believe javaprssrvr uses something similar.

The server maintains a separate list of recently heard callsigns for each iGate client. When the server has a packet to pass on, it will look at all connected clients, and for each client, if the recipient of the message is found in that client's list of recently heard stations, the message will be sent. The scanning will not stop at the first match; the message will be given to all clients which heard the station.
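
As an illustration of that forwarding rule, here's a tiny Python sketch. The class and method names are mine, made up for the example - this is not aprsc's actual implementation:

import time

HEARD_WINDOW = 3 * 3600     # "recently heard" = within 3 hours, as described above

class MessageRouter:
    def __init__(self):
        # One table per connected iGate client: callsign -> time last heard on RF.
        self.heard = {}

    def heard_on_rf(self, client, callsign):
        self.heard.setdefault(client, {})[callsign] = time.time()

    def route_message(self, recipient, packet):
        """Hand the packet to every iGate that heard the recipient recently - not just one."""
        now = time.time()
        targets = [client for client, calls in self.heard.items()
                   if now - calls.get(recipient, float('-inf')) <= HEARD_WINDOW]
        for client in targets:
            client.send(packet)     # hypothetical send() on the client connection object
        return targets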

I can confirm that we have written the software to do it like this, and since it is open source, you can see the code yourself. The automatic test case also validates that the server keeps working that way, so that a bug won't accidentally creep in later and break it. I've also tested that javaprssrvr behaves like this – I've run the test cases against it to confirm compatibility.

There may be problems when a server with a filtered feed is involved (possibly server software running at the "client" end without a full feed), but those setups are rare and known to be problematic. All Tier2 (aprs2.net) servers have a full feed, and so do the core servers.

The real problem is that a user, seeing one-way beaconing to the APRS-IS works, may well expect two-way messaging to work too. And when it doesn't work, there's only a timeout, no immediate feedback saying "sorry, this won't work now", which is what a sensible system would do today.

Even in that case, an rx-only igate is better than nothing! I wouldn't be so harsh against them, since the step from RX-only to TX-capable may be a bit difficult for many.

Bottom line:

In each area, there should be one TX igate, maybe two. More may create QRM as the same messages will be transmitted from APRS-IS to RF many times. RX igates will not break messaging if there are TX igates around – they just improve RX coverage.

This picture may at first seem irrelevant, but it does show me working VHF in AM (122.825 MHz),
and Flarm on 868 MHz to OGN (which runs aprsc). Flarm antenna visible in top left corner.

Monday, December 12, 2016

aprs.fi moving to TLS

In an effort to increase security on the web at large, web browser vendors and other organisations such as Google are making changes which encourage web sites to move to TLS/SSL encryption - even web sites which previously did not seem to need it, ones with static content only and without any login / password functionality. This is good and fine – even if it's not a banking web site, it's good that third parties along the network can not observe or modify the content being downloaded. The Chrome web browser has started to label non-encrypted sites with an informative '(i)' symbol which warns the user that "Your connection to this site is not private", and will eventually make those warnings stronger. Google also gives better ranking in the search results to HTTPS sites.

A real, practical issue right now is that the geolocation Javascript API is no longer available on non-HTTPS sites in recent Android and Chrome versions. This actually broke map center and tracking functionality on the aprs.fi web site.

I wholeheartedly support this movement, it will make the Internet a better place!

These days, with performance-improving developments such as ECDHE, GCM mode AES and hardware accelerated AES, running TLS on a web server is not much of a performance issue any more. Most of the CPU time will be spent on application logic, anyway.

The fun part is that HTTP/2, a new protocol used by modern web browsers to access web sites, is only used over TLS/HTTPS – it is not available over plaintext connections. HTTP/2 is faster than older HTTP versions, and a surprising side effect is that a web site may well open up faster over HTTP/2 + TLS than over HTTP 1.1 without the encryption!

Picture not related. I just took it last summer. Kyyttö cows © Sappion luomu.
Until now, aprs.fi has only used TLS/HTTPS for its login and user account management pages. Fairly soon I will have a maintenance break on the aprs.fi servers, upgrade the operating system to the next major release, and install a new version of the aprs.fi software which supports access over both HTTP and HTTPS. To reduce duplicate content (the same stuff being available over both HTTP and HTTPS) it will prefer HTTPS and nudge clients that way every now and then, but initially plaintext access should be possible, too. Later on, if there are no surprises, the nudges will gradually become stronger.

There are a few issues which need to be addressed. There are possibly a few Amprnet users accessing this site over amateur radio frequencies. On the other hand, they're then practically surfing the Internet over radio, and probably doing a few requests to other encrypted sites now and then, too, so maybe it's not a big problem for them.

Another thing is that apparently users in China can't access the Google Maps API over HTTPS, so those users would still need the plaintext access for now. I might make the zh.aprs.fi site plaintext only, and bump those users that way, or something along those lines. Maybe the Amprnet users can use that, too?

Sunday, June 21, 2015

DKIM, SPF and assorted tricks to get through spam filters

Today I've set up DKIM (DomainKeys Identified Mail) on the aprs.fi servers and within the aprs.fi DNS zone. A week back I already set up the SPF (Sender Policy Framework) records in the DNS, and fixed the reverse DNS information for the IPv6 addresses used by aprs.fi to send out email.

Mike Mozart / Creative Commons / Via Flickr: jeepersmedia
In non-technical terms, this should help GMail and other services to figure out cases of others sending email (spam?) on behalf of aprs.fi, and correctly classify those as junk. It also might help GMail figure out that the registration confirmation and password reset emails sent out by aprs.fi are actually not spam.

It has been a rather persistent problem - GMail has consistently labeled the registration emails as spam, and people have been asking why they're not getting the emails. In all cases the mails have been found in the spam folder. We'll see if this helps!

Thank you to postfix and opendkim for making this a rather easy thing to get going.

Friday, March 13, 2015

Device identification database updated, available for other apps

Most APRS devices and applications transmit a unique AX.25 destination callsign in all their packets, so that receiving stations can figure out which application or device is transmitting each packet. Bob Bruninga maintains a tocalls.txt index file, which lists all the assigned destination callsigns.

Those devices which use the Mic-E encoding to transmit position packets encode the latitude and a little bit of the longitude within the destination callsign, in which case something else has to be done for device identification. Mic-E device IDs are encoded around the comment text, with one character at the beginning of the comment text and zero to two characters at the end of the comment. The Mic-E type codes are indexed in mic-e-types.txt.

Now, when an application such as aprs.fi wishes to automatically decode the destination callsigns and type codes to readable application names, as seen on the aprs.fi station information page, the author of that application needs to collect all the device identifiers from those two files, and somehow convert them to application source code, or a configuration file that can be read by the application. The master files are written with human interpretation in mind, and it's rather hard to make an application automatically parse out the identifiers from them. It needs to be done manually, and whenever new devices or applications are published, all the applications wishing to detect the new ones need an update. All software authors need to be notified that there are some new devices, and then somehow add the devices to their respective configs. That's quite a lot of extra work that would be better spent writing some fancy new features instead.

OH7LZB-7, correctly identified as a Kenwood TH-D72,
at Mikkeli International yesterday afternoon.
Aircraft and training provided by MIK at Helsinki-Malmi.

For aprs.fi, I initially made a Perl module, Ham::APRS::DeviceID, and published it as open source, so that other software authors could use the index too, and skip the manual labour-intensive part of parsing the master text files. For some obscure reasons a number of programmers have chosen to use other programming languages than Perl, and the module was of limited usefulness for them.

To improve the situation, I converted the device index to YAML (YAML Ain't Markup Language), which is easy to read and edit by humans, and also easy to read and write by computer programs. I also wrote a little converter program which parses the tocalls.yaml file and outputs the same data in JSON and XML formats, which are popular file formats for passing data between computer systems.

I then updated the original Perl module to read the YAML file, and removed the old database that was embedded in the code. DeviceID.pm 2.00 and later use tocalls.yaml. I also updated aprs.fi to use the new version of the Perl module, and tocalls.yaml. The update brought a number of new devices to aprs.fi, including the newer Yaesu radios.

Other programmers who wish to do device identification are welcome to download the YAML, JSON or XML files and use those. It should be straightforward to automatically update the device index during an application build, or even automatically update the index directly in the application.
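
As an illustration of what using those files might look like, here's a rough Python sketch which matches a packet's destination callsign against tocall patterns. The top-level JSON structure and field names are assumptions made for the example - check the real tocalls files for the actual schema - and the wildcard handling follows the tocalls.txt convention of '?' meaning any single character:

import json
import re

def load_tocalls(path):
    """Load a device index in JSON form. The top-level key and field names are assumptions."""
    with open(path) as f:
        return json.load(f).get('tocalls', {})

def identify(destination, tocalls):
    """Return the device entry whose tocall pattern matches; longer patterns win."""
    for pattern in sorted(tocalls, key=len, reverse=True):
        # Treat '?' as "any single character", as in the tocalls.txt conventions.
        regex = '^' + re.escape(pattern).replace(r'\?', '.')
        if re.match(regex, destination):
            return tocalls[pattern]
    return None

# Hypothetical usage, assuming tocalls.json maps patterns to vendor/model entries:
# tocalls = load_tocalls('tocalls.json')
# print(identify('APERXQ', tocalls))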

New devices should still be first added to Bob's files, and only then to tocalls.yaml. If you have a new device which has already been added to Bob's index, please make a pull request or an issue ticket to update tocalls.yaml accordingly.

Wednesday, May 23, 2012

aprs.fi connected directly to aprs2.net hubs

I'm happy to announce that aprs.fi has tonight been directly connected to all of the APRS Tier 2 Network hubs. Two frontend servers, APRSFI-C1 and APRSFI-C2, are now connected with read-only connections to the five T2 hubs (T2HUB1 to T2HUB5), which act as the backbone of the aprs2.net network. This should provide a very stable and trouble-free connection between aprs.fi and the APRS-IS network.



Before this change aprs.fi was connected to a single APRS-IS server (usually T2FINLAND) and collected packets from that server alone. That server, in turn, is connected to a single T2 hub. Sometimes that connection could have some trouble and be disconnected for a few minutes, causing some packets to be lost. Recently a misconfiguration within the T2 network caused intermittent but severe packet loss for a few users for a long time. Having redundant, parallel connections to all of the servers should provide aprs.fi with copies of all packets even if some parts of the network have issues.

The fact that aprs.fi was usually connected to T2FINLAND also caused many users to prefer that server, which in turn caused a high load on that single server. Also, if that server had suffered a hardware failure, all of those users would have lost connectivity to the APRS-IS; aprs.fi itself would have automatically switched to another server. Right now, T2FINLAND has 210 clients connected, while most other servers only have 30 to 100 clients. The Tier 2 network currently has a total of 4005 clients connected to the 86 servers in 31 countries.

If you're using T2FINLAND (finland.aprs2.net), or if you have configured your server to connect to any other single server, please reconfigure your system to use one of these Regional Rotate Addresses:
Europe and Africa: euro.aprs2.net
Asia: asia.aprs2.net
North America: noam.aprs2.net
South America: soam.aprs2.net
Oceania: aunz.aprs2.net
Check out the map of T2 servers and the rotate address distribution on the aprs2.net home page!

All of the regional rotate addresses will make your client connect to one of the nearby servers which have recently been automatically tested to be available and working. When that server fails, your client will automatically connect to some other server. All of the servers will provide equally good connectivity to aprs.fi. Starting tonight, T2FINLAND is not better in that respect than any of the others.

I repeat: Do not connect to finland.aprs2.net. If you're in Europe, use euro.aprs2.net instead. T2FINLAND's server hardware will eventually break (we found it in the dumpster), and your igate or client software might be disconnected for a long time until someone gets to fix the server. Unless, of course, you use a regional rotate address, in which case you'll be automatically rerouted to a working server.

Thanks to all the Tier 2 operators for making this possible!

Saturday, March 27, 2010

Slowdown on Friday 26th

Yesterday the aprs.fi APRS feed was slow for about an hour, between 15:00:55 and 16:11:03 UTC. One of the WXQA servers the service connects to was down, and the connection attempts timed out. The timeout was too long, and the connection retry timer was too short, and the connect() attempt is a blocking call, resulting in slow processing of packets. I knew about the potential problem, but hadn't bothered to fix it until now.

In the evening I implemented a 2-second connect timeout and an exponential backoff for the retry timer. First reconnect attempts will happen within seconds, but they will slow down to about 2 minutes between retries. Using a non-blocking connect() would have been the correct fix, but this was a bit quicker. The problem should not appear again in this form.
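
For illustration, here's the same idea sketched in Python (the actual aprs.fi feed code is Perl, and these names are made up): a short connect timeout combined with an exponential backoff between retries. Note that this sketch still blocks while sleeping - a fully non-blocking connect, as mentioned above, would be the proper fix:

import socket
import time

def connect_with_backoff(host, port, connect_timeout=2.0, max_wait=120.0):
    """Keep retrying a connection, backing off exponentially up to about 2 minutes."""
    wait = 1.0
    while True:
        try:
            # A bounded 2-second connect, so one dead server can't stall packet processing.
            return socket.create_connection((host, port), timeout=connect_timeout)
        except OSError as e:
            print(f"connect to {host}:{port} failed ({e}), retrying in {wait:.0f} s")
            time.sleep(wait)
            wait = min(wait * 2, max_wait)    # 1, 2, 4, 8, ... seconds, capped at max_wait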

It seems like no APRS data was lost or missed - it was just collected in a buffer and processed once the connect attempts started working again. The following graph gives some idea of the relative processing rate changes. At the peak, about 10 megabytes of data was waiting in the buffer.

Thursday, March 18, 2010

VRRP failover - HA for the web service

This evening I've set up keepalived on the two aprs.fi front-end web servers. The program automatically manages the IP addresses of the web service. If one server goes down (due to a hardware failure, operating system or web server software hang, or for a maintenance reboot), the other server will now automatically bring up the service IP address of the first one. The fail-over happens within seconds.

The very same VRRP method is used by the routers serving the hosting network to keep the .1 default gateway address available.

There haven't been any hardware problems so far which would have made this necessary, but now I don't have to go through so much reconfiguration every time I want to reboot a server for a kernel upgrade, or take one down to add some memory. I can also shut down the web server processes on one of the servers (for a more complicated reconfiguration) and keepalived will quickly point all users to the other server.

Tuesday, January 26, 2010

The time to test has arrived

Okay, I'm now officially fed up. Fed up with my own bugs caused by the complexity of the aprs.fi software. Every now and then I change something in one corner, maybe to fix a bug or to add a little feature, and that breaks something small in another corner of the project. Because I fail to notice the bug, it might be broken for days until someone actually tells me. Sometimes it's the embedded maps, sometimes it's the Facebook integration, sometimes it's AIS feeding using one of the three methods. And usually they're broken because I changed something that was quite far from the broken part.

It's time to do some automatic testing. It's no longer feasible to manually verify that things work after making a change and before installing the software on the production servers - there are too many things to test. It takes too long, and something is easily forgotten.

Writing automatic tests in hobby projects like this one is usually not done, because it generally feels like the time spent on writing testing code is wasted - hey, I could be implementing useful features during that time. But on the other hand, once some testing infrastructure is in place, it's much quicker and safer to implement changes since it takes only one or two commands to run the test suite and to see that the change didn't break anything.

A little terminology:

Unit tests execute small parts of the code base (usually a single function/method, or a single module/unit/class). They feed stuff to that little piece of code and see that the expected results come out. They're often run before actually compiling and building the whole application. As an example, I can write a test to run the APRS packet digipeater path parser with different example paths, and check that the correct stations are identified as the igates and the digipeaters.

System tests run the complete application, feeding data from the input side (for example, APRS or AIS packets using a simulated APRS-IS or JSON AIS connection) and checking that the right stuff comes out at the output side (icons pop up on the map, updated messages show up on the generated web pages).

The open-source Ham::APRS::FAP packet parser, which is used by aprs.fi, already has a fairly complete set of unit tests. After changing something, we can just run the command "make test" and within seconds we know if the change broke any existing functionality. If you follow the previous link to CPAN, and click View Reports (on the CPAN Testers row) you'll get a nice automatically generated report from the CPAN testers network. The volunteer testers run different versions of Perl on different operating systems and hardware platforms, automatically download all new modules which are submitted to the CPAN, run the unit tests included with the modules, and send the results to the cpantesters.org web site. Thanks to them, I can happily claim that the parser works fine on 8 different operating systems (including Windows), a number of different processor architectures (including less common ones like DEC Alpha and MIPS R14000 in addition to the usual 32-bit and 64-bit Intels), and with all current versions of Perl, even though I only run it on Linux and Solaris myself.

Last Friday SP3LYR reported on the aprs.fi discussion group that negative Fahrenheit temperatures reported by an Ultimeter weather station were displayed incorrectly by aprs.fi: -1F came up as 1831.8F and 999.9C. I copied a problematic packet from the aprs.fi raw packets display and pasted it to the testing code file in the FAP sources (t/31decode-wx-ultw.t), and added a few check lines which verify the results. Sure enough, the parsed temperature was incorrect, and "make test" failed after adding a test with a low enough temperature. There were a couple of test packets in there before, but none of them had a temperature below 0 Fahrenheit.

Only after adding a test case for this bug did I start figuring out where the actual bug was. After fixing the bug the "make test" command passed and didn't complain about the wrong parsing result any more. I committed the changes to the SVN revision control system, and then installed the fixed FAP.pm module on aprs.fi. Because none of the other tests broke after the fix, I can be sure that I didn't break anything else with the fix. And because there's now a test in the unit test suite for this potential bug, I'm sure that the same bug will not accidentally reappear later.

This is called test-driven development, and it can be applied to normal feature development just as well. First write a piece of code which verifies if the new feature works, and then write the code which actually implements the functionality. When the test passes, you're done. You need to write a bit more code, but it's much more certain that the piece of code works, and won't break later on during the development cycle.
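
Here's the same workflow as in the Ultimeter example above, sketched in Python for illustration. The real tests live in the Perl FAP module's t/ directory; this little decoder and its tests are hypothetical, showing the test-first pattern rather than the actual bug or fix:

import unittest

def fahrenheit_from_ultimeter(hex_field):
    """Decode a 16-bit Ultimeter temperature field, assumed signed, in 0.1 degF units."""
    raw = int(hex_field, 16)
    if raw >= 0x8000:             # the "fix": treat the value as signed, not unsigned
        raw -= 0x10000
    return raw / 10.0

class TestUltimeterTemperature(unittest.TestCase):
    def test_below_zero_fahrenheit(self):
        # Written first, while the bug is still present - it fails until the
        # sign handling above is added.
        self.assertAlmostEqual(fahrenheit_from_ultimeter('FFF6'), -1.0)

    def test_above_zero_fahrenheit(self):
        self.assertAlmostEqual(fahrenheit_from_ultimeter('0140'), 32.0)

if __name__ == '__main__':
    unittest.main()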

None of this is news to a professional programmer. But from now on I'll try to apply this approach to this hobby project too, at least to some degree. Yesterday I added a few unit tests to the code to get started:

$ make testperl
--- perl tests ---
PERL_DL_NONLAZY=1 /usr/bin/perl \
"-MExtUtils::Command::MM" "-e" \
"test_harness(0, 'libperl', 'libperl')" \
tests/pl/*.t
tests/pl/00load-module.......ok
tests/pl/11encoding..........ok
tests/pl/20aprs-path-tids....ok
All tests successful.
Files=3, Tests=83, 1 wallclock secs ( 0.26 cusr + 0.06 csys = 0.32 CPU)


It ran tests from 3 test files, and the files contained 83 different checks in total. The first file makes sure all Perl modules compile and load. The second file tests the magic character set converter using input strings in different languages and character sets, checking that the correct UTF-8 comes out. The third one runs 24 example APRS packets through the digipeater path inspector. By comparison, the Ham::APRS::FAP module's test suite has 18 files and 1760 tests, and it's just one component being used by aprs.fi.

In the near future I'll try to implement a few system tests which automatically reinstall the whole aprs.fi software in a testing sandbox, feed some APRS and AIS data in from the different interfaces, and see that they pop up on the presented web pages after a few seconds. I want to know that the live map API works, the embedded maps and info pages load, and that the Facebook integration runs. With a single 'make test' command, in 30 seconds, before installing the new version on the servers.

But now, some laundry and cleaning up the apartment... first things first.

Monday, October 12, 2009

New bad GPS fix detector algorithm installed

I've just installed my new bad GPS fix detection algorithm. It should detect bad fixes about as well as before, but produce fewer false positives. The new algorithm looks at the previously received packets instead of the previously accepted packets, and is also slightly adaptive, taking into account more history than just the previous single accepted position.

It should work better for jets (traveling close to 1000 km/h), although during the takeoff acceleration some points might be dropped. After some initial test flights we'll be fixing that. :)

It should also better handle the case where the initial transmission happens to be somewhere far off. It seems like there are a bunch of stations which always wake up in Tokyo and then start transmitting their correct position in the US or Europe. Probably the GPS manufacturer has decided to show its office location instead of the standard 0/0 lat/lon, and either does not indicate the bad fix in the NMEA sentence, or the tracker ignores that bit of information and transmits the bad position. These should now jump to the correct position after just a couple of packets.

The algorithm also ignores positions which were sent more than 2 hours ago, so if you take an intercontinental flight and start transmitting your new position immediately, it should just work!
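
For illustration, here's a very rough Python sketch of the general idea - not the actual aprs.fi algorithm, which looks at more history and adapts - comparing a new position against recently received ones and rejecting fixes that would imply an impossible speed:

import math
import time

MAX_SPEED_KMH = 1100.0        # a bit above airliner cruise speed (my assumption)
HISTORY_WINDOW = 2 * 3600     # ignore reference positions older than 2 hours

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 6371.0 * 2 * math.asin(math.sqrt(a))

def looks_plausible(history, lat, lon, now=None):
    """history holds (timestamp, lat, lon) tuples of recently *received* packets."""
    now = now if now is not None else time.time()
    recent = [(t, la, lo) for t, la, lo in history if now - t <= HISTORY_WINDOW]
    if not recent:
        return True                                  # nothing to compare against
    t, la, lo = recent[-1]
    hours = max((now - t) / 3600.0, 1.0 / 3600.0)    # avoid dividing by zero
    return haversine_km(la, lo, lat, lon) / hours <= MAX_SPEED_KMH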

Feedback is more than welcome!

Thursday, October 8, 2009

Status and comment texts

As a little early morning exercise I've made aprs.fi show the status message in the info balloon of the current position on the real-time map, and also in the KML. Status message is shown in purple, and the comment text is shown in green.

There has been some confusion about these messages. There are three kinds of "status/comment" messages you can attach to your position. For example, SM4IVE-9 (info) is sending two of them.

The comment text is sent together with the position, in the end of the position packet. Here's an example packet with a comment text of www.sm4ive.com:

SM4IVE-9>APERXQ,WIDE2-2,qAR,SM5NRK-2:!5908.38N/01532.45E>000/000/A=000148www.sm4ive.com

The status message is sent as a separate packet which starts with a '>' character:

SM4IVE-9>APERXQ,WIDE2-2,qAR,LA6TMA-1:>{AT0B4}aprstracker-0.11-16f648

The Mic-E status message is encoded in a mic-e packet using just a few bits, and can contain one of these 8 standard messages: Off duty, En route, In service, Returning, Committed, Special, Priority, Emergency. 7 custom messages (Custom 0 to Custom 6) are also defined. All Mic-E packets contain this status message, and it only consumes a couple of bits in the message, so this requires the least bandwidth from the APRS channel. On the other hand, it can only express the few predefined values.

I would recommend using only the comment text, since it is sent in a single packet together with the position. The status message is sent in a separate packet which increases congestion.

If a status message is required (for example, if the text really needs to be so long that it doesn't fit in the comment text), the status message should not be sent too often. Certainly not as often as the position packet.

In the following photo Armi frowns upon seeing a long, static status packet:

Sunday, August 16, 2009

Arctic Sea position speculation

Quick recap: A cargo vessel called Arctic Sea (MMSI 215860000) was probably hijacked on July 24th 2009 near the eastern coast of Sweden. This was big news in northern Europe, since hijackings generally happen near the Somali coast, not over here. The ship has a Russian crew of 15, it appears to be owned by a Finnish company, and the owners of that company are of Russian origin. The Finnish media had considerable trouble trying to figure out the true owners, and the owners were really hard to interview. The ship departed from the harbor of Pietarsaari on the 22nd of July and carries 6500 tons of Finnish timber, worth about 1.3 MEUR.

The really odd thing is that the ship didn't go to the nearest Swedish port, but continued towards Africa as if nothing had happened. Very strange indeed. Either the hijackers were still on the ship, or the crew was taking part in the plot.

Latest news (Ransom demanded): BBC, CNN, YLE.

There have been a few questions about AIS positions of Arctic Sea shown on aprs.fi.

Q: Why is the track not shown for the moment of hijacking between Gotland and mainland Sweden?

A: There are no AIS receivers in the area which would directly send AIS reports to aprs.fi. These receivers are run by volunteers (thank you!), and each volunteer chooses where to submit AIS data. There is a receiver in the area, but it is submitting data to MarineTraffic only, and while MarineTraffic and aprs.fi exchange AIS data, aprs.fi is not getting the reports from all of those receivers. The Swedish maritime officials have an AIS receiver network of their own, and they've reported that the ship ran in circles and stopped for a while.

Q: Is the position shown for Saturday, 15th of August, valid?

A: Technically, it's possible, but I personally would find it very unlikely. It is easy to fake and it doesn't make any sense for the hijackers to publish their true position like this.

The position report was sent by an anonymous receiver station to MarineTraffic. It is quite easy to send fake data to MarineTraffic over the Internet, since they allow unauthenticated UDP packets containing NMEA strings to be sent to the service. aprs.fi does not allow unauthenticated UDP packets, all AIS submissions are tied to a specific receiving station using a password. Of course any one of those stations could feed us invalid positions, but at least we have some idea of the originator.

If the hijackers (or someone else) wanted to play tricks, they could also go to a shop selling marine radio equipment, buy an AIS transmitter, configure Arctic Sea's MMSI number (and other correct data) in it, give it an incorrect position by crafted NMEA strings (fake GPS receiver on the serial port of the AIS transmitter) and have it transmit the packets on the correct AIS frequency. If they've got the money and motivation to hijack ships with guns and speedboats, they've certainly got the guts to buy or steal AIS equipment. They could also grab the AIS transmitter from Arctic Sea, and take it to another position using a speedboat.

The French navy says there were 3 military vessels in the claimed position on the Bay of Biscay, heading for the Baltic sea, and they didn't see the hijacked ship. And they didn't see it on their radar, either.

The coast guard of Cape Verde claims to have seen the vessel about 800 km off the coast of Cape Verde, which is some 3600 km away from the Bay of Biscay.

In any case, this is starting to become a good plot for a movie.