Which way to go?

During a recent ridesharing trip, the driver offered a choice: my ride from the airport to my home could take an inland route of about 25 minutes, or a primarily scenic beach route that would require nearly 35 minutes of travel time. The driver was entirely willing to take whichever of the two routes I preferred.

Had I been in a hurry, I certainly would have chosen the quicker trip. It might not have been as breathtaking, but it would at least get me to my destination 10 minutes sooner. On the other hand, for the mere pittance of 10 added minutes, a trip alongside the waves and long stretches of sandy beach seemed alluring and irresistible.

Yes, I opted for the ocean view and figured that the totality of selecting the longer ride was probably better for my overall mental health and psychic well-being anyway.

The fact that I was given a choice is somewhat unusual, I would assert. Most ridesharing journeys are effectively pre-ordained by today’s marvelous GPS systems. The computer algorithms doing the routing are amazingly good at what they do. In days past, those algorithms were somewhat crude and simplistic, oftentimes basing the “best” route solely on the distance involved.

Humans know that distance is not the only determining factor of how long a driving trip will take.

You can readily pick a shorter-distance route that gets bogged down in gnarly traffic, inching along at a snail’s pace. Another route that is longer in miles driven could easily be much shorter in terms of time consumed. The point is that distance is only one perspective on the problem. What kind of traffic might be encountered? Are there lots of stop signs or traffic signals? Are the streets torn up for roadway repairs? Etc.
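To make the point concrete, here is a toy sketch with invented numbers (the distances, speeds, and stop delays are all hypothetical) showing how a shorter-distance route can still be the slower route once traffic speed and stops are factored in:

```python
# Toy illustration: estimate travel time from distance, expected speed
# under traffic, and a per-stop delay, then compare two candidate routes.

def estimated_minutes(distance_miles, avg_speed_mph, num_stops, stop_delay_min=0.5):
    """Rough travel-time estimate: driving time plus delay at each stop."""
    driving = distance_miles / avg_speed_mph * 60
    return driving + num_stops * stop_delay_min

# Route A: shorter in miles, but congested and full of traffic signals.
route_a = estimated_minutes(distance_miles=8, avg_speed_mph=15, num_stops=20)
# Route B: longer in miles, but free-flowing with only a couple of stops.
route_b = estimated_minutes(distance_miles=14, avg_speed_mph=45, num_stops=2)

print(f"Route A: {route_a:.0f} min, Route B: {route_b:.0f} min")
# The shorter-distance route A works out to roughly twice the travel time.
```

Real routing engines use far richer models (live traffic feeds, turn costs, historical speeds), but the underlying arithmetic trade-off is the same.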

Selecting the right routing can be a bit of a chore.

Of course, modern-day routing algorithms have a myriad of factors that they take into account nowadays. This means that human drivers do not need to put much thought into the routing aspects anymore. Just bring up the GPS, enter the destination, and you’ll quickly get a quite sound travel path laid out for you.


Easy peasy, as they say.

The GPS that the ridesharing driver was using for my trip from the airport did not encompass the notion of a scenic trip versus a drabber inland trek. That wasn’t part of its algorithmic repertoire. This was something the driver knew first-hand, having driven in the area for many years and living in the general vicinity. The driver also indicated that prior riders had sometimes explicitly requested a beach-going path, so he regularly brings up the matter upfront rather than waiting for passengers to ask about it.

The motivation didn’t seem to be about money, though that was perhaps on the driver’s mind. Imagine that a rider reached their destination and found out from a friend that they could have taken the scenic route, yet the driver never mentioned the possibility. The driver’s tip or rating might get dinged for the failure to be proactive.

Another result could be that by informing riders of the choice of possibilities at hand, the driver might get a larger tip. You see, even if someone selects the inland route, they are likely to be appreciative that an additional option was offered. This might spur the rider to give an extra tip. Meanwhile, if the rider does take the scenic path, the odds are ostensibly heightened that the passenger will leave a boosted tip, doing so because of the joy involved in seeing the spectacular views.

All in all, the driver seems to come out a winner.

Drivers who don’t consider providing such an option are going to lose out. They are either blissfully unaware of the alternative path or simply willing to abide by whatever the GPS indicates. Since the rider might also have GPS mapping available, the driver might figure that it is easiest to drive exactly as the GPS instructs; otherwise, the rider might question what kind of underhanded trickery the driver is up to.

That’s why this particular driver was seemingly astute enough to ask.

Had the driver taken the scenic route without mentioning the rerouting to me, I might have gotten suspicious. We’ve all experienced those shady taxi rides wherein the driver goes halfway around town to stretch out the trip, running up the meter astronomically. A ridesharing driver doesn’t want to land in that type of dicey predicament and possibly lose their privileges with the ridesharing network that dispatched them.

Anyway, this specific example dealt with the choice of going inland versus taking an ocean view route. There are certainly other criteria or characteristics that could lead to a similar situation. In other words, there might be something about a GPS proposed routing that is not necessarily the end-all in terms of which way to proceed. A driver might have in their mind additional factors worthy of potential attention.

Let’s try a somewhat akin use case.

Suppose the driver told me that there were two potential routes: one that would take me through parts of town that were extremely unseemly, and another that would be an everyday route with nothing unusual or untoward along the way.

What might the unseemly categorization consist of? Perhaps some areas are covered with graffiti. Maybe there is a ton of trash all over the streets and sidewalks. Such an area might have known gang activity and frequent shootings. There might be a slew of statistics that showcase the area to be a high crime locale and altogether dangerous to be in.

If you were given such a choice, which route would you pick?

You might be wondering what the difference in driving time would be. It could be that you’d be okay with the blighted route if it cut down demonstrably on the journey time. Or you might insist that no amount of saved time would be worth driving through the sordid area. Another consideration is the time of day for the ride. If the ride is in the middle of the daytime, perhaps the risks are lessened. A ride through that part of town at, say, midnight might be an entirely different matter.
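One way to picture this trade-off is as a scoring function. The sketch below is purely hypothetical — the risk levels, weights, and night multiplier are invented for illustration — but it shows how minutes saved can be weighed against an area-risk penalty that grows after dark:

```python
# Hypothetical route scoring: travel time plus a risk penalty.
# Lower score is better. All weights are invented for illustration.

def route_score(minutes, risk_level, hour_of_day, risk_weight=1.0):
    """risk_level runs from 0.0 (benign area) to 1.0 (high-crime area)."""
    night = hour_of_day >= 22 or hour_of_day < 6
    multiplier = 3.0 if night else 1.0   # risk counts triple at night
    return minutes + risk_weight * multiplier * risk_level * 10

# Midday: blighted shortcut (20 min, risk 0.8) vs ordinary route (30 min).
midday_shortcut = route_score(20, risk_level=0.8, hour_of_day=13)
midday_ordinary = route_score(30, risk_level=0.0, hour_of_day=13)

# Midnight: the same two routes, but the risk penalty is tripled.
midnight_shortcut = route_score(20, risk_level=0.8, hour_of_day=0)
midnight_ordinary = route_score(30, risk_level=0.0, hour_of_day=0)
```

With these particular made-up weights, the shortcut wins at midday but loses at midnight — which matches the intuition that the same route choice can flip depending on the hour.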

To recap, we have ridesharing drivers that might or might not mention to passengers that there are alternative routes to a given destination. A typical computer-based GPS might not take into account all of the factors that a human driver might consider and instead optimize solely on driving time and distance. A ridesharing driver might choose a preferred route without necessarily informing the passenger or might let the passenger know what their choices are and have the rider decide which route they prefer.

To some extent, this routing question is on the shoulders of the driver.

Sure, there is the GPS system that is providing the most likely route. We can pretty much assume that by-and-large the ridesharing drivers will just go ahead with whatever the GPS indicates. No sense in fighting city hall, as it were. Furthermore, there is a slight chance that asking a passenger about their route preferences might cause some riders to get upset, feeling as though this is something the driver ought to know, and they are wasting the time of the rider in dealing with a straightforward matter.

Shifting gears, let’s consider the future of cars and the advent of self-driving cars (for my extensive coverage of self-driving cars, see the link here). The emergence of AI-based true self-driving cars will entail the use of computer-based AI driving systems. This means that there will not be a human driver at the wheel.

How might that impact this routing conundrum?

The simplest answer is that the AI driving system will strictly execute whatever routing the GPS algorithm indicates. Whatever the GPS spits out, that’s what the AI driving system is going to undertake. If this means going the inland route, so be it. If this means going the unsightly route, so be it. There won’t be any semblance of challenging what the GPS instructs.

Well, actually, that is an overly simplistic way of thinking about the situation.

There is a lot more to this.

Time to dig a bit deeper.

First, be aware that right now the automakers and self-driving tech firms are putting the preponderance of their focus toward getting a self-driving car to safely go from point A to point B. With that focus, there isn’t much concern about which way the GPS says to go. Assuming that the GPS provides drivable paths, the rest of the matter is up to the AI driving system being able to navigate and maneuver along those designated streets and highways (well, as long as the Operational Design Domain or ODD is also being observed, see my discussion at this link here).

Flash forward to the future and imagine that self-driving cars are relatively prevalent.

The odds are that self-driving cars are going to be organized into fleets. A fleet operator will have numerous self-driving cars in a particular area and make those vehicles available for ridesharing purposes. For example, having self-driving cars roaming around an airport is likely to be a smart thing to do. Flying travelers arriving at the airport are going to undoubtedly need a ride to somewhere in town.

Okay, so we’ve got a slew of self-driving cars that are meandering around near the airport, waiting for requests. The thing is, there might be numerous fleets all competing for that same passenger business. I’ve got my fleet of self-driving cars near the airport, and you’ve got yours there, and likewise so do other fleet operators.

Why would a potential passenger choose one of the self-driving cars over another?

We’ll make the somewhat obvious assumption that all of the self-driving cars are generally equal: namely, they drive at the same level of safety and are indistinguishable from one another (for my analysis about all self-driving cars being somewhat the same in certain key respects, see my column coverage). I’m not saying they are all the same models or brands of vehicles, only that from the perspective of wanting a ride, they are nearly the same as a convenient form of ground transportation (we can also assume the cost is about the same).

Presumably, I want people to select my fleet over your fleet. This makes sense since I want to try and maximize my revenue and keep my self-driving cars busy. An empty roaming self-driving car is pretty much going to be losing money while wandering. When a passenger is riding in a self-driving car, this generally means that someone is making money by that activity. The moment that a prospective passenger chooses your fleet over mine, I am saddened and there is a hole in my pocket (the money being lost to you).

I’ve got to come up with some other means to attract passengers. The vehicle itself is not going to be the differentiator.

Aha, the routing might be a handy way to make my self-driving cars more attractive.

We know that human drivers are apt to provide those tailored insights about which path to take. A standard GPS might not be imbued with those kinds of particulars. Thus, for my fleet, a specialized GPS is devised, one that leverages what human drivers know about a region and therefore can accordingly provide a wider or more introspective selection of routes.

Here’s the envisioned future. A fleet operator advertises that their self-driving cars are using a super-duper enhanced GPS routing algorithm. This algorithm will ensure that you never end up getting routed through the bad parts of town. No worries when you get into their self-driving cars since their chosen routes will always avoid the unseemly or unsightly areas.
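Mechanically, such an enhanced router is easy to imagine: tag each street segment with the zone it passes through, and add a large cost penalty to segments inside designated “avoid” zones before running an ordinary shortest-path search. The sketch below is a minimal, hypothetical version (the map, zone names, and penalty value are all invented):

```python
import heapq

# Minimal Dijkstra-style router where edges inside an "avoid" zone
# receive a large cost penalty, steering the path around those areas.

def shortest_path(graph, start, goal, avoid_zones=(), penalty=1000):
    """graph: node -> [(neighbor, minutes, zone)]. Returns (cost, path)."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, minutes, zone in graph.get(node, []):
            extra = penalty if zone in avoid_zones else 0
            heapq.heappush(queue, (cost + minutes + extra, neighbor, path + [neighbor]))
    return None  # no route found

# Hypothetical street graph: airport -> home via downtown (fast) or bypass (slow).
graph = {
    "airport":  [("downtown", 10, "zone_d"), ("bypass", 18, "zone_b")],
    "downtown": [("home", 8, "zone_d")],
    "bypass":   [("home", 12, "zone_b")],
}

fastest = shortest_path(graph, "airport", "home")                       # via downtown
avoiding = shortest_path(graph, "airport", "home", avoid_zones={"zone_d"})  # via bypass
```

The unconstrained search picks the 18-minute downtown path; with `zone_d` penalized, the router quietly switches to the 30-minute bypass — which is exactly the kind of silent, fleet-wide rerouting that raises the concerns discussed next.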

Guaranteed, or your money back.

Some would say that this is a useful and above-board way of doing things. Others, though, would decry this kind of routing and the flagrant and abominable advertising that goes with it (I’ll explain why in a moment).

Here’s then an intriguing question: Will the advent of AI-based true self-driving cars potentially include skirting around the bad parts of town, automatically so, and what makes that a potentially controversial concern?

Before we unpack the matter, it will be useful to clarify what is meant by referring to an AI-based true self-driving car.

Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones in which the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5; we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, the public needs to be forewarned about a disturbing aspect that’s been arising lately: despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

Self-Driving Cars And Routing Aspects

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

For fleet operators in charge of running a slew of self-driving cars, there is something they will especially delight in and altogether not miss. As tough as it is to say aloud, the beauty for fleet operators is that there won’t be any human drivers involved in the fleet. All the driving is undertaken by AI driving systems.

Human drivers need rest breaks. They need food breaks. They need bathroom breaks. All told, they are humans.

They also exhibit human foibles while driving. Human drivers are at times distracted. The distraction can be readily apparent, such as watching a cat video when their eyes should be on the roadway, or nearly impossible to discern, such as a driver that just had a fight with their boss and is therefore mentally preoccupied behind the wheel of a moving car.

Despite these qualms about human drivers, as per the discussion earlier, there is the possible advantage of them knowing the local area. This can make a big difference in terms of where they drive and where passengers will be taken.

These individualistic aspects are going to be winnowed out of the act of driving.

Human drivers each have their own driving mores and mannerisms. Within the bounds of driving lawfully, there is a tremendous amount of latitude. Self-driving cars of a specific brand or model will generally drive in the same manner, assuming that the street scene is the same (in other words, a self-driving car going down a street with a dog on the loose is going to drive somewhat differently from a similar self-driving car on a street without a loose dog).

If a fleet operator wants to instruct all of their human drivers to avoid a certain part of town, this is somewhat problematic to pull off, yet it is assuredly straightforward, and can be done extremely quietly, via an AI driving system.

Why does this possibility get the dander up of those who eschew the notion?

They argue that this kind of across-the-board consistency and ease of avoiding certain parts of a town or city is going to be bad for everyone. People taking rides in self-driving cars will never realize that a certain part of town is in rough shape and needs bolstering. The act of skirting that area is tantamount to denying that there are issues there that need to be dealt with and resolved. Furthermore, this could also lead to few if any riders ever getting dropped off in that part of town. This in turn could depress the business activity in that area. There is less opportunity for businesses to set up shop there, knowing that self-driving cars are not likely to be dropping people off in those locations.

In essence, the mass routing of self-driving cars to avoid a given area of town could have the adverse indirect consequence of turning that part of town into a kind of isolated desert, cornered off from the rest of the city or county that it resides within.

Some pundits point out that there might be new regulations needed to try and overcome this kind of outcome.

Perhaps there would be legally imposed restrictions such that self-driving cars could not be programmed to avoid specific parts of town. If a route to a given location went through a “bad” part of town, it could not be routed around. Whatever the optimal path is, regardless of where it goes (assuming the optimum is based solely on time and distance), that’s how the self-driving car must operate.

Others contend that this would be heavy-handed. Why place the burden on the backs of the self-driving cars and the fleet operators? If a part of town is being avoided, the city or town authorities ought to take this as a sign that they need to rectify the situation. Turn that part of town around and make it thrive, such that self-driving cars and fleet operators will gladly route into those areas.

Another idea is that the areas being avoided ought to offer incentives to the fleet operators, trying to entice self-driving cars to come into those areas. Of course, the counter-argument is that those areas might not be able to come up with such incentives, by the very fact that they are already in bad shape to begin with.


This issue will not appear until we have a greater prevalence of self-driving cars on our roadways. The number of experimental tryouts is much too small to make this kind of conundrum noticeable at this juncture of self-driving car advancement.

Some various twists and turns will further arise.

For example, consider this chilling notion.

We know and already accept that if a human driver is menaced by a pedestrian in the middle of the street that is brandishing a weapon, the car driver will likely take evasive driving action. The human driver might try to drive away from the person, or possibly even drive toward the person, using the vehicle as a means of deterring the threatening act (see my coverage of the driver that drove into a shooter, saving lives, at this link here).

Self-driving cars are presumably going to be programmed to never threaten a pedestrian. If a human opts to walk out in front of a self-driving car, we naturally hope that the AI driving system will bring the vehicle to a stop. A human driver might not do so in the case of an armed pedestrian, if the driver believes that the most prudent action entails a battle of car versus human.

The point is that when a self-driving car goes into an area that might have a lot of crime, presumably the criminals would know that the self-driving car and the AI driving system will always back down. In that case, it would potentially be relatively effortless for such crooks to try and stop and carjack those self-driving cars (or take other horrific actions).

Overall, there is going to be a lot of handwringing and new issues to be dealt with as our society experiences the advent of self-driving cars. Some mistakenly or naively believe that there is nothing societally different about whether a car is being driven by a human versus an AI system, but those holding that kind of thinking are in for a rude awakening.

Seems prudent to be considering these matters now, rather than waiting for the day that the horse has already gotten out of the barn, such that those wondrous self-driving horseless carriages have already become ubiquitous as our primary form of vehicular travel.