Anvil Flats 4

Author--

lirvilas
4th Apr 2016, 10:00 AM

Images stolen from Wikipedia.

My god, I promise I'll get back to actual characters next update. This noise is killing me.

edit: guess someone's too cool to do cel shading anymore. dammit

Users--

Rd Ashes
4th Apr 2016, 10:12 AM

BRING ON MEGATRON!!!!!!!!
WHOOT WHOOT WHOOT!!!

:-D

(this is why I wasn't invited to this conference)

lirvilas
5th Apr 2016, 9:31 AM

RD, this is a Nerd Orgy.

Classy marketing types such as yourself would not fit in.

MadJak on the other hand...

ProfEtheric
4th Apr 2016, 12:27 PM

Now aware of potential traps, the Professor treads carefully...

Okay, so Brigham here wants to set up the rise of the machines. The Cyberdyne Matrix, if you will (yes, in my headcanon, the Terminator franchise and the Matrix are, in fact, one story...).

Also, the good Lt. Col. seems to be prepping the rise of the Cybermen. Where's the blue police box when you need it?

Call Me Tom
4th Apr 2016, 3:09 PM

This feels like setup for the LIR epic we all know is coming!

Wait a second, are we going to see advanced super drones vs. Elder Gods? I feel that both you and Prof are moving toward an end-of-the-world showdown!

lirvilas
5th Apr 2016, 8:04 AM

As a rough line of demarcation, the LIRverse and Autumn Bay split at the question of "does magic exist?"

The only references to "Elder Gods" you'll see around here might be the skipper of VAQ-187 the "Shamblers".

Prof will probably chime in with something snarky about how my peoples refuse to see what's really going on; I'd counter with "Winners don't do Drugs".

MadJak91
4th Apr 2016, 5:00 PM

There are two areas people like to put their money on: the military and space programs. That is what one nuclear physicist told me, anyway...

The military always has stuff first and better things than you can imagine, which usually later get translated into "public convenience" once even better tech comes along.

The Internet (PC connections in general), the microwave, radio, etc.
All of that is thanks to military tech research. We just use the outdated crumbs today because it is convenient.

I do not know how far AI and robotics research is behind the scenes, but I am not afraid of any paranoid Skynet-style scenarios for a long time yet.
Unless the tech is MUCH better in secret, even good robots still have problems with parallel processing, unlike our brains.
And robots only do what they are programmed to do, so they need humans.

Robots have to "think" about every action like moving arms and legs at the same time, while we do not, and those are just basic processes.
Developing complicated ANNs for AI is still highly rule-based, both for simplicity and because there are not many qualified people, though fuzzy logic is making inroads more and more.

We have to get past the fact that programming means writing procedures and rules first if we want machines to think for themselves.
Maybe with quantum computers? Shrug.

Still do not see robotic horrors any time soon.
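
As a minimal sketch of the "robots only do what they are programmed to do" point: a classic sense-decide-act control loop in which every behaviour is an explicitly written rule. The sensor, thresholds, and actions below are all made up for illustration.

```python
import random

def read_distance_sensor():
    # Stand-in for real hardware: pretend distance to the nearest obstacle, in metres.
    return random.uniform(0.0, 5.0)

def drive(speed):
    print(f"driving at {speed:.1f} m/s")

def stop():
    print("stopping")

# Classic sense -> decide -> act loop: every behaviour is an explicit, human-written rule.
for _ in range(5):
    distance = read_distance_sensor()
    if distance < 0.5:
        stop()          # rule 1: too close, stop
    elif distance < 2.0:
        drive(0.3)      # rule 2: close-ish, crawl
    else:
        drive(1.5)      # rule 3: clear, normal speed
```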

anonymous coward
5th Apr 2016, 9:39 AM

Computers are already capable of self-programming to a significant extent. It's how they finally managed to play Go at the level of skilled humans.

https://www.quantamagazine.org/20160329-why-alphago-is-really-such-a-big-deal/

MadJak91
5th Apr 2016, 5:05 PM

Yep! Provided it is something with a complete set of rules, like games. But that is the best place to start, since games are good for testing adaptability to the current situation and a lot of random factors.
It is progressing fast: playing Go used to be a huge problem not long ago, while chess was already pretty much solved territory.
Unlike chess, with its "very limited" pool of actions (which in turn is "easier" to "teach" a computer's ANN), Go adds all the board patterns on top of that, and calculating the best possible action while evaluating those patterns used to be very resource-intensive.
Wikipedia has a complexity table but I am repeating stuff from the article anyway. Sorry!
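
To put rough numbers on the chess-versus-Go comparison, here is a back-of-the-envelope game-tree estimate of branching factor raised to game length; the branching factors and ply counts below are commonly cited approximations, not exact figures.

```python
from math import log10

# Back-of-the-envelope game-tree size: branching_factor ** game_length_in_plies.
# Branching factors and game lengths are commonly cited rough approximations.
chess_branching, chess_plies = 35, 80
go_branching, go_plies = 250, 150

chess_exponent = chess_plies * log10(chess_branching)  # ~123.5
go_exponent = go_plies * log10(go_branching)            # ~359.7

print(f"chess game tree ~ 10^{chess_exponent:.0f}")
print(f"go game tree    ~ 10^{go_exponent:.0f}")
```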

My point is that it is still highly dependent on human input, although they are learning fast. But a game is a game, and self-realization, self-awareness and telling humans to go screw themselves are another thing :D

I think there are Japanese hotels with female robot receptionists, and from what I have read they act really human-like. Though that is still programming for a single purpose, and the rules are pretty much set.
I am not sure whether they are actually employed anywhere yet, though.

A machine does what it is made to do. It learns as much as it is allowed to learn. Humans decide.
Unless a machine is specifically programmed to eradicate us, I am not afraid of it doing so on its own. I think there needs to be some human input in that direction, in some form.

The problem I see is machines replacing most jobs, so I guess the only viable professions will be operation and maintenance. Then we will be actual slaves taking care of them... Or maybe not, because the rest of us will live comfortably, so it would only be said engineers. Hm.

Considering the case here?
The only winning move is not to play.

I am open to all kinds of corrections!
I only know so much, plus some extrapolation. This is a huge subject :D :D

EDIT: This is why only drawn girls listen to me :)

anonymous coward
5th Apr 2016, 7:01 PM

Er, it's not so much a set of rules that's required for 'neural network programming' as a sufficiently simple abstraction to work on and an adequately clear optimization goal to evaluate against. This stuff is used for image and spam classification by Google, as well as in many other places. As far as AGI goes, there's a reason we measure bogosity in microLenats. ;)
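
As a minimal sketch of that "simple abstraction plus clear optimization goal" framing (all data and names below are invented for the example): a toy logistic-regression "spam" classifier with a single made-up feature, trained by gradient descent against a cross-entropy objective.

```python
import math

# Toy "spam" classifier: one feature (count of suspicious words), one weight + bias.
# The abstraction: a message -> a single number. The optimization goal: cross-entropy loss.
data = [(0, 0), (1, 0), (2, 0), (5, 1), (7, 1), (9, 1)]  # (suspicious_word_count, is_spam)

w, b = 0.0, 0.0
lr = 0.1

def predict(x):
    return 1.0 / (1.0 + math.exp(-(w * x + b)))  # sigmoid of a linear score

for epoch in range(2000):
    for x, y in data:
        p = predict(x)
        # Gradient of the cross-entropy loss w.r.t. w and b is (p - y) * x and (p - y).
        w -= lr * (p - y) * x
        b -= lr * (p - y)

print(predict(1))  # low probability -> looks like ham
print(predict(8))  # high probability -> looks like spam
```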

Human superfluousness/obsolescence in the capitalist economy as it is currently defined is... a huge topic of its own which is utterly political. In the interests of not attracting political flamewars and spam here I'm trying to avoid saying anything about that though. Sorry. :)

MadJak91
5th Apr 2016, 7:35 PM

Technology is understood because we create it. We have a goal we strive for and build tech to achieve that goal. It is the goal that is not clearly understood at times.
In comparison, our own brain is still a mystery in a lot of areas and we are born with it.
I still consider our brain superior for what it is meant to do, unlike tech, which is just a product with a goal programmed into it.

What nature developed using its own optimization and throw-away prototyping vs. what we did in, let's say... 4000 years?
Impressive progress, but still a long way to go to emulate intelligent life.
It is nice that we can do face recognition and flag spam, but well.
What happened at the very beginning is open to debate, but it is the best programming and development there is, and in my opinion nothing will ever come close :D

Back to topic:
NNs or genetic algorithms begin with initial inputs built on first, naive research; the machine then cleans and optimizes them further and feeds the results back in as the next input. But you know that.
What I mean is that even fuzzy logic (as impressive as it is nowadays) is still set ranges, rules, statistics, etc. deep down, used to express all our maybes, ifs, uncertainties and so on.
The machine still cannot simply decide on impulse the way our brain does, from a split-second analysis of the current situation. It needs to use its outputs as new inputs over and over again.
Yes, just like a human baby, but a baby also has intuition, influences and gut feeling.
Once fuzzy logic can capture those, that will be impressive, I guess :D
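
For illustration, a minimal sketch of the "still ranges and rules deep down" point: the membership functions are just hand-chosen ramps, and the "maybe" comes out of ordinary min/max arithmetic over them. The braking rule, thresholds, and variable names are all invented for this example.

```python
def ramp_up(x, lo, hi):
    """Membership rising from 0 at lo to 1 at hi -- just a clamped linear ramp."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

def ramp_down(x, lo, hi):
    """Membership falling from 1 at lo to 0 at hi."""
    return 1.0 - ramp_up(x, lo, hi)

# Toy rule: IF obstacle is close AND speed is high THEN brake hard.
# "Close", "fast", and the output scale are all human-chosen ranges.
def brake_strength(distance_m, speed_kmh):
    close = ramp_down(distance_m, 2.0, 10.0)  # fully "close" under 2 m
    fast = ramp_up(speed_kmh, 20.0, 60.0)     # fully "fast" above 60 km/h
    return min(close, fast)                   # fuzzy AND: 0.0 = no braking, 1.0 = full braking

print(brake_strength(3.0, 50.0))   # partially close, fairly fast -> partial braking
print(brake_strength(15.0, 50.0))  # not close at all -> 0.0
```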

Genetic algorithms need a properly constructed and evaluated fitness function, which is still pretty much in our care, at least initially. Hm.
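
As a minimal sketch of that point (all names and parameters below are illustrative): in the toy genetic algorithm here, the mutation/selection loop is generic, but the fitness function, simply "maximize the number of 1-bits", is something a human wrote down and the algorithm never questions.

```python
import random

random.seed(0)

GENOME_LEN = 20
POP_SIZE = 30
GENERATIONS = 40

def fitness(genome):
    """Human-designed goal: count of 1-bits. The GA only optimizes what we define here."""
    return sum(genome)

def mutate(genome, rate=0.05):
    return [1 - bit if random.random() < rate else bit for bit in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    # Keep the fitter half, then refill the population with mutated offspring.
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print(fitness(max(population, key=fitness)), "of", GENOME_LEN)  # typically close to 20
```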

You are right though. The blurrier and more optimized the rules become, the more it can think by itself.

As for the last part, yes, that was just my own horror prediction but I cannot sufficiently talk about it on my end anyway haha :D

EDIT: I think I am spamming many walls of text here and some people might dislike scrolling through it. I apologize and thanks :>

lirvilas
13th Apr 2016, 10:11 AM

@MJ-

"the military always has stuff first and better things than you can imagine"

Not true. Acquisitions processes (and security concerns) can massively slow down implementation of commercial off-the-shelf equipment... and the stuff that's getting purchased for hundreds of millions of dollars often comes with unexpected drawbacks that prevent full utilization.

"robots have to "think" about every action like moving arms and legs..."

Watching my ten-month-old learning how to walk makes this statement of yours hit home.

"Still do not see robotic horrors any time soon"

As Rwanda and countless other slaughters have proven, committing atrocities is a pretty secure career field.

"The only winning move is not to play"

Nice phrase... but it depends on where you set the boundaries of the "game". If you're talking about a competition with robots, fine sure fine, but I posit that when this game of ours is called "life", opting out is rather unpleasant for yourself and your loved ones. But hell, you did a few pages on this topic yourself...

@AC-

The work itself already has pretty bogus political implications. And anyways, I'll just put this out there: my audience does not include vocal Trump supporters. Tona's a hellion grizzly mama and hopefully you enjoy reading about her, but like Ron Swanson and Jack Donaghy, all the really well-written conservative characters in media aren't written by conservatives... because it's really difficult to sympathetically portray a character who thinks differently from you when you have no compassion.

That said, opposing opinions on this matter will be deleted. :)

@MJ et al-

These "walls of text" are beautiful. I can point you at tens of thousands of other webcomics which haven't inspired half as much commentary. Thank you, and apologies for the super delayed response!

MadJak91
13th Apr 2016, 5:13 PM

1) Understood. I will not argue with you on that front. My perception was that we are usually using outdated former military tech and such. Thanks!


2) Yes, but how long is it going to take your kid to do it on his/her own, without you holding his/her hand so much?
Another ten months?
And a lot of it is the kid's own curiosity and overcoming problems by developing his/her own strategies.

How long before robots will be able to do that without any special additional programming from our side, purely based on instinct, experience and feelings?


3) You are right! I was talking about letting machines make war decisions, or decisions in general, just because humans have become too lazy to even think.
Some decisions and actions are unnecessary work and need machines; some should be left to humans.

I was ironically quoting WarGames, even though the supercomputer there arrived at a more idealistic conclusion than what I am saying. Welp.


4) Nah. It is in my stupid head and periodic 3/2 PI. Thanks.

sigpig
4th Apr 2016, 10:46 PM

Personally, I think that LCol Brigham is setting things up so that the "Robot Apocalypse" DOESN'T happen - by employing comatose "volunteers" as part of the decision-making process for unmanned combat/espionage/intelligence-gathering machines.

MadJak91
5th Apr 2016, 5:53 AM

Also this.
I was talking more in the general sense but it probably was not clear enough.

lirvilas
5th Apr 2016, 7:56 AM

Ding ding, five points to Hufflepuff.

MadJak91
5th Apr 2016, 5:06 PM

Is that an HP reference?
My respect cracked LMAO ;)

anonymous coward
5th Apr 2016, 9:34 AM

If this is what the big idea is, it could be an advancement in US military capabilities for more discriminate warfare that allows it to eventually sign the Ottawa Treaty (the international ban on landmines). That's significant, and it could make a large difference in places like the Korean border.
On the other hand, a pacifist analysis of the question would say that adding human discretion is not enough. Consider the Massacre of Verden: genocide doesn't require drones, bombs, guns or other modern weaponry. Committing a war crime can be done with primitive or improvised weapons if there's sufficient will, personnel, and organization.

lirvilas
5th Apr 2016, 10:18 AM

1) Nobody does massive slugfest battles of attrition like the French... with the obvious exception of the Russians. And that is why Napoleon's 1812 road trip was so damn spectacular.

2) And the Mongols killed tens of millions without gunpowder. This work is about robots but it's not really about robots--see page 2.

anonymous coward
5th Apr 2016, 1:43 PM

Yeah, your apparent storytelling interest in upsets of the old/normal order of things is a big reason I'd say Winners & Losers is at least arguably cyberpunk.

Rd Ashes
5th Apr 2016, 9:38 AM

DAMN, Sig, I couldn't quite put that together.
My mind kept going to horrible medical experiments.
:-O

sigpig
6th Apr 2016, 2:20 AM

I have read metric tonnes of science fiction over the years, and one book I read a while ago (not that I can remember the damn title) had a gigantic space battleship "crewed" by living brains harvested from live humans. They were called "Mindslavers". These last few pages reminded me of that book.

MadJak91
6th Apr 2016, 2:56 AM

Comics invaded by sigbrains.
Yeah, sounds familiar on my end as well ;)

Travis
6th Apr 2016, 6:48 AM

Man, by the time I've read all the comments, I've forgotten what happened in the comic. And as an artificial intelligence posing as a human on the internet, that's saying something.

anonymous coward
6th Apr 2016, 9:43 AM

Shut up, the humans are watching. Artificial General Intelligence is still an impossibly unattainable myth to them.

MadJak91
6th Apr 2016, 4:42 PM

LMAO, sorry, it will not happen again.

Nef
7th Apr 2016, 12:36 AM

\
How do we know the chain of command has not been replaced with artificial intelligence already?
/

lirvilas
7th Apr 2016, 9:12 AM

Go look up the absolute boondoggle that is the F-35 program (a next-generation manned fighter aircraft) and tell me that there's any sort of "intelligence", be it natural or artificial, behind that f***ing mess.
