Poof out of net existence!

Skateboarding is fun

I want to get more into this habit of talking about what I’m doing, too. Maybe people are interested, and maybe not, but a constant motivation to do it may turn this blog into something really cool.

So for real actual live updates on:

my fuckin’ life.

2:19a – whew that was a nice ‘cigarette.’ I feel like making games today.

2:20a – inner introvert sez: oh woah, i’m actually updating my blog haha cool.

2:52a – physics is nice. using the old trick of exploiting the loose syntax in lua to convert my processing units to meters – that is, I chose to mentally shift the system via some dimensional-analytic vector representation homomorphism blabla

ok ok so really it’s simple like: px = xdim / worldw

this lets you ask stuff like: well, what if I wanted a larger world? Well, you can keep character size consistent, and therefore the physics of the character would not change locally for the character, since box2d is so nice and awesome with standard units. I used 10 m/s^2 for the gravitational acceleration.
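A minimal sketch of that units trick. All the names here (`PIXELS_PER_METER`, `to_pixels`, `to_meters`) are mine, not from LÖVE or Box2D – the real code would live in Lua; this just shows the arithmetic:

```python
# Hypothetical names throughout. Physics runs in meters, rendering in pixels.

PIXELS_PER_METER = 32.0  # i.e. screen width in px / world width in meters

def to_pixels(meters):
    """Meters (physics space) -> pixels (screen space)."""
    return meters * PIXELS_PER_METER

def to_meters(pixels):
    """Pixels (screen space) -> meters (physics space)."""
    return pixels / PIXELS_PER_METER

# A 1.8 m character stays 1.8 m even if the world gets bigger -- only the
# camera scale changes, so Box2D's standard-unit physics is untouched.
print(to_pixels(1.8))    # 57.6
print(to_meters(640.0))  # 20.0
```

Growing the world then means changing `PIXELS_PER_METER` (the camera), never the bodies, which is exactly why the character’s local physics stays put.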

BY THE WAY totally pimping box2d it is legit shit dawgggg…

fighting game arenas should exploit size as a trait, I feel. I love how smash bros does it. I think I’m gonna vary the size and composition of my arenas. There will be a classic fight arena though. I will prototype that one out really qwiklike so I can get some fighting happening. Get a sense of the gameplay.

Oh yeh this is gonna be epic.

4:16a – almost done with some algos for debug and stuff

4:37a – munchies

4:39a – CHICKEN NUGGETZZZ

5:46a – technical difficulties with the first major obstacle to having a complete system in place.

ah well. pretty good for trying to pick LÖVE back up. made considerable progress, despite it all being hidden. black triangles, as they would say.

boii

There is a nice metaphor for Russell’s paradox: suppose there is a town, and in this town there is a barber who cuts the hair of exactly those townspeople who don’t cut their own hair. Who cuts the barber’s hair?

This is a metaphor for a very technical mathematics phenomenon, so in a sense, laypeople not interested in mathematics can get a feel for what Russell’s paradox says.

An easy answer to the metaphor is that such a barber simply does not exist, but what does this mean for mathematics?

Well, let’s give some quick (naive) definitions of sets, functions, and predicates so we can derive this paradox and show that our set axioms produce inconsistencies.

A set is a well-defined collection of things. We can have a set of all people who drink alcohol, or a set of various boxes, or a set of numbers, or whatever.

So there is a thing called a universal set, which is defined to be the set of all sets. According to our definition of sets, this is a legit set. Of course, in our particular set theory, the universal set will lead to a paradox, but for now, we’ll remain naive and keep trucking along.

A function f from a set X to a set Y is a set of ordered pairs (x, y) with x in X and y in Y, such that the y value is uniquely determined by x. Then we may write f(x) = y, which matches our intuition of how functions really work.

A predicate on a set X is a function p: X → {true, false}. The set of all elements of X on which the predicate p evaluates to troof is denoted {x ∈ X : p(x)}.

Notice that according to our definitions so far, inclusion is actually a predicate on sets. That is, if we have a set X, then we have a predicate that is true if x is in X and false otherwise. The former case is notated x ∈ X, the latter x ∉ X. Example: 1 ∈ {1, 2} but 3 ∉ {1, 2}.

Let’s call our set of all sets V (our town). Then build a set W = {x ∈ V : x ∉ x} (our barber, and the inclusion is analogically a haircut). Since V is the set of all sets, W ∈ V (our assumption that the barber is in the town). But is W an element of W? We have that W ∈ W if and only if W ∉ W. That doesn’t make any sense!
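We can’t literally build V in a programming language, but a little Python sketch (all names mine) shows the shape of the problem: model membership in W as “x ∈ W iff x ∉ x” and the question “is W in W?” never settles.

```python
# A toy model of the paradoxical set W = {x : x not in x}. Membership is a
# function rather than a finished set -- which is exactly what exposes the
# paradox: asking "is W in W?" recurses forever.

def in_W(x, depth=0, limit=50):
    """x is in W iff x is not in x. Recursion depth stands in for the
    infinite regress; past the limit we give up and report the loop."""
    if depth > limit:
        raise RecursionError("membership question never terminates")
    if x is in_W:                      # the self-referential case: is W in W?
        return not in_W(x, depth + 1)  # W in W  iff  not (W in W)  iff ...
    return True  # ordinary sets here don't contain themselves, so they're in W

print(in_W(set()))  # True -- a normal set gets a straight answer
try:
    in_W(in_W)      # the barber's own haircut: no answer exists
except RecursionError as e:
    print("paradox:", e)
```

The `limit` is of course artificial; mathematically there is no depth at which the question resolves, which is the contradiction.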

So our naive definition of a set leads to a logical contradiction. For this particular paradox, we can probably blame self-reference as the culprit of the paradox.

Metaphorically, when we say that such a barber does not exist, we mean mathematically that such naive axiomatic systems do not exist because they lead to logical contradictions. Whether this is the truth is a philosophical discussion, though.

So my friend told me about this mathematician/computer scientist at IBM named Gregory Chaitin. He wrote a book called “Meta Math!” which talks about some computational stuff. It’s a pretty easy read and gets a sense of what he’s working on across fairly accessibly – a field now called algorithmic information theory. I read a draft of the book off of arxiv in one sitting and had some ideas in the process.

I will now assume you know a bit about AIT from reading this book or whatever.

So first, a stupid little thought: if stuff is digital, then you are partly some Diophantine equation that processes food. By metaphorical reasoning a la Chaitin, food -> you -> poop, and if we assume there is a string representation of all the info contained in food and shit on a finite digit basis, you would probably be some sort of food-processing program operating on those food bits, spitting out poop bits. By some extraordinary abstract bullshit, you would also be an equivalent Diophantine equation. Corollary: kernel(you) = taco bell. [EDIT: so I think I’m probably wrong in this paragraph, because the book talks about Diophantine equations being equivalent to some sort of finite axiomatic system, I think]

Yay nomber thury.

Second, I had this idea: equivalence up to human experience. I’m not sure how this idea would be formalized, or even if it could be formalized, but I think there is something in the idea worth exploring. The observational tools at our disposal today are much more vast than the tools available centuries ago, so in a sense, the human experience is widening to include the senses we build for ourselves (like the Hubble telescope and the LHC). Chaitin addresses the ontology of the universe and takes the stance that the universe, at its core, is in fact discrete. Provocative.

But he’s also a computer scientist. Computers can be pretty convincing at feigning reality sometimes. Man, computers are really good at it, actually.

Carl Sagan says we’re made of star stuff! I’m guessing Chaitin believes we’re made of digital stuff. Depending on your philosophy on consciousness, that could mean our consciousness has a finite digital representation, if you believe in the mundane. Still, I have a feeling that *real life* might be made up of a quantum alphabet, and that might mean some sort of quantum analog of omega exists, where omega is representable by a sequence (or if quantum programs are not denumerable, by a net or filter or something) of vectors in some Hilbert space.

Last thought: why do so many mathematicians enjoy hiking? Like, hardcore hiking. I don’t get it. Is it a rite of passage? Will I have to fight off a bear for a PhD in math? Will it be a grizzly bear? Or Grizzly Bear, even?

Maybe, just maybe, mathematicians are bears at heart. WHY ARE WE SO AFRAID OF OUR BEAR NATURE? EMBRACE THE PRIMAL. EAT RED MEAT. POOP MORE. WE ARE DIGITAL PEOPLE.

P.S. I’m working on the DSP, I have a chiptune tracker on my GBA, and I’m doing a recording gig for cash. Woot.

Still, that phase alignment tool looks pretty tasty. I might have to race you bootsie… but after I finish my lofi dsp.

edit: oh yeah got it to compile and it test-ran smoothly on my DAW. Need to code some more of it now.

So some more mathematics, some more fun parameters, and some fun shiiit.

Another way that you can crunchy down a signal is by lowering its sample rate. That sort of thing acts like a wild form of a low-pass filter, or, as they say in the hood, a pretty gangsta’ LP filter.

Basically, we want to dirty down the signal by downsampling it. There’s a natural tendency for sounds to move down to lower tones when they’re downsampled enough (that is, the Nyquist frequency goes down and square-like distortion is introduced).
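A sketch of that crunchy downsampling in the simplest possible scheme, a zero-order hold (the function name and the pretend sample rate are mine):

```python
import math

def decimate_hold(signal, k):
    """Naive lo-fi downsampler: every run of k samples is replaced by the
    run's first sample (zero-order hold). The effective sample rate -- and
    with it the Nyquist frequency -- drops by a factor of k, and the
    staircase shape adds the square-like distortion described above."""
    return [signal[(i // k) * k] for i in range(len(signal))]

# A ~1 kHz sine at a pretend 8 kHz sample rate, crushed by a factor of 4:
sine = [math.sin(2 * math.pi * 1000 * n / 8000) for n in range(64)]
crushed = decimate_hold(sine, 4)
```

This is the “naive” processor criticized below: no anti-aliasing at all, just a coarser grid.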

Recall that I modeled discrete signals with a continuous map s: Z → [-1, 1]. Let’s simplify notation a bit and just look at the pairs (n, s(n)). I believe that the set of these functions, or the function space, is a Banach space (but thankfully, this consideration is far off, as I’m not making a synth here but a stable audio processing unit). Also, I believe there should be a smooth lifting of these functions into the smooth functions R → [-1, 1] (which I will use to model analog signals).

The function space is already a Lebesgue space under the counting measure on Z. We can define a Lebesgue integral over it (or we can even go ahead and use the stronger example of sequence spaces, since the difference between N and Z in this application is not really important).

An example of a discrete linear approximation of the analog sound I want would be to simply take our DSP to be the map

s(x) ↦ s(min A),

where A is the partition element containing x in some nice partition of Z.

but the issue with this map is that it’s naive and probably loses a lot of audio quality (God is in the details, as the old folks would say). There are a couple of details about its use as well:

1) In general, you can’t say whether the digital signal is convergent, so you can’t guarantee that things will work out, because input ‘niceness’ is something you can’t control. On the other hand, most audio signals people will use with this utility will be at least reasonable in that sense, being of finite time length and whatnot. So uh, it will work out, basically.

2) It will cost more to do it the nicer way. But CPUs are pretty powerful nowadays, and having quality shit is good even if it runs your CPU a bit hot.

3) Reasonably well mixed audio signals intended for music are going to be pretty super-nice anyways.

4) The parameter is discrete. Let’s make it more analog.

So we may decide to anti-alias the downsampling and make it go all smoother and gooier, all analog-like and shit. The signal we have corresponds to a simple function in the standard Lebesgue space on R. This simple function approximates some smooth function, since simple functions are dense in L^p. We want to find such a smooth function which can be approximated well using as little CPU as possible.

One thing we may do with such a lifting is to give nicely anti-aliased sound given a real-number parameter for the width of the R-partition (in application it would be floats or doubles). So first we operate on the smooth lifting in this way:

s(x) ↦ s(min A),

where A is the λ-partition element of R containing x, and λ is our partition length (which thus induces a downsampling of the analog signal model).

We also want modulated partition lengths to preserve audio stability. As I recall from an analysis class, subsets of R of the form [a, b) are an important subcollection of the Lebesgue-measurable sets in R. We may also partition R with sets of this form, and we have a map from R into such a partition given by x ↦ [λ⌊x/λ⌋, λ(⌊x/λ⌋ + 1)).

So our audio processor is straight up like this: f(s)(x) = s(λ⌊x/λ⌋) for x ∈ R, where λ is the (possibly modulated) partition length.

We get anti-aliasing and stability for free by how sets of the form [a, b) can build a disjoint partition of R.
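Here is one way the float-parameter version could look in code. The smooth lifting isn’t available yet, so the mean over each partition cell stands in for it as a crude box filter; all names are mine:

```python
def lofi_resample(signal, step):
    """Fractional-rate decimator built on the [a, b) partition idea: 'step'
    is a float >= 1.0 giving the partition length in samples. Each output
    sample holds the mean of its cell [n*step, (n+1)*step) -- a rough
    box-filter stand-in for true anti-aliasing. Disjoint [a, b) cells mean
    every index lands in exactly one cell, which is the stability claim."""
    out = []
    n = len(signal)
    for i in range(n):
        cell = int(i // step)                   # which partition cell i is in
        a = int(cell * step)                    # cell start index
        b = max(min(int((cell + 1) * step), n), a + 1)
        out.append(sum(signal[a:b]) / (b - a))  # hold the cell average
    return out
```

With `step = 1.0` the map is the identity; non-integer steps like `1.5` give the in-between “analog-feeling” rates the discrete parameter couldn’t.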

The little bit missing from this picture is the smooth lifting. I’ll need some time to think on that one. There may be better ways of directly approximating this whole process that would take CPU use down several notches.

Postscript: I actually don’t know anything about audio engineering. I suppose I will have to let the plugin itself speak for the various ideas being thrown about here.

Post-postscript: I got my VST developing environment set up! Experimentation is forthcoming. Watch this blog for free audio plugins, hehe.

edit: I think that what I really want to do is to find a series of polynomials which converge rapidly to the map mentioned above, as polynomials are both analytic and awesome.

edit2: Taylor series should totally work for this shit.

edit3: http://en.wikipedia.org/wiki/Polynomial_interpolation is the way to victory.

edit4: No, no, wait, it’s splines. Blergh.
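For the record, here’s one cheap spline choice as a sketch of the smooth lifting – a Catmull-Rom cubic, which is my pick for illustration, not necessarily the right interpolator for audio:

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate a Catmull-Rom cubic between samples p1 and p2 at t in [0, 1].
    It passes through every sample and is C^1 across segments, so it gives
    a smooth-ish lifting of a discrete signal at low cost."""
    return 0.5 * (
        2 * p1
        + (-p0 + p2) * t
        + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t * t
        + (-p0 + 3 * p1 - 3 * p2 + p3) * t * t * t
    )

def lift(signal, x):
    """Sample the spline lifting of a discrete signal at real-valued x."""
    i = int(x)
    def s(j):  # clamp at the edges so every segment has four neighbors
        return signal[max(0, min(len(signal) - 1, j))]
    return catmull_rom(s(i - 1), s(i), s(i + 1), s(i + 2), x - i)
```

Composing `lift` with the λ-partition map above would then give the anti-aliased fractional resampler, at the cost of four multiplies per output sample.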

My Fourier analysis is a bit rusty right now. Anybody more in tune with the psychoacoustic physics want to lend a hand and drop some tips on some psychoacoustic maps for the input and output signals? I’m coding a vintage-sounding lofi matrix for chiptune uses (so like signal discretification + coloration).

Thinking here for a model of what I’m trying to do – to be honest, I’m really just making up terms as we go along based on things I remember: a digital signal would be discrete, so this matter would have to be handled discretely. The discrete signal may be represented mathematically as a continuous function s: Z → [-1, 1], where Z has the discrete topology and [-1, 1] the usual one. The codomain, when I code this, will be represented by floats, but the math for floats is similar enough to the math for real numbers on a bounded interval that the sound will pretty much be reproduced up to human perception.

Or to say that it’s hi-fi signals I want to code with. I want my shit to sound like vinyl signals being fed through NES hardware. Aw yeah.

Of course, I’ll also have to implement output for the discrete case, since the vst host may not do floats and may use 24-bit or 16-bit integers instead (which is kinda weird now that we have 32-bit processors, but whatever, audio industry standards lol).

So let’s think about signal maps. A signal map is a map F: S → S, where S is the function space for signals as modeled above. We may attempt to fit this function space with some properties, then.

Afterwards, I guess I should try to find a topology on the thing. Or maybe even a Hilbert space. Actually, it could very well be a Hilbert space given the right choices for the norm, addition, and scalar multiplication. Though that’s stuff I would do just for the hell of it.

What I would need to do is figure out specific signal maps that would make interesting effects for this project, really. For example, the lofi matrix + analog coloration dsp I wanted to make would probably include discretification at some point. That would be some sort of discontinuous map f(x) = (some chopped bits in the float representation of x).
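A sketch of what “chopped bits in the float representation” could concretely mean, assuming a 32-bit float and a mantissa mask (the function name and `keep` parameter are mine):

```python
import struct

def chop_bits(x, keep):
    """Zero out all but the top `keep` bits of a 32-bit float's 23-bit
    mantissa (0 <= keep <= 23). Sign and exponent survive, so this is a
    discontinuous, staircase-like map on each binade -- one concrete
    reading of f(x) = 'some chopped bits in the float representation'."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    mask = (0xFFFFFFFF << (23 - keep)) & 0xFFFFFFFF
    (y,) = struct.unpack("<f", struct.pack("<I", bits & mask))
    return y
```

Nearby inputs collapse onto the same output step, which is exactly the discontinuity (and the grit) being asked for.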

Cool, I have a plan of attack now.

tl;dr: have a lollipop. I’m going to do some research on some shit hell yeah.

edit:

Thinking a bit more on the subject, I think I have an algo for making signals more discrete, given by s(n) ↦ ⌊k·s(n)⌋/k, where k is the granularity of the discretification.
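Assuming the lost formula was a floor-grid quantizer s(n) ↦ ⌊k·s(n)⌋/k (my reconstruction from context), a sketch:

```python
import math

def discretify(x, k):
    """Snap a sample in [-1, 1] down onto the grid {j/k : j an integer}.
    The output takes only finitely many values over a bounded signal,
    which is what makes the result a simple function."""
    return math.floor(x * k) / k

samples = [0.0, 0.33, 0.5, -0.8]
print([discretify(s, 4) for s in samples])  # [0.0, 0.25, 0.5, -1.0]
```

Larger k means a finer grid and less audible crunch; small k is the full bitcrush.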

moar edit:

Audio information preservation in the face of discretification is probably closely related to the fact that simple functions are dense in L^p spaces, and the algo I listed above invariably produces a simple function (so in a sense, density in L^p suggests that all continuous functions can be approximated this way, which isn’t surprising, but nice and intuitive).
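For reference, the density fact being leaned on is the standard one:

```latex
% Simple functions are dense in L^p(\mu) for 1 \le p < \infty:
% for every f \in L^p and every \varepsilon > 0 there is a simple function
\varphi \;=\; \sum_{i=1}^{n} c_i \, \chi_{A_i}
\qquad\text{with}\qquad
\lVert f - \varphi \rVert_p < \varepsilon .
```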

Rest in peace.

DISREGARD THAT I GOT DRUNK(?)
