Thursday 9 November 2017

Moving Average Signal Processing


There are a number of indicators and mathematical models that are widely accepted and used by some trading software (even MetaStock), such as MAMA, the Hilbert Transform, the Fisher Transform (as substitutes for the FFT), the Homodyne Discriminator, the Hilbert Sine Wave, the Instantaneous Trendline and so on, invented by John Ehlers. But that is about it. I have never heard of anyone other than John Ehlers studying this area. Do you think it is worth learning digital signal processing? After all, every trade is a signal, and bar charts are some filtered form of those signals. Does this make sense? asked Feb 15 '11 at 20:46

Wavelets are just one form of basis decomposition. Wavelets in particular decompose in both frequency and time and are therefore more useful than Fourier or other purely frequency-based decompositions. There are other time-frequency decompositions (for example the HHT) that should be explored as well.

Decomposing a price series is useful for understanding the primary movement within the series. In general, with a decomposition, the original signal is the sum of its basis components (potentially with some scaling multiplier). The components range from the lowest frequency (a straight line through the sample) up to the highest frequency, a curve oscillating at a maximum frequency approaching N/2. How is this useful? For denoising a series, for determining the main component of movement within the series, and for determining pivots.

Denoising is accomplished by recomposing the series, summing the components from the decomposition minus the last, highest-frequency components. This denoised (or filtered) series, if chosen well, often gives a view of the underlying price process. Assuming continuation in the same direction, it can be used to extrapolate a short period ahead. As the time series ticks in real time, you can watch how the denoised (or filtered) price process changes to determine whether a price move in a different direction is significant or just noise.

One of the keys, though, is determining how many levels of the decomposition to recompose in a given situation. Too few levels (low frequency) means the recomposed price series responds very slowly to events. Too many levels (high frequency) means a fast response but perhaps too much noise in some price regimes. Since the market switches between sideways movement and momentum moves, a filtering process needs to adapt to the regime, becoming more or less sensitive to movements when projecting a curve. There are many ways to evaluate this, such as comparing the power of the filtered series against the power of the raw price series, aiming at a certain level for a given regime.

Assuming you have successfully employed wavelet or other decompositions to produce a suitably smooth and responsive signal, you can take its derivative and use it to detect minima and maxima as the price series progresses. You need a basis with good behavior at the endpoint, so that the slope of the curve at the endpoint projects in an appropriate direction. The basis must give consistent results at the endpoint as the time series ticks, and it must not be positionally biased. Unfortunately, I am not aware of any wavelet basis that avoids these problems. There are some other bases that can be chosen that do better.

Conclusion: if you want to pursue wavelets and build trading rules around them, expect to do a lot of research. You may also find that even though the concept is good, you will need to explore other decomposition bases to get the behavior you want. I do not use decompositions for trading decisions, but I have found them useful for determining market regime and for other backward-looking measures.
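The denoise-by-recomposition idea in this answer can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the answerer's code: it assumes the PyWavelets package (pywt) and uses a synthetic series as a stand-in for real prices; how many detail levels to drop is exactly the regime-dependent choice discussed above.

    # Minimal sketch of wavelet denoising of a price series (assumes PyWavelets).
    import numpy as np
    import pywt

    np.random.seed(0)
    # Synthetic "price" series: a slow movement plus noise (stand-in for real data).
    t = np.linspace(0, 1, 512)
    price = 100 + 5 * np.sin(2 * np.pi * 3 * t) + np.random.normal(0, 0.8, t.size)

    # Decompose into one approximation plus several detail levels.
    coeffs = pywt.wavedec(price, 'db4', level=5)

    # Zero out the two highest-frequency detail levels before recomposing.
    denoised_coeffs = coeffs.copy()
    for i in (-1, -2):
        denoised_coeffs[i] = np.zeros_like(denoised_coeffs[i])

    denoised = pywt.waverec(denoised_coeffs, 'db4')[:price.size]
    print(denoised[:5])

Dropping fewer or more detail levels trades responsiveness against noise, which is the regime-adaptation problem described in the answer.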
You need to investigate how interpolation methods differ from extrapolation methods. It is easy to build a model that repeats the past (almost any interpolation scheme will do the trick). The problem is that such a model is typically useless when it comes to extrapolating into the future. When you hear (or see) the word cycles, a red flag should go up. Dig into the application of the Fourier integral, Fourier series, Fourier transform and so on, and you will discover that with enough frequencies you can represent any time series well enough that most retail traders can be convinced it works. The problem is that it has no predictive power whatsoever. The reason Fourier methods are useful in engineering/DSP is that the signal (voltage, current, temperature, whatever) typically repeats within the circuit or machine in which it was generated. As a result, interpolating then becomes correlated with extrapolating. In case you are working with R, here is some hacky code to try:

Cycle analysis and signal processing might be useful for seasonal patterns, but without knowing more about the performance of such an approach to trading I would not consider a degree in signal processing just for trading. You would want to be happy applying what you learn to standard engineering-type problems, because that may be what you end up stuck doing if it does not work well enough for trading. answered Feb 15 '11 at 22:10

DSP and time-series analysis are the same thing. DSP uses engineering jargon and time-series analysis uses mathematical jargon, but the models are very similar. Ehlers' Cyber Cycle indicator is an ARMA(3,2). Ehlers does have some unique ideas, for example: what is the meaning of the phase of a random variable? answered Feb 26 '11 at 05:04

Forget all these so-called technical indicators. They are nonsense, especially if you do not know how to use them. My advice: buy a good wavelet book and create your own strategy. answered Feb 16 '11 at 2:52

Hi Fred, which wavelet book did you use? Can you recommend a title? – MisterH Mar 28 '11 at 11:26

An Introduction to Wavelets and Other Filtering Methods in Finance and Economics by Ramazan Gencay, Faruk Selcuk and Brandon Whitcher – RockScience Mar 29 '11 at 2:15

I have found John Ehlers' Fisher Transform quite useful as a futures indicator, particularly on Heikin-Ashi tick charts. I rely on it for my strategy, but I do not think it is reliable enough to base an entire automated system on by itself, because it has not proven dependable on choppy days; it can be very useful on trending days like today, though. (I'd be happy to post a chart to illustrate, but I don't have the necessary reputation.) answered Mar 22 '13 at 20:47
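One of the answers above refers to a "hacky" R snippet that is not reproduced in this copy. As an illustrative stand-in only (Python, not the original code), here is a sketch of the point being made: with enough sinusoids you can fit a non-periodic series almost perfectly in sample, yet the fit merely repeats itself out of sample.

    # Illustrative sketch (not the original R code): fit a random-walk "price"
    # with many sinusoids, then note that the fit only interpolates the past.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 256
    price = np.cumsum(rng.normal(0, 1, n))  # random walk, no genuine cycles
    t = np.arange(n)

    # Design matrix of sines and cosines at many frequencies plus a constant.
    k = np.arange(1, 41)
    X = np.column_stack([np.ones(n)] +
                        [f(2 * np.pi * ki * t / n) for ki in k for f in (np.sin, np.cos)])
    beta, *_ = np.linalg.lstsq(X, price, rcond=None)

    fit = X @ beta
    print("in-sample RMS error:", np.sqrt(np.mean((fit - price) ** 2)))
    # Extending these same sinusoids past t = n just repeats the fitted pattern
    # periodically -- it has no predictive power for a non-periodic series.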
Signal Processing Fundamentals
Dennis Bohn, Rane. RaneNote 134, written 1997, last revised 5/04.

Screaming to be heard: in space, no one can hear you scream... because there is no air or other medium for the sound to travel through. Sound needs a medium, an intervening substance through which it can travel from point to point; it must be carried on something. That something can be solid, liquid or gas. Can they hear you scream underwater? Briefly. Water is a medium. Air is a medium. Nightclub walls are a medium.

Sound travels through air by rapidly changing the air pressure relative to its normal value (atmospheric pressure). Sound is a disturbance in the surrounding medium, a vibration that spreads out from the source, creating a series of expanding shells of high pressure and low pressure: high pressure, low pressure, high pressure, low pressure. Always moving outward, these alternating pressure zones travel until they finally dissipate, or reflect off surfaces (nightclub walls), or pass through boundaries, or get absorbed -- usually a combination of all three.

Left unobstructed, sound travels outward, but not forever. The air (or other medium) steals a little of the sound's power as it passes. The price of passage: the medium absorbs its energy. This loss of power is experienced as a reduction in loudness (the word volume is used to describe how loud it is from moment to moment) as the signal travels away from its source. The intensity of the signal drops to one quarter for each doubling of the distance from the source, which means it is 6 dB less loud every time you double your distance from it. This is known as the inverse square law, since the decrease is inversely proportional to the square of the distance traveled: for example, 2 times the distance equals 1/4 the intensity, and so on.

How do you create a sound, and how do you capture a sound? We do both using opposite sides of the same electromagnetic coin. Electricity and magnetism are kin: pass a coil of wire through a magnetic field and electricity is generated within the coil. Flip the coin over: pass electricity through a coil of wire and a magnetic field is generated. Move the magnet, get a voltage; apply a voltage, create a magnet. This is the essence of all electromechanical devices. Microphones and loudspeakers are electromechanical devices. At their heart is a coil of wire (the voice coil) and a magnet (the magnet). Speaking causes sound vibrations to travel outward from the mouth. Speaking into a moving-coil (aka dynamic) microphone causes the coil to move within a magnetic field. That develops a voltage, and a current, proportional to the sound -- sound has been captured. At the other end of the chain, a voltage applied to the loudspeaker's voice coil causes a current to flow, which produces a magnetic field that makes the cone move in proportion to the applied audio signal -- sound has been created. The microphone translates sound into an electrical signal, and the loudspeaker translates an electrical signal into sound. One captures, the other creates. Everything in between is just details. And in case you were wondering: yes, turned around, a microphone can be a loudspeaker (it makes teeny tiny sounds), and a loudspeaker can be a microphone (if you scream really loud).
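The 6 dB-per-doubling figure above follows directly from the inverse square law; here is a quick check of the arithmetic, assuming simple free-field spreading:

    # Inverse square law: intensity falls as 1/r^2, so the level change in dB
    # between distances r1 and r2 (free field) is 20*log10(r1/r2).
    import math

    def level_change_db(r1, r2):
        return 20 * math.log10(r1 / r2)

    print(level_change_db(1.0, 2.0))  # doubling the distance: about -6.0 dB
    print(level_change_db(1.0, 4.0))  # four times the distance: about -12.0 dB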
Crossovers: Simply Dividing. Loudspeaker crossovers are a necessary evil. A different universe, a different set of physics, and maybe we could have what we want: one loudspeaker that does it all, a loudspeaker that reproduces all audio frequencies equally well, without any distortion, at loudness levels adequate for whatever venue we play. Well, we live here, and our physics does not allow that extravagance. The hard truth is that no single loudspeaker can do it all. We need at least two -- more if we can afford them. A woofer and a tweeter: a big woofer for the lows and a little tweeter for the highs. This is known as a 2-way system. (Refer to the accompanying diagrams for the following discussions.)

But with two loudspeakers, the correct frequencies must be routed (or crossed over) to each driver. At its simplest, a crossover is a passive network. A passive network is one that does not need a power supply to operate -- if it has a line cord, or runs down batteries, then it is not a passive circuit. The simplest passive crossover consists of only two components: a capacitor connected to the high-frequency driver and an inductor (aka a coil) connected to the low-frequency driver. A capacitor is an electronic component that passes high frequencies (the passband) and blocks low frequencies (the stopband); an inductor does just the opposite: it passes low frequencies and blocks high frequencies. But as the frequency changes, neither component reacts suddenly. They do it gradually: they slowly begin to pass (or stop passing) their respective frequencies. The rate at which this happens is called the crossover slope. It is measured in dB per octave, abbreviated dB/octave. The slope increases or decreases by so many dB/octave.

At the simplest level, each component gives you a 6 dB/octave slope (a physical fact of our universe). Again at the simplest level, adding more components increases the slope in 6 dB increments, creating slopes of 12 dB/oct, 18 dB/oct, 24 dB/oct, and so on. The number of components, or 6 dB slope increments, is called the order of the crossover. A 4th-order crossover therefore has (at least) four components and produces steep slopes of 24 dB/octave. The steeper the better for most drivers, since loudspeakers only perform well over a certain band of frequencies; beyond that they misbehave, sometimes badly. Steep slopes prevent those frequencies from reaching the driver.

You can combine capacitors and inductors to create a third path that removes the highest highs and the lowest lows and forms a mid-frequency crossover section. This is naturally called a 3-way system. (See diagram.) The "mid" section forms a bandpass filter, since it passes only a specific band of frequencies. Note from the diagram that the terms high-frequency passband and low-frequency passband are often shortened to just high-pass and low-pass. A 3-way system lets you optimize each driver for a narrow band of frequencies, producing better overall sound.
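For the simplest first-order (6 dB/octave) passive crossover described above, the usual textbook values for an idealized resistive driver of impedance R and crossover frequency f are C = 1/(2*pi*f*R) for the high-frequency leg and L = R/(2*pi*f) for the low-frequency leg. The sketch below is only that idealization (real driver impedances vary with frequency, as the note explains later); the function name and the 2 kHz / 8 ohm numbers are illustrative.

    # First-order passive crossover values, assuming an idealized resistive load.
    # C = 1 / (2*pi*f*R)  feeds the high-frequency driver (capacitor)
    # L = R / (2*pi*f)    feeds the low-frequency driver (inductor)
    import math

    def first_order_crossover(f_hz, r_ohms):
        c_farads = 1.0 / (2 * math.pi * f_hz * r_ohms)
        l_henries = r_ohms / (2 * math.pi * f_hz)
        return c_farads, l_henries

    C, L = first_order_crossover(2000.0, 8.0)  # 2 kHz crossover, 8 ohm drivers
    print(f"C = {C * 1e6:.2f} uF, L = {L * 1e3:.2f} mH")  # about 9.95 uF and 0.64 mH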
So why not use passive boxes? The single biggest problem is that one passive cabinet (or a pair) usually won't play loudly and cleanly enough for large spaces. If the sound system is for your bedroom or garage, passive systems would work fine -- maybe even better. But once you try to fill a relatively large space with equally loud sound, you begin to understand the problems. And it doesn't take a stadium, just a normal club-sized room. It is really difficult to produce the required loudness with passive boxes.

Life would be a lot easier if you could just give everyone their own amp and cans -- say, a bunch of HC 4 or HC 6 headphone amplifiers scattered throughout the audience. Let them do the work; then everyone could hear equally well and choose their own listening level. But life is hard, and headphone amplifiers must be restricted to practice and recording. Monitor loudspeakers, on the other hand, most likely do have passive crossovers. Again, it is a matter of distance and loudness. Monitors are usually close and not excessively loud -- too loud and they will feed back into the microphones or be heard along with the main mix: not good. Monitor loudspeakers are similar to hi-fi loudspeakers, where passive designs dominate because of the relatively small listening areas. It is fairly easy to fill small listening rooms with pristine sound, even at deafening levels. But move those same loudspeakers into your local club and they will sound thin, dull and lifeless. Not only will they not play loud enough, but they may need the sonic benefit of sound bouncing off nearby walls to reinforce and fill out the direct sound. In large venues, those walls are too far away to benefit anyone.

Figure 1. Passive 2-way crossover. Figure 2. Passive 3-way crossover.

So why not use a bunch of passive boxes? You can, and some people do. However, for reasons that follow, it only works for a couple of cabinets. Even so, you won't be able to get high loudness levels if the room is large. Passive systems can only be optimized so far. Once you start needing multiple cabinets, active crossovers become necessary. To get good coverage of like frequencies, you want to stack like drivers. Using passive boxes prevents this, since each one contains (at least) a high-frequency driver and a low-frequency driver. It is simplest to put together a sound system when each cabinet covers only one frequency range. For example, for a nice-sounding 3-way system you would have low-frequency boxes (the biggest ones), then medium-sized mid-frequency boxes, and finally the smallest high-frequency boxes. These would be stacked or flown, or both -- in some sort of array. A loudspeaker array is the optimal stacking arrangement of each set of cabinets to give the best combined coverage and overall sound. You've no doubt seen many different array shapes: tall towers, tall walls, and all sorts of polyhedra and arcs. The only effective way to do this is with active crossovers. Some smaller systems combine active and passive boxes.
Even within a single cabinet it is common to find an active crossover used to separate the mid- and low-frequency drivers, while a built-in passive network is used for the high-frequency driver. This is especially common for super tweeters operating over the last audible octave. At the other end, an active crossover is often used to add a subwoofer to a passive 2-way system. All combinations are used, but whenever a passive crossover shows up, so do problems.

One of them is power loss. Passive networks waste precious power. The extra power needed to make the drivers louder is instead dropped across the components and comes out of the box as heat -- not sound. Passive boxes therefore make you buy a bigger amplifier.

A couple of other passive-network problems have to do with impedance. Impedance restricts the transfer of power, like resistance, only it is frequency sensitive. For the passive network to work exactly right, the source impedance (the amplifier's output impedance plus the cable) must be as close to zero as possible and not frequency dependent, and the load impedance (the loudspeaker's characteristics) must be fixed and not frequency dependent (sorry, not in this universe -- only in Star Trek). Since these things are not possible, the passive network can be, at best, a simplified and compromised solution to a very complex problem. Consequently, the crossover behavior changes with frequency -- not something you want in a good sound system.

One last thing, as if that were not enough. There is something called back-EMF (back electromotive force, literally back-voltage) that further contributes to poor-sounding loudspeaker systems. This is the phenomenon where, after the signal stops, the loudspeaker cone keeps moving, causing the voice coil to move through the magnetic field (now acting like a microphone), creating a new voltage that tries to drive the cable back toward the amplifier's output. If the loudspeaker is allowed to do this, the cone flops around like a dying fish. It does not sound good. The only way to stop back-EMF is to make the loudspeaker "see" a dead short, i.e., zero ohms looking back, or as close to it as possible -- something that is not going to happen with a passive network hanging between it and the power amplifier.

All this, not to mention that inductors saturate at high signal levels, causing distortion -- another reason you cannot get enough loudness. Or the extra weight and bulk of the large inductors needed for good low-frequency response. Or that it is nearly impossible to get high-quality steep slopes passively, so the response suffers. Or that inductors are all too good at picking up local radio, TV, emergency and cellular broadcasts and happily mixing them into your audio. Such is life with passive loudspeaker systems.

Figure 3. Active 2-way crossover. Figure 4. Active 3-way crossover.

Active Crossover Networks. Active crossovers require a power supply to operate and usually come packaged in single-space, rack-mount units. (Although lately, powered loudspeakers with built-in active crossovers and power amplifiers are becoming increasingly popular.)
A look at the accompanying diagram shows how active crossovers differ from their passive cousins. For a 2-way system, instead of one power amplifier you now have two, but they can be smaller for the same loudness level. How much smaller depends on the sensitivity rating of the drivers (more on this later). Likewise, a 3-way system requires three power amplifiers. You will also see and hear the terms bi-amped and tri-amped applied to 2-way and 3-way systems.

Active crossovers cure many ills of passive systems. Because the crossover filters themselves are tucked away safely inside their own box, away from the driving and load impedance problems plaguing passive units, they can be made to operate in a nearly mathematically perfect manner. Extremely steep, smooth, well-behaved crossover slopes are easily achieved by active circuits. There are no amplifier power-loss problems, since active circuits run off their own low-voltage power supplies. And with the inefficiencies of the passive network removed, the power amps more easily reach the required loudness levels. Loudspeaker jitters and shakes caused by inadequately damped back-EMF all but disappear once the passive network is removed. What remains is the amplifier's inherent output impedance and that of the connecting wire. This is where the term damping factor comes up. Note that the word is damping, not dampening as is so often heard; impress your friends. Damping is a measure of a system's ability to control the motion of the loudspeaker cone after the signal disappears. No more dying fish.

Siegfried & Russ. Active crossovers go by many names. First, they are either 2-way or 3-way (or even 4-way and 5-way). Then there is the slope rate and order: 24 dB/oct (4th-order), or 18 dB/oct (3rd-order), and so on. Finally there is a name for the type of design. The two most common are Linkwitz-Riley and Butterworth, named after Siegfried Linkwitz and Russ Riley, who first proposed this application, and Stephen Butterworth, who first described the response in 1930. Until the mid-'80s the 3rd-order (18 dB/oct) Butterworth design dominated, but it still had some problems. Since then, development (introduced by Rane and Sundholm) of the 4th-order (24 dB/oct) Linkwitz-Riley design has solved those problems, and today it is the norm.

What this all adds up to is that active crossovers are the rule. Luckily, the hardest thing about an active crossover is getting the money to buy one. After that, most of the work is already done for you. At the most basic level, all you really need from an active crossover are two things: it lets you set the correct crossover point, and it lets you balance the driver levels. That's it. The first is done by consulting the loudspeaker manufacturer's data sheet and dialing it in on the front panel. (This assumes a factory-built 2-way loudspeaker cabinet, for example. If the box is home-built, then both drivers must be carefully selected to have the same crossover frequency; otherwise a serious response problem can result.)
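Digitally, a 4th-order Linkwitz-Riley crossover is often built as two cascaded 2nd-order Butterworth sections per band. The sketch below (assuming numpy and scipy, with an arbitrary 2 kHz crossover and 48 kHz sample rate) simply sanity-checks the textbook behavior: each band is about -6 dB at the crossover frequency and the two bands sum flat. It is an illustration only, not a description of any Rane unit.

    # 4th-order Linkwitz-Riley crossover as cascaded 2nd-order Butterworth filters.
    import numpy as np
    from scipy import signal

    fs = 48000.0      # sample rate (illustrative)
    fc = 2000.0       # crossover frequency (illustrative)

    # Two identical 2nd-order Butterworth sections cascaded = LR4 (24 dB/oct).
    sos_lp = np.vstack([signal.butter(2, fc, 'lowpass', fs=fs, output='sos')] * 2)
    sos_hp = np.vstack([signal.butter(2, fc, 'highpass', fs=fs, output='sos')] * 2)

    w, h_lp = signal.sosfreqz(sos_lp, worN=2048, fs=fs)
    _, h_hp = signal.sosfreqz(sos_hp, worN=2048, fs=fs)

    i = np.argmin(np.abs(w - fc))
    print("low-pass level at fc:", 20 * np.log10(abs(h_lp[i])))          # about -6 dB
    print("summed response at fc:", 20 * np.log10(abs(h_lp[i] + h_hp[i])))  # about 0 dB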
Balancing levels is necessary because high-frequency drivers are more efficient than low-frequency drivers. This means that if you put the same amount of power into each driver, one will sound louder than the other. Whichever is more efficient plays louder. Various methods for balancing the drivers are described in any good owner's manual.

Equalizers. You may have heard it said that equalizers are nothing more than glorified tone controls. That is fairly accurate, and it helps explain their usefulness and importance. Simply put, equalizers let you change the tonal balance of whatever you are controlling. You can increase (boost) or decrease (cut) just the frequencies you want, on a band-by-band basis. Equalizers come in all sizes and shapes, varying greatly in design and complexity. Select from a simple single-channel unit with 10 controls on 1-octave frequency spacing (a mono 10-band octave equalizer), all the way up to a full-featured two-channel box with 31 controls on 1/3-octave frequency spacing (a stereo 1/3-octave equalizer). There are graphic models with slide controls (sliders) whose positions roughly "graph" the equalizer's frequency response, and there are parametric models where you choose the desired frequency, amplitude and bandwidth (the filter parameters -- see the figure below) for each band provided. By far the simplest and most popular are the 1/3- and 2/3-octave graphics. They offer the best combination of control, complexity and cost.
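One common way to realize a single boost/cut band of the sort described above (a chosen center frequency, amplitude and bandwidth) is a peaking biquad filter in the well-known "Audio EQ Cookbook" form. This is a generic digital illustration, not Rane's analog constant-Q circuitry; the function name, sample rate and settings are made-up examples.

    # Peaking (boost/cut) biquad coefficients, Audio EQ Cookbook style.
    # f0: center frequency, gain_db: boost (+) or cut (-), q: bandwidth control.
    import math

    def peaking_biquad(fs, f0, gain_db, q):
        a = 10 ** (gain_db / 40.0)
        w0 = 2 * math.pi * f0 / fs
        alpha = math.sin(w0) / (2 * q)
        b = [1 + alpha * a, -2 * math.cos(w0), 1 - alpha * a]
        a_coeffs = [1 + alpha / a, -2 * math.cos(w0), 1 - alpha / a]
        # Normalize so the leading denominator coefficient is 1.
        return [bi / a_coeffs[0] for bi in b], [ai / a_coeffs[0] for ai in a_coeffs]

    b, a = peaking_biquad(fs=48000, f0=1000, gain_db=6.0, q=1.4)  # +6 dB band at 1 kHz
    print(b, a)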
When selecting graphic equalizers, the main features to consider are the number of input/output channels, the number of boost/cut bands, the center-frequency spacing of each, and the accuracy of the output versus the front panel settings. Until the recent development of true-response graphics, the front panel settings only approximated the equalizer's actual response: adjacent-band interaction caused the actual output response to deviate from the front panel settings. Described as either constant-Q or variable-Q (see diagrams), the individual band filter behavior determines the interaction. In the early '80s, Rane developed the first constant-Q designs, which keep the same shape (bandwidth) over the entire boost/cut range. In contrast, variable-Q designs have different bandwidths (the shape changes) as a function of the amount of boost/cut. Rane's constant-Q design offered a big improvement in output response relative to the front panel settings and became the most popular design until Rane and others developed the first true-response graphic equalizers. Now true-response graphics offer the best response.

Using Equalizers. Equalizers can do wonders for a sound system. Let's begin with loudspeaker performance. An unfortunate truth about budget loudspeakers is that they do not sound very good. Usually this is due to an uneven frequency response, or more correctly a non-flat power response. An ideal cabinet has a flat power response. This means that if you pick, say, 1 kHz as a reference signal, use it to drive the loudspeaker with exactly one watt, measure the loudness, and then sweep the generator over the loudspeaker's entire frequency range, all frequencies will measure equally loud.

Unfortunately, with all but the most expensive loudspeaker systems, they will not. Equalizers can help with these frequency deficiencies. By adding a little here and taking away a little there, you soon create an acceptable power response -- and a much better sounding system. It is amazing how just a little equalization can change a poor-sounding system into something quite decent. The best way to deal with budget loudspeakers -- even though it costs more -- is to commit one channel of equalization to each cabinet. This becomes a marriage: the equalizer is set, a security cover is bolted on, and forevermore they are inseparable. (Use additional equalizers to help with room problems.)

And now the hardest part, but the most important part: if you make your measurements outdoors (no reflections from walls or ceiling) and up in the air (no reflections from the ground), you can get a very accurate picture of the loudspeaker's response alone, free of room effects. This gives you the room-independent response. This matters because no matter what space the box is used in, it still has these problems. Of course, you need to make sure that the cost of the budget loudspeaker plus the equalizer adds up to a lot less than buying a truly flat loudspeaker system in the first place. Luckily (or should that be unfortunately), this is usually the case. Again, the truth is that most cabinets are not flat. Only the very expensive loudspeakers have world-class responses. (Hmmm... maybe that is why they cost so much.)

The next thing you can do with equalizers is improve the way each venue sounds. Every room sounds different -- fact of life, fact of physics. Using exactly the same equipment, playing exactly the same music in exactly the same way, different rooms sound different -- guaranteed. Every enclosed space treats sound differently. Reflected sound causes problems. What the audience hears is made up of direct sound (what travels straight from the loudspeaker to the listener) and reflected sound (what bounces off everything before reaching the listener). And if the room is big enough, reverberation comes into play: all the reflected sound that has traveled so far, and for a (relatively) long time, that it arrives and re-arrives at the listener delayed enough to sound like a second and third source, or even an echo if the room is very large.

It is basically a geometry problem. Every room differs in its dimensions -- not just its length-by-width footprint, but its ceiling height, the distance from you and your equipment to the audience, what is hung (or not hung) on the walls, how many windows and doors there are, and where. Every detail about the space affects your sound. And unfortunately, there is precious little you can do about most of it. You certainly cannot change the dimensions or move the doors and windows. But there are a few things you can do, and equalization is one of them. Before you equalize, though, you want to optimize how and where the loudspeakers are placed. This is probably the number one item to take care of.
Position the loudspeakers out in the open whenever possible. Remove all obstructions between the loudspeakers and the audience, including banners, stage equipment and performers. What you want is for most of the sound the audience hears to come directly from the loudspeakers. You want to minimize all reflected sound. If you have done a good job selecting and equalizing your loudspeakers, then you already know your direct sound is good. So what is left is to minimize the reflected sound.

Next, use equalization to help with some of the room's most annoying characteristics. If the room is unusually bright, you can boost the low end to help compensate, or roll off some of the highs. Or, if the room tends to be boomy, you can tone down the low end to reduce the resonance.

Another place EQ is very effective is in controlling annoying feedback tones. Feedback is that terrible screech or howl sound systems make when the audio from the loudspeaker gets picked up by one of the stage microphones, re-amplified and pumped out of the loudspeaker, only to be picked up again by the microphone, re-amplified, and so on. Most often this happens when the system is playing loud. That makes sense, because for softer sounds the signal either is not big enough to make it back to the microphone, or if it does, it is too small to build up. The problem is an out-of-control, closed-loop, positive-feedback system building up until something breaks, or the audience leaves. Using your equalizer to cut the frequencies that want to howl not only stops the screeching but also lets the system play louder. The technical phrase for this is maximizing system gain before feedback.

It is important to understand from the start that you cannot fix room-related audio problems with equalization, but you can move the trouble spots around. You can rearrange things sonically, which helps tame the excesses. You win by making it sound better. Equalization helps.

Figure 5. Bandpass filter parameters. Figure 6. Variable-Q graphic. Figure 7. Constant-Q graphic.

Equalizers are also useful for augmenting your instrument or voice. With practice, you learn to use your equalizer to improve the sound toward your own personal expression: deepen the lows, fill in the middle, or exaggerate the highs -- whatever you want. Just as an equalizer can improve the sound of a poor loudspeaker, it can improve the sound of a marginal microphone, or enhance any musical instrument. Equalizers give you that something extra, that edge. (We all know where "radio voices" really come from.)

Seeing Audio. To make loudspeaker and sound system measurements easy, you need a real-time analyzer (RTA). An RTA lets you see the power response, not only of the loudspeaker but, more importantly, of the whole system. Stand-alone RTAs use an array of LEDs or an LCD to display the response. A built-in pink noise generator (a special kind of shaped noise containing all audible frequencies, optimized for measuring sound systems) is used as the test signal. A measuring microphone is included for sampling the response. The display is arranged to show amplitude versus frequency.
Depending upon cost, the number of frequency columns varies from 10 on 1-octave centers up to 31 on 1/3-octave centers (agreeing with graphic equalizers). Amplitude range and precision vary with price. With the cost of laptop computers tumbling, the latest form of RTA involves an accessory box and software that works with your computer. These are particularly nice, and loaded with special memory, calculations and multipurpose functions, such as also being an elaborate SPL meter. Highly recommended if the budget allows.

Dynamic Controllers. Dynamic controllers or processors represent a class of signal processing devices used to alter an audio signal based solely upon its frequency content and amplitude level, thus the term "dynamic", since the processing is completely program dependent. The two most common dynamic effects are compressors and expanders, with limiters and noise gates (or just "gates") being special cases of these. The dynamic range of an audio passage is the ratio of the loudest (undistorted) signal to the quietest (just audible) signal, expressed in dB. Usually the maximum output signal is restricted by the size of the power supplies (you cannot swing more voltage than is available), while the minimum output signal is fixed by the noise floor (you cannot put out an audible signal less than the noise). Professional-grade analog signal processing equipment can output maximum levels of 26 dBu, with the best noise floors being down around -94 dBu. This gives a maximum dynamic range of 120 dB (equivalent to 20-bit digital audio) -- a pretty impressive number, but very difficult to work with. Thus were born dynamic processors.

Compressors. Compressors are signal processing units used to reduce (compress) the dynamic range of the signal passing through them. The modern use for compressors is to turn down just the loudest signals, dynamically. For instance, an input dynamic range of 110 dB might pass through a compressor and exit with a new dynamic range of 70 dB. This clever bit of processing is normally done using a VCA (voltage controlled amplifier) whose gain is determined by a control voltage derived from the input signal. Therefore, whenever the input signal exceeds the threshold point, the control voltage becomes proportional to the signal's dynamic content. This lets the music peaks turn down the gain. Before compressors, a human did this at the mixing board, and we called it gain-riding: this person literally turned down the gain anytime it got too loud for the system to handle.

You need to reduce the dynamic range because extreme ranges of dynamic material are very difficult for sound systems to handle. If you turn it up as loud as you want for the average signals, then along come these huge musical peaks, which are vital to the punch and drama of the music, yet are way too large for the power amps and loudspeakers to handle. Either the power amps clip, or the loudspeakers bottom out (reach their travel limits), or both -- and the system sounds terrible. Or, going the other way, if you set the system gain to prevent these overload occurrences, then when things get nice and quiet, and the vocals drop real low, nobody can hear a thing. It's always something. So you buy a compressor. Using it is quite simple: set a threshold point, above which everything will be turned down a certain amount, and then select a ratio defining just how much a "certain amount" is.
All audio below the threshold point is unaffected, and all audio above this point is compressed by the ratio amount. The earlier example of reducing 110 dB to 70 dB requires a ratio setting of 1.6:1 (110/70 = 1.6). The key to understanding compressors is to always think in terms of increasing level changes in dB above the threshold point. A compressor makes these increases smaller. From our example, for every 1.6 dB increase above the threshold point the output only increases 1 dB. In this regard compressors make loud sounds quieter: if the sound gets louder by 1.6 dB and the output only increases by 1 dB, then the loud sound has been made quieter.

Some compressors include attack and release controls. The attack time is the amount of time that passes between the moment the input signal exceeds the threshold and the moment that the gain is actually reduced. The release time is just the opposite -- the amount of time that passes between the moment the input signal drops below the threshold and the moment that the gain is restored. These controls are very difficult to set, and yet once set, rarely need changing. Because of this difficulty, and the terrible-sounding consequences of wrong settings, Rane correctly presets these controls to cover a wide variety of music and speech -- one less thing for you to worry about.

System overload is not the only place we find compressors. Another popular use is in the making of sound. For example, when used in conjunction with microphones and musical instrument pick-ups, compressors help determine the final timbre (tone) by selectively compressing specific frequencies and waveforms. Common examples are "fattening" drum sounds, increasing guitar sustain, vocal "smoothing," and "bringing up" specific sounds out of the mix, etc. It is quite amazing what a little compression can do. Check your owner's manual for more tips.

Figure 8. Gate/Expander/Compressor/Limiter Action

Expanders are signal processing units used to increase (expand) the dynamic range of the signal passing through them. However, modern expanders operate only below the set threshold point; that is, they operate only on low-level audio. Operating in this manner, they make the quiet parts quieter. The term downward expander or downward expansion evolved to describe this type of application. The most common use is noise reduction. For example, say an expander's threshold level is set to be just below the quietest vocal level being recorded, and the ratio control is set for 2:1. What happens is this: when the vocals stop, the signal level drops below the set point, down to the noise floor. There has been a step decrease from the smallest signal level down to the noise floor. If that step change is, say, -10 dB, then the expander's output attenuates 20 dB (i.e., due to the 2:1 ratio, a 10 dB decrease becomes a 20 dB decrease), thus resulting in a noise reduction improvement of 10 dB. It is now 10 dB quieter than it would have been without the expander.

Limiters are compressors with fixed ratios of 10:1 or greater. Here, the dynamic action prevents the audio signal from rising much above the threshold setting. For example, say the threshold is set for 16 dBu and a musical peak suddenly comes along and causes the input to jump by 10 dB to 26 dBu; the output will only increase by 1 dB to 17 dBu -- basically remaining level. Limiters find use in preventing equipment and recording media overloads. A limiter is the extreme case of compression.
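The threshold-and-ratio arithmetic in the compressor, expander and limiter examples above can be collected into one static gain law. This is a simplified sketch of the input/output relationship only (hard-knee, no attack or release timing), using the document's example numbers; the function name and structure are illustrative, not any particular product's algorithm.

    # Static input/output law (in dB) for a compressor/limiter above threshold
    # and a downward expander/gate below threshold. Timing behavior is omitted.
    def output_level_db(in_db, threshold_db, comp_ratio=1.0, exp_ratio=1.0):
        if in_db > threshold_db:
            # Above threshold: comp_ratio dB of input increase yields 1 dB out.
            return threshold_db + (in_db - threshold_db) / comp_ratio
        # Below threshold: each 1 dB of decrease becomes exp_ratio dB of decrease.
        return threshold_db + (in_db - threshold_db) * exp_ratio

    # Compressor: a 1.6:1 ratio turns a 1.6 dB rise above threshold into 1 dB.
    print(output_level_db(1.6, 0.0, comp_ratio=1.6))       # about 1.0 dB above threshold

    # Limiter: threshold 16 dBu, 10:1 ratio, a 10 dB peak raises the output ~1 dB.
    print(output_level_db(26.0, 16.0, comp_ratio=10.0))    # about 17 dBu

    # Downward expander: 2:1 below threshold, a 10 dB drop becomes a 20 dB drop.
    print(output_level_db(-10.0, 0.0, exp_ratio=2.0))      # -20 dB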
You will hear the term pumping used in conjunction with poorly designed or improperly set limiters. Pumping describes an audible problem caused by actually hearing the gain change -- it makes a kind of "pumping" sound. This is particularly a problem with limiters that operate too abruptly. Rest assured that Rane limiters are designed not to have any audible side effects.

Noise Gates. Noise gates (or gates) are expanders with fixed "infinite" downward expansion ratios. They are used extensively for controlling unwanted noise, such as preventing "open" microphones and "hot" instrument pick-ups from introducing extraneous sounds into your system. When the incoming audio signal drops below the threshold point, the gate prevents further output by reducing the gain to "zero." Typically, this means attenuating all signals by about 80 dB. Therefore, once audio drops below the threshold, the output level basically becomes the residual noise of the gate. Common terminology refers to the gate "opening" and "closing." A gate is the extreme case of downward expansion.

Just as poorly designed limiters can cause pumping, poorly designed gates can cause breathing. The term breathing describes an audible problem caused by being able to hear the noise floor of a product rise and fall, sounding a lot like the unit were "breathing." It takes careful design to get all the dynamic timing exactly right so breathing does not occur. Rane works very hard to make sure all of its dynamic processors have no audible funny business. Another popular application for noise gates is to enhance musical instrument sounds, especially percussion instruments. Correctly setting a noise gate's attack (turn-on) and release (turn-off) adds "punch," or "tightens" the percussive sound, making it more pronounced -- this is how Phil Collins gets his cool snare sound, for instance.

light and the eye

Sight is the sense organ of radiant energy. It evolved in relation to the materials that absorb, reflect or refract solar radiation. Its sense modality is light, presented in experience as luminance, color and objects in three-dimensional space. Since ancient times the eye has been an icon for our consciousness and seeing the metaphor for intelligence -- and with good biological justification. The order Primates, which includes humans, has in common binocular vision and a greatly expanded visual cortex for the processing of visual information. Vision is a primate's dominant sensory domain.

The eye is an attractive study for two reasons. It is self-contained, which means that all the pieces of the puzzle are found within a single organ. It is also a sublime mechanism, with parts that resemble a camera lens, a daylight filter, an aperture control, and a CMOS image sensor. However, the true organ of vision is not the eye but the brain. By the time it enters awareness, color is really a complex judgment experienced as a sensation. The tissue encapsulated by the eye is an outpost of brain neurons that scan the world and record the basic luminance, contrast and movement in an optical image. The large visual cortex at the back of the brain, in tandem with many other brain areas, does the work of conceptualizing and visualizing a world of colors and objects around us. The visual tasks of image enhancement and interpretation are described in the pages on the structure of vision.
This page starts with the physical attributes of light, the optical structure of the eye, the responses of photoreceptor cells to light (including the trichromatic foundation of vision and its unmistakable icon, the chromaticity diagram), and the specific ways the eye is adapted to meet the visual challenges created by the physical world.

light: the spectrum we can see

The nuclear fusion occurring within the sun produces a massive flow of radiation into space. Scientists describe this radiation both as cycles or waves in an electromagnetic field and as tiny quantum packets of energy (photons). The distance between the peaks in one cycle of an electromagnetic wave is its wavelength (symbol λ), measured in nanometers (billionths of a meter). The number of wave peaks within a standard distance is the wavenumber, the reciprocal of wavelength (1/λ), which must be multiplied by 10^7 to yield waves per centimeter. Thus, a wavelength of 500 nm equals a wavenumber of (1/500) x 10^7, or 20,000 waves per centimeter. Light waves increase in frequency (number of cycles per second) as the radiation increases in energy: short wavelength, high frequency light has roughly twice the energy of long wavelength, low frequency light. Frequency is a constant property of light at a given energy.
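The wavelength/wavenumber arithmetic above is easy to verify; here is a small sketch (plain unit conversion, no external data):

    # Wavelength (nm) -> wavenumber (waves per cm), plus a photon-energy comparison.
    def wavenumber_per_cm(wavelength_nm):
        # 1/lambda in nm^-1, times 1e7 to convert to cm^-1.
        return (1.0 / wavelength_nm) * 1e7

    print(wavenumber_per_cm(500))   # 20000.0 waves per centimeter

    # Photon energy is proportional to frequency (1/wavelength), so 400 nm light
    # carries roughly twice the energy per photon of 800 nm light.
    print((1 / 400) / (1 / 800))    # 2.0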
When light passes through a transmitting (translucent or transparent) material, the speed of light and the corresponding wavelength of light are reduced somewhat, although the frequency of the light remains unchanged. This produces the characteristic refraction or bending of light as the light waves cross the boundary between different media, such as air and water or air and glass. The ratio between the speed of light in air and its speed through a transmitting medium -- which determines the amount of bending produced in the light beam -- is the refractive index of the medium. The baseline wavelength and speed of light are usually measured in air at the earth's surface.

Describing Light & Color. Light is the electromagnetic radiation that stimulates the eye. This stimulation depends on both the energy (frequency, expressed as wavelength) and the quantity of light (number of photons).

[Figure: the visible electromagnetic spectrum; spectrum colors as produced by a diffraction grating (IR = infrared, UV = ultraviolet). For a discussion of spectral color reproduction on a computer monitor, see the Rendering Spectra page by Andrew Young.]

The figure shows the visible spectrum on a wavelength scale, roughly as it appears in sunlight reflected from a diffraction grating (such as a compact disc), which produces an equal spacing of light wavelengths. (A rainbow or glass prism produces an equal spacing of wavenumbers, which compresses the blue end of the spectrum.) Outside the visible range, electromagnetic radiation at higher energies (wavelengths shorter than 380 nanometers) is called ultraviolet and includes x-rays and gamma rays. Lower energy radiation (at wavelengths longer than about 800 nm) is called infrared or heat; at still lower frequencies (longer wavelengths) are microwaves, television waves and radio waves. Notice the very gradual falloff in luminosity at the near infrared (IR) end of the spectrum, and the relatively sharper falloff toward ultraviolet (UV). At the earth's surface, the absorbing effects of the ozone layer and lower atmosphere significantly filter short wavelength radiation below 450 nm and block all radiation below 320 nm.

In addition, most wavelengths below 500 nm are blocked from reaching the retina by the transparent yellow tint in the adult lens and a protective yellow pigment layer on the retina. But in noon daylight there is as much energy in long wavelength (heat) radiation as there is in light, so the gradual falloff in perceptible red light is due to weaker visual sensitivity at longer wavelengths. Thus, the range of light wavelengths is somewhat arbitrary. Photometric standards for the visible wavelengths at daylight levels of illumination run from 360 nm at the near ultraviolet end to 830 nm at the near infrared. However, under normal viewing conditions the effective visual limits are between 400 nm and 700 nm, as shown in most diagrams on this site. Yet it is possible to see wavelengths down to 380 nm or up to 900 nm if the light is bright enough or viewed in near darkness.

Within the spectrum, the spectral hues do not have clear boundaries, but appear to shade continuously from one hue to the next across color bands of unequal width. It is easier to locate the center of these color categories than the edges; the approximate wavelength location of the basic color categories (including cyan or blue green) is shown in the figure above. Note the fairly sharp transitions from blue to cyan and from green to yellow, and the narrow span of cyan and yellow (which can appear white in a rainbow).

I use quotation marks to refer to spectral "colors" because light itself has no color. Color is fundamentally a complex judgment experienced as a sensation. It is not an objective feature of the physical world -- but it is not an illusion, either. A single wavelength of the spectrum, or monochromatic light, seen as an isolated, bright light in a dark surround, creates the perception of a recognizable hue, but the same light wavelength can change color if it is viewed in a different context. For example, long wavelength or "red" light can, in the right setting, appear red, scarlet, crimson, pink, maroon, brown, gray or even black. Similarly, in all the diagrams or illustrations of color vision (including the chromaticity diagram), spectrum colors are only symbols for the different wavelengths of light.

Despite all that, the abstract wavelength numbers are conveniently made more interpretable by the use of standard hue categories. I've summarized below the hue terminology adopted throughout this site. It uses six primary hue categories (red, orange, yellow, green, blue and violet), with blends indicated by compound names in which the first hue is a tint or bias in the second hue: blue violet indicates a violet leaning toward blue.

[Table: spectral hue categories.] Note: hue boundaries are rounded to the nearest 10 nm. Spectral hue boundaries are arbitrary, due to the gradual blending of one hue into the next, the shifts in hue boundaries produced by luminance changes, individual differences in color perception, and language variation in the number and meaning of hue categories. "c" means the complement of the wavelength, for extraspectral hues (mixtures of orange red and blue violet light). Sources: complementary hues from Wyszecki & Stiles (1982); hue boundaries from my own CIECAM spectral hue scaling of watercolor pigments, Munsell hue categories and spectral wavelengths. These labels are only guidelines.
In addition to the context factors mentioned above, the hue boundaries will appear to shift as the luminance (brightness) of the spectrum increases; individual differences in color perception create substantial disagreement in the location of color boundaries, or the location of pure colors such as the unique hues (red, yellow, green and blue); and the location of boundaries will change with the number of hue categories used and the meaning assigned to them (especially across different languages or cultures).

Variations in Natural Light. Radiation from the sun that does get through the atmosphere and is visible to our eyes can be described in three ways:

• Sunlight is light coming directly from the sun -- the image of the sun's disk or a shaft of sunlight into a darkened room. The solar color changes significantly across the day and depends on the angle of the sun above the horizon, the altitude of the viewer above sea level, the season, the geographical location and the amount of water vapor, dust and smoke in the air. The sun itself is so brilliant that it overwhelms color vision, making color judgments unreliable, but if the noon sun were dimmed sufficiently, its color would appear a pale greenish yellow (not the deep yellow of schoolroom paintings). This color appears in the positive afterimage of sunlight reflected off a car windshield.

• Skylight refers to the blue light of the sky as viewed from a location in complete shade, for example the light entering through a north-facing window. It results from the scattering of short wavelength light by air molecules. This scattering is slightly stronger from the northern sky, opposite the generally southern origin of sunlight. The illuminance contribution of skylight is significant: though much dimmer than the sun's disk, the visible area of the sky is approximately 100,000 times larger, which is why daylight shadows are clearly illuminated and we can read a summer novel in deep shade.

• Daylight is the combined light of sun and sky, for example as reflected from an unshaded sheet of white paper illuminated outdoors. Significant color shifts occur in daylight, depending on geography, season and time of day, but it is unchanged by scattered clouds or overcast: these only dim the light and diffuse it.

The most accurate way to describe these color changes in natural light is by means of a spectral power distribution (SPD). This is a measurement of the radiance or radiant power (energy per second) of light within a small interval of the spectrum (such as 570-575 nm for wavelengths). Usually the power within each wavelength or wavenumber interval is shown as a proportion of a standard wavelength or maximum power, given an arbitrary value of 100, which creates a relative SPD. Many relative SPDs have been published as standard illuminants, which are spectral templates used to model the characteristics of natural light and to describe artificial light sources and light filters. Two of these, the standard noon daylight illuminant (D65) and noon sunlight illuminant (D55), are shown below, along with the SPDs for north skylight and sunset light. (The numbers associated with the illuminants indicate the closest matching correlated color temperature of the light.)
[Figure: spectral variations in natural light -- standardized relative spectral power distributions for daylight phases across the visible spectrum, normalized to equal power at 560 nm, with the correlated color temperature (CCT) of each profile (Wyszecki & Stiles, 1982).]

The most interesting of these templates is D65, the noon daylight illuminant. This SPD is useful in color vision research because it is perceived as a balanced white illumination across a wide range of illumination levels -- rain or shine, we perceive daylight as white light, provided the sun is not near the horizon. This is the first of many facts confirming that our color vision is not an abstract and impartial color sensor but a living system that anticipates the range of natural surface colors as they appear under natural illumination. The noon north skylight illuminant is, in contrast, strongly skewed toward the blue wavelengths, and this in-the-shade illumination appears distinctly blue to the eye. Noon sunlight (D55) has a nearly flat distribution and appears to be a yellowish or pinkish white when the eye is adapted to noon daylight.

When the sun is lower in the sky at sunrise or sunset, sunlight must pass sideways through a much longer and denser section of the earth's atmosphere, which scatters most of the blue and green wavelengths to produce a distinctly yellow or red hue. (Sunlight is also reddened by dust storms, ash from volcanic eruptions or the smoke from large fires.) This lends morning or late afternoon light a strong yellow or red bias, climaxing in the deep orange of sunrise or sunset. Morning light has a softer, rosier color, in part because the cooler night air has a higher relative humidity that produces long-wavelength-filtering morning fogs or mists, and in part because the drop in temperature abates daytime winds and convection currents, allowing dust and smoke to settle out of the atmosphere. Thus, the illumination that reveals our world is not constant but varies across a broad swath of tints from cool blues to warm yellows and reds. The eye is adapted to minimize the distorting effect that these color changes in the light have on the color appearance of objects.

Finally, the eye is adapted to function across illumination levels from 0.001 lux (starry night) to more than 100,000 lux (noon daylight), which makes us functional day or night. Moonlight has the same spectral power distribution as daylight, though much reduced in intensity, so the D65 illuminant stands for the daylight and nighttime extremes of natural light experience. However, our color experience of light and objects changes dramatically within that illumination range, as the eye changes from trichromatic photopic vision to monochromatic scotopic vision.

design of the eye

The eye is a marvel of biological adaptation to a specific function. In large part this adaptation is successful because it separates visual tasks into four levels of structure: the optical eye, the retina, the photoreceptor cells, and the photopigment molecules.

• scotopic or dim light adapted rods (denoted by V and containing the photopigment rhodopsin), most sensitive to green wavelengths at around 505 nm

• short wavelength or S cones, containing cyanolabe and most sensitive to blue violet wavelengths at around 445 nm
• medium wavelength or M cones, containing chlorolabe and most sensitive to green wavelengths at around 540 nm

• long wavelength or L cones, containing the photopigment erythrolabe and most sensitive to greenish yellow wavelengths at around 565 nm

As the figure shows, there are a large number of differences between rhodopsin (taken as baseline) and the S photopigment, and a similarly large number of differences between the S and M photopigments. In contrast, the M and L photopigments are nearly identical. Photopigments do not catch light particles the way a bucket catches rain. Even if a photon strikes a photopigment molecule, the probability that the visual pigment will photoisomerize depends on the wavelength (energy) of the light. Each photopigment is most likely to react to light at its wavelength of peak sensitivity. Other wavelengths (or frequencies) also cause the photopigment to react, but this is less likely to happen and so requires on average a larger number of photons (greater light intensity) to occur.

Measuring Photoreceptor Light Sensitivity. The relationship between photopigment chemistry and light sensitivity was anticipated by 19th century visual researchers, and was demonstrated when the rod photopigment (then called visual purple) was extracted from dissected retinas and shown to bleach readily in light. Over a century later, methods were developed to measure the bleaching of cone and rod photopigments by specific light wavelengths, which produces a relative sensitivity curve across the spectrum for each type of photopigment or cone. As the curve gets higher, the probability increases that the photopigment will be bleached (and the photoreceptor cell will respond) at that wavelength. The photopigment absorption curves shown below were measured in about 150 intact cones from surgically removed human eyes, held in a tiny glass tube and illuminated from the side by a thin beam of monochromatic (single wavelength) light (a technique called microspectrophotometry). They closely resemble the recordings of single cones in monkey retinas and the absorption curves of genetically manufactured photopigment molecules. These curves have been normalized: the sensitivity at each wavelength is expressed as a proportion of the maximum sensitivity, which is assigned a value of 1.0.

human photopigment absorption curves: curves normalized to equal peak absorptance (1.0) on a linear vertical scale; wavelength of peak absorptance in italics; number of photoreceptors measured for each curve at base of curve; data from Dartnall, Bowmaker & Mollon (1983)

Four kinds of spectra were obtained, with four distinct absorptance peaks at 420, 495, 530 and 560 nm. As expected from the photopigment molecular structure, the L and M photopigments have a similar peak and span within the spectrum, and both differ significantly from the location of the S cone curve. The fourth (rod) photopigment, rhodopsin, fits in between. The photoisomerization curves should describe color vision if each type of cone contains only one kind of photopigment and if the intensity of the photoreceptor response is proportional to the quantity of photoisomerized pigment. In fact, these curves do not correspond to human color matching responses, especially for the L cones. So we have to shift attention to the cones as they respond in an intact (living) retina.
It is impractical to measure cone responses in live human retinas, so methods have been contrived since the late 19th century to measure them by indirect means. Within the past few decades, genetic identification of the specific opsin types expressed in individual photopigments has produced an increasingly accurate picture. The most reliable data on cone sensitivity curves or cone fundamentals actually comes from a different experimental method, also first used in the 19th century: color matching experiments. In this approach, viewers match the color of a test wavelength by a mixture of three primary lights. These matches are performed by normal trichromats and by carefully screened dichromats (colorblind subjects) who lack one of the L, M or S photopigments entirely or carry L and M photopigments that are very similar. Differences between the dichromats allow measurement of either the L or M light response without any contribution from the other type of cone. These measurements are then used to transform the color matching functions of normal subjects into cone fundamentals that match the separate curves found in dichromats. Adjustments must also be made to compensate for the prereceptoral filtering of short wavelength light by the lens and macular pigment. Alternate techniques, including measurement of nerve signals from individual human cones or rhesus monkey cones, have been used to confirm and clarify the color matching data.

Because the cones are unequally distributed across the retina, it matters how large (visual angle) and where (centrally or peripherally) the color stimulus appears in the visual field: different presentations of color stimuli will produce different color matching functions. The standard alternatives are a 2° (foveal) or 10° (wide field) presentation of color areas centered in the field of view. Compared to the 2° curves, the 10° L and M curves are elevated by 10% to 40% in the green to violet wavelengths. For reasons explained here, the 10° curves are used throughout this site.

The cone fundamentals seem to imply that cone sensitivities are fixed, like the speed of a photographic film. They are not: cone sensitivity depends on light intensity, and the curves below describe the average response under moderate levels of retinal illuminance. The absolute level of all sensitivities changes as part of light adaptation, and the relative sensitivities change during chromatic adaptation.

Five Views of the Cone Fundamentals. Let's now look at five types of presentation using recent estimates of 10 degree, quantal cone fundamentals by Andrew Stockman and Lindsay Sharpe (2000).

1. Linear Normalized Cone Fundamentals. This is the most common textbook presentation of cone response sensitivities. The response at each wavelength is shown as a proportion of the peak response, which is set equal to 1.0 on a linear vertical scale. This produces the three similar (but not identical) curves shown below.

normalized cone sensitivity functions: the Stockman & Sharpe (2000) 10° quantal cone fundamentals, normalized to equal peak values of 1.0 on a linear vertical scale

This presentation is in some respects misleading, because it distorts the functional relationships between light wavelength (energy), cone sensitivity and color perception.
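Both the linear peak normalization described here and the log presentation discussed below are simple rescalings of the same curve. The sketch below applies both to a handful of invented placeholder values; it is not the published Stockman & Sharpe data, which can be downloaded from the CVRL database mentioned later on this page.

```python
import numpy as np

def normalize_to_peak(curve):
    """Express each sensitivity as a proportion of the curve's peak (peak = 1.0)."""
    curve = np.asarray(curve, dtype=float)
    return curve / curve.max()

# invented placeholder sensitivities for one cone class, NOT the published tables
raw = np.array([0.003, 0.12, 0.55, 0.96, 0.71, 0.08])

linear_normalized = normalize_to_peak(raw)      # linear vertical scale, peak = 1.0
log_normalized = np.log10(linear_normalized)    # log vertical scale, peak = 0.0

print(linear_normalized.round(3))
print(log_normalized.round(2))
```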
However, comparison with the photopigment absorption curves above identifies three obvious differences between the shape and peak sensitivity of the photopigment and cone fundamentals:

• The L cone has a noticeably broader or wider curve than the S and M cones; the S cone has a narrower response profile than either the M or L cones.

• Compared to the photopigments, the cone peak sensitivities have been shifted toward long wavelengths, by 5 nm (L cone) to 25 nm (S cone).

• The short wavelength tails of the photopigment curves have been lowered so that the response below 500 nm falls toward zero. As the effects of prereceptoral filtering have been removed, this implies that increased S outputs are opposed to the L and M outputs: at short wavelengths, the S cone suppresses the L and M cone sensitivity.

Overall, human spectral sensitivity is split into two parts: a peaked short wavelength sensitivity centered on blue violet (445 nm), and a broad long wavelength sensitivity centered around yellow green (~560 nm), with a trough of minimum sensitivity in middle blue (475 to 485 nm).

2. Log Normalized Cone Fundamentals. A problem with the linear normalized cone fundamentals is that they emphasize the overall peak shape of the curves; as a result they do not adequately display the tails or extreme low values. The solution is to present the normalized curves on a log vertical scale, as shown below. Each unit of the log sensitivity scale is 10 times smaller than the unit before, which zooms in on the very low sensitivities. (Cone fundamentals are most often tabulated in log normalized form, as it is easy to convert these curves into any other format.)

log normalized cone sensitivity functions: the Stockman & Sharpe (2000) 10° quantal cone fundamentals, normalized to equal peak values of 1.0 on a log vertical scale

These curves provide three additional insights:

• Each type of cone responds to a wide range of light wavelengths; in fact, the measurable sensitivity of the L and M cones extends over the entire visible spectrum, although the sensitivity of the M cone is very low in the near infrared.

• The L and M response curves largely overlap one another, and this overlap significantly limits the maximum saturation of hues in the yellow through green wavelengths.

• The S cone responds to only half the spectrum, from yellow green to violet; perception of monochromatic yellow green to red hues depends entirely on the balance between L and M outputs.

3. Log Population Weighted Cone Fundamentals. The previous two formats imply that the different photopigments or cone spectral classes are represented in the retina in equal numbers. This is not true, which suggests the cone fundamentals should be weighted (shifted up or down in relation to each other) to more accurately represent their proportional contribution to color vision. The peak values of each cone are set equal to the proportion of that cone in the total number of cones in the retina. The probability that a cone will respond is weighted by the probability that a photon will strike that cone type. The population proportions used here are approximately those of the 10° retinal anatomy: 63% L cones, 31% M cones, and 6% S cones. (There is reliable evidence that these proportions differ significantly from one person to the next.)
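Population weighting is another straightforward rescaling: each peak-normalized curve is multiplied by that cone class's share of the retinal cone population. A minimal sketch, using the 63/31/6 percentages quoted above and invented placeholder curves in place of the real fundamentals:

```python
import numpy as np

# proportions of each cone class quoted in the text (10 degree retinal anatomy)
POPULATION = {"L": 0.63, "M": 0.31, "S": 0.06}

# invented placeholder peak-normalized fundamentals (peak = 1.0), NOT real data
fundamentals = {
    "L": np.array([0.05, 0.40, 0.90, 1.00, 0.65]),
    "M": np.array([0.08, 0.55, 1.00, 0.85, 0.30]),
    "S": np.array([1.00, 0.35, 0.05, 0.01, 0.00]),
}

# scale each curve so its peak equals that class's share of the cone population
population_weighted = {k: POPULATION[k] * v for k, v in fundamentals.items()}

for name, curve in population_weighted.items():
    print(name, curve.round(3))   # L now peaks at 0.63, M at 0.31, S at 0.06
```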
population weighted log cone sensitivity functions: the Stockman & Sharpe (2000) 10° quantal cone sensitivity functions on a log vertical scale (1.0 = total cumulative response by all three cones); area of 50% or more optical density of macular pigment and adult lens shown in yellow; from Stockman, Sharpe & Fach (1999)

Taking into account both the individual response sensitivities of the three cone classes and their proportional numbers in the retina, we see that a random photon of equal energy or white light is most likely to produce a response in an L cone at any wavelength above 445 nm. In contrast, the M cones have only 40% of the L cone response probability across all wavelengths, and the S cones only one tenth of that. Thus, a single photon is roughly 25 times more likely to produce a response in an L than an S cone.

4. Linear Population Weighted Cone Fundamentals. The log scale is useful to show the very low values of cone fundamentals, in the tails of the curve, but it gives an unfamiliar view of the overall shape of the population weighted curves. Represented on a linear scale, the curves reveal the response probabilities more directly.

population weighted linear cone sensitivity functions: the Stockman & Sharpe (2000) 10° quantal cone fundamentals on a linear vertical scale, scaled to reflect L, M and S cone proportions in the retina (1.0 = total cumulative response by all three cones)

This is probably the most accurate picture of the proportional response probabilities of the three cone classes in relation to each other, to different wavelengths of light, and to our overall visual acuity. We glean a few more insights:

• The L and M cones produce nearly all the information acquired by the retina; the L cones account for most of the retinal signal at nearly all wavelengths. When we recall that the fovea contains half the total number of L and M cones in the retina, the curves indicate the dominant importance of foveal responses in color vision.

• The linear scale emphasizes that each cone responds primarily to light near its wavelength of maximum sensitivity (the three white lines in the spectrum band). A single photon at a cone's peak sensitivity has a visual impact equivalent to 10,000 or more photons at the tails or low sensitivity ends of the curve.

• As a result, our eyes are most sensitive to yellow green wavelengths around the middle of the spectrum: yellows and greens are the most luminous colors in a prism spectrum or rainbow. In fact, most of our light sensitivity lies between 500 nm and 620 nm, roughly from blue green to scarlet.

• The sensitivities of the L and M cones are well contrasted through the red to yellow green parts of the spectrum, but become very similar through the blue green to blue violet parts of the spectrum. The S cones break the tie in the shorter wavelengths.

5. Equal Area Cone Fundamentals. The population weighted curves are a physiological representation of visual sensitivity: the cones are weighted by their proportional numbers in the retina. But individual cone outputs have unequal importance or voting strength in determining color sensations, because they flow into common pathways or channels of color information. These channels have a weight or importance of their own, which defines the perceptual importance of the cone classes in brightness and color perception.
A plausible assumption adopted in colorimetry is that each type of cone contributes equally in the perception of a pure white or achromatic color. This means that the cone fundamentals do not represent individual photoreceptors, but classes or types of cones as a group. Each class is given equal perceptual weight in the visual system. In this type of display, the peak of each sensitivity curve is scaled up or down so that the area under each curve (equivalent to the total response sensitivity of each type of cone, pooled across all cones) is the same; these curves are usually presented on a linear vertical scale.

equal area cone sensitivity functions: the Stockman & Sharpe (2000) 10° cone fundamentals rescaled so that the area under each curve is equal, on a linear scale

These equal area cone sensitivity curves have an important and specific technical role in colorimetry: to model the changes in cone sensitivities that occur in each class of cone during chromatic adaptation. This equal area presentation of cone fundamentals should not be confused with the most commonly used model of visual sensitivity based on equal area weighting: the curves of standard color matching functions. These are superficially similar to cone fundamentals, and can be derived from the cone fundamentals by a mathematical transformation, but they do not represent specific photoreceptors or color channels in the visual system.

The large differences in peak elevations (especially when compared to the population weighted cone fundamentals) imply that the S cone outputs must be heavily weighted in the visual system, far out of proportion to their numbers in the retina. This turns out to be true. In addition, the proportionally small overlap between the S cone curve and the L and M curves implies that short wavelength (violet) light is handled as a separate chromatic channel and is perceptually the most chromatic or saturated. We might also suspect that the L and M cones have a different functional role in color vision from the S cones, because they have very similar response profiles across the spectrum and lower response weights than the S cones. And this also is true: the L and M cones are responsible for brightness/lightness perception, provide extreme visual acuity, and respond more quickly to temporal or spatial changes in the image; the S cones contribute primarily to chromatic (color) perceptions.

photopic & scotopic vision

The separate L, M and S cone fundamentals do not directly answer a more basic question: what is the eye's overall sensitivity to light? How much radiant power must a light emit before we can see it? The answer still depends on the wavelength of the light, but it also depends on the total intensity or energy of the illumination, the difference between daylight and darkness.

Daylight (Photopic) Sensitivity. At illumination levels above 10 lux or so, corresponding to daylight levels of illuminance from twilight to noon daylight, the cones primarily define the luminance or brightness of a light or surface. This is photopic vision, and it is functioning whenever we see two or more different hues. Photopic sensitivity was one of the first visual attributes that 19th century psychophysicists attempted to measure. A plausible early approach was heterochromatic brightness matching,
in which viewers adjust the brightness (radiance) of a monochromatic (single wavelength) light until it visually matches the brightness of a white light standard; after this was done across the entire spectrum, it yielded a curve of overall light sensitivity. Unfortunately, colored light is not perceived the same as white light: hue purity increases the sensation of brightness, making saturated lights appear brighter than a white light of equal luminance. Various methods have been tried to get around the problem. The most reliable is flicker photometry, which cancels the chromatic component in a monochromatic light by flickering it on and off so rapidly that it appears to fuse into a steady, desaturated, half bright stimulus, which is then adjusted to the white light standard. The diagram shows results from six different measurement techniques, listed in the key in descending order of reliability, to show the extent of the problems.

a passel of photopic sensitivity functions: luminous efficiency measured using six different techniques (from Wyszecki & Stiles, 1982)

The underlying problem, it turns out, is not in the measurement but in the theory: the diagram is not a picture of measurement problems but of visual adaptability. The brightness sensation is a dynamic response by different types of photoreceptors adjusting to different visual contexts. Overall light sensitivity varies across different luminance levels, light mixtures and dominant colors; it varies depending on how it is measured. So it cannot be pinned down as a single curve, as can be done with the four types of photoreceptors. Even so, the curve is useful in many practical applications. So the international standards body for color measurement methods, the Commission Internationale de l'Éclairage (or CIE), cut the Gordian knot and adopted a photopic sensitivity function denoted V(λ), which means a curve of the luminous efficiency value V at each wavelength λ. This curve was based on early (up to 1924) sets of diverse 2° color matching functions, weighted to reproduce, at the corresponding wavelengths, the apparent brightness of the three rgb primary lights used in the color matching studies. This standard curve locates the main light sensitivity in the green center of the spectrum between 500 nm and 610 nm, and places the peak photopic sensitivity at 555 nm, though as you see above, this peak is more of a plateau.

The V(λ) luminosity function equivalently represents the relative luminous efficacy of radiant energy, the light stimulating power of equal watts at each wavelength. In this guise it is the basis of modern photometry as deployed in photographic light meters and digital camera image sensors. An electronic sensor measures the radiance within small sections of the visible spectrum, then weights each section by its luminous efficacy; the total across the spectrum matches to a good approximation the light's luminance or apparent brightness to the human eye when the light is viewed in isolation. Here is the curve on a log vertical scale, with its partner the scotopic sensitivity function, denoted V′(λ) and discussed in the next section.

photopic & scotopic sensitivity functions: CIE 1951 scotopic luminous efficiency and CIE 1964 wide field (10°) photopic luminous efficiency, relative to peak photopic sensitivity on a log vertical scale; relative peak sensitivities from Kaiser & Boynton (1996)
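The photometric weighting just described is easy to sketch numerically: luminance is the spectral radiance in each small wavelength interval, weighted by the luminous efficiency at that wavelength and summed across the spectrum. In the sketch below the V(λ) samples are rough placeholders at coarse 50 nm steps, not the published CIE table; 683 lm/W is the standard conversion constant for photopic vision.

```python
import numpy as np

wavelengths = np.array([450, 500, 550, 600, 650])     # nm
V = np.array([0.04, 0.32, 0.99, 0.63, 0.11])          # placeholder luminous efficiency values
radiance = np.array([0.8, 1.0, 1.0, 0.9, 0.7])        # placeholder spectral radiance, W / (sr m^2 nm)

step_nm = 50.0                                        # width of each spectral section
# weight each section by its luminous efficacy, sum across the spectrum
luminance = 683.0 * np.sum(radiance * V) * step_nm    # cd/m^2, approximated as a Riemann sum
print(round(luminance, 1))
```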
Unfortunately, photopic sensitivity has remained a moving target. All measurements include the prereceptoral filtering in brightness judgments, which complicates measurement in the blue and violet wavelengths. Subsequent research also showed that the CIE 1924 curve underestimated photopic sensitivity in the blue wavelengths, so the curve has been twice corrected, by Judd (1951) and Vos (1978). This modified Judd-Vos luminosity function is usually denoted VM(λ), M for modified. Meanwhile a consensus emerged that the S cones do not contribute substantially to brightness perception. This means the photopic luminosity function is more accurately defined as a weighted combination of the L and M cone sensitivity curves. Paradoxically, when more realistic corrections for lens and macular density are added, these newer estimates increase even further the estimated photopic sensitivity to short wavelength light, despite exclusion of the S cone from the curve. The diagram below presents the updated 2° luminosity function and the companion 10° luminosity function by Stockman & Sharpe, normalized on a linear scale. The modified 1978 Judd-Vos curve (VM(λ)) and the 1951 scotopic curve V′(λ) are included for comparison. The new curves are based on the Stockman & Sharpe L and M cone fundamentals that best fit flicker photometric data for 40 viewers; the L cones have a 50% greater weight than the M cones and the S cones are given zero weight. These curves put the peak photopic sensitivity at 545 nm, but with a flattened peak due to the averaged values of variant L photopigments.

photopic & scotopic luminosity functions: photopic functions based on the 2° and 10° quantal cone fundamentals of Sharpe, Stockman, Jagla & Jägle (2005), shown with the CIE 1951 scotopic function and the Judd-Vos 1978 photopic function; all curves normalized to equal peak sensitivity on a linear vertical scale

These curves show that short wavelength or blue light has a greater luminous efficiency under scotopic than photopic vision. They show that differences among the photopic curves are confined almost entirely to the short wavelengths, and that there is a greater blue response in wide field than foveal color perception. Finally, they show that long wavelength or red light strongly stimulates the photopic function but the scotopic very little. This is why submariners and astronomers dark adapt under red light: it keeps the foveal cones functional (for detail vision) while dark adapting the rods. Both cones and rods respond near the maximum to green wavelengths, regardless of luminance. This is why emergency response vehicles are now painted a light yellow green instead of the traditional red: the yellow green is much easier to see, especially in dim light.

Dim Light (Scotopic) Sensitivity. The diagrams above also show the CIE 1951 scotopic sensitivity function, denoted V′(λ), which describes human light sensitivity under dark adaptation at illuminances below 0.1 lux. This is approximately the amount of light available under a half moon at night. At these very low illuminance levels the cones cannot respond to light and the rods completely define our visual experience. There are seven peculiarities of rod based or scotopic vision:

• All rods contain the same photopigment, rhodopsin, so rods lack the photopigment variety necessary for color vision.
We become functionally colorblind, except for isolated points of higher luminance, such as distant traffic lights or the planet Mars, that are bright enough to stimulate the cones.

• Rod signals do not transmit within separate nerve pathways: they feed into the same channels used by the cones. These channels are segregated into contrasting red/green and yellow/blue opponent contrasts. As a result, rods can produce faint color sensations, at very low chroma and lightness contrast.

• Peak sensitivity is around 505 nm or blue green. As natural light around sunset is shifted toward red, the rods start to dark adapt while still exposed to long wavelength (red) light.

• However, rod sensations of white usually appear faint blue (matching about 480 nm) and not blue green.

• There are roughly 100 million rods in the human retina, yet they are completely absent from the fovea at the center of the visual field where daylight visual resolution is highest. We cannot read even large print text under scotopic vision, or recognize very small objects, because the fovea shuts down.

• There are about 16 rods for every cone in the eye, but there are only about 1 million separate nerve pathways from each eye to the brain. This means that the average pathway must carry information from 6 cones and 100 rods. This pooling of so many rod outputs in a single signal considerably reduces scotopic visual resolution and means, despite their huge numbers, that rod visual acuity is only about 1/20th that of the cones.

• Like the cones, the rods are more widely spaced and larger in diameter toward the edges of the retina, but they also form a densely packed ring at about a 20° visual angle around the fovea. This is why we can see very faint stars or lights at night if we look to one side of, rather than directly at, their location.

Mesopic Light Sensitivity. The rods strongly affect color perception under moderately low illumination by mixing with or tinting the color responses of the still active cones. This mesopic vision typically appears in illuminances from 0.1 to 10 lux, for example during the 45 minutes or so after sunset (for a viewer outdoors and shielded from artificial light). Because the rod and cone outputs are pooled in shared nerve pathways in mesopic vision, the photopic luminosity curve shifts toward blue and long wavelength hues become darker. Bright yellow, ochre and umber fuse into a single grayish tan, greens and blues appear as a single grue (green blue) color, and reds become a warm, dark gray. This Purkinje shift (named for the Bohemian scientist who described it in 1825) is easiest to see in large areas of color extending outside the visual field of the rodless fovea. It is quite noticeable if you look at a familiar, brightly colored art print, Hawaiian shirt or flower bed in fading twilight as your eyes become adapted to the dark.

In daylight illumination the rod signals are near maximum, which causes a ceiling effect that makes the rods insensitive to light contrast. (This is due to a response compression of the rod outputs, and not to photopigment depletion by light.) The rod signals therefore disappear in the same way cone outputs do in a ganzfeld effect. Even so, rods remain active in daylight, even under very bright levels of illuminance. They can affect color appearance by a process called rod intrusion, which desaturates colors, especially at long (red) wavelengths, in large field or peripheral vision, and under moderate to low levels of illumination.
The Color Vision Research Laboratories at UC San Diego provide a comprehensive online library of data relating to photopigments, cone fundamentals, colorimetry and visual responses.

trichromatic mixtures

The premise that millions of distinct colors can arise from the stimulation of three different color receptors is called the three color or trichromatic theory of color vision, first proposed in the 18th century. It is the foundation of modern colorimetry: the prediction of perceived color matches from the physical measurement of lights or surfaces. The cone sensitivity curves show the probability that individual L, M or S cones will respond to light at different wavelengths. But they do not offer a very clear picture of how the cones work together, or how the mind triangulates from the separate cone outputs to identify specific colors. We obtain this picture by charting the proportion of cone outputs in the perception of a specific color. This results in a literal triangle, the trilinear mixing triangle, that contains all possible colors of light. Any diagram that shows how the L, M and S cones combine to create color perceptions must also define a specific geometry of color. This geometry changes, depending on how the cone signals are combined. The relationships among cone outputs, the method for calculating the outputs that produce a specific color sensation, and the shape of color in the mind, are all aspects of the same problem.

Principle of Univariance. A key feature of photoreceptor signals is that they represent light as a contrast or change to a continuous baseline signal of about −40 millivolts, maintained even in darkness (hence the name dark current). A change in photoreceptor excitation is transmitted as a more or less change in this baseline signal. This single type of photoreceptor response to any and all light stimulation is termed the principle of univariance. Now, the rate of photoisomerization in the photopigment depends on two completely separate dimensions of the light stimulus: (1) the quantity of light incident on the retina, and (2) the relative sensitivity of the photopigment to the light wavelength(s). As a result, a change in the photoreceptor signal can be caused by two very different changes in the light. The cone or rod output decreases as the light gets brighter or as the light frequency gets closer to the frequency of its peak sensitivity, and the output increases as the light becomes dimmer or farther from its peak sensitivity.

the principle of univariance: a single L cone responds with the same more or less signals to changes in light frequency or light intensity

Thus there are two kinds of ambiguity in the response of individual photoreceptors to light (diagram, above):

• A single type of cone cannot distinguish changes in wavelength (hue) from changes in radiance (intensity). Alternating between equally bright green and red wavelengths (B), or modulating a single green wavelength of light between bright and dim (C), will produce an identical change in the output of a solitary L cone.

• Some changes in light produce no cone response. Alternating between equally bright blue and orange wavelengths (A), or between a dim green and proportionately bright red light (D), would not change the output of that solitary L cone.

However, what the cones cannot do individually they can achieve as a team. The principle of univariance means that color must be defined by the combined response of all three cone types.
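The ambiguity is easy to demonstrate numerically. In the sketch below a single cone's response is modeled, under the univariance assumption, as stimulus intensity multiplied by the cone's relative sensitivity at the stimulating wavelength; the sensitivity numbers are invented placeholders, not measured L cone values.

```python
# Univariance sketch: a lone cone reports only one number, so different
# combinations of wavelength and intensity can produce identical responses.
# The sensitivity table is a made-up placeholder, not a real cone fundamental.

L_SENSITIVITY = {530: 0.9, 620: 0.45}   # relative sensitivity at two wavelengths (nm)

def cone_response(wavelength_nm, intensity):
    """Single-cone output = stimulus intensity x relative sensitivity (univariance)."""
    return intensity * L_SENSITIVITY[wavelength_nm]

# a dim green light and a brighter red light produce the same response...
print(cone_response(530, 1.0))    # 0.9
print(cone_response(620, 2.0))    # 0.9
# ...so one cone alone cannot tell hue changes from intensity changes.
```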
The Cone Excitation Space. To visualize the color creating relationships among the separate L, M and S cones, they are used to define a three dimensional space. In this cone excitation space each dimension represents the separate and independent excitation or outputs produced in each type of cone. The standard method to illustrate cone behavior is to combine the three cone responses produced by monochromatic lights from short (390 nm) to long (750 nm) wavelengths. This is done by plotting the cone fundamentals at each wavelength as points in the cone excitation space. For example (diagram, right), at a wavelength of 500 nm, the L cone sensitivity is 0.44, M is 0.64 and S is 0.09. Those three numbers locate the combined cone excitation to monochromatic light at 500 nm. When similar points are plotted for all visible wavelengths, they define a curved path of cone excitations to monochromatic (maximally saturated) lights called the spectrum locus.

a normalized cone excitation space: the spectrum locus (red dots) plotted in three dimensions defined by the normalized cone fundamentals L, M and S; V is the photopic luminous efficiency function

We still can recognize in this curve the basic features of the normalized L, M and S cone fundamentals. All points at wavelengths below 400 nm or above 700 nm are at the origin (0 on all dimensions), which means those wavelengths are invisible: they produce no cone excitations. The L cone reaches its maximum response at around 565 nm, the M cone at around 540 nm, and the S cone at around 445 nm. But now we see them in dynamic combination. The V luminosity function (green line in diagram) is the sum L+M of the normalized L and M outputs, so it forms a diagonal from the origin in the L,M plane. The contrast between the L and M outputs (L−M) forms the opposing diagonal. We can also find the location of any complex color, if we know the cone excitations it produces. The white point (wp) produced by an equal energy illuminant is found as the total area under each cone fundamental divided by the sum of all three fundamental areas: L = 0.44, M = 0.37, and S = 0.19. The extraspectral mixtures of red and violet extend as a line between the red and violet lights used to mix them; in the diagram, between 620 nm and 445 nm.

reading cone responses to a 500 nm monochromatic light

If we turn this diagram to look at it sideways, we see that the boundary of color space is geometrically irregular. No simple geometrical shape can describe the spectrum locus. It forms a roughly elliptical outline when viewed from one side (diagram, right) but a double lobed or pinched shape when viewed along the luminance diagonal (diagram, above). This double lobed shape reappears in the spectrum locus of color models, such as CIELAB or CIECAM, that contrast colors with a white surface. This double lobed shape occurs because the spectrum locus is bent at a 90° angle along a line from the origin to approximately 525 nm (middle green, purple line in diagrams above and right). The vertical half, comprising all wavelengths below 525 nm that produce a significant S cone response, sticks upward like a shark fin. The rest of the spectrum locus, wavelengths above 525 nm where the S cone response is effectively zero, lies completely flat on the L,M plane. As a result of this bend, the color space is inherently curved. For example, mixing the wavelengths 575 nm and 475 nm in the right proportions will produce a white light.
Therefore the mixing line between them must pass through the achromatic white point: but this can only be done with a curve (diagram, right). Moreover, the shape of this curve changes as we mix different pairs of complementary wavelengths. The curvature of the chromaticity plane stretched inside the closed spectrum locus is geometrically irregular, too. The diagram demonstrates how the L and M cones operate in tandem to define luminance. For lights below 525 nm, luminance is defined by the projection of the spectrum locus into the L,M plane (dotted line in diagram, right). But if we set aside luminance perception defined by the L+M diagonal, then color perception is divided into two parts:

• at wavelengths above 525 nm, changes in the relative excitation of the L and M cones define the color response; the S cones are silent.

• at wavelengths below 525 nm, the relative L,M excitations are approximately the same as they are at 525 nm (the dotted line and purple line are equivalent), so it is the relative excitation of the independent S cone that defines the color response.

This two part geometry is the photoreceptor foundation for the opponent geometry of color appearance. A final observation is that the white point is not located on the luminosity function. This simply demonstrates that white is not the same as bright. The perception of white is a form of color sensation, whereas the perception of bright is a unique intensity sensation. The cone excitation space implies that a bright stimulus produces more than two times the cone excitation of a white surface, and therefore visual white always has a lower luminosity than visual bright under the same viewing conditions.

The Chromaticity Plane. Reweighting the cone fundamentals, for example by doubling the M cone response or by using equal area or population weighted cone fundamentals, changes the relative length of the dimensions but does not alter the fundamentally curved and irregular geometry of the cone excitation space. However, we get a radically different color space through a different approach. By removing variations in the brightness of different wavelengths, we flatten the curvature of the three dimensional spectrum locus. This is done by normalizing on the total cone excitation: we sum the stimulation produced in all three cones,

total excitation = Lc + Mc + Sc

and then divide the excitation produced in each cone type (Lc, Mc and Sc) by this amount, which gives the relative proportion of the color sensation that is separately contributed by each of the three cone types. This is the chromaticity of the color:

Lc / (Lc + Mc + Sc), Mc / (Lc + Mc + Sc), Sc / (Lc + Mc + Sc)

The previous example only described a single wavelength of light, so we simply sum the cone excitations at that single blue green (500 nm) wavelength: 0.44 + 0.64 + 0.09 = 1.17.

side view of the cone excitation space
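Here is a small numerical sketch of that normalization, using the 500 nm cone excitations quoted above (0.44, 0.64, 0.09); the helper function is hypothetical, not part of any standard colorimetry library.

```python
def chromaticity(L_c, M_c, S_c):
    """Convert cone excitations to chromaticity: each excitation expressed as a
    proportion of the total excitation, so the three values sum to 1.0."""
    total = L_c + M_c + S_c
    return (L_c / total, M_c / total, S_c / total)

# cone excitations for a 500 nm monochromatic light (from the text above)
l, m, s = chromaticity(0.44, 0.64, 0.09)
print(round(l, 3), round(m, 3), round(s, 3))   # ~0.376 0.547 0.077
print(round(l + m + s, 3))                     # 1.0 -- only two values are independent
```

Because the three proportions always sum to 1, any two of them fix the third, which is why just two chromaticity values specify a color in the mixing triangle described below.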
This procedure radically transforms the color space in two ways (diagram, right): (1) it uncouples the red and violet ends of the spectrum locus, which were previously joined at the origin (zero values) because they are both very dark hues; (2) it projects the spectrum locus onto the plane surface of an equilateral triangle (blue) whose corners are located at the three maximum values for the three cones. The three dimensional spectrum locus has been flattened into two dimensions, and the original banana shaped color space has been transformed into something resembling a right triangle. If we define the L, M and S dimensions as the response of each cone relative to its maximum response, then a line from the origin to the white point (wp) forms an achromatic gray scale.

It is customary to display this transformed spectrum locus so that the triangle plane is perpendicular to view. In this orientation it forms a trilinear mixing triangle or a Maxwell triangle, after the 19th century Scottish physicist James Clerk Maxwell who first used it. The trilinear mixing triangle does not represent differences in brightness or lightness between colors, only differences in chromaticity (hue and hue purity). Chromaticity is the color in color, separate from its lightness or brightness. For this reason, the area within the mixing triangle that is enclosed by the bowed spectrum locus (and a line connecting the extreme short and long wavelengths) is called a chromaticity diagram. This figure offers many fundamental insights into the geometry of color and is worth patient study.

a trilinear mixing triangle and chromaticity diagram: any possible combination of three cone outputs can be represented as a unique point within the triangle; the range of physically possible colors is contained inside the spectrum locus

• Each corner of the triangle represents the color that would be perceived by the maximum excitation of a single L, M or S cone without any excitation in the other two cones. The sides of the triangle represent shared excitations between just two cone types, with no contribution from the third. Any location inside the triangle represents a color that results from the stimulation of all three types of cone.

• The mixture proportions for each cone are shown along the triangle sides. By definition every trilinear mixture must sum to 100%, so a mixture of 50% S and 38% L (color a) must contain 12% M (by subtraction: 100 − 50 − 38 = 12). Therefore just two chromaticity values uniquely specify every color in a chromaticity diagram.

• The point where all three primaries contribute in proportions equal to their perceptual weight is the white point. The white point changes its location within the chromaticity diagram depending on how the three cone outputs are weighted; the diagram shows them weighted proportional to the areas under the normalized cone fundamentals (44%, 37%, 19%).

• The chromaticities of monochromatic (single wavelength) lights define the spectrum locus, the trace of the most intense colors physically possible. The line of red and blue mixtures between 400 nm and 700 nm, which includes magenta and purple, is called the purple line.

• As explained above, spectral hues at wavelengths above 525 nm are defined only by the relative proportion of L and M outputs; they lie on the straight L,M base of the equilateral triangle. Hues below 525 nm are produced by a roughly constant ratio of L and M outputs, so are distinguished only by the relative percentage of S cone excitation.

• All mixtures of two real colors of light (such as colors a and b) define a straight mixing line across the chromaticity space. All colors produced by the mixture of those two colors of light must lie along the mixing line.

• The hue purity or chroma of a color is defined as the length of the mixing line between the color and the white point. It is obvious that monochromatic hues do not have equal hue purity: spectral yellow appears rather pale or whitish because it is close to the white point, and spectral violet has the highest chroma because it is far away.
• All mixtures outside the spectrum locus and purple line are cone proportions that cannot be produced by any physical light or surface. They are physically impossible or unrealizable colors. This gray area shows that about half of the unique combinations of cone outputs cannot be produced by any physical stimulus. It also shows that these colors, especially the green primary, would appear more saturated than any spectral light. This has nothing to do with the purity of light: it is due to the overlap in cone fundamentals across the spectrum (especially between the L and M cones), and to the random, side by side mixture of L, M and S cones within the retina. As a result any light that stimulates one cone also stimulates one or both remaining types of cones. We are physiologically prevented from seeing a pure cone output, and therefore we never see a pure primary color.

• Very large color differences can be produced by very small differences in cone outputs. For example, the change from white to pure yellow occurs with a change in L and S cone outputs of less than 20%; all green mixtures are produced by changes of less than 30% in the L and M cone outputs.

• The chromaticity distance between unique green (~500 nm) and unique blue (~450 nm) is extremely large and represents almost the entire range of S cone outputs. For this reason the appearance and measurement of blue hues are sensitive to the location of the white point and are variable across different color models.

color space defined as cone excitations as a proportion of total cone excitations (brightness)

• A color appearance does not reveal the cone proportions that created it. In every green color the M cone outputs are less than 60% of the total; all possible colors contain at least 10% of the L outputs; most of the possible cone output combinations result in various flavors of blue.

• Finally, chromaticity diagrams are highly sensitive to assumptions made about how cones combine or are weighted in perception, or how the dimensions are rotated to present the chromaticity diagram to view. The two examples at right show the CIE Yxy chromaticity diagram (top), which has long been a standard in colorimetry but does not describe color differences accurately (for example, it makes green the most saturated spectral hue and gives it the largest perceptual area), and the CIE 1976 UCS chromaticity diagram (bottom), in which the measurement dimensions have been manipulated to represent, as accurately as feasible in a two dimensional diagram, the relative saturation of spectral hues (as their distance from the white point) and the perceptual difference between two similar colors as the chromaticity distance between them in the diagram.

10° (wide field) and 2° (foveal) chromaticity diagrams

The diagram above shows the change in the chromaticity diagram that occurs if cone fundamentals are weighted so that the area under each curve represents the cone populations for a 10° retinal area (which includes 6% or more S cones) or a 2° retinal area (which includes less than 1% S cones). The foveal weighting produces a substantial reduction in the blue response range and shifts the white point almost to the L,M border. When using a chromaticity diagram, keep in mind that it has mathematically and rather mechanically removed the perceptual effect of brightness.
Because the luminance of a red or violet monochromatic or single wavelength light appears quite dim in comparison to a green light when all lights are at the same radiant intensity, the red and violet lights would have to be substantially increased in power to produce a perceptual match to the chromaticity diagram. A chromaticity diagram therefore does not accurately describe, for example, the perception of relative color intensity in a solar spectrum, where the visible wavelengths have roughly similar radiance.

The interactive tutorial on color perception hosted by the Brown University Computer Science Department includes a java applet that models the additive mixture of three primary colors.

constraints on color vision

To conclude this page, let's consider how our visual capabilities are adapted to perception of radiant energy in the world. By comparing our visual capabilities to those of other animals, we can understand how we benefit from three cones, rather than two or four, and why our cones are tuned to specific wavelengths and not others.

variety in chromaticity diagrams: (top) CIE Yxy diagram, with the xyz primaries rotated to match a Maxwell triangle; (bottom) CIE UCS diagram, with a primary triangle imputed

Standard Assumptions. An assumption made in most studies of animal vision is that photopigment absorption curves conform to a common shape, for example Dartnall's standard shape (right), which is plotted around the pigment's peak value on a wavenumber scale. This common shape arises from the backbone opsin molecule structure. Variations in the opsin amino acid sequence only shift the wavenumber of peak sensitivity up or down the spectrum; this does not change the basic shape of the curve, though it becomes slightly broader in longer wavelengths. (We have already seen this template similarity in the human photopigment curves.) It is also assumed that each class of photoreceptor contains only one kind of photopigment, and that the principle of univariance describes the photopigment's response to light. These three assumptions allow a basic understanding of animal visual systems without painstaking measurement of photopigment or cone response curves. Dartnall's standard shape does not adequately describe human color vision, especially the L photopigment absorptance, but idealized curves are adequate to illustrate the important constraints on color vision.

Monochromatic Vision. The rudimentary form of vision requires a single receptor cell (V). For maximum sensitivity this receptor should respond to wavelengths somewhere in the 300 nm to 1000 nm area of the spectrum where solar radiance at the earth's surface is most intense. This kind of visual system, which is common in lower vertebrates, codes light along a single luminosity dimension that can only distinguish light from dark. Monochromatic vision can include a mechanism for light adaptation that allows the eye to function across large changes in overall illumination, and it can detect movement, shapes, surface textures and depth. But it cannot easily distinguish between the emitted brightness of lights and the reflected lightness of surfaces, or objects from near or similarly reflecting backgrounds. It also cannot perceive color: changes in hue, from green toward either red or blue, will appear as luminance changes from light to dark. Nevertheless, all vertebrates have at least this basic visual capability, which suggests that luminance variations are the dominant visual information available from the environment.
Humans experience monochromatic vision at night, under scotopic or dark adapted vision when only a single type of photoreceptor (the rods) is active, and under monochromatic illumination such as the red light lamps used by astronomers.

a single cone visual system: response curve of human rods with maximum sensitivity at 505 nm

As only one receptor is involved, the key constraint has to do with the receptor sensitivity peak and breadth within the span of solar radiation. For purely technical reasons the peak solar radiation seems to shift in a range between roughly 500 nm and 900 nm, depending on whether the radiation is summed within wavelength or frequency intervals, and is measured as energy or photon counts; and the noon sunlight curve is rather flat throughout this range. So the solar peak is a poor criterion for comparison. Instead we can consider a window of atmospheric transparency or minimum light filtering as measured at the earth's surface, which provides a stable frame of reference.

Dartnall's standard shape expressed on a wavelength scale at a peak value of 505 nm

The major causes of light absorption or scattering in our atmosphere are air molecules (including the ozone layer), dust or smoke, and water vapor. As the diagrams at right show, there is an especially close correspondence between the human visual span and the wavelengths of minimum water absorptance, including liquid water and water vapor, and the large bead of mostly water, the vitreous humor, that inflates the eye and sits between the pupil and retina. Human light sensitivity is located on the uphill side of this lowest point, away from UV radiation and toward the infrared side of the light window. All vertebrates have inherited visual pigments that evolved in fishes, which may explain why our pigments are tuned to these wavelengths.

A second possible constraint is the range of chemical variation in photopigments, for example as expressed in all known animal photopigments. The figure below shows the wavelengths of maximum sensitivity for the four human photopigments in relation to animal photopigments with the lowest and highest peak sensitivities, from 350 nm (in some birds and insects) to 630 nm (in some fish). This puts the outer boundaries of animal light sensitivity between 300 nm and 800 nm. Human vision is in the middle of the range that other animals have found useful.

human visual pigments within span of known animal visual pigments

A third constraint has to do with the span of visual pigment sensitivity, because the sensitivity curves must overlap to create the triangulation of color. For Dartnall's standard shape at 50% absorptance, this implies a spacing (peak to peak) of roughly 100 nm. If we include the tail responses at either end of the spectrum, a three cone system could cover a wavelength span of about 400 nm.

The fourth and last constraint is more subtle but equally important: avoiding useless or harmful radiation.

• At wavelengths below 500 nm (toward the near UV), electromagnetic energy becomes potent enough to destroy photopigment molecules and, within a decade or so, to yellow the eye's lens. Many birds and insects have receptors sensitive to UV wavelengths, but these animals have relatively short life spans and die before UV damage becomes significant. Large mammals, in contrast, live longer and accumulate a greater exposure to UV radiation, so their eyes must adapt to filter out or compensate for the damaging effects of UV light.
In humans these adaptations include the continual regeneration of receptor cells and the prereceptoral filtering of UV light by the lens and macular pigment.

• At the other extreme, wavelengths above 800 nm are heat, which is less informative about daylight object attributes: it is dimmer than shorter wavelengths, is heavily absorbed by liquid water or water vapor, and lacks the nuanced spectral variations that can be interpreted as color. In mammals, the visual system's heat sensitivity would have to be shielded from the animal's own body heat; at wavelengths longer than 1400 nm, the very long photopigment molecules (or artificial dyes) necessary to absorb radiation in wavelengths between 800 nm and 1400 nm are known to oxidize or decompose readily. These complications make long wavelength energy more trouble than it is worth.

On balance, then, it seems that animal vision is limited at the wavelength extremes as much as it is anchored by a radiance peak or an inherited range of photopigment possibilities.

Dichromatic Vision. How do animals utilize this limited span of light? Many mammals are equipped with a two cone photopic visual system: one cone shifted into the yellow green wavelengths, the other shifted toward the blue end of the spectrum, with substantial overlap between the two sensitivity curves in the green middle. These Y and B cones make up what John Mollon calls the old color system. They enable the eye to distinguish between light radiating in the long versus short wavelengths.

light within the absorptance spectrum of water (from Segelstein, 1981); human luminous efficiency and the transmission curve of pure water, by depth (from Soffer & Lynch, 1999)

A two cone system can distinguish differences in wavelength patterns from total luminance, which means hue can be perceived separate from lightness. The efficient way to do this is to combine the two cone responses to determine a brightness quantity, but to difference or subtract the cone outputs to define a hue contrast (diagram, right). That is, the sum Y+B creates a supercone that has the same univariant response to hue as the V cone, while the difference Y−B contrasts stimulation at opposite ends of the light spectrum. The difference output is called an opponent coding of the separate cone outputs. It is difficult to overstate the importance of opponent responses in color vision, beginning with the opponent dimensions important to hue sensation but including many other contrast mechanisms discussed in a later page.

Primates have retained the backbone of this mammalian vision as the yb opponent function (diagram, below). This opponent function is created from the outputs of two separate cone systems, which requires a bimodal shape in the overall visual response, with a sensitivity peak in the short and long wavelengths (diagram, right). The white point of this function, where the Y (L+M) and B outputs are equal, is located around 485 to 495 nm (cyan). Like the yellow point that marks equal outputs from the L and M cone sensitivity curves, this cyan white marks the point of equal outputs from the Y and B color receptors. Again like yellow, cyan light has a relatively low hue purity and tinting strength compared to the blue or red spectrum extremes. Unlike yellow, however, cyan is visually dimmer than the yellow green response peak of the V cone because the S cones do not significantly contribute to luminance perception.
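A minimal sketch of this sum-and-difference (opponent) coding, assuming two placeholder cone signals Y and B on a 0 to 1 scale; the function name and numbers are illustrative, not drawn from any published model.

```python
def opponent_code(Y, B):
    """Opponent coding of a two-cone system: the sum approximates a univariant
    brightness signal, the difference carries the hue (warm vs cool) contrast."""
    luminance = Y + B      # 'supercone' brightness quantity
    warm_cool = Y - B      # positive = long wavelength bias, negative = short
    return luminance, warm_cool

# placeholder cone responses to three lights
for label, y, b in [("yellowish light", 0.8, 0.2),
                    ("bluish light",    0.2, 0.8),
                    ("balanced light",  0.5, 0.5)]:
    lum, hue = opponent_code(y, b)
    print(f"{label:15s} luminance={lum:.1f} warm/cool={hue:+.1f}")
```

The third, balanced stimulus produces a zero opponent signal, which corresponds to the cyan balance point described above.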
a two cone visual system: the human yb opponent (contrast) function with peak sensitivities at 445 and 560 nm, a white point at 495 nm and macular masking of blue response

This contrast between short and long wavelength light persists in human color vision as the warm/cool color contrast. This is the most general chromatic contrast in color perception, and it appears to have a strong influence on the development and use of color terms in almost all languages. What determines the placement of the two peak sensitivities? J. Lythgoe and J. Partridge demonstrated that a two cone visual system adapted to the green leaves, twigs and brown soil litter of forest habitats gets the greatest chromatic contrast when the peak sensitivities are located between 420 nm and 450 nm (B) and between 510 nm and 580 nm (Y). These ranges include the peak sensitivities of the primate yb opponent function shown above, and primates evolved in forest habitats.

Metamerism & Colorblindness. There are two important limitations to a visual system based on two partly overlapping sensitivity curves. The first is that very different distributions of light wavelengths will be perceived as the same color, and this occurs among both chromatic (colored) and achromatic (white) color sensations. This problem is called metamerism. Dissimilar spectral distributions that produce the same color sensation are called metamers. A two cone system is especially susceptible to metameric confusions. Metamers occur whenever the two cones are stimulated in the same relative proportions. The most glaring examples include colors perceived as white, when the cone stimulations are 50:50. As the spectral reflectance curves below illustrate, this can occur in reflectance patterns that appear as dissimilar as gray, green or magenta in trichromatic vision.

metamers for white (or gray) in a two cone visual system: spectral reflectance curves for gray (top), magenta (middle) or green (bottom) would appear indistinguishable in a two cone visual system

Parallel problems occur whenever surface color differences produce similar proportional responses in the cones, for example between greens and reds, or purples and blues, or when the illumination changes color without changing proportional cone responses. These greatly expand the possible metameric confusions. These problems characterize human dichromatic vision or colorblindness, in which typically either the L or M cones are absent. These viewers see an unusually large number of material metamers in the everyday world, and large color differences often appear to them quite subtle. Dichromats are easily confused by yellows and browns, or by blue greens and purples, especially across surfaces of similar lightness. Yellow loses its characteristic lightness, and they commonly see an achromatic or white color in the spectrum located at the cyan balance point between 490 nm and 500 nm.

The second limitation in a two cone system is that perception of saturation or hue intensity cannot be easily disentangled from lightness. There are only two possible combinations of two cone outputs: adding them together to define brightness, or contrasting them to define hue. There is no third combination to uniquely define saturation. Despite that, some studies show that human dichromats do see saturation differences, especially at the spectrum ends, but with only half the acuity of trichromats.
To recover these saturation differences, the visual system probably uses lightness contrast to estimate chroma, by comparing the lightness of a surface to the lightness of the brightest surface in view. A process called lightness induction performs this contrast judgment in human trichromatic vision. This is how we can see the difference, in an achromatic surface, between a dark (gray) color and a dimly lit (white) color; in red to yellow green surface colors, where the S cone response can be effectively zero, the same contrast causes surfaces to appear dark (brown) rather than dimly lit (orange). (This is explained further in the section on unsaturated color zones.)

[figure: separating luminance from hue responses in two cones, defined as the sum and difference of the L and M outputs; bimodal human visual response based on the Smith & Pokorny normalized cone fundamentals]

A two cone system seems optimally defined to provide a new function: chromatic adaptation to the shifts in the daylight phases of natural light, from the slightly blue, cool light of noon to the ruddy, warm light of sunset. These changes in lighting significantly shift the apparent hue of surface colors: around sunset a white surface will appear yellow or orange. In human trichromats and dichromats alike, the separate cone sensitivities can be adjusted to increase the B response to compensate for the reduced blue light, and to decrease the Y (trichromatic L+M) response to compensate for the increased red light (right), which should restore the white point to its accustomed location. However, color perception in dichromats is significantly affected by luminance contrast: dichromats perceive colors of light to grow redder as they get dimmer. This goes in the opposite direction to a compensatory increased sensitivity of the Y receptors, and probably complicates the perception of warm surface colors across changes in the intensity or chromaticity of the illumination.

Trichromatic Vision. Finally, all primates (monkeys, apes and humans) acquired a second set of contrasting receptor cells: the L and M cones, which evolved from a genetic alteration in the mammalian Y cone. There is only a small difference between the L and M cones in molecular structure and overlapping spectral absorptance curves, but it is enough to create what Mollon calls the new color system. This defines hue contrasts between middle wavelength (green) light and long wavelength (red) light. These cells are also linked in a contrast or opponent relationship that defines the rg opponent function.

[figure: a second two cone visual system; the human rg opponent (contrast) function, with peak sensitivities at 530 and 610 nm, a white point at 575 nm and macular masking of blue response]

The main benefit of trichromacy is that it creates a unique combination of cone responses for each spectral wavelength and therefore unambiguous hue perception. This enhances object recognition when surfaces are similar in lightness or are randomly shadowed, as under foliage. It also substantially improves the ability to separate the color of light from the color of surfaces: because illuminant metamerism is also reduced, color constancy is greatly improved.

[figure: spectral contrast between direct sunlight and indirect (blue sky) light (from Wyszecki & Stiles, 1982)]

A second important trichromatic benefit is that it reduces metameric colors to various flavors of gray around the white point and into dull blues and purples (diagram at right).
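Continuing the earlier two cone sketch (all sensitivity curves remain invented placeholders, not the actual cone fundamentals): adding a third, middle wavelength cone gives the two previously indistinguishable reflectance curves different response triplets, a rough numeric illustration of the metamer reducing effect just described.

# Adding a hypothetical middle-wavelength cone breaks the two-cone metamer:
# the same two reflectance curves now produce different response triplets.
b_sens = [0.9, 0.3, 0.0]
y_sens = [0.1, 0.8, 0.7]
m_sens = [0.2, 0.9, 0.4]          # invented middle-wavelength ("green") cone

def responses(reflectance, *sensitivities):
    return tuple(round(sum(r * s for r, s in zip(reflectance, sens)), 3)
                 for sens in sensitivities)

flat_gray    = [0.5, 0.5, 0.5]
magenta_like = [0.6, 0.2, 0.58 / 0.7]

print(responses(flat_gray,    b_sens, y_sens, m_sens))  # (0.6, 0.8, 0.75)
print(responses(magenta_like, b_sens, y_sens, m_sens))  # (0.6, 0.8, 0.631) -- no longer a metamer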
As a result, the number of physical metameric matches is radically reduced, as anyone who has tried to match household paint colors has found out. In fact, excluding trivial variations, there are no possible metameric emittance profiles under an equal energy white light source for any color at moderate to high saturation. In effect, saturation is a kind of perceptual confidence that hue accurately symbolizes the spectral composition of a color.

Near grays, besides generating a very large number of metameric surface colors, are also most susceptible to color change by subtractive mixture with the light source color: change the emittance profile of the light, and the surface color changes as well. Metamers that appear identical under noon daylight often scatter into visually different colors under late afternoon light (or interior incandescent light) as the white point shifts from blue to yellow, and this chromaticity scatter is typically elongated in the red to green direction discriminated by the rg contrast (diagram at right). This is a particular problem for automotive manufacturers, who must choose different plastic, fabric and paint materials to get a color match and identical color changes across different phases of natural light.

[figure: chromaticity of metameric colors in light mixtures, and the scatter of illuminant shifted achromatic metamers (from Wyszecki & Stiles, 1982)]

What explains the location of the rg opponent balance or white point at around 575 nm (yellow)? This is the approximate spectral direction of both the chromaticity shifts in natural daylight and the approximate hue of the prereceptoral filters. As a result, changes in the angle of sunlight from morning to late afternoon, and the gradual darkening of the lens across age, produce no perceptible change in the rg contrast, and hence no color change that cannot be handled by adaptation of the yb balance. By the time sunlight acquires a golden or deep yellow appearance it has begun to shift off the yb axis toward red: the rg balance then registers a change and surface colors show the tint of the light.

Another intriguing explanation for this yellow balance point appears in the reflectance curves of 1270 color samples from the Munsell Book of Color. The curves at right show 10 colors of identical saturation and value, equally spaced around the hue circle. All the curves seem to inflect in a small region centered on 575 nm, which means comparative information about the reflectance curves is minimal at that point. This rg balance point is insensitive to chromaticity for the same reason that a lever is not imbalanced by weight placed over the fulcrum. Placing the yellow balance point at an area of minimal reflectance information makes color vision maximally sensitive to relative green and red changes in surface colors, and permits hue resolution into the red end of the spectrum, where the S cones provide no response, as the relative proportion of L and M response.

Why Not 4 or More Cones? The final query is: why don't we have four or more cones? Why stop with only three?

[figure: reflectance curves for standard Munsell hue samples at constant lightness and saturation (from Kuehni, 2003)]

We can exclude the possibility that the obstacle is the evolution of new photopigments. Molecular genetics has identified 10 variations in the human L and M photopigments, which create two clusters of similar peak responses around 530 nm and 555 nm (right).
Males are also split roughly 50:50 by a single amino acid polymorphism (serine for alanine) that shifts the peak sensitivity in 5 of these variants, including the normal male L photopigment, by about 4 nm. Finally, it is genetically possible for about 50% of females to express a fourth red photopigment, and some individuals carry genes for only one type of L and M photopigment while others carry multiple (different) versions. These many combinations can significantly affect trichromatic responses or cause colorblindness. However, it is still assumed that cones contain only one kind of photopigment, or that cones with chemically similar photopigments output to common nerve pathways. Thus, cones and nerve pathways are the fundamental units of trichromatic vision, not the photopigments.

There are twelve unique ways to sum or contrast three cone outputs to define hue: our vision uses six contrasts, plus a single luminance sum. This requires a unique nerve pathway for seven different signals; similar outputs in a four cone system would require at least 15 contrast and luminance pathways. There are roughly one million nerve tracts from the eye to the brain, and each tract carries information from roughly six cones and 100 rods. This suggests nerve pathways are a resource that must be conserved. A four channel chromatic system would, at minimum, double this load, grossly decreasing the granularity of the retinal information or requiring an increase in neural processing in the retina and bandwidth in the optic nerve. Evolution could arrive at a more complex visual system, but it would require modifying a visual cortex specialized to receive and interpret the three cone outputs: adding a fourth cone would mean reengineering the brain as well. These costs far outweigh any adaptive advantage that four cones could produce.

Why 3 Badly Spaced Cones? Evolutionary considerations lead to a more basic question: is color really what our visual system is adapted to perceive? From a design perspective, the most interesting question is not why we have three rather than four cones, but why the three cone fundamentals are so unevenly spaced along the spectrum and unequally represented (63% L, 31% M and 6% S) in the retina. Our acuity to differences in color (hue and saturation) would substantially improve, and our visible spectrum would significantly expand, if the cone sensitivity curves were more evenly spaced and the retinal cone proportions were better balanced.

[figure: multiple L and M photopigments identified in the human retina; curves from Backhaus, Kliegl & Werner (1998), peak wavelengths from Merbs & Nathans (1992), lines connect serine/alanine polymorphisms]

The answer appears in an important optical problem that arises when a large eye is made sensitive to a wide span of the spectrum: chromatic aberration. When light passes through a lens, blue wavelengths are refracted (bent) more strongly than red wavelengths, causing the blue image to focus at a point in front of the red image (diagram at right). This causes overlapping, fuzzy colored fringes in a focused image, especially around the edges of intricate light and dark patterns, such as branches and leaves seen against the sky (right). Chromatic aberration seriously degrades visual acuity, as does a related optical problem, spherical aberration, caused by the somewhat round exterior of the cornea.
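A short Python sketch of the dispersion behind this effect, using the thin lens and Cauchy approximations with rough, water-like placeholder constants (not a model of the human eye): the refractive index, and therefore the focal length, varies with wavelength, so blue comes to a focus slightly in front of red.

# Chromatic aberration in a simple biconvex lens: dispersion makes the
# refractive index wavelength dependent, so each wavelength has its own
# focal length.  Constants below are rough, water-like placeholders.
def refractive_index(wavelength_nm, a=1.320, b=4000.0):
    """Cauchy's approximation: n = A + B / wavelength**2 (wavelength in nm)."""
    return a + b / wavelength_nm ** 2

def focal_length_mm(wavelength_nm, r1_mm=10.0, r2_mm=-10.0):
    """Lensmaker's equation for a thin lens in air."""
    n = refractive_index(wavelength_nm)
    return 1.0 / ((n - 1.0) * (1.0 / r1_mm - 1.0 / r2_mm))

for name, wavelength in [("blue", 450), ("green", 550), ("red", 650)]:
    print(f"{name:>5}: f = {focal_length_mm(wavelength):.2f} mm")
# With these numbers blue focuses roughly half a millimetre in front of red,
# which is the colored fringing described in the text.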
Manufactured optical instruments solve this problem with a sandwich of lenses, the simplest consisting of a convex and concave doublet, one cancelling the chromatic aberration of the other. Animal lenses are always convex (bulging), and an achromatic doublet requires a rather long focal length (proportionally much longer than the diameter of an eye), so the doublet solution is not feasible in a large eye. However, the red wavelengths require less optical bending to come into focus, which means yellow light requires less precise optics, especially in bright daylight, when the aperture of the eye is small relative to its focal length and the eye essentially becomes a pinhole camera.

Evolution tackled chromatic aberration not with complex lenses but with several new adaptations, some of them unique, that substantially reduce the effects of blue and violet wavelengths in the fovea and the eye as a whole:

• A cornea that is more spherical at its center than around its circumference, reducing spherical aberration at the edges.
• A cornea, lens and eye diameter (focal length) that produce the most precise image in the yellow wavelengths, where optical demands are less extreme.
• The prereceptoral filtering in the lens and macular pigment, and in the yellow tint of bleached photopigment, which combine to filter out more than half the blue and violet light below 470 nm.
• A strong directional sensitivity to light incident on the fovea (the Stiles Crawford effect), created by the uniform alignment of photopigment in the outer segment discs; this causes the photopigment to react less strongly to light coming from the side, which is predominantly scattered blue wavelengths.
• An overwhelming population (roughly 94%) of L and M receptors, and a close spacing between their sensitivity peaks, which limits the requirement for precise focusing to the yellow green wavelengths.
• A sparse representation of S receptors in the eye (6% of the total), which substantially reduces their contribution to spatial contrast, and the nearly complete elimination of S cones from the fovea, where sharp focus is critical.
• Separation of luminosity (contrast) information and color information into separate neural channels, to minimize the impact of color on contrast quality.
• Neural filtering of signals by the L and M cones within the fovea that suppresses color information in detailed, contrasty textures.
• Neural filtering higher in the visual system that eliminates chromatic aberration from conscious visual experience, and that can (after a period of adjustment) even eliminate blurring artificially induced by distorting prisms or eyeglasses.

These many adaptations enable the fovea to be extremely effective at edge discrimination, even in strong contrasts of light and dark; they also enhance image clarity when the eyes are stereoscopically combined, greatly improving depth perception. Minimizing chromatic aberration has profound benefits for modern humans, as it makes possible the crisp pattern recognition we require to read text, or the acute depth perception necessary to aim weapons or catch prey. But what about early primates? They were small bodied tree dwellers, who had to read the outlines of tree limbs intertwined in space and judge how far to leap to catch the branch of escape or the bough of dangling food.
You can see this capability in the amazing fearlessness with which all primates scramble and leap across large distances between tree limbs high above the ground, where a single misjudgment can cause crippling injury or death. Those are stakes that evolution can latch onto.

There is a minor downside to a strong selective pressure toward visual acuity and a lack of selective pressure toward color discrimination: colorblindness. Because the genes for both the L and M photopigments are located next to each other on the X chromosome, the lack of duplicate opsin genes in XY males causes frequent variations in the L and M photopigments that make them chemically similar or identical, and can make the 25 nm separation between them disappear. The result is various forms of dichromacy that affect about 5% of the population, nearly all of them males. This red green colorblindness is caused either by missing L cones (protanopia, in 2% of males) or missing M cones (deuteranopia, in 6% of males). (Lack of S cones, or tritanopia, occurs in less than 0.01% of the population.) These conditions can be diagnosed using very simple perceptual tests, such as the Ishihara color disks. Remarkably, many men do not discover (are not told) that they are colorblind until their teenage years, which strongly suggests yet again that hue discrimination is not essential for most life tasks. (For more on color vision deficiencies, see this page.)

[figure: chromatic aberration in a simple lens (red and green light are focused far behind blue light); chromatic aberration and life among the trees]

A currently popular evolutionary explanation for L–M discrimination is that it assists the detection of red fruit among green foliage, the "cherries in the leaves" hypothesis (photos, right). But it can also be interpreted as a chromatic contrast designed to minimize the effects of chromatic aberration around a yellow balance point, so that red and green darken equally around the yellow focus. Edge detection and depth perception based on patterns in light and dark have taken evolutionary priority over any problems involving hue discrimination, and it is on these visual stimuli that culture, social consensus and communication really depend. A telling illustration comes in a map of the busiest areas of the human brain: the integrating connections between the visual and language areas.

[figure: a map of the busiest areas of the human brain, after Hagmann P, Cammoun L, Gigandet X, Meuli R, Honey CJ, et al. (2008)]

We find this edge and pattern imperative carried into our art and documents (etchings, woodcuts, monochrome wash and charcoal or pen drawings, printed text, fabric patterns and vegetable weaves), all art that appeals entirely to the eye's monochrome perception of pattern and line. Nor is this a matter of simple drawings from simple tools. Even with all our printing and reproduction technologies, text and engineering drawings still exclude varied colors to increase the legibility and interpretability of the document. As we say: to see means to understand, and to understand means to see clearly, not colorfully.

The fundamental reference for all things luminous, chromatic and colorimetric is Color Science: Concepts and Methods, Quantitative Data and Formulae (2nd edition) by Günter Wyszecki and W. S. Stiles (John Wiley: 1982), nearly encyclopedic but showing its age.
The best overview of color vision that I have seen (compact, informative and up to date, though emphasizing basic perceptual processes and colorimetry) is The Science of Color (2nd edition) edited by Steven Shevell (Optical Society of America, 2003). I'm especially partial to the overview of experimental methods and evidence in Human Color Vision (2nd edition) by Peter Kaiser and Robert Boynton (Optical Society of America, 1994). Peter Kaiser has also authored a lucid web site on The Joys of Vision. Color Vision: Perspectives from Different Disciplines (de Gruyter, 1998), edited by Werner Backhaus, Reinhold Kliegl and John Werner, contains a variety of interesting chapters, including a study of Monet's aging eyes. The premier review of color and color vision as it relates to printing, photography and analog video is The Reproduction of Colour (6th ed.) by R. W. G. Hunt (John Wiley: 2004). Introduction to Color Imaging Science by Hsien-Che Lee (Cambridge University Press: 2005) is actually an in depth discussion of color topics relevant to color imaging technologies, including digital imaging. A text with similar topical coverage as Hunt but less formal theory is Billmeyer and Saltzman's Principles of Color Technology (3rd ed.) by Roy S. Berns (Wiley Interscience: 2000). Seeing the Light: Optics in Nature, Photography, Color, Vision and Holography by David Falk, Dieter Brill and David Stork (John Wiley: 1986) is an eclectic but very pragmatic and well illustrated traversal of almost every known color phenomenon relevant to modern imaging technologies. Color for Science, Art and Technology edited by Kurt Nassau (North Holland: 1997) is a miscellany of rather unusual chapters on color, such as "The Fifteen Causes of Color," "Color in Abstract Painting," "Organic and Inorganic Pigments," and "The Biological and Therapeutic Effects of Light." A short, conversational introduction to color, with emphasis on the issues vexing to cognitive theories, is C. L. Hardin's Color for Philosophers: Unweaving the Rainbow (Hackett, 1988). A smattering of interesting essays, including John Mollon's chapter on the old and new color subsystems, is available in Color: Art and Science, edited by Trevor Lamb and Janine Bourriau (Cambridge University Press, 1995). To remedy the misperception that visual processes are well understood and uncontroversial, see for example James Fulton's web site on the Processes of Animal Vision.

Last revised 08.I.2015 • © 2015 Bruce MacEvoy
