Moving Averages and MSE in Excel


Spreadsheet implementation of seasonal adjustment and exponential smoothing

It is straightforward to perform seasonal adjustment and to fit exponential smoothing models using Excel. The screenshots and charts below are taken from a spreadsheet that has been set up to illustrate multiplicative seasonal adjustment and linear exponential smoothing on the following quarterly sales data from Outboard Marine: Click here to get a copy of the spreadsheet file itself. The version of linear exponential smoothing that will be used here for demonstration purposes is Brown's version, merely because it can be implemented with a single column of formulas and there is only one smoothing constant to optimize. Normally it is better to use Holt's version, which has separate smoothing constants for level and trend.

The forecasting process proceeds as follows: (i) first the data are seasonally adjusted; (ii) then forecasts are generated for the seasonally adjusted data via linear exponential smoothing; and (iii) finally the seasonally adjusted forecasts are "reseasonalized" to obtain forecasts for the original series.

The seasonal adjustment process is carried out in columns D through G. The first step in seasonal adjustment is to compute a centered moving average (performed here in column D). This can be done by taking the average of two one-year-wide averages that are offset by one period relative to each other. (A combination of two offset averages, rather than a single average, is needed for centering purposes when the number of seasons is even.) The next step is to compute the ratio to the moving average, i.e., the original data divided by the moving average in each period, which is performed here in column E. (The moving average is also called the "trend-cycle" component of the pattern, inasmuch as trend and business-cycle effects could be considered to be all that remains after averaging over a whole year's worth of data.
Of course, month-to-month changes that are not due to seasonality may be determined by many other factors, but the 12-month average smooths over them to a large extent.)

The estimated seasonal index for each season is computed by first averaging all the ratios for that particular season, which is done in cells G3-G6 using an AVERAGEIF formula. The averaged ratios are then rescaled so that they sum to exactly 100 times the number of periods in a season, or 400 in this case, which is done in cells H3-H6. Below, in column F, VLOOKUP formulas are used to insert the appropriate seasonal index value in each row of the data table, according to the quarter of the year it represents. The centered moving average and the seasonally adjusted data look like this: Note that the moving average typically looks like a smoother version of the seasonally adjusted series, and it is shorter at both ends.

Another worksheet in the same Excel file shows the application of the linear exponential smoothing model to the seasonally adjusted data, beginning in column G. A value for the smoothing constant (alpha) is entered above the forecast column (here, in cell H9), and for convenience it is assigned the range name "Alpha". (The name is assigned using the "Insert/Name/Create" command.) The LES model is initialized by setting the first two forecasts equal to the first actual value of the seasonally adjusted series. The formula used here for the LES forecast is the single-equation recursive form of Brown's model: This formula is entered in the cell corresponding to the third period (here, cell H15) and is copied down from there. Notice that the LES forecast for the current period refers to the two preceding observations and the two preceding forecast errors, as well as to the value of alpha. Thus, the forecast formula in row 15 refers only to data that were available in row 14 and earlier.
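The seasonal-adjustment steps described above (centered moving average, ratio to moving average, averaged ratios rescaled to sum to 400) can be sketched outside the spreadsheet as well. This Python sketch uses hypothetical quarterly values, not the Outboard Marine data:

```python
# Sketch of the column D-H logic: centered moving average, ratio to moving
# average, and seasonal indices rescaled to sum to 100 * (periods per year).

def centered_ma(y, s=4):
    """Average of two one-year averages offset by one period (even s)."""
    half = s // 2
    out = [None] * len(y)
    for t in range(half, len(y) - half):
        first = sum(y[t - half:t + half]) / s
        second = sum(y[t - half + 1:t + half + 1]) / s
        out[t] = (first + second) / 2
    return out

def seasonal_indices(y, s=4):
    cma = centered_ma(y, s)
    ratios = [100 * yt / m if m else None for yt, m in zip(y, cma)]
    avg = [0.0] * s
    for q in range(s):  # average the ratios season by season (the AVERAGEIF step)
        vals = [r for i, r in enumerate(ratios) if i % s == q and r is not None]
        avg[q] = sum(vals) / len(vals)
    scale = 100 * s / sum(avg)  # rescale so the indices sum to 100*s (400 here)
    return [a * scale for a in avg]

sales = [125, 142, 168, 110, 132, 151, 180, 117, 139, 160, 191, 124]  # hypothetical
idx = seasonal_indices(sales)
adjusted = [100 * yt / idx[t % 4] for t, yt in enumerate(sales)]  # seasonally adjusted
```

As in the worksheet, the moving average (and hence the ratios) is undefined for the first and last two quarters, which is why the adjusted series is "shorter at both ends."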
(Of course, if we wanted to use simple rather than linear exponential smoothing, we could substitute the SES formula here instead. We could also use Holt's rather than Brown's LES model, which would require two more columns of formulas to compute the level and trend that are used in the forecast.)

The errors are computed in the next column (here, column J) by subtracting the forecasts from the actual values. The root mean squared error is computed as the square root of the variance of the errors plus the square of their mean. (This follows from the mathematical identity: MSE = VARIANCE(errors) + (AVERAGE(errors))^2.) In computing the mean and variance of the errors in this formula, the first two periods are excluded, because the model does not actually begin forecasting until the third period (row 15 on the spreadsheet). The optimal value of alpha can be found either by varying alpha manually until the minimum RMSE is found, or else you can use the "Solver" to perform an exact minimization. The value of alpha that Solver found is shown here (alpha = 0.471).

It is usually a good idea to plot the errors of the model (in transformed units) and also to compute and plot their autocorrelations at lags of up to one season. Here is a time series plot of the (seasonally adjusted) errors: The error autocorrelations are computed using the CORREL() function to compute the correlations of the errors with themselves lagged by one or more periods; the details are shown in the spreadsheet model. Here is a plot of the autocorrelations of the errors at the first five lags: The autocorrelations at lags 1 through 3 are very close to zero, but the spike at lag 4 (whose value is 0.35) is slightly troublesome; it suggests that the seasonal adjustment process has not been entirely successful. However, it is actually only marginally significant: 95% significance bands for testing whether autocorrelations are significantly different from zero are roughly plus-or-minus 2/SQRT(n-k), where n is the sample size and k is the lag.
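A rough Python stand-in for this part of the worksheet, assuming the recursive form of Brown's model as described above (a forecast built from the two preceding observations and the two preceding errors); the data values are hypothetical, and a simple grid scan plays the role of Solver:

```python
# Brown's LES in single-equation recursive form, the RMSE identity from the
# text, and a grid scan over alpha in place of Solver. Data are hypothetical.
import statistics

def les_errors(y, alpha):
    f = [y[0], y[0]]                      # first two forecasts = first actual value
    e = [y[0] - f[0], y[1] - f[1]]
    for t in range(2, len(y)):
        ft = (2 * y[t - 1] - y[t - 2]
              - 2 * (1 - alpha) * e[t - 1]
              + (1 - alpha) ** 2 * e[t - 2])
        f.append(ft)
        e.append(y[t] - ft)
    return e[2:]                          # exclude the first two periods

def rmse(errors):
    # sqrt of: variance of the errors plus the square of their mean
    # (exactly the MSE identity when the population variance is used)
    return (statistics.pvariance(errors) + statistics.mean(errors) ** 2) ** 0.5

y = [112.0, 118.0, 132.0, 129.0, 121.0, 135.0, 148.0, 148.0, 136.0, 119.0]
best_alpha = min((a / 100 for a in range(1, 101)),
                 key=lambda a: rmse(les_errors(y, a)))
```

With alpha = 1 the formula collapses to the pure linear extrapolation 2*y(t-1) - y(t-2), which is a handy sanity check on the recursion.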
Here n = 38 and k varies from 1 to 5, so the square root of n minus k is around 6 for all of them, and hence the limits for testing the statistical significance of deviations from zero are roughly plus-or-minus 2/6, or 0.33. If you vary the value of alpha by hand in this Excel model, you can observe the effect on the time series and autocorrelation plots of the errors, as well as on the root mean squared error, which will be illustrated below.

At the bottom of the spreadsheet, the forecast formula is "bootstrapped" into the future by simply substituting forecasts for actual values at the point where the actual data run out, i.e., where "the future" begins. (In other words, in each cell where a future data value would go, a cell reference is entered that points to the forecast made for that period.) All the other formulas are simply copied down from above: Notice that the errors for the forecasts of the future are all computed to be zero. This does not mean that the actual errors will be zero; it merely reflects the fact that we are assuming the future data will equal the forecasts on average.

The resulting LES forecasts for the seasonally adjusted data look like this: With this particular value of alpha, which is optimal for one-period-ahead forecasting, the projected trend is slightly upward, reflecting the local trend that was observed over the last two years or so. For other values of alpha, a very different trend projection may be obtained. It is usually a good idea to see what happens to the long-term trend projection when alpha is varied, because the value that is best for short-term forecasting will not necessarily be the best value for predicting the more distant future. For example, here is the result that is obtained if the value of alpha is manually set to 0.25: The projected long-term trend is now negative rather than positive.
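The CORREL()-style autocorrelation and the 2/SQRT(n-k) band can be mimicked in a few lines of Python; the error values below are hypothetical:

```python
# Lag-k autocorrelation the way the text computes it with CORREL(): the
# correlation of the error series with itself shifted by k periods, compared
# against the rough 2/sqrt(n-k) significance band.

def lag_correl(e, k):
    a, b = e[:-k], e[k:]
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def sig_band(n, k):
    return 2 / (n - k) ** 0.5

errors = [1.2, -0.4, 0.9, -1.5, 1.1, -0.2, 0.8, -1.3, 0.7, -0.6]  # hypothetical
r1 = lag_correl(errors, 1)
band = sig_band(len(errors), 1)   # with n = 38, k = 4 this is about 0.34
```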
With a smaller value of alpha, the model places more weight on older data in its estimation of the current level and trend, and its long-term forecasts reflect the downward trend that was observed over the last five years rather than the more recent upward trend. This chart also clearly illustrates how the model with a smaller value of alpha is slower to respond to "turning points" in the data, and therefore tends to make errors of the same sign for many periods in a row. Its one-step-ahead forecast errors are larger on average than those obtained before (an RMSE of 34.4 rather than 27.4) and strongly positively autocorrelated: the lag-1 autocorrelation of 0.56 greatly exceeds the value of 0.33 computed above for a statistically significant deviation from zero. As an alternative to shrinking the value of alpha in order to introduce more conservatism into long-term forecasts, a "trend dampening" factor is sometimes added to the model to make the projected trend flatten out after a few periods.

The final step in building the forecasting model is to "reseasonalize" the LES forecasts by multiplying them by the appropriate seasonal indices. Thus, the reseasonalized forecasts in column I are simply the products of the seasonal indices in column F and the seasonally adjusted LES forecasts in column H.

It is relatively easy to compute confidence intervals for the one-step-ahead forecasts made by this model: first compute the RMSE (the root-mean-squared error, which is just the square root of the MSE), and then compute a confidence interval for the seasonally adjusted forecast by adding and subtracting two times the RMSE. (In general, a 95% confidence interval for a one-period-ahead forecast is roughly equal to the point forecast plus-or-minus two times the estimated standard deviation of the forecast errors, assuming the error distribution is approximately normal and the sample size is large enough, say, 20 or more.
Here the RMSE, rather than the sample standard deviation of the errors, is the best estimate of the standard deviation of future forecast errors, because it takes bias as well as random variation into account.) The confidence limits for the seasonally adjusted forecast are then reseasonalized, along with the forecast, by multiplying them by the appropriate seasonal indices. In this case the RMSE is equal to 27.4, and the seasonally adjusted forecast for the first future period (Dec-93) is 273.2, so the seasonally adjusted 95% confidence interval runs from 273.2 - 2*27.4 = 218.4 to 273.2 + 2*27.4 = 328.0. Multiplying these limits by December's seasonal index of 68.61, we obtain lower and upper confidence limits of 149.8 and 225.0 around the Dec-93 point forecast of 187.4.

Confidence limits for forecasts more than one period ahead will generally widen as the forecast horizon increases, owing to uncertainty about the level and trend as well as the seasonal factors, but it is difficult to compute them in general by analytical methods. (The appropriate way to compute confidence limits for the LES forecast is by using ARIMA theory, but the uncertainty in the seasonal indices is another matter.) If you want a realistic confidence interval for a forecast more than one period ahead, taking all sources of error into account, your best bet is to use empirical methods: for example, to obtain a confidence interval for a 2-step-ahead forecast, you could create another column on the spreadsheet to compute a 2-step-ahead forecast for every period (by bootstrapping the one-step-ahead forecast). Then compute the RMSE of the 2-step-ahead forecast errors and use this as the basis for a 2-step-ahead confidence interval.
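The interval arithmetic of the worked example above can be replayed directly:

```python
# The reseasonalized confidence-interval calculation from the text: point
# forecast plus-or-minus two RMSEs, then both limits multiplied by the
# seasonal index for the month in question.

rmse = 27.4
sa_forecast = 273.2                    # seasonally adjusted forecast, Dec-93
sa_lower = sa_forecast - 2 * rmse      # 218.4
sa_upper = sa_forecast + 2 * rmse      # 328.0
index = 68.61 / 100                    # December's seasonal index
point = sa_forecast * index            # about 187.4
lower, upper = sa_lower * index, sa_upper * index   # about 149.8 and 225.0
```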
Excel For Statistical Data Analysis

This is a webtext companion site of Business Statistics USA Site. (For visitors from the Spanish-speaking world, a Spanish-language version of this site is available at: Sitio Espejo para América Latina / Sitio de los EEUU.)

Excel is a widely used statistical package, which serves as a tool for understanding statistical concepts and computations, and for checking your hand-worked calculations when solving homework problems. This site gives an introduction to the basics of, and to working with, Excel. Redoing the illustrated numerical examples on this site will help improve your familiarity with it and, as a result, increase the effectiveness and efficiency of your work in statistics. To search the site, try Edit | Find in page [Ctrl + f]. Enter a word or phrase in the dialogue box, e.g. "variance" or "average". If the first appearance of the word or phrase is not what you are looking for, try Find Next.

Introduction: This site provides illustrative experience in the use of Excel for data summary, presentation, and other basic statistical analysis. I believe the popular use of Excel is in the areas where Excel really can excel: this includes organizing data, i.e. basic data management, tabulation and graphics. For real statistical analysis, one must learn to use professional commercial statistical packages such as SAS and SPSS.

Microsoft Excel 2000 (version 9) provides a set of data analysis tools called the Analysis ToolPak, which you can use to save steps when you develop complex statistical analyses. You provide the data and parameters for each analysis; the tool uses the appropriate statistical macro functions and then displays the results in an output table. Some tools generate charts in addition to output tables. If the Data Analysis command is on the Tools menu, the Analysis ToolPak is installed on your system.
However, if the Data Analysis command is not on the Tools menu, you need to install the Analysis ToolPak by doing the following:

Step 1: On the Tools menu, click Add-Ins. If the Analysis ToolPak is not listed in the Add-Ins dialog box, click Browse and locate the drive, folder name, and file name for the Analysis ToolPak add-in, Analys32.xll, which is usually located in the Program Files\Microsoft Office\Office\Library\Analysis folder. Once you find the file, select it and click OK.

Step 2: If you cannot find the Analys32.xll file, you must install it. Insert Microsoft Office 2000 Disk 1 into your CD-ROM drive. Select Run from the Windows Start menu. Browse and select the drive for your CD. Select Setup.exe, click Open, and then click OK. Click the Add or Remove Features button. Click the + next to Microsoft Excel for Windows. Click the + next to Add-ins. Click the down arrow next to Analysis ToolPak. Select Run from My Computer. Select the Update Now button. Excel will now update your system to include the Analysis ToolPak. Launch Excel. On the Tools menu, click Add-Ins and check the Analysis ToolPak box.

Step 3: The Analysis ToolPak add-in is now installed, and Data Analysis will now be selectable on the Tools menu.

Microsoft Excel is a powerful spreadsheet package available for Microsoft Windows and the Apple Macintosh. Spreadsheet software is used to store information in columns and rows, which can then be organized and processed. Spreadsheets are designed to work well with numbers, but often include text. Excel organizes your work into workbooks; each workbook can contain many worksheets, and worksheets are used to list and analyze data.

Excel is available on all public-access PCs (i.e., those in, for example, the Library and the PC Labs). It can be opened either by selecting Start - Programs - Microsoft Excel, or by clicking on the Excel shortcut, which is either on the desktop of any PC or on the Office toolbar.
Opening a Document: Click File - Open (Ctrl+O) to open an existing workbook; change the directory area or drive to look for files in other locations. To create a new workbook, click File - New - Blank Document.

Saving and Closing a Document: To save your document with its current file name, location and file format, click File - Save. If you are saving for the first time, click File - Save, type a name for the document, and then click OK. Also use File - Save if you want to save to a different file name or location. When you have finished working on a document, you should close it: go to the File menu and click Close. If you have made any changes since the file was last saved, you will be asked whether you wish to save them.

The Excel Screen, Workbooks and Worksheets: When you start Excel, a blank worksheet is displayed, consisting of a multitude of cells, with numbered rows down the page and alphabetically labelled columns across the page. Each cell is referenced by its coordinates (for example, A3 refers to the cell in column A and row 3; B10:B20 refers to the range of cells in column B from row 10 to row 20). Your work is stored in an Excel file called a workbook. Each workbook may contain several worksheets and/or charts; the current worksheet is called the active sheet. To view a different worksheet in a workbook, click the appropriate sheet tab. You can access and execute commands directly from the main menu, or you can point to one of the toolbar buttons (the display box that appears below the button when you place the cursor over it indicates the name of the button) and click once.

Moving Around the Worksheet: It is important to be able to move around the worksheet effectively, because you can only enter or change data at the position of the cursor. You can move the cursor using the arrow keys, or by moving the mouse to the required cell and clicking.
Once a cell has been selected, it becomes the active cell and is identified by a thick border; only one cell can be active at a time. To move from one worksheet to another, click the sheet tabs. (If your workbook contains many sheets, right-click the tab scrolling buttons and then click the sheet you want.) The name of the active sheet is shown in bold.

Moving Between Cells: Here are some keyboard shortcuts for moving the active cell: Home - moves to the first column in the current row; Ctrl+Home - moves to the top left corner of the document; End then Home - moves to the last cell in the document. To move between cells on a worksheet, click any cell or use the arrow keys. To see a different area of the sheet, use the scroll bars: click the arrows, or the area above or below the scroll box, in either the vertical or horizontal scroll bar. Note that the size of a scroll box indicates the proportion of the used area of the sheet that is visible in the window, and the position of a scroll box indicates the relative location of the visible area within the worksheet.

Entering Data: A new worksheet is a grid of rows and columns. The rows are labelled with numbers, and the columns are labelled with letters. Each intersection of a row and a column is a cell, and each cell has an address, which is the column letter followed by the row number. The arrow on the worksheet to the right points to cell A1, which is currently highlighted, indicating that it is the active cell. A cell must be active in order to enter information into it. To highlight (select) a cell, click on it.

To select more than one cell: Click on a cell (e.g. A1), then hold down the Shift key while you click on another (e.g. D4) to select all cells between and including A1 and D4. Or click on a cell (e.g. A1) and drag the mouse across the desired range, releasing the mouse button on another cell (e.g. D4), to select all cells between and including A1 and D4.
To select several cells which are not adjacent, press Ctrl and click on the cells you wish to select. Click a number or letter labelling a row or column to select the entire row or column. One worksheet can have up to 256 columns and 65,536 rows, so it will be a while before you run out of space.

Each cell can contain a label, a value, a logical value, or a formula. Labels can contain any combination of letters, numbers, or symbols. Values are numbers, and only values (numbers) can be used in calculations; a value can also be a date or a time. Logical values are TRUE or FALSE. Formulas automatically perform calculations on the values in other specified cells and display the result in the cell in which the formula is entered (for example, you can specify that cell D3 is to contain the sum of the numbers in B3 and C3; the number displayed in D3 will then be a function of the numbers entered into B3 and C3).

To enter information into a cell, select the cell and begin typing. Note that as you type information into the cell, the information you enter also displays in the formula bar. You can also enter information into the formula bar, and the information will appear in the selected cell. When you have finished entering the label or value: press Enter to move to the next cell below (in this case A2); press Tab to move to the next cell to the right (in this case B1); or click in any cell to select it.

Entering Labels: Unless the information you enter is formatted as a value or a formula, Excel will interpret it as a label and will, by default, align the text on the left side of the cell. If you are creating a long worksheet and will be repeating the same label information in many different cells, you can use the AutoComplete function. This function looks at other entries in the same column and attempts to match a previous entry with your current entry.
For example, if you have already typed Wesleyan in another cell and you type W in a new cell, Excel will automatically enter Wesleyan. If you intended to type Wesleyan into the cell, your task is done, and you can move on to the next cell. If you intended to type something else, e.g. Williams, into the cell, simply continue typing to enter the term. To turn on the AutoComplete function, click Tools on the menu bar, then select Options, then select Edit, and click to put a check in the box beside Enable AutoComplete for cell values.

Another way to quickly enter repeated labels is to use the Pick List feature. Right-click on a cell, then select Pick From List. This gives you a menu of all other entries in cells in that column. Click on an item in the menu to enter it into the currently selected cell.

Entering Values: A value is a number, date or time, plus a few symbols, if necessary, to further define the numbers [such as . , - ( )]. Numbers are assumed to be positive; to enter a negative number, use a minus sign (-) or enclose the number in parentheses (). Dates are stored as MM/DD/YYYY, but you do not have to enter them precisely in that format. If you enter jan 9 or jan-9, Excel will recognize it as January 9 of the current year and store it as 1/9/2002. Enter the four-digit year for a year other than the current year (e.g. jan 9, 1999). To enter today's date, press Ctrl and ; (semicolon) at the same time. Times default to a 24-hour clock. Use a or p to indicate am or pm if you use a 12-hour clock (e.g. 8:30 p is interpreted as 8:30 PM). To enter the current time, press Ctrl and : (Shift-semicolon) at the same time.

An entry interpreted as a value (number, date or time) is aligned to the right side of the cell. To reformat a value ...
Applying Colors to Maximum and/or Minimum Values: Select a cell in the region and press Ctrl+Shift+* (in Excel 2003, press this or Ctrl+A) to select the current region. From the Format menu, select Conditional Formatting. In Condition 1, select Formula Is, and type =MAX($F:$F)=$F1. Click Format, select the Font tab, select a color, and then click OK. In Condition 2, select Formula Is, and type =MIN($F:$F)=$F1. Repeat step 4, selecting a different color than you selected for Condition 1, and then click OK. Note: Be sure to distinguish between absolute and relative references when entering the formulas.

Rounding Numbers that Meet Specified Criteria: Problem: Round all the numbers in column A to zero decimal places, except those that have a 5 in the first decimal place. Solution: Use the IF, MOD and ROUND functions in the following formula: =IF(MOD(A2,1)=0.5, A2, ROUND(A2,0)).

Copying and Pasting All Cells in a Sheet: Select the cells in the sheet by pressing Ctrl+A (in Excel 2003, select a cell in a blank area before pressing Ctrl+A, or, from a selected cell in a Current Region/List range, press Ctrl+A+A). Or click Select All at the top left intersection of the rows and columns. Press Ctrl+C. Press Ctrl+Page Down to select another sheet, then select cell A1. Press Enter.

Copying the Entire Sheet: Copying the entire sheet means copying the cells, the page setup parameters, and the defined range names. Option 1: Move the mouse pointer to a sheet tab. Press Ctrl, and hold the mouse button down to drag the sheet to a different location. Then release the mouse button and the Ctrl key. Option 2: Right-click the appropriate sheet tab. From the shortcut menu, select Move or Copy. The Move or Copy dialog box enables you to copy the sheet either to a different location in the current workbook or to a different workbook; be sure to check the Create a copy checkbox. Option 3: From the Window menu, select Arrange, then select Tiled to tile all open workbooks in the window.
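As a cross-check of the rounding formula above, here is a Python sketch of the same rule (the function name is ours, and math.floor/math.ceil is used to mimic Excel's half-away-from-zero ROUND):

```python
# Python equivalent of the worksheet formula IF(MOD(A2,1)=0.5, A2, ROUND(A2,0)):
# round to zero decimals, except leave values whose decimal part is exactly .5
# untouched.
import math

def round_except_half(x):
    if x % 1 == 0.5:           # the MOD(A2, 1) = 0.5 test
        return x
    # half-away-from-zero rounding, like Excel's ROUND(x, 0)
    return math.floor(x + 0.5) if x >= 0 else math.ceil(x - 0.5)

values = [1.3, 2.5, 3.7, 4.5, 6.0]
rounded = [round_except_half(v) for v in values]   # [1, 2.5, 4, 4.5, 6]
```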
Use Option 1 (dragging the sheet while pressing Ctrl) to copy or move a sheet.

Sorting by Columns: The default setting for sorting in ascending or descending order is by row. To sort by columns: From the Data menu, select Sort, then Options. Select the Sort left to right option button and click OK. In the Sort by option of the Sort dialog box, select the row number by which the columns are to be sorted, and click OK.

Descriptive Statistics: The Data Analysis ToolPak has a Descriptive Statistics tool that provides an easy way to calculate summary statistics for a set of sample data. The summary statistics include the Mean, Standard Error, Median, Mode, Standard Deviation, Variance, Kurtosis, Skewness, Range, Minimum, Maximum, Sum and Count. This tool eliminates the need to type individual functions to find each of these results. Excel includes elaborate and customizable toolbars, for example the Standard toolbar shown here. Some of the icons perform useful mathematical computations: the AutoSum icon, which enters the summation formula =SUM() to add a range of cells; the Function Wizard icon, which gives you access to all the functions available; and the Graph Wizard icon, which gives access to all the graph types available, as shown in this display. Excel can be used to generate measures of location and variability for a variable.

Suppose we wish to find descriptive statistics for the sample data 2, 4, 6 and 8. Step 1. Select the Tools pull-down menu; if you see Data Analysis, click on this option, otherwise click on the Add-Ins option to install the Analysis ToolPak. Step 2. Click on Data Analysis. Step 3. Choose Descriptive Statistics from the Analysis Tools list. Step 4. When the dialog box appears: enter A1:A4 in the Input Range box (A1 is the value in column A and row 1, in this case 2); using the same technique, enter the other values, until you reach the last one.
For example, if a sample consists of 20 numbers, you would select A1, A2, A3, etc. as the input range. Step 5. Select an Output Range, in this case B1, and click on Summary Statistics to see the results. When you click OK, you will see the result in the selected range. As you will see, the mean of the sample is 5, the median is 5, the standard deviation is 2.581989, the sample variance is 6.666667, the range is 6, and so on. Each of these quantities may be important in carrying out different statistical procedures.

Normal Distribution: Consider the problem of finding the probability of getting less than a certain value under a normal probability distribution. As an illustrative example, let us suppose that SAT scores nationwide are normally distributed with a mean and standard deviation of 500 and 100, respectively. Answer the following questions based on the given information: A: What is the probability that a randomly selected student's score is less than 600 points? B: What is the probability that a randomly selected student's score will exceed 600 points? C: What is the probability that a randomly selected student's score will be between 400 and 600?

Hint: Using Excel, you can find the probability of getting a value that is less than or equal to a given value. In a problem where the mean and standard deviation of the population are given, you have to use common sense to find the various probabilities based on the question, since you know that the total area under a normal curve is 1.

Step 1. In the worksheet, select the cell where you want the answer to appear; suppose you chose cell A1. Steps 2-3. From the menus, select Insert, then click on the Function option. Step 4. After clicking on the Function option, the Paste Function dialog box appears; from the Function Category box, select Statistical, then select NORMDIST from the Function Name box, and click OK. Step 5. After clicking OK, the NORMDIST distribution box appears: i.
Enter 600 in the X (value) box. ii. Enter 500 in the Mean box. iii. Enter 100 in the Standard deviation box. iv. Type "true" in the Cumulative box, then click OK. As you see, the value 0.84134474 appears in A1, indicating the probability that a randomly selected student's score is below 600 points. Using common sense, we can answer part "b" by subtracting 0.84134474 from 1, so the part "b" answer is 1 - 0.84134474, or 0.15865526. This is the probability that a randomly selected student's score is greater than 600 points. To answer part "c", use the same technique to find the probabilities (areas) to the left of the values 600 and 400. Since these areas (probabilities) overlap, to answer the question you should subtract the smaller probability from the larger one. The answer is 0.84134474 - 0.15865526, which is 0.68269. The screen shot should look like the following:

Calculating the value of a random variable (often called the "x" value): You can use NORMINV from the function box to calculate a value for the random variable, if the probability to the left of this variable is given. In fact, you should use this function to calculate various percentiles. In this problem, one could ask: what is the score of a student whose percentile is 90? This means that approximately 90% of the students' scores are less than this number. On the other hand, if we were asked to do this problem by hand, we would have to calculate the x value using the normal distribution formula x = mean + z × standard deviation. Now let us use Excel to calculate P90. In the Paste Function dialog box, click on Statistical, then click on NORMINV. The screen shot will look like the following: When you select NORMINV, its dialog box appears. i. Enter 0.90 for the probability (this means that approximately 90% of the students' scores are less than the value we are looking for). ii. Enter 500 for the mean (this is the mean of the normal distribution in our case). iii.
Enter 100 for the standard deviation (this is the standard deviation of the normal distribution in our case). At the bottom of this screen you will see the formula result, which is approximately 628 points. This means that the top 10% of students scored better than 628.

Confidence Interval for the Mean: Suppose we wish to estimate a confidence interval for the mean of a population. Depending on the size of your sample, you may use one of the following cases:

Large Sample Size (n larger than, say, 30): The general formula for developing a confidence interval for a population mean is x̄ ± Z·S/√n. In this formula, x̄ is the mean of the sample; Z is the interval coefficient, which can be found from the normal distribution table (for example, the interval coefficient for a 95% confidence level is 1.96); S is the standard deviation of the sample; and n is the sample size.

Now we would like to show how Excel is used to develop a confidence interval for a population mean based on sample information. As you see, in order to evaluate this formula you need "the mean of the sample" and the margin of error; Excel will automatically calculate these quantities for you. The only things you have to do are: add the margin of error to the mean of the sample to find the upper limit of the interval, and subtract the margin of error from the mean to find the lower limit of the interval.

To demonstrate how Excel finds these quantities, we will use a data set containing the hourly incomes of 36 work-study students here at the University of Baltimore. These numbers appear in cells A1 to A36 on an Excel worksheet. After entering the data, we follow the descriptive statistics procedure to calculate the unknown quantities. The only additional step is to click on the confidence interval in the Descriptive Statistics dialog box and enter the given confidence level, in this case 95%. Here are the above procedures step by step: Step 1. Enter the data in cells A1 to A36 (on the spreadsheet). Step 2.
From the menus select Tools. Step 3. Click on Data Analysis, then choose the Descriptive Statistics option, then click OK. On the descriptive statistics dialog, click on Summary Statistics. After you have done that, click on the confidence interval level and type 95 - or, in other problems, whatever confidence level you desire. In the Output Range box enter B1 or whatever location you desire. Now click OK. The screen shot would look like the following: As you see, the spreadsheet shows that the mean of the sample is 6.902777778 and the absolute value of the margin of error is 0.231678109. This mean is based on this sample information. A 95% confidence interval for the hourly income of the UB work-study students has an upper limit of 6.902777778 + 0.231678109 and a lower limit of 6.902777778 - 0.231678109. On the other hand, we can say that of all the intervals formed this way, 95% contain the mean of the population. Or, for practical purposes, we can be 95% confident that the mean of the population is between 6.902777778 - 0.231678109 and 6.902777778 + 0.231678109. We can be at least 95% confident that the interval 6.67 to 7.13 contains the average hourly income of a work-study student. Small Sample Size (say, less than 30): If the sample size n is less than 30, we must use the small sample procedure to develop a confidence interval for the mean of a population. The general formula for developing confidence intervals for the population mean based on a small sample is: In this formula, x̄ is the mean of the sample; t is the interval coefficient providing an area of α/2 in the upper tail of a t distribution with n-1 degrees of freedom, which can be found from a t distribution table (for example, the interval coefficient for a 90% confidence level is 1.833 if the sample size is 10); S is the standard deviation of the sample and n is the sample size. Now you would like to see how Excel is used to develop a certain confidence interval of a population mean based on this small sample information.
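Stepping back to the NORMDIST/NORMINV examples earlier, the same probabilities can be cross-checked outside Excel. This is a minimal sketch using only Python's standard library; the numbers (mean 500, standard deviation 100, cut points 400 and 600) are the ones from the example above, and the variable names are ours:

```python
from statistics import NormalDist

# Distribution of scores from the example: mean 500, standard deviation 100
scores = NormalDist(mu=500, sigma=100)

p_below_600 = scores.cdf(600)                   # Excel: NORMDIST(600,500,100,TRUE)
p_above_600 = 1 - p_below_600                   # part "b": complement rule
p_between = scores.cdf(600) - scores.cdf(400)   # part "c": subtract overlapping areas
p90_score = scores.inv_cdf(0.90)                # Excel: NORMINV(0.90,500,100)

print(round(p_below_600, 5))  # 0.84134
print(round(p_between, 5))    # 0.68269
print(round(p90_score))       # 628
```

The same `cdf`/`inv_cdf` pair mirrors the NORMDIST/NORMINV division of labor: one maps a value to a cumulative probability, the other maps a probability back to a value.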
As you see, to evaluate this formula you need "the mean of the sample" and the margin of error; Excel will automatically calculate these quantities the way it did for large samples. Again, the only things you have to do are: add the margin of error to the mean of the sample to find the upper limit of the interval, and subtract the margin of error from the mean to find the lower limit of the interval. To demonstrate how Excel finds these quantities, we will use a data set which contains the hourly incomes of 10 work-study students here at the University of Baltimore. These numbers appear in cells A1 to A10 on an Excel worksheet. After entering the data, we follow the descriptive statistics procedure to calculate the unknown quantities (exactly the way we found the quantities for the large sample). Here are the procedures in step-by-step form: Step 1. Enter data in cells A1 to A10 on the spreadsheet. Step 2. From the menus select Tools. Step 3. Click on Data Analysis, then choose the Descriptive Statistics option. Click OK. On the descriptive statistics dialog, click on Summary Statistics, click on the confidence interval level and type in 90 - or, in other problems, whichever confidence level you desire. In the Output Range box, enter B1 or whatever location you desire. Now click OK. The screen shot will look like the following: Now, as with the calculation of the confidence interval for the large sample, calculate the confidence interval of the population based on this small sample information. The confidence interval is 6.8 ± 0.414426102, or 6.39 to 7.21. We can be at least 90% confident that the interval 6.39 to 7.21 contains the true mean of the population. Test of Hypothesis Concerning the Population Mean: Again, we must distinguish two cases with respect to the size of your sample. Large Sample Size (say, over 30): In this section you wish to know how Excel can be used to conduct a hypothesis test about a population mean.
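Both interval computations above follow the same pattern, mean ± coefficient × S/√n. A small sketch: the helper function name is ours, the coefficient 1.96 is the 95% z value quoted above, and for the demonstration we reuse the large-sample mean and margin of error reported by Excel:

```python
import math

def confidence_interval(mean, s, n, coef):
    """Return (lower, upper) for mean +/- coef * s / sqrt(n)."""
    margin = coef * s / math.sqrt(n)
    return mean - margin, mean + margin

# Large-sample case from above: Excel reported mean 6.902777778 and a
# margin of error of 0.231678109; here we reuse that margin directly.
mean, margin = 6.902777778, 0.231678109
lower, upper = mean - margin, mean + margin
print(round(lower, 2), round(upper, 2))  # 6.67 7.13
```

For the small-sample case the only change is the coefficient: swap the z value for the t value (1.833 for 90% confidence with 9 degrees of freedom, per the table reference above).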
We will use the hourly incomes of different work-study students than those introduced earlier in the confidence interval section. Data are entered in cells A1 to A36. The objective is to test the following null and alternative hypotheses: The null hypothesis indicates that the average hourly income of a work-study student is equal to $7 per hour; however, the alternative hypothesis indicates that the average hourly income is not equal to $7 per hour. I will repeat the steps taken in descriptive statistics, and at the very end will show how to find the value of the test statistic, in this case z, using a cell formula. Step 1. Enter data in cells A1 to A36 (on the spreadsheet). Step 2. From the menus select Tools. Step 3. Click on Data Analysis, then choose the Descriptive Statistics option, click OK. On the descriptive statistics dialog, click on Summary Statistics. Select the Output Range box, enter B1 or whichever location you desire. Now click OK. (To calculate the value of the test statistic, search for the mean of the sample and then the standard error. In this output, these values are in cells C3 and C4.) Step 4. Select cell D1 and enter the cell formula =(C3 - 7)/C4. The screen shot should look like the following: The value in cell D1 is the value of the test statistic. Since this value falls in the acceptance range of -1.96 to 1.96 (from the normal distribution table), we fail to reject the null hypothesis. Small Sample Size (say, less than 30): Using the steps taken in the large-sample case, Excel can be used to conduct a hypothesis test for the small-sample case. Let's use the hourly income of 10 work-study students at UB to conduct the following hypothesis test. The null hypothesis indicates that the average hourly income of a work-study student is equal to $7 per hour. The alternative hypothesis indicates that the average hourly income is not equal to $7 per hour.
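The cell formula =(C3 - 7)/C4 in both the large- and small-sample cases is just (sample mean − hypothesized mean) / standard error. A sketch, with the function name ours; the sample mean is the one reported above, while the standard error of 0.1182 is an assumed illustrative value, not taken from the Excel output:

```python
def z_statistic(sample_mean, hypothesized_mean, standard_error):
    # Mirrors the cell formula =(C3 - 7)/C4, with C3 = mean and C4 = standard error
    return (sample_mean - hypothesized_mean) / standard_error

# Sample mean from the output above; standard error 0.1182 is hypothetical,
# for demonstration only.
z = z_statistic(6.902777778, 7, 0.1182)
print(round(z, 3))  # -0.823

# A two-tailed test at the 5% level fails to reject when -1.96 < z < 1.96
print(-1.96 < z < 1.96)  # True
```

The small-sample t statistic is computed by the identical formula; only the critical values change (±2.262 instead of ±1.96).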
I will repeat the steps taken in descriptive statistics, and at the very end will show how to find the value of the test statistic, in this case t, using a cell formula. Step 1. Enter data in cells A1 to A10 (on the spreadsheet). Step 2. From the menus select Tools. Step 3. Click on Data Analysis, then choose the Descriptive Statistics option. Click OK. On the descriptive statistics dialog, click on Summary Statistics. Select the Output Range box, enter B1 or whatever location you chose. Again, click OK. (To calculate the value of the test statistic, search for the mean of the sample and then the standard error; in this output these values are in cells C3 and C4.) Step 4. Select cell D1 and enter the cell formula =(C3 - 7)/C4. The screen shot would look like the following: Since the value of the test statistic t = -0.66896 falls in the acceptance range -2.262 to 2.262 (from the t table, where α/2 = 0.025 and the degrees of freedom are 9), we fail to reject the null hypothesis. Difference Between the Means of Two Populations: In this section we will show how Excel is used to conduct a hypothesis test about the difference between two population means, assuming that the populations have equal variances. The data in this case are taken from various offices here at the University of Baltimore. I collected the hourly income data of 36 randomly selected work-study students and 36 student assistants. The hourly income range for work-study students was $6 - $8, while the hourly income range for student assistants was $6 - $9. The main objective in this hypothesis test is to see whether there is a significant difference between the means of the two populations. The null and the alternative hypotheses are that the means are equal and that the means are not equal, respectively. Referring to the spreadsheet, I chose A1 and B1 as label cells. The work-study students' hourly incomes for a sample size of 36 are shown in cells A2:A37,
and the student assistants' hourly incomes for a sample size of 36 are shown in cells B2:B37. Data for Work-Study Students: 6, 6, 6, 6, 6, 6, 6, 6.5, 6.5, 6.5, 6.5, 6.5, 6.5, 7, 7, 7, 7, 7, 7, 7, 7.5, 7.5, 7.5, 7.5, 7.5, 7.5, 8, 8, 8, 8, 8, 8, 8, 8, 8. Data for Student Assistants: 6, 6, 6, 6, 6, 6.5, 6.5, 6.5, 6.5, 6.5, 7, 7, 7, 7, 7, 7.5, 7.5, 7.5, 7.5, 7.5, 7.5, 8, 8, 8, 8, 8, 8, 8, 8.5, 8.5, 8.5, 8.5, 8.5, 9, 9, 9, 9. Use the Descriptive Statistics procedure to calculate the variances of the two samples. The Excel procedure for testing the difference between the two population means requires information on the variances of the two populations. Since the variances of the two populations are unknown, they should be replaced with the sample variances. The descriptive statistics for both samples show that the variance of the first sample is s1² = 0.55546218, while the variance of the second sample is s2² = 0.969748. To conduct the desired hypothesis test with Excel, the following steps can be taken: Step 1. From the menus select Tools, then click on the Data Analysis option. Step 2. When the Data Analysis dialog box appears: choose z-Test: Two Sample for Means, then click OK. Step 3. When the z-Test: Two Sample for Means dialog box appears: enter A1:A37 in the Variable 1 Range box (work-study students' hourly income); enter B1:B37 in the Variable 2 Range box (student assistants' hourly income); enter 0 in the Hypothesized Mean Difference box (if you desire to test a mean difference other than 0, enter that value); enter the variance of the first sample in the Variable 1 Variance box; enter the variance of the second sample in the Variable 2 Variance box and select Labels; enter 0.05, or whatever level of significance you desire, in the Alpha box; select a suitable Output Range for the results (I chose C19), then click OK. The value of the test statistic, z = -1.9845824, appears in our case in cell D24. The rejection rule for this test is to reject the null hypothesis if z < -1.96 or z > 1.96, from the normal distribution table.
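The z statistic Excel produces in this dialog is (m1 − m2 − D0) / √(σ1²/n1 + σ2²/n2). A sketch with the function name ours; the two variances are the ones from the descriptive statistics above, but the sample means are not quoted in the text, so the values 6.81 and 7.22 below are hypothetical stand-ins:

```python
import math

def two_sample_z(mean1, mean2, var1, var2, n1, n2, d0=0.0):
    # z = (difference of sample means - hypothesized difference) / standard error
    return (mean1 - mean2 - d0) / math.sqrt(var1 / n1 + var2 / n2)

# Variances taken from the descriptive statistics above; the means 6.81 and
# 7.22 are hypothetical, since the text does not list them.
z = two_sample_z(6.81, 7.22, 0.55546218, 0.969748, 36, 36)
print(round(z, 4))

# Two-tailed rule at alpha = 0.05: reject when |z| > 1.96
print(abs(z) > 1.96)
```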
In the Excel output these values for a two-tail test are z = ±1.959961082. Since the value of the test statistic, z = -1.9845824, is less than -1.959961082, we reject the null hypothesis. We can also draw this conclusion by comparing the p-value for a two-tail test with the alpha value. Since the p-value 0.047190813 is less than α = 0.05, we reject the null hypothesis. Overall we can say, based on the sample results, that the two population means are different. Small Samples: n1 or n2 less than 30. In this section we will show how Excel is used to conduct a hypothesis test about the difference between two population means, given that the populations have equal variances, when two small independent samples are taken from the populations. Similar to the above case, the data are taken from various offices here at the University of Baltimore. I collected hourly income data for 11 randomly selected work-study students and 11 randomly selected student assistants. The hourly income ranges for the two groups were similar, $6 - $8 and $6 - $9. The main objective in this hypothesis test is similar, too: to see whether there is a significant difference between the means of the two populations. The null and the alternative hypotheses are that the means are equal and that they are not equal, respectively. Referring to the spreadsheet, we chose A1 and B1 as label cells. The work-study students' hourly incomes for a sample size of 11 are shown in cells A2:A12, and the student assistants' hourly incomes for a sample size of 11 are shown in cells B2:B12. Unlike the previous case, you do not have to calculate the variances of the two samples; Excel will automatically calculate these quantities and use them in the calculation of the value of the test statistic. Similar to the previous case, but a bit different in step 2, to conduct the desired hypothesis test with Excel the following steps can be taken: Step 1. From the menus select Tools, then click on the Data Analysis option. Step 2.
When the Data Analysis dialog box appears: choose t-Test: Two-Sample Assuming Equal Variances, then click OK. Step 3. When the t-Test: Two-Sample Assuming Equal Variances dialog box appears: enter A1:A12 in the Variable 1 Range box (work-study student hourly income); enter B1:B12 in the Variable 2 Range box (student assistant hourly income); enter 0 in the Hypothesized Mean Difference box (if you desire to test a mean difference other than zero, enter that value), then select Labels; enter 0.05, or whatever level of significance you desire, in the Alpha box; select a suitable Output Range for the results (I chose C1), then click OK. The value of the test statistic, t = -1.362229828, appears, in our case, in cell D10. The rejection rule for this test is to reject the null hypothesis if t < -2.086 or t > 2.086, from the t distribution table, where the t value is based on a t distribution with n1 + n2 - 2 degrees of freedom and where the area of the upper tail is 0.025 (that is, alpha/2). In the Excel output the values for a two-tail test are t = ±2.085962478. Since the value of the test statistic, t = -1.362229828, is in the acceptance range -2.085962478 to 2.085962478, we fail to reject the null hypothesis. We can also draw this conclusion by comparing the p-value for a two-tail test with the alpha value. Since the p-value 0.188271278 is greater than α = 0.05, again we fail to reject the null hypothesis. Overall we can say, based on the sample results, that the two population means are equal. Enter data in an Excel worksheet starting with cell A2 and ending with cell C8. The following steps should be taken to find the proper output for interpretation. Step 1. From the menus select Tools and click on the Data Analysis option. Step 2. When the Data Analysis dialog appears, choose the Anova: Single Factor option and enter A2:C8 in the input range box. Select Labels in First Row. Step 3. Select any cell as the output range (here we selected A11). Click OK.
The general form of the ANOVA table (Source of Variation, etc.) looks like the following: Suppose the test is done at level of significance α = 0.05; we reject the null hypothesis. This means there is a significant difference between the means of the hourly incomes of student assistants in these departments. The Two-way ANOVA Without Replication: In this section, the study involves six students who were offered different hourly wages in three different department services here at the University of Baltimore. The objective is to see whether the hourly incomes are the same. Therefore, we can consider the following: Treatment: hourly payments in the three departments. Blocks: each student is a block, since each student has worked in the three different departments. The general form of the ANOVA table (Source of Variation, Degrees of Freedom, etc.) would look similar. To find the Excel output for the above data the following steps can be taken: Step 1. From the menus select Tools and click on the Data Analysis option. Step 2. When the Data Analysis box appears: select Anova: Two-Factor Without Replication, then enter A2:D8 in the input range. Select Labels in First Row. Step 3. Select an output range (here we selected A11), then click OK. NOTE: F = MST/MSE = 0.980556/0.497222 = 1.972067, while F = 3.33 from the table (5 numerator DF and 10 denominator DF). Since 1.972067 < 3.33, we fail to reject the null hypothesis. Goodness-of-Fit Test for Discrete Random Variables: The CHI-SQUARE distribution can be used in a hypothesis test involving a population variance. However, in this section we would like to test and see how close sample results are to the expected results. Example: The Multinomial Random Variable. In this example the objective is to see whether or not, based on randomly selected sample information, the standards set for a population are met. There are many practical examples that can be used in this situation.
For example, it is assumed that the guidelines for hiring people with different ethnic backgrounds for the US government are set at 70% (White), 20% (African American) and 10% (others), respectively. A randomly selected sample of 1000 US employees shows the results summarized in the following table (expected number of employees vs. number observed from the sample). As you see, the observed sample numbers for groups two and three are lower than their expected values, unlike group one, where the observed value is higher than expected. Is this a clear sign of discrimination with respect to ethnic background? Well, it depends on how much lower the observed values are. The difference might not be statistically significant. To see whether these differences are significant, we can use Excel to find the value of the CHI-SQUARE statistic. If this value falls within the acceptance region, we can assume that the guidelines are met; otherwise they are not. Now let's enter these numbers into an Excel spreadsheet. We used cells B7-B9 for the expected proportions, C7-C9 for the observed values and D7-D9 for the expected frequencies. To calculate the expected frequency for a category, you can multiply the proportion of that category by the sample size (here, 1000). The formula for the first cell of the expected value column, D7, is =1000*B7. To find the other entries in the expected value column, use the copy and paste menu as shown in the following picture. These are important values for the chi-square test. The observed range in this case is C7:C9, while the expected range is D7:D9. The null and the alternative hypotheses for this test are as follows: H0: the population proportions are PW = 0.70, PA = 0.20 and PO = 0.10. HA: the population proportions are not PW = 0.70, PA = 0.20 and PO = 0.10. Now let's use Excel to calculate the p-value in a CHI-SQUARE test. Step 1. Select a cell in the worksheet, the location at which you would like the p-value of the CHI-SQUARE test to appear. We chose cell D12. Step 2. From the menus, select Insert, then click on the Function option; the Paste Function dialog box appears. Step 3.
Refer to the function category box and choose Statistical, then from the function name box select CHITEST and click OK. Step 4. When the CHITEST dialog appears: enter C7:C9 in the actual-range box, then enter D7:D9 in the expected-range box, and finally click OK. The p-value will appear in the selected cell, D12. As you see, the p-value is 0.002392, which is less than the level of significance (in this case, α = 0.10). Hence the null hypothesis should be rejected. This means that, based on the sample information, the guidelines are not met. Notice that if you type =CHITEST(C7:C9,D7:D9) in the formula bar, the p-value will show up in the designated cell. NOTE: Excel can actually find the value of the CHI-SQUARE statistic itself. To find this value, first select an empty cell on the spreadsheet, then in the formula bar type =CHIINV(D12,2). D12 designates the p-value found previously, and 2 is the degrees of freedom (the number of rows minus one). The CHI-SQUARE value in this case is 12.07121. If we refer to the CHI-SQUARE table, we will see that the cutoff is 4.60517; since 12.07121 > 4.60517, we reject the null hypothesis. The following screen shot shows you how to find the CHI-SQUARE value. Test of Independence: Contingency Tables. The CHI-SQUARE distribution is also used to test whether two variables are independent or not. For example, based on sample data you might want to see whether smoking and gender are independent events for a certain population. The variables of interest in this case are smoking and the gender of an individual. Another example in this situation could involve the age range of an individual and his or her smoking habit. Similar to case one, the data may appear in a table, but unlike case one, this table may contain several columns in addition to rows. The initial table contains the observed values. To find expected values for this table, we set up another table similar to this one.
To find the value of each cell in the new table, we should multiply the sum of the cell's column by the sum of the cell's row and divide the result by the grand total. The grand total is the total number of observations in the study. Now, based on the following table, test whether or not the smoking habit and gender of the population from which the following sample was taken are independent. On the other hand, is it true that males in this population smoke more than females? You can use the formula bar to calculate the expected values for the expected range. For example, to find the expected value for cell C5, which is placed in cell C11, you could click on the formula bar and enter =C6*D5/D6 in cell C11. Step 1. Observed range: B4:C5 (smoking and gender). Step 2. Expected range: B10:C11. So the observed range is B4:C5 and the expected range is B10:C11. Step 3. Click on fx (Paste Function). Step 4. When the Paste Function dialog box appears, click on Statistical in the function category and CHITEST in the function name, then click OK. When the CHITEST box appears, enter B4:C5 for the actual range, then B10:C11 for the expected range. Step 5. Click OK (the p-value appears): 0.477395. Conclusion: since the p-value is greater than the level of significance (0.05), we fail to reject the null hypothesis. This means smoking and gender are independent events. Based on the sample information, one cannot assert that females smoke more than males or the other way around. Step 6. To find the chi-square value, use the CHIINV function; when the CHIINV box appears, enter 0.477395 for the probability part, then 1 for the degrees of freedom. Degrees of freedom = (number of columns - 1) x (number of rows - 1). Test of Hypothesis Concerning the Variance of Two Populations: In this section we would like to examine whether or not the variances of two populations are equal.
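Both chi-square calculations above reduce to summing (observed − expected)²/expected, with the expected counts coming either from stated proportions (goodness of fit) or from row total × column total / grand total (independence). A sketch, with function names ours; the observed counts below are hypothetical illustrations, since the article's original tables did not survive:

```python
def chi_square_stat(observed, expected):
    # CHI-SQUARE statistic: sum of (O - E)^2 / E over all cells
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Goodness of fit: expected counts from the 70%/20%/10% guidelines with n = 1000
expected = [0.70 * 1000, 0.20 * 1000, 0.10 * 1000]  # 700, 200, 100
observed = [660, 230, 110]                          # hypothetical sample counts
print(round(chi_square_stat(observed, expected), 4))  # 7.7857

# Independence: expected cell = column total * row total / grand total,
# mirroring the cell formula =C6*D5/D6 described above
def expected_cell(col_total, row_total, grand_total):
    return col_total * row_total / grand_total

print(expected_cell(50, 40, 100))  # 20.0
```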
Whenever independent simple random samples of equal or different sizes, such as n1 and n2, are taken from two normal distributions with equal variances, the sampling distribution of s1²/s2² has an F distribution with n1 - 1 degrees of freedom for the numerator and n2 - 1 degrees of freedom for the denominator. In the ratio s1²/s2², the numerator s1² and the denominator s2² are the variances of the first and the second sample, respectively. The following figure shows the graph of an F distribution with 10 degrees of freedom for both the numerator and the denominator. As you see, unlike the normal distribution, the F distribution is not symmetric. The shape of an F distribution is positively skewed and depends on the degrees of freedom for the numerator and the denominator. The value of F is always positive. Now let's see whether or not the variances of the hourly incomes of student assistants and work-study students, based on the samples previously taken from the populations, are equal. Assume that the hypothesis test in this case is conducted at α = 0.10. The null and the alternative are: Rejection rule: reject the null hypothesis if F < F0.95 or F > F0.05, where F, the value of the test statistic, is equal to s1²/s2², with 10 degrees of freedom for both the numerator and the denominator. We can find the value of F0.05 from the F distribution table. If s1² > s2², we do not need to know the value of F0.95; otherwise, F0.95 = 1/F0.05 for equal sample sizes. A survey of eleven student assistants and eleven work-study students shows the following descriptive statistics. Our objective is to find the value of s1²/s2², where s1² is the variance of the student assistant sample and s2² is the variance of the work-study student sample. As you see, these values are in cells F8 and D8 of the descriptive statistics output. To calculate the value of s1²/s2², select a cell such as A16, enter the cell formula =F8/D8, and press Enter.
This is the value of F in our problem. Since this value, F = 1.984615385, falls in the acceptance region, we fail to reject the null hypothesis. Hence, the sample results do support the conclusion that the student assistants' hourly income variance is equal to the work-study students' hourly income variance. The following screen shot shows how to find the F value. We can follow the same format for one-tail test(s). Linear Correlation and Regression Analysis: In this section the objective is to see whether there is a correlation between two variables and to find a model that predicts one variable in terms of the other. There are many examples that we could mention, but we will mention the popular ones in the world of business. Usually the independent variable is represented by the letter x and the dependent variable by the letter y. A businessman would like to see whether there is a relationship between the number of cases of soda sold and the temperature on a hot summer day, based on information taken from the past. He also would like to estimate the number of cases of soda which will be sold on a particular hot summer day at a ball game. He carefully recorded the temperatures and the number of cases of soda sold on those particular days. The following table shows the recorded data from June 1 through June 13. The weatherman predicts a temperature of 94°F for June 14. The businessman would like to meet all demands for the cases of soda ordered by customers on June 14. Now let's use Excel to find the linear correlation coefficient and the regression line equation. The linear correlation coefficient is a quantity between -1 and 1. This quantity is denoted by R. The closer R is to 1, the stronger the positive (direct) correlation; similarly, the closer R is to -1, the stronger the negative (inverse) correlation between the two variables. The general form of the regression line is y = mx + b. In this formula, m is the slope of the line and b is the y-intercept.
You can find these quantities from the Excel output. In this situation the variable y (the dependent variable) is the number of cases of soda and x (the independent variable) is the temperature. To find the Excel output the following steps can be taken: Step 1. From the menus choose Tools and click on Data Analysis. Step 2. When the Data Analysis dialog box appears, click on Correlation. Step 3. When the Correlation dialog box appears, enter B1:C14 in the input range box. Click on Labels in First Row and enter A16 in the output range box. Click OK. As you see, the correlation between the number of cases of soda demanded and the temperature is a very strong positive correlation. This means that as the temperature increases, the demand for cases of soda also increases. The linear correlation coefficient is 0.966598577, which is very close to 1. Now let's follow the same steps, but slightly different, to find the regression equation. Step 1. From the menus choose Tools and click on Data Analysis. Step 2. When the Data Analysis dialog box appears, click on Regression. Step 3. When the Regression dialog box appears, enter B1:B14 in the Y-range box and C1:C14 in the X-range box. Click on Labels. Step 4. Enter A19 in the output range box. Note: the regression equation in general should look like Y = mX + b. In this equation, m is the slope of the regression line and b is its y-intercept. The relationship between the number of cases of soda and the temperature is: Y = 0.879202711X + 9.17800767, that is, the number of cases of soda = 0.879202711(Temperature) + 9.17800767. Referring to this expression, we can approximately predict the number of cases of soda needed on June 14. The weather forecast for that day is 94 degrees, hence the number of cases of soda needed is equal to 0.879202711(94) + 9.17800767 = 91.82, or about 92 cases. Moving Average and Exponential Smoothing. Moving Average Models: Use the Add Trendline option to analyze a moving average forecasting model in Excel.
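The June 14 prediction in the regression example above is just the fitted line evaluated at 94 degrees. A quick check, with the slope and intercept taken from the regression output quoted above (the function name is ours):

```python
# Fitted regression line from the Excel output above
slope, intercept = 0.879202711, 9.17800767

def predict_cases(temperature):
    # Y = mX + b: predicted cases of soda at a given temperature
    return slope * temperature + intercept

cases = predict_cases(94)
print(round(cases, 2))  # 91.82, i.e. about 92 cases
```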
You must first create a graph of the time series you want to analyze. Select the range that contains your data and make a scatter plot of the data. Once the chart is created, follow these steps: Click on the chart to select it, and click on any point on the line to select the data series. When you click on the chart to select it, a new option, Chart, is added to the menu bar. From the Chart menu, select Add Trendline. The following is the moving average of order 4 for weekly sales: Exponential Smoothing Models: The simplest way to analyze a time series using an exponential smoothing model in Excel is to use the data analysis tool. This tool works almost exactly like the one for Moving Average, except that you will need to input the value of α (alpha) instead of the number of periods, k. Once you have entered the data range and the damping factor, 1 - α, and indicated what output you want and a location, the analysis is the same as for the Moving Average model. Applications and Numerical Examples. Descriptive Statistics: Suppose you have the following n = 10 data points: 1.2, 1.5, 2.6, 3.8, 2.4, 1.9, 3.5, 2.5, 2.4, 3.0. Type your n data points into cells A1 through An. Click on the Tools menu. (At the bottom of the Tools menu will be a submenu, Data Analysis..., if the Analysis ToolPak has been properly installed.) Clicking on Data Analysis... will lead to a menu from which Descriptive Statistics is to be selected. Select Descriptive Statistics by double-clicking on it, or by highlighting it and clicking on the Okay button. Within the Descriptive Statistics submenu: a. for the input range enter A1:An, assuming you typed the data into cells A1 to An; b. click on the output range button and enter the output range C1:C16; c. click on the Summary Statistics box; d. finally, click on Okay.
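The summary measures that the Descriptive Statistics tool produces for this n = 10 data set can also be reproduced with Python's standard statistics module (variable names ours):

```python
import statistics

data = [1.2, 1.5, 2.6, 3.8, 2.4, 1.9, 3.5, 2.5, 2.4, 3.0]

mean = statistics.mean(data)          # arithmetic mean
median = statistics.median(data)      # middle value of the sorted data
mode = statistics.mode(data)          # most frequent value
variance = statistics.variance(data)  # sample variance, n - 1 denominator
stdev = statistics.stdev(data)        # square root of the sample variance

print(round(mean, 2), round(median, 2), mode)     # 2.48 2.45 2.4
print(round(variance, 4), round(stdev, 4))        # 0.6684 0.8176
```

Note that `statistics.variance` uses the (n - 1) denominator, i.e. the population-variance estimate discussed below; `statistics.pvariance` is the n-denominator version.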
The Central Tendency: The data can be sorted in ascending order: 1.2, 1.5, 1.9, 2.4, 2.4, 2.5, 2.6, 3.0, 3.5, 3.8. The mean, median and mode are computed as follows: the mean is (1.2 + 1.5 + 2.6 + 3.8 + 2.4 + 1.9 + 3.5 + 2.5 + 2.4 + 3.0)/10 = 2.48; the median is the average of the two middle values of the sorted data, (2.4 + 2.5)/2 = 2.45; the mode is 2.4, since it is the only value that occurs twice. The midrange is (1.2 + 3.8)/2 = 2.5. Note that the mean, median and mode of this set of data are very close to each other. This suggests that the data are very symmetrically distributed. Variance: The variance of a set of data is the average of the cumulative measure of the squares of the differences of all the data values from the mean. The sample variance and the estimate of the population variance based on the sample are computed slightly differently. The sample variance is simply the arithmetic mean of the squares of the differences between each data value in the sample and the mean of the sample. The formula for an estimate of the population variance is similar to the formula for the sample variance, except that the denominator in the fraction is (n-1) instead of n. However, you should not worry about this difference if the sample size is large, say over 30. Compute an estimate for the variance of the population, given the following sorted data: 1.2, 1.5, 1.9, 2.4, 2.4, 2.5, 2.6, 3.0, 3.5, 3.8; mean = 2.48 as computed earlier. An estimate for the population variance is: s² = [1/(10-1)] [(1.2 - 2.48)² + (1.5 - 2.48)² + (1.9 - 2.48)² + (2.4 - 2.48)² + (2.4 - 2.48)² + (2.5 - 2.48)² + (2.6 - 2.48)² + (3.0 - 2.48)² + (3.5 - 2.48)² + (3.8 - 2.48)²] = (1/9)(1.6384 + 0.9604 + 0.3364 + 0.0064 + 0.0064 + 0.0004 + 0.0144 + 0.2704 + 1.0404 + 1.7424) = 0.6684. Therefore, the standard deviation is s = (0.6684)^(1/2) = 0.8176. Probability and Expected Values: Newsweek reported that the average take for bank robberies was $3,244, but 85 percent of the robbers were caught. Assuming 60 percent of those caught lose their entire take and 40 percent lose half, graph the probability mass function using Excel. Calculate the expected take from a bank robbery.
Does it pay to be a bank robber? To construct the probability function for bank robberies, first define the random variable x, the bank robbery take. If the robber is not caught, x = $3,244. If the robber is caught and manages to keep half, x = $1,622. If the robber is caught and loses it all, then x = $0. The associated probabilities for these x values are 0.15 = (1 - 0.85), 0.34 = (0.85)(0.4), and 0.51 = (0.85)(0.6). After entering the x values in cells A1, A2 and A3 and the associated probabilities in B1, B2, and B3, the following steps lead to the probability mass function: Click on ChartWizard. The ChartWizard Step 1 of 4 screen will appear. Highlight Column at ChartWizard Step 1 of 4 and click Next. At ChartWizard Step 2 of 4, Chart Source Data, enter B1:B3 for the data range, and click the Columns button for Series in. A graph will appear. Click on Series toward the top of the screen to get a new page. At the bottom of the Series page is a rectangle for Category (X) axis labels. Click on this rectangle and then highlight A1:A3. At Step 3 of 4 move on by clicking Next, and at Step 4 of 4, click Finish. The expected value of a robbery is $1,038.08: E(X) = (0)(0.51) + (1622)(0.34) + (3244)(0.15) = 0 + 551.48 + 486.60 = 1,038.08. The expected return on a bank robbery is positive. On average, bank robbers get $1,038.08 per heist. If criminals made their decisions strictly on this expected value, then it would pay to rob banks. A decision rule based only on an expected value, however, ignores the risks or variability in the returns. In addition, our expected value calculations do not include the cost of jail time, which could be viewed by criminals as substantial. Discrete and Continuous Random Variables: Binomial Distribution Application: A multiple choice test has four unrelated questions. Each question has five possible choices but only one is correct. Thus, a person who guesses randomly has a probability of 0.2 of guessing correctly.
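Returning to the bank-robbery example above, the expected value is simply the probability-weighted sum of the outcomes, E(X) = Σ x·P(x). A one-line check using the outcomes and probabilities from the example:

```python
# Outcomes (take in dollars) and probabilities from the robbery example
takes = [0, 1622, 3244]
probs = [0.51, 0.34, 0.15]

expected_take = sum(x * p for x, p in zip(takes, probs))
print(round(expected_take, 2))  # 1038.08
```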
Draw a tree diagram showing the different ways in which a test taker could get 0, 1, 2, 3 and 4 correct answers. Sketch the probability mass function for this test. What is the probability that a person who guesses will get two or more correct? Solution: Letting Y stand for a correct answer and N for a wrong answer, where the probability of Y is 0.2 and the probability of N is 0.8 for each of the four questions, the probability tree diagram is shown in the textbook on page 182. This probability tree diagram shows the branches that must be followed to show the calculations captured in the binomial mass function for n = 4 and p = 0.2. For example, the tree diagram shows the six different branch systems that yield two correct and two wrong answers (which corresponds to 4!/(2!2!) = 6). The binomial mass function shows the probability of two correct answers as P(x = 2 | n = 4, p = 0.2) = 6(0.2)²(0.8)² = 6(0.0256) = 0.1536 = P(2), which is obtained from Excel by using the BINOMDIST command, where the first entry is x, the second is n, the third is p, and the fourth indicates mass (0) or cumulative (1). That is, entering BINOMDIST(2,4,0.2,0) in any Excel cell yields 0.1536. Similarly:
BINOMDIST(3,4,0.2,0) yields P(x = 3 | n = 4, p = 0.2) = 0.0256
BINOMDIST(4,4,0.2,0) yields P(x = 4 | n = 4, p = 0.2) = 0.0016
1 - BINOMDIST(1,4,0.2,1) yields P(x ≥ 2 | n = 4, p = 0.2) = 0.1808

Normal Example: If the time required to complete an examination by those with a certain learning disability is believed to be distributed normally, with a mean of 65 minutes and a standard deviation of 15 minutes, then when can the exam be terminated so that 99 percent of those with the disability can finish? Solution: Because the average and standard deviation are known, what needs to be established is the amount of time, above the mean time, such that 99 percent of the distribution is lower.
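Returning to the binomial test example: the BINOMDIST values quoted above follow directly from the binomial mass function, and can be checked in Python (an illustrative sketch, not part of the original Excel-based notes).

```python
from math import comb

def binom_pmf(k, n, p):
    """Binomial mass function: P(X = k) for n trials with success probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 4, 0.2
p2 = binom_pmf(2, n, p)   # 0.1536, matches BINOMDIST(2,4,0.2,0)
p3 = binom_pmf(3, n, p)   # 0.0256
p4 = binom_pmf(4, n, p)   # 0.0016
p_ge_2 = 1 - binom_pmf(0, n, p) - binom_pmf(1, n, p)  # 0.1808, P(x >= 2)
print(round(p2, 4), round(p3, 4), round(p4, 4), round(p_ge_2, 4))
```

The last line mirrors the worksheet trick 1 - BINOMDIST(1,4,0.2,1): the complement of the cumulative probability through x = 1.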
This is a distance measured in standard deviations, given by the Z value corresponding to the 0.99 probability found in the body of Appendix B, Table 5, as shown in the textbook. Alternatively, the command entered into any cell of Excel to find this Z value is NORMINV(0.99,0,1), which gives 2.326342. The closest cumulative probability that can be found in the table is 0.9901, in the row labeled 2.3 and the column headed by .03, giving Z = 2.33, which is only an approximation to the more exact 2.326342 found in Excel. Using this more exact value, the calculation with mean m and standard deviation s uses the formula Z = (x - m)/s, that is, Z = (x - 65)/15. Thus, x = 65 + 15(2.32634) = 99.9 minutes. Alternatively, instead of standardizing with the Z distribution, in Excel we can simply work directly with the normal distribution with a mean of 65 and a standard deviation of 15 and enter NORMINV(0.99,65,15). In general, to obtain the x value below which a given percentage a of a normal random variable's values fall, the NORMINV command may be used, where the first entry is a, the second is m, and the third is s.

Another Example: In the early 1980s, the Toro Company of Minneapolis, Minnesota, advertised that it would refund the purchase price of a snow blower if the following winter's snowfall was less than 21 percent of the local average. If the average snowfall is 45.25 inches, with a standard deviation of 12.2 inches, what is the likelihood that Toro will have to make refunds? Solution: Within limits, snowfall is a continuous random variable that can be expected to vary symmetrically around its mean, with values closer to the mean occurring most often. Thus, it seems reasonable to assume that snowfall (x) is approximately normally distributed with a mean of 45.25 inches and a standard deviation of 12.2 inches.
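Returning to the exam-termination example: Excel's NORMINV is the inverse normal CDF, and Python's statistics.NormalDist reproduces the same calculation (an illustrative check; Python is not assumed by the original notes). Note that Python's inverse CDF agrees with Excel's legacy NORMINV to about four decimals.

```python
from statistics import NormalDist

# Standard-normal critical value, as with NORMINV(0.99,0,1):
z = NormalDist(0, 1).inv_cdf(0.99)    # about 2.32635 (Excel reports 2.326342)

# Exam time below which 99% of students finish, as with NORMINV(0.99,65,15):
x = NormalDist(65, 15).inv_cdf(0.99)  # about 99.9 minutes

print(round(z, 4), round(x, 1))
```

Working directly with NormalDist(65, 15) skips the manual standardization step x = 65 + 15z, exactly as NORMINV(0.99,65,15) does in the worksheet.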
Nine and one-half inches is 21 percent of the mean snowfall of 45.25 inches and, with a standard deviation of 12.2 inches, the number of standard deviations between 45.25 inches and 9.5 inches is Z = (x - m)/s = (9.50 - 45.25)/12.2 = -2.93. Using Appendix B, Table 5, the textbook demonstrates the determination of P(x ≤ 9.50) = P(z ≤ -2.93) = 0.0017, the probability of snowfall less than 9.5 inches. Using Excel, this normal probability is obtained with the NORMDIST command, where the first entry is x, the second is the mean m, the third is the standard deviation s, and the fourth is cumulative (1). Entering NORMDIST(9.5,45.25,12.2,1) gives P(x ≤ 9.50) = 0.001693.

Sampling Distribution and the Central Limit Theorem: A bakery sells an average of 24 loaves of bread per day. Sales (x) are normally distributed with a standard deviation of 4. (1) If a random sample of size n = 1 (day) is selected, what is the probability this x value will exceed 28? (2) If a random sample of size n = 4 (days) is selected, what is the probability that xbar ≥ 28? (3) Why does the answer in part 1 differ from that in part 2? 1. The sampling distribution of the sample mean xbar is normal with a mean of 24 and a standard error of the mean of 4. Thus, using Excel, 1 - NORMDIST(28,24,4,1) = 0.15866. 2. The sampling distribution of the sample mean xbar is normal with a mean of 24 and a standard error of the mean of 2; using Excel, 1 - NORMDIST(28,24,2,1) = 0.02275. The probability is smaller in part 2 because the standard error shrinks as the sample size grows.

Regression Analysis: The highway deaths per 100 million vehicle miles and the highway speed limits for 10 countries are given below: (Death, Speed) = (3.0, 55), (3.3, 55), (3.4, 55), (3.5, 70), (4.1, 55), (4.3, 60), (4.7, 55), (4.9, 60), (5.1, 60), and (6.1, 75). From this we can see that the five countries with the same speed limit (55) have very different positions on the safety list. For example, Britain, with a speed limit of 70, is demonstrably safer than Japan, at 55. Can we argue that speed has little to do with safety? Use regression analysis to answer this question.
Solution: Enter the ten paired y and x data into cells A2 to A11 and B2 to B11, with the death-rate label in A1 and the speed-limit label in B1; the following steps then produce the regression output. Choose Regression from Data Analysis in the Tools menu. The Regression dialog box will appear. Note: Use the mouse to move between the boxes and buttons; click on the desired box or button. The large rectangular boxes require a range from the worksheet. A range may be typed in, or selected by highlighting the cells with the mouse after clicking on the box. If the dialog box blocks the data, it can be moved on the screen by clicking on the title bar and dragging. For the Input Y Range, enter A1 to A11, and for the Input X Range enter B1 to B11. Because the Y and X ranges include the Death and Speed labels in A1 and B1, select the Labels box with a click. Click the Output Range button and type the reference cell, which in this demonstration is A13. To get the predicted values of Y (death rates) and the residuals, select the Residuals box with a click. Your screen display should show a table; clicking OK will give the SUMMARY OUTPUT, ANOVA and RESIDUAL OUTPUT sections. The first section of the Excel printout gives the SUMMARY OUTPUT. The Multiple R is the square root of the R Square, the computation and interpretation of which we have already discussed. The standard error of estimate (which will be discussed in the next chapter) is s = 0.86423, which is the square root of the Residual SS of 5.97511 divided by its degrees of freedom, df = 8, as given in the ANOVA section. We will also discuss the adjusted R-square of 0.21325 in the following chapters. Under the ANOVA section are the estimated regression coefficients and related statistics that will be discussed in detail in the next chapter. For now it is sufficient to recognize that the calculated coefficient values for the slope and y-intercept are provided (b = 0.07556 and a = -0.29333).
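The coefficients Excel reports can be verified against the textbook least-squares formulas b = Sxy/Sxx and a = ybar - b·xbar. The following Python sketch (a verification aid, not part of the original Excel workflow) recomputes the slope, intercept, and R Square from the ten data pairs.

```python
# Death rates (y) and speed limits (x) for the ten countries:
y = [3.0, 3.3, 3.4, 3.5, 4.1, 4.3, 4.7, 4.9, 5.1, 6.1]
x = [55, 55, 55, 70, 55, 60, 55, 60, 60, 75]

n = len(x)
xbar = sum(x) / n   # 60.0
ybar = sum(y) / n   # about 4.24

sxx = sum((xi - xbar) ** 2 for xi in x)                       # 450.0
sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))  # about 34.0
syy = sum((yi - ybar) ** 2 for yi in y)                       # about 8.544 (Total SS)

b = sxy / sxx                # slope, about 0.07556 (matches Excel)
a = ybar - b * xbar          # intercept, about -0.29333
r2 = sxy ** 2 / (sxx * syy)  # R Square, about 0.30067

print(round(b, 5), round(a, 5), round(r2, 5))
```

The fitted values in the RESIDUAL OUTPUT follow from the same coefficients, e.g. a + b × 55 = 3.86222 for the countries with a 55 mph limit.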
Next to these coefficient estimates is information on the variability in the distribution of the least-squares estimators from which these specific estimates were drawn: the column titled Std. Error contains the standard deviations (standard errors) of the intercept and slope distributions, and the t Stat and P-value columns give the calculated values of the t statistics and the associated p-values. As shown in Chapter 13, the t statistic of 1.85458 and p-value of 0.10077, for example, indicate that the sample slope (0.07556) is sufficiently different from zero, at roughly the 0.10 two-tail Type I error level, to conclude that there is a significant relationship between deaths and speed limits in the population. This conclusion is contrary to the assertion that speed has little to do with safety.

SUMMARY OUTPUT: Multiple R = 0.54833, R Square = 0.30067, Adjusted R Square = 0.21325, Standard Error = 0.86423, Observations = 10

ANOVA:
             df   SS        MS        F         P-value
Regression    1   2.56889   2.56889   3.43945   0.10077
Residual      8   5.97511   0.74689
Total         9   8.54400

Coefficients:
            Estimate    Std. Error   t Stat     P-value   Lower 95%   Upper 95%
Intercept   -0.29333    2.45963      -0.11926   0.90801   -5.96526    5.37860
Speed        0.07556    0.04074       1.85458   0.10077   -0.01839    0.16950

RESIDUAL OUTPUT:
Predicted   Residual
3.86222     -0.86222
3.86222     -0.56222
3.86222     -0.46222
4.99556     -1.49556
3.86222      0.23778
4.24000      0.06000
3.86222      0.83778
4.24000      0.66000
4.24000      0.86000
5.37333      0.72667

Microsoft Excel Add-Ins: Forecasting with regression requires the Excel add-in called Analysis ToolPak, and linear programming requires the Excel add-in called Solver. How you check to see whether these are activated on your computer, and how to activate them if they are not, varies with the Excel version. Here are instructions for the most common versions. If Excel will not let you activate Data Analysis and Solver, you must use a different computer. Excel 2002/2003: Start Excel, then click Tools and look for Data Analysis and for Solver.
If both are there, press Esc (escape) and continue with the respective assignment. Otherwise click Tools, Add-Ins, and check the boxes for Analysis ToolPak and for Solver, then click OK. Click Tools again, and both tools should be there.
Excel 2007: Start Excel 2007 and click the Data tab at the top. Look to see if Data Analysis and Solver show in the Analysis section at the far right. If both are there, continue with the respective assignment. Otherwise, do the following steps exactly as indicated:
- click the Office Button at top left
- click the Excel Options button near the bottom of the resulting window
- click the Add-Ins button on the left of the next screen
- near the bottom, at Manage: Excel Add-ins, click Go
- check the boxes for Analysis ToolPak and Solver Add-in if they are not already checked, then click OK
- click the Data tab as above and verify that the add-ins show.
Excel 2010: Start Excel 2010 and click the Data tab at the top. Look to see if Data Analysis and Solver show in the Analysis section at the far right. If both are there, continue with the respective assignment. Otherwise, do the following steps exactly as indicated:
- click the File tab at top left
- click the Options button near the bottom of the left side
- click the Add-Ins button near the bottom left of the next screen
- near the bottom, at Manage: Excel Add-ins, click Go
- check the boxes for Analysis ToolPak and Solver Add-in if they are not already checked, then click OK
- click the Data tab as above and verify that the add-ins show.
Solving Linear Programs by Excel: Some of these examples can be modified for other types of problems.
Computer-assisted Learning: E-Labs and Computational Tools: My teaching style deprecates the "plug the numbers into the software and let the magic box work it out" approach. Personal computers, spreadsheets such as Excel, professional statistical packages (e.g.,
SPSS), and other information technologies are now ubiquitous in statistical data analysis; without these tools, one cannot perform any realistic statistical data analysis on large data sets. The appearance of statistical computer software, JavaScript applets, statistical demonstration applets, and online computation is among the most important developments in teaching and learning the concepts in model-based statistical decision-making courses. These tools allow you to construct numerical examples in order to understand the concepts and to find their significance for yourself. Using the online interactive tools available on the Web to perform statistical experiments, with the same purpose as the experiments you did in physics labs to learn physics, is an entertaining and educational way to understand statistical concepts such as the Central Limit Theorem. Computer-assisted learning is similar to the experiential model of learning. The adherents of experiential learning are fairly adamant about how we learn: learning seldom takes place by rote; learning occurs because we immerse ourselves in a situation in which we are forced to perform and think. You get feedback from the computer output and then adjust your thinking process if needed. SPSS (Statistical Package for the Social Sciences) is a data management and analysis product (see: A SPSS Example, SPSS Examples, SPSS More Examples). It can perform a variety of data analysis and presentation functions, including statistical analyses and graphical presentation of data. SAS (Statistical Analysis System) is a system of software packages; some of its basic functions and uses are: database management; inputting, cleaning and manipulating data; calculating simple statistics such as means, variances and correlations; and running standard statistical routines such as regressions.
Available at: SPSS/SAS Packages on Citrix (Installing and Accessing). Use your email ID and password. Technical difficulties: OTS Call Center, (401) 837-6262. Excel Examples, Excel More Examples: Excel is excellent for descriptive statistics, and its acceptance as a computational tool for inferential statistics is improving. The Value of Performing Experiments: If the learning environment is focused on background information, knowledge of terms and new concepts, the learner is likely to learn that basic information successfully. However, this basic knowledge may not be sufficient to enable the learner to successfully carry out the on-the-job tasks that require more than basic knowledge. Thus, the probability of making real errors in the business environment is high. On the other hand, if the learning environment allows the learner to experience and learn from failures within a variety of situations similar to those they would encounter in the real world of their job, the probability of similar failures in their business environment is low. This is the realm of simulations: a safe place to fail. The appearance of statistical software is one of the most important events in the process of decision making under uncertainty. Statistical software systems are used to construct examples, to understand the existing concepts, and to find new statistical properties. Conversely, new developments in the process of decision making under uncertainty often motivate the development of new approaches and the revision of existing software systems. Statistical software systems rely on cooperation between statisticians and software developers. Besides professional statistical software and online statistical computation, the use of a scientific calculator is required for the course. A scientific calculator is one that can give you, say, the square root of 5. Any calculator that goes beyond the four basic operations is fine for this course.
These calculators allow you to perform the simple calculations you need in this course, for example taking a square root or raising e to a power of, say, 0.36. These are called general scientific calculators. There are also more specialized and advanced calculators for computations in other areas such as finance, accounting, and even statistics. The statistical ones, for example, compute the mean, variance, skewness, and kurtosis of a sample: simply enter the data values one by one and then press the mean, variance, skewness, or kurtosis key. Without a computer one cannot perform any realistic statistical data analysis. Students signing up for the course are expected to know the basics of Excel. As a starting point, you should visit the Excel Web site created for this course. If you are challenged by or unfamiliar with Excel, you may seek tutorial help from the Academic Resource Center at 410-837-5385, or by e-mail. What and How to Hand In My Computer Assignment: For the computer assignment I recommend checking your hand-computation homework and checking some of the numerical examples from your textbook. As part of your homework assignment you do not have to hand in the printout of the computer-assisted learning; however, you must include within your homework a paragraph entitled Computer Implementation describing your (positive or negative) experience. Interesting and Useful Sites. The Copyright Statement: The fair use, according to the 1996 Fair Use Guidelines for Educational Multimedia, of materials presented on this Web site is permitted for non-commercial and classroom purposes only. This site may be mirrored intact (including these notices) on any server with public access. All files are available at home.ubalt.edu/ntsbarsh/Business-stat for mirroring. Kindly e-mail me your comments, suggestions, and concerns. Thank you.
Smoothing and filtering are two of the most commonly used time series techniques for removing noise from the underlying data to help reveal the important features and components (e.g. trend, seasonality, etc.). However, we can also use smoothing to fill in missing values and/or to conduct a forecast. In this issue, we will discuss five (5) different smoothing methods: the weighted moving average (WMA), simple exponential smoothing, double exponential smoothing, linear exponential smoothing, and triple exponential smoothing. Why should we care? Smoothing is very often used (and abused) in industry for a quick visual examination of the data properties (e.g. trend, seasonality, etc.), for filling in missing values, and for conducting a quick out-of-sample forecast. Why do we have so many smoothing functions? As we will see in this paper, each function works under a different assumption about the underlying data. For instance, simple exponential smoothing assumes the data have a stable mean (or at least a slowly moving mean), so simple exponential smoothing will do poorly at forecasting data exhibiting seasonality or a trend. In this paper, we will go over each smoothing function, highlight its assumptions and parameters, and demonstrate its application through examples.

Weighted Moving Average (WMA): A moving average is commonly used with time series data to smooth out short-term fluctuations and highlight longer-term trends or cycles. A weighted moving average has multiplying factors that give different weights to data at different positions in the sample window. The weighted moving average has a fixed window size (N), and the factors are typically chosen to give more weight to recent observations. The window size determines the number of points averaged at each time, so a larger window size is less responsive to new changes in the original time series, while a small window size can cause the smoothed output to be noisy.
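The weighted moving average just described can be sketched in a few lines of Python. This is an illustrative implementation under the definition above, not the NumXL worksheet function itself.

```python
def wma(series, weights):
    """Weighted moving average. Weights are ordered oldest-to-newest within
    the window and are normalized by their sum, so any positive factors work."""
    total = sum(weights)
    n = len(weights)
    out = []
    for i in range(n - 1, len(series)):
        window = series[i - n + 1 : i + 1]
        out.append(sum(w * v for w, v in zip(weights, window)) / total)
    return out

# Weights 1-2-3 emphasize the most recent observation in a 3-point window:
smoothed = wma([1, 2, 3, 4, 5], [1, 2, 3])
print([round(v, 4) for v in smoothed])   # [2.3333, 3.3333, 4.3333]
```

Setting all weights equal reduces this to the ordinary (equal-weighted) moving average, so the same routine covers both of the examples that follow.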
For out-of-sample forecasting purposes: Example 1: Let's consider the monthly sales for Company X, using a 4-month (equal-weighted) moving average. Note that the moving average is always lagging behind the data, and the out-of-sample forecast converges to a constant value. Let's try a weighting scheme (see below) that gives more emphasis to the latest observation. We plotted the equal-weighted moving average and the WMA on the same graph. The WMA seems more responsive to recent changes, and the out-of-sample forecast converges to the same value as the moving average. Example 2: Let's examine the WMA in the presence of trend and seasonality. For this example, we'll use the international airline passenger data. The moving average window is 12 months. The MA and the WMA keep pace with the trend, but the out-of-sample forecast flattens. Furthermore, although the WMA exhibits some seasonality, it is always lagging behind the original data.

(Brown's) Simple Exponential Smoothing: Simple exponential smoothing is similar to the WMA, with the exception that the window size is infinite and the weighting factors decrease exponentially. As we saw with the WMA, simple exponential smoothing is suited to time series with a stable mean, or at least a very slowly moving mean. Example 1: Let's use the monthly sales data (as we did in the WMA example). In the example above, we chose the smoothing factor to be 0.8, which begs the question: what is the best value for the smoothing factor? To estimate the best value from the data, we used the TSSUB function (to compute the error), SUMSQ, and Excel data tables to compute the sum of the squared errors (SSE) and plotted the results. The SSE reaches its minimum value around 0.8, so we picked this value for our smoothing.

(Holt-Winters) Double Exponential Smoothing: Simple exponential smoothing does not do well in the presence of a trend, so several methods devised under the double exponential umbrella have been proposed to handle this type of data.
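The exponential smoothing recursions discussed in this section can be sketched in plain Python; this is a common textbook formulation, offered here as an illustration rather than NumXL's own implementation. `ses` is the single-factor recursion, `sse` is the one-step-ahead error sum that the Excel data table minimizes above, and `holt` adds the separate trend term used by double exponential smoothing.

```python
def ses(series, alpha):
    """Simple exponential smoothing: s[t] = alpha*x[t] + (1-alpha)*s[t-1],
    initialized with the first observation; s[t-1] forecasts x[t]."""
    s = [series[0]]
    for x in series[1:]:
        s.append(alpha * x + (1 - alpha) * s[-1])
    return s

def sse(series, alpha):
    """One-step-ahead sum of squared errors for a given smoothing factor."""
    s = ses(series, alpha)
    return sum((x - f) ** 2 for x, f in zip(series[1:], s[:-1]))

def holt(series, alpha, beta, horizon=1):
    """Double exponential smoothing with separate level and trend terms."""
    level, trend = series[0], series[1] - series[0]  # simple initialization
    for x in series[1:]:
        prev = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - prev) + (1 - beta) * trend
    return [level + h * trend for h in range(1, horizon + 1)]

data = [10.0, 12.0, 13.0, 12.0, 15.0, 16.0]   # made-up sales figures
# Grid search for the smoothing factor, mimicking the Excel data-table step:
best = min((a / 10 for a in range(1, 10)), key=lambda a: sse(data, a))
print(ses([10.0, 12.0], 0.5))                  # [10.0, 11.0]
print(holt([1.0, 2.0, 3.0, 4.0, 5.0], 0.5, 0.1, horizon=2))  # [6.0, 7.0]
```

A quick sanity check of the recursion: on a perfectly linear series, `holt` extends the line exactly, regardless of the alpha and beta chosen.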
NumXL supports Holt-Winters double exponential smoothing, which takes the following formulation: Example 1: Let's examine the international airline passenger data. We chose an alpha value of 0.9 and a beta of 0.1. Please note that although the double smoothing traces the original data well, the out-of-sample forecast is inferior to the simple moving average. How do we find the best smoothing factors? We take a similar approach to our simple exponential smoothing example, but modified for two variables: we compute the sum of the squared errors, construct a two-variable data table, and pick the alpha and beta values that minimize the overall SSE.

(Brown's) Linear Exponential Smoothing: This is another double exponential smoothing function, but it has a single smoothing factor. Brown's double exponential smoothing takes one parameter fewer than the Holt-Winters function, but it may not offer as good a fit as that function. Example 1: Let's use the same example as in the Holt-Winters double exponential smoothing and compare the optimal sum of the squared errors. Brown's double exponential smoothing does not fit the sample data as well as the Holt-Winters method, but the out-of-sample forecast (in this particular case) is better. How do we find the best smoothing factor (alpha)? We use the same method, selecting the alpha value that minimizes the sum of the squared errors. For the example sample data, the alpha is found to be 0.8.

(Winters') Triple Exponential Smoothing: Triple exponential smoothing takes into account seasonal changes as well as trends. This method requires four parameters. The formulation for triple exponential smoothing is more involved than any of the earlier ones; please check our online reference manual for the exact formulation. Using the international airline passenger data, we can apply Winters' triple exponential smoothing, find the optimal parameters, and conduct an out-of-sample forecast.
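Since the article defers the exact triple smoothing formulation to the NumXL reference manual, here is a common textbook formulation of the multiplicative Winters method as a Python sketch; the season length m, the zero initial trend, and the first-season initialization are simple illustrative choices of mine, and NumXL's initialization may differ.

```python
def winters(series, m, alpha, beta, gamma, horizon):
    """Multiplicative Holt-Winters (Winters) triple exponential smoothing.
    m is the season length; the first season initializes the level and the
    seasonal indices, and the trend starts at zero (a simple choice)."""
    level = sum(series[:m]) / m
    trend = 0.0
    seasonal = [x / level for x in series[:m]]
    for t in range(m, len(series)):
        x = series[t]
        prev = level
        level = alpha * (x / seasonal[t - m]) + (1 - alpha) * (level + trend)
        trend = beta * (level - prev) + (1 - beta) * trend
        seasonal.append(gamma * (x / level) + (1 - gamma) * seasonal[t - m])
    # h-step-ahead forecasts reuse the latest seasonal index of matching phase:
    return [(level + h * trend) * seasonal[-m + (h - 1) % m]
            for h in range(1, horizon + 1)]

# A purely seasonal series (period 2, no trend) is forecast exactly:
fc = winters([10.0, 20.0, 10.0, 20.0], m=2, alpha=0.5, beta=0.1,
             gamma=0.3, horizon=2)
print([round(v, 6) for v in fc])   # [10.0, 20.0]
```

The out-of-sample forecasts repeat the seasonal pattern on top of the trend line, which is exactly the behavior noted for the airline data below.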
Obviously, Winters' triple exponential smoothing is best suited to this data sample, as it tracks the values well and the out-of-sample forecast exhibits seasonality (L = 12). How do we find the best smoothing factors? Again, we need to pick the values that minimize the overall sum of the squared errors (SSE), but data tables cannot be used for more than two variables, so we resort to the Excel Solver: (1) set up the minimization problem, with the SSE as the objective function; (2) add the constraints for this problem.

Conclusion
Support Files
