Monday, January 27, 2020

Introduction To Cricket In The 21st Century History Essay

When considering the extensive amount of research that has been directed toward the sporting world from a mathematical, statistical and operational research perspective, the Duckworth/Lewis method (Duckworth and Lewis, 1998, 2004) perhaps stands alone as the most significant contribution to sport. The common practice in dealing with interrupted one-day cricket matches until 1992 was to compare the run rates (the total number of runs scored divided by the number of completed overs) of the competing teams; the team with the higher run rate was declared the winner. However, this rule tended to benefit the team batting second (Team 2) at the expense of the team batting first (Team 1), leading to the common practice of inviting the other team to bat first if rain was expected. The difficulty with run rates is that targets are determined by taking the remaining overs into account while ignoring the number of wickets lost. As is well known, batsmen tend to bat less aggressively and score fewer runs when more wickets have been taken. The first team does not have the same strategic options as the second team and, in that sense, the rule does not provide both teams with equal opportunities.

Realising that this rule is biased towards the side batting second, the Australian Cricket Board introduced its most productive overs rule during the 1992/93 season. This rule calculates the target for Team 2 from the n highest-scoring overs of Team 1, where n is the number of overs available to Team 2 (for example, 40 if 10 overs are lost due to rain). Ironically, this rule came to be seen as favouring the side batting first and as plainly unfair to the team batting second. To illustrate, suppose that Team 2 requires 20 runs off 19 balls to win when a short shower takes three overs away. The reset target would now be 20 off 1 ball, since the three least productive overs are deducted from the original target (assume, in this case, that they were three maiden overs). This seems unfair and even ironic: the second team's excellent bowling (three maiden overs) in the first innings now turns against them; Team 2 would have been better off had Team 1 reached the same total without any maidens.

The Duckworth/Lewis method was utilised and gained prominence during the 1999 World Cup, and since that time it has been adopted by every major cricketing board and competition. In one-day cricket, the Duckworth/Lewis method is based on the recognition that at the beginning of a match each side has resources available (typically 50 overs and 10 wickets). When the match is shortened, the resources of one or both teams are reduced and the two teams usually have different resources for their innings. In this case, in an attempt to be fair, a revised target for the team batting second is set. The determination of the target using resources is known as the Duckworth/Lewis method. What makes the adoption of the Duckworth/Lewis method remarkable is that it is widely perceived by the public as a black-box procedure. Generally, people do not understand how the targets are set, but they do agree that the targets are sensible, or at least preferable to the approach based on run rates. Although the Duckworth/Lewis (D/L) method was designed for one-day cricket, it has also been applied to Twenty20 cricket. Twenty20 is a relatively new version of limited overs cricket with only 20 overs per side.
In contrast to the one-day game and first-class cricket (which can take up to five days to complete), Twenty20 matches have completion times that are comparable to other popular team sports. With the introduction of the biennial World Twenty20 tournament in 2007 and the Indian Premier League in 2008, Twenty20 cricket has gained widespread popularity. Although Twenty20 (t20) cricket is similar to one-day cricket, there exist subtle variations in the rules (e.g. fielding restrictions, limits on bowling) between the two versions of the game. The variations in the rules and, most importantly, the reduction of overs from 50 to 20 suggest that scoring patterns in t20 may differ from the one-day game. In particular, t20 is seen as a more explosive game where the ability to score 4s and 6s is more highly valued than in one-day cricket. Since the D/L method (and its associated resource table) is based on the scoring patterns of one-day cricket, it is reasonable to ask whether the D/L method is appropriate for t20. With the rise of Twenty20, an investigation of the D/L method applied to t20 is timely. Until recently, such an investigation might not have been possible due to the dearth of t20 match results. Now analysts have at their disposal nearly 200 international matches, and through the use of efficient estimation procedures the question may be at least partially addressed. Also, since t20 matches have a shorter duration, few matches to date have been interrupted and resumed according to D/L. Consequently, if there is a problem with D/L applied to t20, it may not yet have manifested itself. This was true until the third edition of the World t20 in May 2010, when a controversial outcome occurred in a game between England and the West Indies. The criticism directed at the usage and appropriateness of the method by players, commentators and fans provides sufficient motivation to adjust the table in this project.

In Section 2, the construction of the Duckworth/Lewis resource table is reviewed, along with the improvement it represented over past rain rules. Some comments are provided on aspects of the table and the limitations of the method. In Section 3, an alternative Twenty20 resource table is obtained using a non-parametric approach based on Gibbs sampling. The data used in the construction of the new table consist of all international Twenty20 matches to date involving Test-playing nations as recognised by the International Cricket Council (ICC). The project concludes with a short discussion in Section 4, where a heat map is provided to facilitate comparisons between the two tables.

2. For their eyes only: Evaluation of the current method and its appropriateness

A condensed version of the Duckworth/Lewis resource table (Standard Edition) is shown in Table 1 (taken from the ICC Playing Handbook 2008-09). In an uninterrupted innings of one-day cricket, a team starts batting with maximum resources available, equivalent to 50 overs and zero wickets taken. As a simple example of the use of the Duckworth/Lewis resource table, consider a one-day match where Team 1 scores 276 runs at the end of its 50 overs. Before Team 2 has a chance to start its chase of Team 1's total, it rains and Team 2 receives only 30 overs for its innings. The resource table shows that Team 2 has only 75.1% of its resources in hand and, consequently, its target for winning the match is set at 208 runs (one more than the par score of 276 × 0.751 ≈ 207).
Contrast the Duckworth/Lewis target with the unreasonably low target of 166 runs (276 × 30/50 ≈ 166) based on run rates.

Table 1. Abbreviated version of the Duckworth/Lewis resource table (Standard Edition)

Overs                           Wickets lost
available     0      1      2      3      4      5      6      7      8
   50       100.0   93.4   85.1   74.9   62.7   49.0   34.9   22.0   11.9
   40        89.3   84.2   77.8   69.6   59.5   47.6   34.6   22.0   11.9
   30        75.1   71.8   67.3   61.6   54.1   44.7   33.6   21.8   11.9
   25        66.5   63.9   60.5   56.0   50.0   42.2   32.6   21.6   11.9
   20        56.6   54.8   52.4   49.1   44.6   38.6   30.8   21.2   11.9
   10        32.1   31.6   30.8   29.8   28.3   26.1   22.8   17.9   11.4
    5        17.2   17.0   16.8   16.5   16.1   15.4   14.3   12.5    9.4
    1         3.6    3.6    3.6    3.6    3.6    3.5    3.5    3.4    3.2
    0         0.0    0.0    0.0    0.0    0.0    0.0    0.0    0.0    0.0

The table entries indicate the percentage of resources remaining in a match with the specified number of wickets lost and overs available.

The D/L method has several advantages which make it undoubtedly preferable to all previously used retargeting rules: completeness (it is able to handle all kinds of interruptions, even multiple interruptions and other unusual situations); the underlying mathematical model is internally consistent; the tables are easily accessible and the computer programme is user-friendly; and, compared to previous rules, the method preserves the chance of winning by providing a relatively realistic reset target.

Duckworth and Lewis (1998) make available only incomplete information relating to the creation of the resource table. Nevertheless, they do reveal that the table entries are based on the estimation of the 20 parameters Z0(w) and b(w), w = 0, ..., 9, corresponding to the function

    Z(u, w) = Z0(w)[1 − exp(−b(w)u)]                                    (1)

where Z(u, w) is the average total score obtained in u overs of an unlimited overs match in which w wickets have been taken. While the utility of the Duckworth/Lewis table in one-day cricket cannot be questioned, a number of questions arise based on (1) and the estimates found in Table 1. Is (1) the best curve when there are several parametric curves that could be fit? Is there any benefit in using a non-parametric fit to derive the table entries? The function (1) refers to unlimited overs cricket but is estimated from one-day data; since one-day cricket is limited overs cricket, is there an advantage in taking the structure of the one-day game into account? How are the parameters estimated? If the 10 curves corresponding to w = 0, ..., 9 are fit separately, there is little data available beyond u = 30 for fitting the curve with w = 9. Also, the asymptotes for the curves with w = 0, 1, 2 (see Figure 1 of Duckworth and Lewis (1998)) fall beyond the range of the data. In Table 1, the last two columns have many identical entries going down the columns. Although very few matches occur under these conditions, is it really sensible for resources to remain constant as the available overs decrease? This is a consequence of the asymptote imposed by (1). Although the D/L method maintains the margin of victory, it does not preserve the probability of victory.

The resource table employed by the D/L method, and throughout its several updates, is based on detailed information from a large number of first-innings scoring patterns. The method therefore assumes that the expected proportion of overall scoring for a particular over, when a given number of wickets have been lost, is the same in both innings. The validity of this assumption (that scoring patterns are the same in both innings) can be questioned: it has been found that a greater relative proportion of runs is scored in the early and late overs of the second innings than in the first innings.
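To make the mechanics concrete, the following Python sketch shows how a revised target is computed from resource percentages and how the exponential form in (1) generates a resource table. The parameter values Z0 and b below are hypothetical placeholders chosen for illustration, not the published Duckworth/Lewis estimates, and only the simple case where Team 2's resources are reduced relative to Team 1's is shown.

```python
import math

def dl_target(team1_score, team1_resources, team2_resources):
    """Revised target for Team 2 when the sides have unequal resources.

    Resources are fractions of a full 50-over, 10-wicket innings.  This is
    the simple 'scale by the resource ratio' case that applies when Team 2
    has fewer resources than Team 1.
    """
    par_score = team1_score * (team2_resources / team1_resources)
    return math.floor(par_score) + 1  # target = par score rounded down, plus one run

# Example from the text: Team 1 scores 276 with full resources; Team 2's
# innings is cut to 30 overs with no wickets lost (75.1% resources).
print(dl_target(276, 1.000, 0.751))  # -> 208

def resources_remaining(u, w, Z0, b):
    """Fraction of full-innings scoring potential left with u overs available
    and w wickets lost, using Z(u, w) = Z0(w) * (1 - exp(-b(w) * u)) as in (1).

    Z0 and b are dicts keyed by wickets lost; the values passed below are
    made up purely to illustrate the shape of the curves.
    """
    full = Z0[0] * (1 - math.exp(-b[0] * 50))   # scoring potential at 50 overs, 0 wickets
    return Z0[w] * (1 - math.exp(-b[w] * u)) / full

Z0 = {0: 280.0, 5: 130.0}   # hypothetical asymptotic scores
b = {0: 0.035, 5: 0.08}     # hypothetical decay rates
print(round(100 * resources_remaining(30, 0, Z0, b), 1))  # % left at 30 overs, 0 wickets
```

With the genuine fitted parameters the second function would reproduce the entries of Table 1; with the placeholder values above it merely illustrates how the curves flatten towards an asymptote as the available overs increase.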
The rule assumes that run-scoring accelerates right from the beginning of the innings, so that runs come at a faster rate for every over completed; an exponential relationship between runs and overs is assumed. Although this captures the fact that run-scoring accelerates at the end of an innings, the period of stabilisation somewhere after the relaxing of fielding restrictions is overlooked. Fifty overs has been the standard format for a One-Day International (ODI) for so long (over 20 years) that there is a period between the end of the fifteenth over and the start of the 41st where the batting side keep the scorecard ticking over through nudged and nurdled singles whilst the fielding side are perfectly happy to concede them. Furthermore, no consideration is given to powerplay overs in which fielding restrictions are in place. Losing two overs during a period of fielding restrictions reduces a team's resources more than losing the same two overs somewhere between, say, overs 25 and 30. The D/L method does not reflect the fact that the first period has a much higher run-scoring capacity than the second.

The asymmetry between the equations for resetting targets impairs impartiality and may even lead to strategic options which are not equally open to both teams. When the target is large and Team 2 foresees a substantial reduction of its innings, Team 2 could take the strategic option of keeping as many wickets as possible in hand, even if the scoring rate is less than required: a score of 99/1 (or 110/2, 123/3, ...) after 25 overs in the second innings against a target of 286 for 50 overs would win if no further play were possible. This distorted result is not merely due to the scaling of limited early data but also stems from an idealised assumption of how batting sides deploy their resources during an innings.

The D/L method, like other (target) prediction algorithms, tries to fit historical data to a functional curve and uses this to predict future match states. Although this approach is generic and scales well, the specificity of the match is lost. For example, suppose that in two instances a match is interrupted in the first innings with the score at 100/3 after 25 overs. The prediction (extrapolation) for both matches will be the same. However, if one of the teams were 90/0 after 15 overs and the other were 40/3 at the same stage, it is highly probable that the second team would have gone on to score more than the first.

3. Turn the tables: A new model for Twenty20 matches

For ease of discussion, it is convenient to convert the Duckworth/Lewis resource table to the context of Twenty20; the resource table is shortened to 20 overs and the entries scaled so that an innings beginning with 20 overs and zero wickets corresponds to 100% resources. Table 2 gives the full Duckworth/Lewis resource table (Standard Edition) for Twenty20, where the entries are obtained by dividing the corresponding entry in Table 1 by 0.566 (the resources remaining in a one-day match where 20 overs are available and zero wickets have been taken).
Table 2. The Duckworth/Lewis resource table (Standard Edition) scaled for Twenty20

Overs                           Wickets lost
available     0      1      2      3      4      5      6      7      8
   20       100.0   96.8   92.6   86.7   78.8   68.2   54.4   37.5   21.3
   19        96.1   93.3   89.2   83.9   76.7   66.6   53.5   37.3   21.0
   18        92.2   89.6   85.9   81.1   74.2   65.0   52.7   36.9   21.0
   17        88.2   85.7   82.5   77.9   71.7   63.3   51.6   36.6   21.0
   16        84.1   81.8   79.0   74.7   69.1   61.3   50.4   36.2   20.8
   15        79.9   77.9   75.3   71.6   66.4   59.2   49.1   35.7   20.8
   14        75.4   73.7   71.4   68.0   63.4   56.9   47.7   35.2   20.8
   13        71.0   69.4   67.3   64.5   60.4   54.4   46.1   34.5   20.7
   12        66.4   65.0   63.3   60.6   57.1   51.9   44.3   33.6   20.5
   11        61.7   60.4   59.0   56.7   53.7   49.1   42.4   32.7   20.3
   10        56.7   55.8   54.4   52.7   50.0   46.1   40.3   31.6   20.1
    9        51.8   51.1   49.8   48.4   46.1   42.8   37.8   30.2   19.8
    8        46.6   45.9   45.1   43.8   42.0   39.4   35.2   28.6   19.3
    7        41.3   40.8   40.1   39.2   37.8   35.5   32.2   26.9   18.6
    6        35.9   35.5   35.0   34.3   33.2   31.4   29.0   24.6   17.8
    5        30.4   30.0   29.7   29.2   28.4   27.2   25.3   22.1   16.6
    4        24.6   24.4   24.2   23.9   23.3   22.4   21.2   18.9   14.8
    3        18.7   18.6   18.4   18.2   18.0   17.5   16.8   15.4   12.7
    2        12.7   12.5   12.5   12.4   12.4   12.0   11.7   11.0    9.7
    1         6.4    6.4    6.4    6.4    6.4    6.2    6.2    6.0    5.7

The table entries indicate the percentage of resources remaining in a match with the specified number of wickets lost and overs available.

To build a resource table for Twenty20 (t20), it is imperative to consider the scoring patterns specific to the shortest version of the game. Hence, consider the 141 international t20 matches involving ICC full member teams that took place from the first on 17 February 2005 through to 14 January 2011 (details of these matches can be accessed from ESPN Cricinfo). Shortened matches in which the Duckworth/Lewis method was invoked have been excluded, along with t20 matches involving non-Test-playing nations (ICC Associates); the latter exclusion ensures that matches are of a consistently high standard. Since scoring patterns in the second innings depend on the number of runs scored by Team 1, only first innings data are considered in the examination of t20 scoring patterns. Note that in their development of a simulator for one-day cricket match results, Swartz et al (2009) consider batting behaviour in the second innings. Match summary results are obtainable from ESPN Cricinfo's statistics website, but this study calls for ball-by-ball data. For this, Stephen Lynch (statistician) took pains to compile the associated commentary log for each match and store the data in tabular form for easy access.

For each match, define z(u, w(u)) as the runs scored from the point in the first innings where u overs remain and w(u) wickets have been taken until the conclusion of Team 1's innings. Calculate z(u, w(u)) for all values of u that occur in the first innings of each match, beginning with u = 20 and w(u) = w(20) = 0. Next, calculate the matrix R = (ruw), where ruw is the estimated percentage of resources remaining when u overs are available and w wickets have been taken. The entry ruw (expressed as a percentage) is obtained by averaging z(u, w(u)) over all matches where w(u) = w and dividing by the average of z(20, 0) over all matches; the denominator is the average score of a side batting first in a t20 match. In the case of u = 0, set ruw = r0w = 0.0%. Table 3 displays the matrix R, an initial attempt at a resource table for t20. Note that r20,0 = 100%, as desired. Although R is a non-parametric estimate of resources and makes no assumptions concerning the scoring patterns in t20, it is less than ideal. First, there are many table entries for which no data are available for the given situation.
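The construction of R just described can be sketched in a few lines of Python. The record layout and names used here are assumptions made purely for illustration; cells for which no match ever reached the corresponding state remain empty (NaN), which is exactly the missing-data problem just noted and is visible in Table 3 below.

```python
import numpy as np

def resource_matrix(matches):
    """Estimate the raw Twenty20 resource matrix R = (r_uw).

    `matches` is a list of first innings; each innings is a list of records
    (overs_remaining, wickets_lost, runs_to_come), where runs_to_come is
    z(u, w(u)): the runs scored from that point to the end of the innings.
    This layout is an assumption for the sketch, not the study's actual format.
    """
    sums = np.zeros((21, 10))      # accumulated z(u, w) for u = 0..20, w = 0..9
    counts = np.zeros((21, 10))
    full_totals = []               # z(20, 0): the final total of each first innings

    for innings in matches:
        for u, w, z in innings:
            sums[u, w] += z
            counts[u, w] += 1
            if u == 20 and w == 0:
                full_totals.append(z)

    mean_total = np.mean(full_totals)             # average first-innings total
    with np.errstate(invalid="ignore"):
        R = 100.0 * (sums / counts) / mean_total  # NaN wherever no data exist
    R[0, :] = 0.0                                 # no overs left, no resources
    return R
```

Averaging z(u, w(u)) over matches and dividing by the average full-innings total is all the raw table requires; the difficulties lie in the empty and non-monotone cells that the following paragraphs address.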
In addition, Table 3 does not exhibit the monotonicity expected: logically, a resource table should be non-increasing from left to right along rows (as wickets are lost) and non-increasing down columns (as the available overs decrease). Also, observe some conspicuous entries in Table 3, particularly the entry of 110.2% resources corresponding to 19 overs available and two wickets taken. This entry is clearly misleading and should be less than 100%. It arises due to the small sample size (three matches) corresponding to the given situation. In the non-parametric resource table developed below, the estimation procedure is robust to observations based on small sample sizes, since surrounding observations based on larger samples have greater influence on the determination of the table; conspicuous observations such as the 110.2% entry are therefore retained in the data. This investigation of Duckworth/Lewis in Twenty20 should be viewed as one of discovery rather than an attempt to replace the Duckworth/Lewis table.

Table 3. The matrix R = (ruw) of estimated resources for Twenty20
Overs available    Wickets lost: 0 1 2 3 4 5 6 7 8
20   100.0
19    93.6  83.0  110.2
18    90.4  85.8  78.3
17    86.7  80.5  82.8  53.7
16    81.7  74.5  81.9  70.7  32.8
15    76.5  71.4  71.5  65.9  59.9
14    68.3  69.1  67.6  66.2  58.4
13    63.8  68.2  62.4  62.9  59.0  24.3
12    62.1  62.3  60.6  57.3  58.8  44.1
11    60.5  56.3  57.0  53.6  61.0  39.7
10    57.6  49.6  52.1  52.8  48.1  38.6  41.0  35.2
 9    54.9  52.1  43.6  49.0  44.1  33.8  35.0  29.7
 8    51.0  46.4  41.7  42.2  41.2  36.7  27.5  28.7
 7    48.6  45.8  38.9  35.9  39.1  34.8  24.1  25.5
 6    54.0  37.9  36.6  30.3  36.2  31.3  20.9  21.4  26.7
 5    44.0  32.5  25.4  28.7  29.4  23.9  17.1  14.9
 4    28.2  23.4  22.5  22.2  20.9  14.3  10.6
 3    20.6  19.9  16.9  17.8  15.8  12.4   7.6
 2    21.2  17.6  11.9  13.4  10.6  11.0   7.2
 1     8.7   5.2   7.3   6.0   5.5   6.0
The table entries indicate the percentage of resources remaining in a match with the specified number of wickets lost and overs available. Missing entries correspond to match situations where data are unavailable.

To impose the monotonicity constraints in the rows and columns, refer to the general problem of isotonic regression. For these purposes, consider the minimisation of

    F(Y) = Σu Σw quw (ruw − yuw)²                                       (2)

with respect to the matrix Y = (yuw), where the double summation corresponds to u = 1, ..., 20 and w = 0, ..., 9, the quw are weights and the minimisation is subject to the constraints yuw ≥ yu,w+1 and yuw ≥ yu−1,w. In addition, impose y20,0 = 100, y0,w = 0 for w = 0, ..., 9 and yu,10 = 0 for u = 1, ..., 20. Although the fitting of Y is completely non-parametric, some arbitrary choices have been made in the minimisation of (2). First, the choice of squared-error discrepancy in (2) is not only convenient for computation; minimisation of F with squared-error discrepancy also corresponds to constrained maximum likelihood estimation where the data ruw are independently normally distributed with means yuw and variances 1/quw. Second, a 20 × 10 matrix Y based on overs is chosen. Alternatively, a larger 120 × 10 matrix based on balls could have been considered. The overs formulation is preferred as it involves less missing data and leads to a less computationally intensive optimisation. With a matrix Y based on overs, it is possible to interpolate on a ball-by-ball basis if required. Third, a simple choice has been made with respect to the weights quw: 1/quw is set equal to the sample variance used in the calculation of ruw divided by the sample size.
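As a rough illustration of the constrained fit in (2), the sketch below hands the problem to a generic constrained optimiser. It is only a sketch under stated assumptions: the array layout, the treatment of missing cells (zero weight) and the use of SciPy's SLSQP routine are my choices, whereas the study itself relied on a dedicated isotonic regression algorithm, described next.

```python
import numpy as np
from scipy.optimize import minimize

def isotonic_fit(R, Q):
    """Minimise F(Y) = sum_uw q_uw * (r_uw - y_uw)^2 subject to
    y_uw >= y_u,w+1 and y_uw >= y_{u-1,w}, with y_20,0 = 100 and y_0,w = 0.

    R and Q are 21 x 10 arrays (overs 0..20 by wickets 0..9); NaNs in R
    mark missing cells and receive zero weight.  A generic solver is slow
    for a 210-cell table but keeps the formulation transparent.
    """
    U, W = R.shape
    q = np.where(np.isnan(R), 0.0, Q).ravel()
    r = np.nan_to_num(R).ravel()
    idx = lambda u, w: u * W + w

    def objective(y):
        return float(np.sum(q * (r - y) ** 2))

    cons = []
    for u in range(U):
        for w in range(W - 1):       # resources non-increasing as wickets fall
            cons.append({"type": "ineq",
                         "fun": lambda y, a=idx(u, w), b=idx(u, w + 1): y[a] - y[b]})
    for u in range(1, U):
        for w in range(W):           # resources non-decreasing as overs increase
            cons.append({"type": "ineq",
                         "fun": lambda y, a=idx(u, w), b=idx(u - 1, w): y[a] - y[b]})
    cons.append({"type": "eq", "fun": lambda y: y[idx(20, 0)] - 100.0})
    for w in range(W):               # zero resources with no overs left
        cons.append({"type": "eq", "fun": lambda y, a=idx(0, w): y[a]})

    y0 = np.clip(r, 0.0, 100.0)
    fit = minimize(objective, y0, constraints=cons, method="SLSQP")
    return fit.x.reshape(U, W)
```

A pooled-adjacent-violators or quadratic-programming routine would be far faster, but the brute-force formulation mirrors (2) directly.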
The rationale is that when ruw is less variable, there is stronger belief that yuw should be close to ruw. Table 4 gives a non-parametric resource table based on the minimisation of (2). An algorithm for isotonic regression in two variables was first introduced by Dykstra and Robertson (1982), and Fortran code was subsequently developed by Bril et al (1984). An R implementation, available in the Iso package on CRAN (www.cran.r-project.org), has been used here; the programme requires 27 iterations to achieve convergence.

What is unsatisfactory about Table 4 is that it suffers from the same criticism that was directed at the Duckworth/Lewis resource table: a considerable number of adjacent entries in Table 4 have the same value. Again, it is not sensible for resources to remain constant as available overs decrease or wickets increase. The problem is that in the minimisation of (2), various fitted ys fall on the boundaries imposed by the monotonicity constraints. Table 4 is also unsatisfactory in that it is incomplete; missing values correspond to match situations where data are unavailable.

To address the above criticisms, it is necessary to take a slightly different approach to estimation. As previously mentioned, (2) arises from the normal likelihood

    L(Y) ∝ Πu Πw exp{−quw(ruw − yuw)²/2}.                               (3)

Therefore, consider a Bayesian model where the unknown parameters in (3) are the ys. A flat default prior is assigned to the ys subject to the monotonicity constraints. It follows that the posterior density is proportional to (3) restricted to the monotone region, and that Gibbs sampling can be carried out by sampling from the full conditional distributions

    yuw | ·  ~  Normal(ruw, 1/quw), subject to the local constraints on yuw in the given iteration of the algorithm.   (4)

Sampling from (4) is easily carried out using a normal generator and rejection sampling according to the constraints. Although in statistical terminology (3) takes a parametric form, the approach is referred to as non-parametric since no functional relationship is imposed on the ys.

Table 4. A non-parametric resource table for Twenty20 based on isotonic regression
Overs available    Wickets lost: 0 1 2 3 4 5 6 7 8
20   100.0
19    93.6  85.5  85.5
18    90.4  85.5  80.8
17    86.7  80.8  80.8  64.7
16    81.7  77.4  77.4  64.7  55.9
15    76.5  71.5  71.5  64.7  55.9
14    68.8  68.8  67.6  64.7  55.9
13    66.6  66.6  62.6  62.6  55.9  38.4
12    62.2  62.2  60.6  57.3  55.9  38.4
11    60.5  56.8  56.8  54.8  54.8  38.4
10    57.6  52.1  52.1  52.1  48.1  38.4  34.1  29.3
 9    54.9  52.1  46.5  46.5  44.1  36.3  34.1  29.3
 8    51.0  46.4  42.0  42.0  41.2  36.3  28.6  28.6
 7    48.6  45.8  38.9  37.3  37.3  34.8  25.3  25.3
 6    39.7  39.7  36.6  32.8  32.8  31.3  23.0  21.4  21.4
 5    39.7  32.5  28.0  28.0  28.0  23.0  17.1  15.5
 4    27.9  23.4  22.5  22.2  20.9  14.3  10.7
 3    20.7  19.9  17.4  17.4  15.8  12.4   7.7
 2    20.7  17.6  12.5  12.5  10.8  10.8   7.2
 1     8.7   6.6   6.6   6.0   5.7   5.7
The table entries indicate the percentage of resources remaining in a match with the specified number of wickets lost and overs available. Missing entries correspond to match situations where data are unavailable.

In Table 5, the estimated posterior means of the ys obtained through Gibbs sampling are given, and these provide an alternative resource table for t20. The computations pose no difficulties and the estimates stabilise after 50,000 iterations. For cases of missing data, the Duckworth/Lewis table entries are used to impute the missing rs. The imputation is in the spirit of a Bayesian approach where prior information is utilised. Unlike Table 4, Table 5 is a complete table.
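A minimal sketch of the Gibbs sampler just described is given below, assuming the same 21 × 10 array layout as before and that missing r entries have already been imputed (the study used the Duckworth/Lewis values for this). Each full conditional in (4) is a normal distribution centred at r_uw with variance 1/q_uw, truncated to the interval permitted by the neighbouring cells, and is drawn by rejection sampling; the iteration and burn-in counts are illustrative only, and no attempt has been made at efficiency.

```python
import numpy as np

def gibbs_resource_table(R, Q, n_iter=50_000, burn_in=5_000, seed=None):
    """Posterior means of the monotone resource table under the model
    r_uw ~ Normal(y_uw, 1/q_uw) with a flat prior restricted to the
    monotone region.  R and Q are 21 x 10 arrays (overs 0..20 by
    wickets 0..9) with every cell populated (missing cells imputed
    and given a weight).
    """
    rng = np.random.default_rng(seed)
    U, W = R.shape
    # Start from a simple monotone surface so every truncated interval is non-empty.
    Y = 100.0 * (np.arange(U)[:, None] / 20.0) * (1.0 - np.arange(W)[None, :] / 10.0)
    total = np.zeros_like(Y)

    for it in range(n_iter):
        for u in range(1, U):             # the u = 0 row stays at zero
            for w in range(W):
                if (u, w) == (20, 0):
                    continue              # fixed at 100
                # Interval allowed by the local monotonicity constraints.
                lower = max(Y[u - 1, w], Y[u, w + 1] if w + 1 < W else 0.0)
                upper = min(Y[u + 1, w] if u + 1 < U else 100.0,
                            Y[u, w - 1] if w - 1 >= 0 else 100.0)
                # Rejection sampling from the truncated normal full conditional.
                while True:
                    draw = rng.normal(R[u, w], 1.0 / np.sqrt(Q[u, w]))
                    if lower <= draw <= upper:
                        Y[u, w] = draw
                        break
        if it >= burn_in:
            total += Y

    return total / (n_iter - burn_in)
```

Because each draw is a fresh sample from a continuous distribution rather than a projection onto a boundary, adjacent cells of the averaged table almost never coincide, which is the point made next in the discussion of Table 5.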
Also, there are no longer adjacent table entries with identical values, and this is due to the sampling approach. Finally, the methodology allows the input of expert opinion. For example, suppose that there is expert consensus that a table entry yij ought to be tied down to a particular value a. To force this table entry, all that is required is to set rij = a and assign a sufficiently small standard deviation (that is, a sufficiently large weight qij).

Table 5. A non-parametric resource table for Twenty20 based on Gibbs sampling
Overs available    Wickets lost: 0 1 2 3 4 5 6 7 8
20   100.0  96.9  93.0  87.9  81.3  72.2  59.9  44.8  29.7
19    95.6  90.9  87.7  83.0  76.9  68.3  56.5  42.0  27.2
18    91.7

Sunday, January 19, 2020

Properties of Gases Essay

Introduction

Background

This report covers the properties of gases and will allow me the opportunity to explore the chemical and physical properties of gases. Collection and use of these gases will also be conducted in this lab.

Statement of Problem

Collecting gases is a difficult process. Singling out a gas and obtaining only that gas is the challenge we face in this experiment.

Purpose of Experiment

The purpose of this experiment is to use water, chemicals and metals along with collection tubes to extract a single gas and store it, and then to use only that gas and see how it responds to further testing.

Hypothesis

If the gases are correctly singled out and collected properly, we should be able to observe changes when the gases are introduced to heat or fire.

Experiment

Test tubes will be used to single out gases from two forms of metals along with an acid and hydrogen peroxide. Baking soda, vinegar, Alka-Seltzer, bromothymol blue and limewater will also be used to observe the properties of gases.

Data Charts

Analysis

Error and Trends

When attempting to mix the hydrogen and oxygen together, I may have lost a small amount of hydrogen as I lifted the bulb filled with 2/3 hydrogen from the 24-well plate. I did not receive a reaction when I squeezed the bulb of the hydrogen/oxygen mixture onto the flame.

Hypothesis Conclusion

It was challenging using my thumb to try and hold the gases in their pipet bulbs. I repeated a few of the experiments to make sure I received the same results and feel fairly confident that I obtained the results that were expected.

Practical Applications

Parts of this experiment used household items to collect data. Learning about the gas properties these household items exhibit is invaluable.

Saturday, January 11, 2020

A Poem Analysis Essay

Langston Hughes' "Let America Be America Again" reveals the dismay of the speaker about the social condition of America at the time and how the country is yet to attain its reputation as the home of the free. Written from the first-person point of view, the speaker vents frustration at the racial inequalities that cut across American society while at the same time expressing hope that "America will be" the America that the "dreamers dreamed". Generally, the speaker aims his or her criticisms at no particular individual but at the entire American society. Taken in the context of the bitterness of the tone of the poem, especially in the parts where the speaker narrates whose voices he or she is representing, the speaker directs his or her attention to the reader, who may not at all be aware of the social conditions pervading America at the time. Interestingly, the tone is not bitter or frustrated throughout the entire length of the poem. The poem begins with several stanzas that are imbued with emotionless force, proceeds with what appears to be the very meat of the poem, the disappointment towards the selfishness for power and property that takes away the very freedom that every American yearns for, and concludes with a fervent hope in the belief that America will rise from the din and reclaim its status as the "homeland of the free". In summary, the poem shows how the speaker sees America: a country that never was the country the speaker envisions it to be. The speaker presents a rundown of the people in America who are at the center of the problem, the "poor white," the "Negro," the "red man" and the "immigrant clutching the hope I seek", all of whom experience almost the same fate of inequality. Nearing the end of the poem, the speaker expresses his or her belief that America is "the land that has never been yet" and "yet must be", which signifies the speaker's hope that someday "America will be". With these things in mind, it is easy to understand that the poem's theme revolves around the concept of "hope". By introducing the poem with a series of expectations and following them with a sequence of how such expectations have been unfulfilled, the speaker effectively sets the space for an ending that pins down the very motive of the length of the poem. A close reading of the poem shows that Langston Hughes achieved his purpose of letting hope become known to his readers: the hope that, despite America's social inequalities at the time, there will come a time when the country will satisfy its label as the "homeland of the free". On a personal note, I think the poem applies even more today than it did during the time of Hughes. I think the lines "the millions who have nothing for our pay" and "of dog eat dog, of mighty crush the weak" still closely resemble contemporary America. The current financial crisis sweeping across the country can only indicate how millions of Americans are still struggling to earn at least a decent pay, and how one person will take advantage of another just to survive in these harsh and trying times. Those things being said, there is strong reason to believe that the poem reaches from the past into the present. Hughes may not have been aware of it, but his poem is as timely now as it was in the past.
Although there are several other significant differences between the time of Hughes and contemporary America, "Let America Be America Again" is one of the poems that remind the average individual that America remains a country always on the quest for a more perfect union.

Work Cited

Hughes, Langston. "Let America Be America Again". 1994. 11 May 2009.

Friday, January 3, 2020

Depression, Anxiety, And Eating Disorders

In today's society, there is a never-ending change in standards. One moment a style or trend is acceptable and the next it is unacceptable, which is why a lot of young people compare themselves with society's standards. This leads into a current worldwide issue: mental illness. Mental illnesses such as anxiety, depression, Obsessive Compulsive Disorder (OCD), substance abuse, eating disorders, or ADHD are just the surface of this horrible world of illnesses. The definition of a mental illness is a disorder that affects your mood, thinking, and behaviour. That is why, when we consider and look at what these illnesses are doing to the young minds of this generation, we see how quickly they are devouring them. Imagine how many different types of mental illness there are. Don't forget that every day this number is substantially increasing as more young teens are exposed to these illnesses in everyday life. One bad experience can lead a person to never try that thing again, and if it is a requirement, anxiety appears for that individual. Many people experience anxiety disorder very differently, which is why anything can trigger anyone, so you have to be mindful of what you say or do. The majority of people experiencing this disorder undergo physical and cognitive symptoms such as an increased heart rate, sweating, and sometimes shutdowns of major bodily functions that result in a breakdown or, in extreme situations, a blackout. In my experience with this disorder I have both physical and mental symptoms: my heartbeat definitely increases, my palms get sweaty, and my body temperature rises very high. Inside my mind I have anxious thoughts such as "I'm such a mess" and "I'm not in control". I have some friends who go through this every day; one of them suffers a more extreme case in that she faints due to the amount of anxiety and fear building up within her. Depression is one of the most