Another interesting finding is that, although 79.1% of all patients discontinued treatment within 6 months, the discontinuation rate decreased sharply after 12 months. This is in agreement with the works of Mondaini et al and Jern et al, which were performed in real-world settings with follow-up times longer than 1 year. Jern et al reported that decisions to discontinue were usually made relatively soon after medication commenced; no patient discontinued medication after 30 months of usage. This emphasizes that, to improve compliance, it is essential that patients receive proper counseling, especially before starting treatment and/or in the early treatment period.
We found that patients with acquired PE (vs lifelong PE), with pretreatment IELTs longer than 2 minutes, on PDE5 inhibitors, and with IIEF-EFD scores lower than 26 tended to exhibit high dropout rates by the end of the study. If patients with PE and ED on PDE5 inhibitors also took dapoxetine, the cost might have become too burdensome. Moreover, PDE5 inhibitors have recently been suggested to be useful treatments for PE. Thus, patients on PDE5 inhibitors might more readily stop dapoxetine treatment. In contrast to our findings, however, Jern et al reported that ED was more prevalent among those who continued dapoxetine treatment. Further study is needed on how concomitant PE and ED each affect treatment of the other condition.



Mayer-Rokitansky-Küster-Hauser syndrome (MRKHS) is characterized by agenesis of the uterus and the upper part of the vagina and can be associated with renal, skeletal, auditory, and cardiac malformations. Its prevalence is estimated at approximately 1 per 4,000 to 5,000 female births. It results from congenital malformations of unknown etiology in the lower structures of the Müllerian ducts during organogenesis. No clear genetic cause of the syndrome has been established; in some cases, familial clustering of MRKHS occurs. The syndrome is mostly diagnosed in postpubertal girls with primary amenorrhea. Affected women have an XX karyotype, female phenotype, normal secondary sexual characteristics, physiologic endocrine function, a biphasic ovarian cycle, and female psychosexual identification. MRKHS compromises sexual life and makes natural reproduction impossible. These women can have a child by adoption, assisted reproduction, or gestational surrogacy, and uterine transplantation (UTx) may also provide women with MRKHS the opportunity to have their own biological child.
Vaginal agenesis can be treated by non-surgical dilatation methods or surgically. Surgical approaches to vaginal agenesis fall into three categories: Williams vulvovaginoplasty, with suturing of the labia majora into a perineal pouch; Vecchietti vaginoplasty, in which the vagina increases in size through gradually applied traction on the vaginal vault; and methods involving the creation of a neovagina within the rectovesical space lined with various types of tissue, such as skin (McIndoe technique), peritoneum (Davydov procedure), intestine, or—perhaps in the future—tissue-engineered vaginal mucosa. Dilatation methods have fewer complications, but patients' long-term cooperation is required. Some methods have definite advantages over others: the ideal neovagina maintains its original anatomic placement and is covered with original mucosa. The Vecchietti neovagina, which is covered by non-keratinized squamous epithelium, is the only option that meets both criteria. Laparoscopic Vecchietti vaginoplasty is used at our gynecologic department. The technique, which enables the creation of a neovagina with good anatomic and functional results, is a simple and effective procedure. The principle of the Vecchietti technique is to create a neovagina by gradual stretching of the patient's own vaginal skin. An olive-shaped device is placed on the vaginal dimple and drawn up gradually by threads that run through the olive from the perineum into the pelvis and out through the abdomen, where they are attached to a traction device. To create the neovagina, the tension on the traction device is increased to pull the threads and stretch the vagina by approximately 1 to 1.5 cm/d until it reaches approximately 7 to 8 cm in depth. Previous studies have mainly evaluated the subjective feelings of respondents using standardized questionnaires or assessed the psychosocial impact of creating a neovagina.
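The stretching schedule above implies a short traction phase. As a back-of-envelope illustration (the ~1 cm starting depth of the vaginal dimple is our assumption, not a figure from the text):

```python
# Rough duration of the Vecchietti traction phase, from the rates stated above.
# Assumption (not in the text): the pre-existing vaginal dimple is ~1 cm deep.

def traction_days(target_cm, start_cm=1.0, rate_cm_per_day=1.0):
    """Days of traction needed to stretch from start_cm to target_cm."""
    return max(0.0, (target_cm - start_cm) / rate_cm_per_day)

fast = traction_days(7.0, rate_cm_per_day=1.5)  # 7 cm target at 1.5 cm/day
slow = traction_days(8.0, rate_cm_per_day=1.0)  # 8 cm target at 1.0 cm/day
print(f"traction phase: roughly {fast:.0f} to {slow:.0f} days")
```

This is consistent with the technique being described as rapid compared with dilatation alone.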
Several studies have assessed the influence of lifelong infertility and compromised physical integrity. Female sexuality is not determined solely by the possibility of copulation; it is shaped by emotional, relationship, and other social aspects. Furthermore, satisfaction with one's own body and self-perception can have substantial significance for female sexuality. The purpose of this study was to investigate sexual well-being, satisfaction with the genitals, and level of distress in women who have an anatomically functional neovagina but no possibility of natural motherhood. We wanted to determine whether these characteristics differed from those of the general population, and to capture the views of the sexual partners of women with a neovagina.


Photoluminescence (PL) analyses show that, after detachment from the neighboring amorphous matrix, the silicon nanocrystals emit light at a wavelength of about 550 nm. Utilizing a multilayer structure instead of hydrogenating a thicker (100 nm) layer of amorphous silicon [18] yields a significant increase in PL intensity, as illustrated in Figure 5. This higher intensity is believed to be due to the formation of denser layers of luminescent silicon nanocrystals, which is also confirmed by SEM analyses. Figure 6 demonstrates the impact of hydrogenation duration on the luminescence intensity of the silicon nanocrystals. Increasing the hydrogenation time introduces too many voids in the nanocrystal layer [18], which annihilates most of the nanocrystallites and reduces the PL intensity. Further investigation into nanocrystallite formation as well as the origin of the light emission is being pursued, and fabrication of light-emitting diodes using this technique is underway.



Bone quality is considered to be related to the stress condition at each position in vivo [1]. However, stress transfer between a metallic implant and bone is non-homogeneous when the Young's modulus of the implant differs from that of the bone. The Young's moduli of the metals and alloys used for fabricating metallic implants are much higher than that of bone; therefore, stress stimulation to the bone is reduced (the stress shielding effect). Under such conditions, bone atrophy is likely to occur and can lead to loosening of the metallic implant and re-fracturing of the bone. Thus, to mitigate the stress shielding effect, metals and alloys with a Young's modulus equal to that of bone (10–30 GPa) are believed to be ideal for fabricating metallic implants [2]. Consequently, extensive efforts have been made to develop β-type titanium alloys with Young's moduli nearly equal to that of bone [3–6]. One such alloy, Ti–29Nb–13Ta–4.6Zr (TNTZ), has been developed by the authors [4]. This alloy has a Young's modulus of around 60 GPa in the solutionized condition. However, its mechanical strength in this condition is less than that of the conventional titanium alloy, Ti–6Al–4V ELI, used for fabricating metallic implants. Thus, various thermomechanical treatments have been examined to improve the mechanical strength of TNTZ [7,8]. These treatments improve the mechanical strength, but they also increase the Young's modulus because a large amount of precipitates is formed. Therefore, as part of our current research, we are developing a method that affords TNTZ with both a low Young's modulus and high mechanical strength [9–11].
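The stress shielding argument can be made concrete with a simple load-sharing sketch. Under an isostrain assumption (implant and bone strain together, so each carries load in proportion to its stiffness), a stiffer implant leaves the bone carrying less of the load. This is an illustrative model, not the authors' analysis; the 20 GPa bone value and the non-TNTZ alloy moduli below are assumed typical figures:

```python
# Illustrative load sharing under isostrain: implant and bone strain equally,
# so (for equal cross-sectional areas) load divides in proportion to Young's
# modulus. Bone E = 20 GPa is taken from the middle of the cited 10-30 GPa
# range; only the TNTZ value (~60 GPa) comes from the text.

def load_fraction(E_implant, E_bone, area_implant=1.0, area_bone=1.0):
    """Fraction of the total axial load carried by the bone."""
    return (E_bone * area_bone) / (E_implant * area_implant + E_bone * area_bone)

E_BONE = 20.0
for name, E in [("stainless steel", 200.0), ("Ti-6Al-4V ELI", 110.0),
                ("solutionized TNTZ", 60.0), ("bone-matched ideal", 20.0)]:
    share = load_fraction(E, E_BONE)
    print(f"{name:18s} E = {E:5.1f} GPa -> bone carries {share:.0%} of the load")
```

The trend is the point: lowering the implant modulus from ~200 GPa toward that of bone roughly doubles and then quadruples the load (and hence the stress stimulation) seen by the bone.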
Grain refinement is an effective method for strengthening metals and alloys, and, as a means of obtaining submicron-sized grains, severe plastic deformation has attracted considerable attention [12–14]. In comparison with other strengthening methods, grain refinement is expected to achieve high mechanical strength while maintaining a low Young's modulus for TNTZ, because it retains the β phase, which ensures the desired low Young's modulus.
By employing High Pressure Torsion (HPT) as a representative severe plastic deformation technique, we recently attempted to improve the mechanical strength of TNTZ while keeping its Young’s modulus low  [15]. The effect of HPT on the microstructure and hardness of TNTZ was systematically investigated in the present study.

Experimental procedures
A hot-forged TNTZ bar (Nb:28.6, Ta:12.3, Zr:4.75, O:0.09, Ti:bal. (mass%)) was used in this study. The bar was subjected to solution treatment in vacuum at 1063 K for 3.6 ks followed by water quenching. Thereafter, the TNTZ bar was cold-rolled to a plate of thickness 0.8 mm (reduction ratio: ), and the thus-obtained plate was machined to disks of diameter 20 mm and thickness 0.8 mm for HPT.


It is evident from this study that post-interventional P-PIL knowledge-based user-testing scores improved significantly from a baseline of 44.25 to 69.62 (p < 0.001). A study by Leiia and Ros (2003) on the impact of pictograms in PILs on information recall showed similar improvements. A pre- and post-interventional PIL study conducted in a community pharmacy observed that recall of drug information improved significantly, from 30% to 65% (Carina et al., 1996). Verbal advice along with a PIL has been shown to improve knowledge levels in recognizing the uses and side effects of medications from 40% to 67% (Gibbs et al., 1990). Similar studies using patient information leaflets as an educational intervention have had a significant impact on knowledge, attitude and practice among patients suffering from diabetes, hypertension, asthma, peptic ulcer and rheumatoid arthritis (Adepu and Swamy, 2012; Sathvik et al., 2007; Hill and Bird, 2003; Louis and Halparin, 1979).




As in almost every health system, medication costs at the King Abdulaziz Hospital (KAH) have increased noticeably over time (Saggabi, 2012). High prices of essential medicines are a heavy burden on the government budget (Saggabi, 2012). Policymakers are thus in search of the most cost-effective options for the government and society as a whole.
Data from KAH show that the carbapenems were the third most expensive pharmacological class procured during 2009. The current hospital formulary lists two carbapenems: the fixed-dose combination of imipenem/cilastatin (IC) and meropenem (MEM). MEM is restricted to infection control physicians, while IC is restricted to infection control physicians, intensivists and haematology/oncology practitioners. These antibiotics share a similar spectrum of activity, but the unit cost of IC (500 mg/500 mg) is less than that of the equipotent dose of MEM (1 g). There are conflicting reviews with regard to the relative cost-effectiveness of these two medicines (Attanasio et al., 2000; Edwards et al., 2006).
An unpublished pharmacoeconomic review, at the Ministry of National Guard Health Affairs, showed that an interchange programme, substituting MEM with IC, would lead to a cost saving of 2,306,257 Saudi Riyals (SARs) per year (614,309 US dollars per year). Hospital antimicrobial usage data since 2004 showed that usage of IC had been markedly lower than the usage of MEM. There have been limited applications of pharmacoeconomic evaluations in Saudi Arabia (Al Aqeel and Al-Sultan, 2012). It would be appropriate, therefore, to test the economic impact of the proposed substitution as well as the main factors influencing hospital costs, in this setting, based on pharmacoeconomic principles. In this regard, cost-minimization analysis (CMA) could provide an estimate of the economic impact of these therapeutically equivalent medicines, using local Saudi Arabian data.
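Under the clinical-equivalence assumption that motivates a CMA, only costs differ between the alternatives, so the core calculation reduces to a unit-cost difference multiplied by annual usage. A minimal sketch, with hypothetical unit costs and dose volumes (placeholders, not KAH data):

```python
# Cost-minimization sketch for the MEM -> IC interchange. With equal clinical
# effect assumed, the annual saving is the unit-cost difference times usage.
# All numbers below are hypothetical placeholders, not KAH figures.

def annual_saving(doses_per_year, unit_cost_mem, unit_cost_ic):
    """Annual saving (SAR) from substituting MEM 1 g with equipotent IC 500/500 mg."""
    return doses_per_year * (unit_cost_mem - unit_cost_ic)

saving_sar = annual_saving(40_000, 120.0, 62.0)
print(f"estimated saving: {saving_sar:,.0f} SAR/year")
```

With these made-up inputs the result lands in the same order of magnitude as the saving quoted from the unpublished review, which is the kind of sanity check a CMA supports.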


Literature review
To support the CMA approach used in this study, a literature review was first conducted to justify the a priori assumption of clinical equivalence of IC and MEM in the types of infections treated and the doses recommended in the KAH guidelines.


Although it was planned to include 50 patients in each group from 1 January 2012 until 31 December 2012, IC was prescribed to only 45 patients during this period. One file could not be accessed because it was locked by the Health Information Management Department. Only 44 patients on MEM met the inclusion and exclusion criteria. In total, six patients were excluded from the study, owing to diagnosis with meningitis (n=1), pregnancy (n=1), being under 18 years of age (n=1), files being locked (n=2) and administration of only a single dose (n=1). A total of 44 patients receiving IC and 44 receiving MEM could therefore be evaluated.


For this author, there are four conditions for “agrarian deals” to work well: “legally recognized property rights; a well-functioning judicial system; up-to-date public registries; and information, shared among the participants, about the goods to be traded” (Robles 2005: 82); of these, only the first is fulfilled in Mexico. This work agrees with the idea that, as long as those interested in participating in the land market lack solid guarantees that their interests will not be harmed, and as long as no valuation system for ejido lands is established that reflects the quality of the land and the location of the plots, transactions will continue to take place outside the law, harming the patrimony of the communities.
In the case of cession and succession, the market operates in a very limited way, although the impact of these acts on the circulation of land is significant, since the main force motivating them is demand from within the agrarian nucleus. Moreover, for those who inherit or receive land through “cession of rights during the holder's lifetime,” the quality of the land rarely matters; what matter more are the kinship relations and forms of social organization that arise within the ejidos and agrarian communities. In fact, by 1997, of all holders of social-property land, “half, 50.8 percent, received the land through kinship relations, by cession or succession from other ejidatarios” (Warman 2001: 84).
According to Concheiro and Quintana, there are:
The idea these authors propose is useful, although it does not distinguish the transfer from the transmission of the rights belonging to the subjects who participate, or may participate, in a land market from the position the law grants them. Although they later point to purchase and sale, the transfer of rights, and inheritance, and, on the other hand, sharecropping and leasing, as permanent forms of land transaction, these “respond to the clear will to continue being ejidatarios” (Concheiro and Quintana 2001: 31); they do not, however, indicate what types of rights are transferred or through what mechanisms the use and exploitation of ejido lands is granted.
However complex an analysis based only on these authors' proposal may prove, their contribution is recognized in the idea of distinguishing between what they call “permanent forms” and “transitory forms” of land mobility. They also include among the transitory forms of the land market the “empeño” (land pledge), while acknowledging that this form is “little used in ejido zones, but is found among private smallholders” (Concheiro and Quintana 2001: 24). I consider that the “empeño” does little to favor the mobility of land, since it is a usurious activity that ties up this resource, taking it out of circulation, even if only temporarily.
Guillermo Zepeda, for his part, identifies at least two strands of land markets: one through which the use of land is granted only by the “transfer of indirect rights”; and another in which the object of the transaction is full ownership, which he distinguishes as the “transmission of direct rights”: “Indirect rights are granted mainly through loans, leasing, sharecropping, and concessions (where the plot is in the public domain), among other legal acts” (Zepeda 2000: 212).
In this sense, transmissions of direct rights imply full dominion over the plots, since “These rights can be transmitted through various legal acts such as the transfer of rights and the transfer of dominion […] the transfer of agrarian rights fundamentally indicates succession in the dominion of the land upon the death of the titleholder” (Zepeda 2000: 216). There is, however, a mechanism by which direct rights are also transmitted while the holder is alive: the cession of rights that the holder of the plot may carry out. As for the second modality: “alienation is recorded as a transfer of dominion” (Zepeda 2000: 217).


Wireless Sensor Networks (WSNs) exhibit unique properties, such as self-organization, multi-hop communication, low cost, and data-centric networking, and have thus found growing application in several domains in recent years. In these networks, sensor nodes observe a physical phenomenon, such as the temperature in some area. Although WSNs continuously sense and report information in a dynamic environment, the energy supply of each sensor node is limited. Since WSNs usually operate in unattended environments, it is infeasible or too costly to replace nodes' batteries [1]. To conserve the resources of sensor nodes, in-network aggregation techniques are commonly adopted; these compact the data collected by sensor nodes during the routing process and therefore significantly reduce the number of packets the network has to transmit [2,3].
In order to achieve complete coverage, WSN applications require a spatially dense deployment of sensor nodes, which leads to a single event being recorded by several nodes. As a result, sensor observations are spatially correlated. This kind of data redundancy, due to the spatial correlation between sensor observations, enriches the research on in-network data aggregation. The cluster-based communication model can provide an architectural framework for exploiting data correlation in sensor networks [4]. In this model, the sensor nodes in each cluster send their data to one specific node, called the Cluster Head (CH). The CH aggregates the cluster members' data and sends the result to the sink node.
A physical phenomenon to be observed can be modeled either as a field source, as in the monitoring of environmental temperature, or as a point source, as in target detection applications [5]. Suppose, as shown in Figure 1, that an object that generates data is moving across the network. In this paper, we explore several low-overhead methods that use Rate Distortion (RD) theory for data aggregation of the object via a cluster-based communication model. One technique is a static cluster-based approach; another uses dynamic clustering. Finally, we propose a hybrid method that can take advantage of both the static and dynamic methods. RD theory uses spatial correlation to reduce network traffic, provided that the resulting distortion does not exceed a certain value defined by the user.
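As a toy illustration of the distortion-bounded aggregation idea (a simplified model of our own, not the paper's derivation): if each cluster member observes the source value plus i.i.d. noise of variance sigma², averaging k reports gives an expected squared error of sigma²/k, so the CH need only forward enough reports to meet the user-defined distortion bound and can suppress the rest:

```python
import math

def reports_needed(sigma2, max_distortion):
    """Smallest number of reports whose average meets the distortion bound."""
    return max(1, math.ceil(sigma2 / max_distortion))

def aggregate(readings, sigma2, max_distortion):
    """CH keeps only as many member reports as the distortion bound requires."""
    k = min(len(readings), reports_needed(sigma2, max_distortion))
    kept = readings[:k]  # remaining (spatially correlated) reports are suppressed
    estimate = sum(kept) / len(kept)
    return estimate, len(readings) - k  # estimate, packets saved

est, saved = aggregate([20.1, 19.8, 20.3, 20.0, 19.9], sigma2=0.5, max_distortion=0.2)
print(f"estimate={est:.2f}, packets suppressed={saved}")
```

Looser distortion bounds let the CH suppress more packets, which is exactly the rate-versus-distortion trade-off the proposed protocols exploit.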
The rest of this paper is organized as follows. Section 2 discusses the background and related work on data aggregation techniques utilizing correlation. Section 3 describes the aggregation model and derives all the mathematical relationships. In Section 4, the proposed protocols are presented and their operation is elaborated upon. Simulation results are presented in Section 5, and, finally, the paper is concluded in Section 6.

Related work
Various studies have used spatial correlation to reduce network traffic. In [5], the authors exploit spatial correlation at the Medium Access Control (MAC) layer to prevent redundant transmissions from closely located sensor nodes. In [6,7], correlation is used for lossy data aggregation at the aggregation points. In other algorithms, such as [4,8–10], correlation is used for grouping sensor nodes into clusters. Other works, such as [11], are based on distributed source coding for data aggregation. The authors in [12] present the YEAST method, which maximizes data aggregation along the communication route, based on the notion of a spatially correlated region, and decreases the cost of route discovery. All these methods consider only field-source phenomena, but in the real world the physical event may be mobile, so communication protocols should be designed to support mobility in an energy-efficient manner.
Research efforts in [13–16] address aggregation for mobile objects. LPSS [13] filters out spatial correlation among the sensing reports of sensor nodes observing mobile sources. The authors in [13] first obtain a distance to the event source within which all sensing reports are collected so as to minimize the reconstruction distortion. Then, to account for the mobility of the event source, sensor nodes self-schedule to join or leave the representative node group, which guarantees that the appropriate nodes are always group members. LPSS emphasizes only the optimal distance to a mobile source and does not use any specific structure for data aggregation; moreover, it incurs a high relay-message overhead as the velocity of the object increases. The protocol in [14] concentrates on aggregation for a mobile object. In that study, a cluster of nodes is constructed around the object and updated as the object moves. Aggregation is performed with a max aggregator; the authors do not use the correlation between data in the aggregation operation. The studies in [15,16] also focus on aggregation for mobile events. In these works, no specific structure is used for aggregation, due to its dynamic nature; instead, two corresponding mechanisms are proposed: Data-Aware Anycast at the MAC layer and Randomized Waiting at the application layer. Here, too, no correlation is assumed between the data, and no specific data model is introduced.


is Professor at Aalto University, Finland, and Vice Director of the Sino-Finnish Centre at Tongji University, China. He specializes in personalization strategies, user innovation concepts, and customer experience management processes and tools. Professor Suominen is director of the Future Home Institute at Aalto University's School of Art & Design. He is also a visiting scientist at the MIT Design Lab and a principal of Suominen Architects. He has won numerous prizes in architectural competitions in Finland and across Europe.

received his Bachelor's degree in Business Administration from The Amsterdam Business School (HES). He then worked for several years as a strategic advisor in various functional industry groups before returning to NY to pursue his Master's degree. He enrolled in the Ph.D. program at the New School for Social Research (NSSR). Following his move to Australia, he works as a CSIRO & University Research Fellow in the Centre for Design Innovation at the School of Design, Swinburne University of Technology. His expertise lies in the development of innovation strategies that create resilient business models, product and service offerings that adapt to changing market conditions, innovation strategy and execution, complexity analysis and management, and sustainable practice. Heico serves as a strategic adviser to several internationally based design-led organizations.

is Professor at the School of Design, University of Cincinnati, USA. He received his MFA in 1977 from Yale University, where he was the Carl Purrington Rollins Fellow. In 2004 he was a Medical Informatics Course Fellow at the Marine Biology Laboratory. He is the editor-in-chief of . His research interests include: collaboration with medicine to create health interventions, visualizations, and informatics; development of complex interactive visual systems; design and evaluation of international programs for social change; and research on the interaction of symbols and the creation of iconic language systems.


The phenomenon of machines replacing labor belongs to what could be called the “disruptive” side of capitalist innovation. Disruptions contribute to novel business models and product development; they seek to make the production, distribution, and allocation of products and services increasingly efficient, and to make the use of such offerings more pleasurable to a greater number of people. Disruptions are typically made possible through the use of digital technologies. By accelerating the pace at which they replace routine work with machine intelligence, advanced economies are swiftly moving from the post-industrial to the auto-industrial era. The post-industrial era was characterized by the expansion of “knowledge work”: mid-tier, mid-skill, university-qualified, information-processing, office-based work. The expansion of such work is over. In London, between 2001 and 2013, 65 percent of library assistant positions, 48 percent of counter clerk positions, and 44 percent of PA positions disappeared.
The alternative to newfound salaried work is capital work. In the major economies of the eighteenth and nineteenth centuries, income accruing to capital (via dividends, rents, profits, interest, royalties, etc.) as a portion of total national income was significant and steady over time. Income generated by capital then declined sharply in the twentieth century (from 1910 onwards). It grew again after 1970. That capital work shrank in the first part of the twentieth century is not surprising. The nineteenth century was the era of liberal capitalism. The first half of the twentieth century was the age of socialism. Socialism despised income from capital while it eulogized income from wage work. The share of national income in major economies echoed this political disposition. After 1970, the portion of income from capital grew again. This in part reflected the decline of socialism, though it equally reflected the way in which socialism's agenda was replaced by regulation. The post-industrial era saw the proliferation of regulatory and process bureaucracies. This was the kind of salaried work that delivered few productivity gains. Accordingly, the income it generated stagnated over time. The tacit response of many individuals was to move away from salaried work. This is reflected in the rise in the numbers and the income of sole traders and partnerships. In 1995, there were 16,423,000 sole proprietor tax returns lodged in the United States. In 2012, the number was 23,426,000, an increase of 42 percent in 17 years; meanwhile, over the same period, the population of the US increased by only 22 percent. Over the same interval, the total business receipts of sole traders grew from $807 billion to $1.3 trillion, outpacing inflation by 7 percent. There is every indication that, in the age of auto-industrialism, capital's share of income will rise further still.

Design Capitalism
Modern capitalism is creative thanks to its double-coding. What I mean by this is that creative systems, modern capitalism included, interpolate opposites. In other words, capitalism is like art. Cubism mimicked three-dimensional space on a two-dimensional canvas. René Magritte painted nighttime street scenes with a daytime sky. Machines are not ironic, ambiguous, incongruous, contradictory, or paradoxical…but human beings are. This is not a deficiency, but rather the great strength of being human. It is how human beings adapt to all manner of environments, places and circumstances. We adapt well because we interpolate opposites. We can see that what is down can also be up, as in an M. C. Escher drawing; or see what is not there, as in Alan Fletcher's design of the Victoria and Albert Museum logo. This is not simply illusionistic; making sense of these kinds of things is central to human cognitive functioning. This cognitive double-coding—to see, hear or feel one thing as another—is what makes the human imagination possible. When human beings imagine something, they conceive one thing in terms of another thing. This is the basis of metaphor, analogy, resemblance, similes, figurative thought, allegories, and all kinds of powerful images and symbols, be they visual, aural, textual, or tactile.


This scenario may change. While low growth is acutely visible in advanced economies today, less visible is the accelerating replacement of labor by machines. Contemporary labor substitution takes the form of automation, computerization, and robotics, which are long-existing technologies; computing and robotics, for example, were both well established by the 1950s. In the 2010s, their labor-replacing power accelerated. This kind of long-run development punctuated by a late-arriving, sharp upswing is true of many major technology impacts. Rarely are they overnight stories. In fact, automation is as old as industrialism. What is interesting about the post-2008 acceleration of automation is its focus: routine work on a large scale, including both repetitive manual work, in the case of robotics, and repetitive white-collar office work, in the case of computerization. Across the medium term of ten to twenty years (2008–2030), the amount of routine work expected to be replaced is very large indeed. Thirty to forty percent of existing jobs will be affected in major ways, suggesting that, in net terms, at least 20 percent of existing work will be completely eliminated.
The process of reducing routine work has been going on beneath the economic surface in advanced economies since 1990. In each succeeding cyclical downturn since then, another portion of the total volume of repetitive work has been replaced by machines. This labor substitution was disguised by the simultaneous growth of government and corporate regulation. Productivity gains through automation were negated by the post-industrial expansion of private and public bureaucracies. Even as governments, the health and education sectors, and corporations became more efficient through automation, they became less efficient through expanding regulation. In the 1980s, socialism as a political idea collapsed in many countries. After that, though, regulation flourished. The aggressive expansion of regulation in advanced economies generated routine work functions. Checking, auditing, reviewing, inspecting, assessing, appraising, and examining became the growth industry of the 1990s and 2000s in major economies. As this was happening, technology was being developed that would eventually automate these functions. Today, online processing of routine customer applications by government is one-thirtieth the cost of performing the same operation in person, face-to-face. Offshoring the application process costs, by comparison, one-fifth of a face-to-face transaction. Consequently, the balance between the multiplication of routine functions and their automation is now shifting decisively in favor of automation.
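The cost ratios quoted above can be made concrete with a small calculation; the 15.00 face-to-face transaction cost below is an arbitrary assumed figure, only the ratios come from the text:

```python
# Transaction-cost ratios from the text, with an assumed face-to-face
# cost of 15.00 arbitrary currency units.

face_to_face = 15.00
online = face_to_face / 30     # "one-thirtieth the cost"
offshored = face_to_face / 5   # "one-fifth of a face-to-face transaction"

print(f"face-to-face: {face_to_face:.2f}")
print(f"offshored:    {offshored:.2f} ({face_to_face / offshored:.0f}x cheaper)")
print(f"online:       {online:.2f} ({face_to_face / online:.0f}x cheaper)")
```

Whatever the base cost, automated online processing comes out six times cheaper than even the offshored process, which is why the balance tips toward automation rather than labor arbitrage.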
In economies like the United States, automating routine work has produced the phenomenon of “job polarization.” Jobs at the top and bottom ends of the work-skill spectrum have grown since 1990, while demand for routine mid-tier, mid-skill work, notably in offices, has declined—and is projected to decline further. The result has been a hollowing-out of middle class jobs with the prospect that many more such occupations will disappear over the next two decades. Since 1990, low-income service work has been less affected by automation. It is often not routine enough to be replaced by the current generation of machines. But with rapid advances in robotics in the last decade (2006–2016) that is changing. Wider and wider swathes of routine work of all kinds, manual and non-manual, are projected to be replaced by machines in the near- and medium-term future. The scale of replacement is massive.
The execution of a routine task involves repetitious and well-defined steps that can easily be mimicked by a computer algorithm. Once the algorithm has been written, the work can be done by a machine or machines. Progressively, software is replacing mid-tier accounting, HR, payroll, tax agent, travel agent, clerical processing, and numerous similar functions. As machine intelligence has improved and sensor technology has grown cheaper and better, advances in autonomous robotic systems are replacing ever-increasing numbers of mobile and manual operations. Autonomous cars, military vehicles, and aircraft, along with domestic, factory, and hospital robot assistants, are appearing. In twenty years' time, road and transport systems will be largely driverless and operator-less. People-less systems of factories, warehousing, transport, and retail lie not too far in the future. For the moment, robots are still poor at carrying out tasks that involve high levels of manual dexterity. With time the dexterity of machines will grow.

Case 4 had an iridociliary melanoma and presented with localized extraocular extension and 360° angle involvement with secondary glaucoma. Enucleation revealed direct angle invasion by malignant spindle cells. In a review of almost 9,000 cases of uveal melanoma, Demirci et al. recorded the incidence of ring melanoma of the anterior chamber angle as 0.2%. All cases had secondary glaucoma, and twelve of the thirteen cases were managed with enucleation. Histopathology revealed involvement of Schlemm's canal in all cases. Metastases developed in 25% of cases at a mean follow-up of 6 years. Ring melanoma was misdiagnosed in four cases of refractory glaucoma as angle recession, iridocorneal endothelial syndrome, and melanocytoma of the ciliary body. In our case, the diagnosis of ring melanoma was well established because of the epibulbar mass and the gonioscopic findings. Ultrasound biomicroscopy clearly demonstrated the ciliary body mass, the iris and angle involvement, and the contiguity with the epibulbar mass through the sclera. Our patient did well, with no evident metastases 18 months after enucleation.
Cases 5 to 8 all presented with neovascular glaucoma secondary to a large ciliochoroidal melanoma with a long-standing exudative retinal detachment resulting in retinal ischemia. Ischemia triggers the release of factors that both inhibit and promote new vessel growth; when promoting factors predominate, neovascularization results. Rubeosis iridis with secondary peripheral anterior synechiae and angle closure was present in all cases.
A recent report described the management with cyclophotocoagulation of secondary glaucoma in eyes presenting initially with uveal melanoma. The aim of the intervention was to preserve vision and relieve pain. The cohort included 27 patients, of whom 14 (52%) died during follow-up, reflecting advanced disease; 4 eyes were enucleated, and 15 (65%) of the 23 preserved eyes had a visual acuity of no light perception (NLP). Cyclophotocoagulation can be a useful palliative option for pain management in advanced intraocular melanoma.
In conclusion, uveal melanoma can mimic any form of secondary glaucoma, and glaucoma is the presenting feature in only 3% of cases. Glaucoma was found to be an independent poor prognostic factor on multivariate analysis, perhaps related to delay in diagnosis and/or mismanagement. Although a rare presentation, intraocular malignancy must be excluded, both clinically and by thorough investigation, before any decision is made for filtering surgery, given that uveal melanoma is a potentially fatal disease.

Circumscribed choroidal hemangioma is a benign, vascular, hamartomatous tumor generally detected at the posterior pole of the ocular fundus as a solitary lesion without systemic findings. The typical characteristics of this tumor include an orange-red color, an echodense appearance on ultrasonography, and early hyperfluorescence on fluorescein and indocyanine green angiography. Shields et al. reported that the visual acuity (VA) at initial presentation was profoundly decreased, to 20/200 or worse, in 54% of patients with circumscribed choroidal hemangioma, and that the visual loss generally is associated with a serous retinal detachment. Persistent macular detachment can cause various morphologic changes in the sensory retina and retinal pigment epithelium (RPE), including cystoid macular changes and RPE atrophy, as in chronic central serous chorioretinopathy (CSC). These changes account for poor visual prognoses.

Materials and methods
We reviewed the digital medical records in the Department of Ophthalmology, Fukushima Medical University Hospital, Fukushima, Japan, and identified six previously untreated eyes of six patients with well-documented extrafoveal circumscribed choroidal hemangioma with visual symptoms due to subfoveal retinal detachment. The patient data included age, gender, VA, duration of visual symptoms, tumor diameter, quadrantic location of the tumor epicenter, proximity to the foveola, color fundus photography, optical coherence tomography (OCT) (Stratus OCT, Model 3000, Carl Zeiss Ophthalmic System, Dublin, CA, and 3D OCT-1000, Topcon, Tokyo, Japan), fundus autofluorescence (FAF) (Heidelberg Retina Angiograph 2 [HRA2], Heidelberg Engineering, Dossenheim, Germany), and fluorescein angiography (FA) (TRC-50IX, Topcon, Tokyo, Japan) at the initial presentation. Ancillary test results were reviewed regarding ultrasonography (A- and B-scans), magnetic resonance imaging (T1- and T2-weighted images), FA (prearterial, arterial, venous, and late phases), and indocyanine green angiography (ICGA) (early, middle, and late phases).

From the 186 consecutive NVAMD patients seen over 3 months, a total of 9 (4.8%) patients (10 eyes) with chronic subretinal fluid were identified for this study (Table 1). The mean patient age was 78 years (range, 55–91). Of these subjects, 3 were male and 6 were female. All patients were white. In addition to neovascular AMD, the only other ocular comorbidity shared by these patients was nuclear sclerotic cataract. Five eyes of 4 patients had nuclear sclerotic cataracts, and the remaining 5 eyes of 5 patients had undergone uncomplicated cataract surgery with posterior chamber intraocular lens placement.
SD-OCT showed a vascularized pigment epithelial detachment (PED) consistent with type 1 neovascularization in all eyes, and baseline fluorescein angiography was likewise consistent with type 1 neovascularization in all eyes (Fig. 1A–D). All 10 eyes had subfoveal subretinal fluid, and 1 eye also had an additional area of subretinal fluid temporal to the fovea. The mean duration of persistent subretinal fluid was 5.2 years (range, 1.3–11.0). Only 1 eye had or developed cystoid macular edema detected by SD-OCT. At least partial preservation of the foveal ellipsoid zone and external limiting membrane was identified in all patients. No eyes had or developed foveal or non-foveal geographic atrophy over the follow-up period.
Reliable measurements of choroidal thickness were attained in all cases. Mean baseline subfoveal choroidal thickness was 285.3 μm (range, 100–573 μm), and mean follow-up subfoveal choroidal thickness was 239.7 μm (range, 83–470 μm). Data for normal age-matched choroidal thickness were obtained from another study, which measured choroidal thickness in 42 eyes of 42 healthy subjects. These subjects had no history of retinal or choroidal pathology, and patients with myopic refractive error greater than 6.0 diopters were excluded. The average subfoveal choroidal thickness in this group of healthy patients was 256.8 ± 75.8 μm (Fig. 2).
All eyes were being treated with intravitreal anti-VEGF therapy to control their disease (Table 2). Eyes had received a mean of 36.5 injections (range, 17–66) of bevacizumab (intravitreal 1.25 mg/0.05 ml), ranibizumab (intravitreal 0.5 mg/0.05 ml), or aflibercept (2.0 mg/0.05 ml). Only a single eye had received verteporfin photodynamic therapy (PDT) prior to the initiation of intravitreal therapy, with a total of 5 treatment sessions, including one session of combined PDT and intravitreal triamcinolone acetonide. At the most recent follow-up, 7 eyes were receiving monthly intravitreal aflibercept and the remaining 3 eyes were receiving monthly intravitreal ranibizumab. Criteria for retreatment included persistent subretinal or intraretinal fluid on OCT, with or without clinically identified hemorrhage.

Jaffe et al. investigated the association of macular morphology with visual acuity in eyes with neovascular AMD treated with intravitreal ranibizumab or bevacizumab for 1 year. Their results indicated that residual intraretinal fluid in the macula, mainly intraretinal fluid involving the fovea, had a significant negative effect on visual acuity, whereas subretinal or sub-retinal pigment epithelial (RPE) fluid did not. Previous studies have also reported that cystoid macular edema has an adverse impact on visual acuity when associated with subfoveal CNV. The etiology of this specific negative effect of intraretinal fluid (but not subretinal or sub-RPE fluid) is unclear at present. In some cases, intraretinal fluid may simply be a manifestation of irreversible photoreceptor damage rather than its cause. Our study further supports the possibility that persistent subretinal fluid may not always have a progressive impact on visual outcomes. Moreover, one subject in this study had intraretinal fluid inferior to the fovea in addition to subretinal fluid and maintained a visual acuity of 20/25 at 5-year follow-up.