Search Results: 1 - 10 of 4976 matches for "Christoph Kaether"
All listed articles are free for downloading (OA Articles)
NOD2-C2 - a novel NOD2 isoform activating NF-κB in a muramyl dipeptide-independent manner
Marcel Kramer, Janne Boeck, Daniela Reichenbach, Christoph Kaether, Stefan Schreiber, Matthias Platzer, Philip Rosenstiel, Klaus Huse
BMC Research Notes, 2010, DOI: 10.1186/1756-0500-3-224
Abstract: Here, we report a novel alternative transcript of the NOD2 gene, which codes for a truncated tandem-CARD-only protein, called NOD2-C2. The transcript isoform is expressed most highly in leukocytes, a natural barrier against pathogen invasion, and is strictly linked to promoter usage as well as predominantly to one allele of the single nucleotide polymorphism rs2067085. In contrast to NOD2-S, a previously identified truncated single-CARD NOD2 isoform, NOD2-C2 is able to activate NF-κB in a dose-dependent manner independently of muramyl dipeptide (MDP). On the other hand, NOD2-C2 competes with MDP's ability to activate the NOD2-driven NF-κB signaling cascade. NOD2 transcripts that include an alternative exon downstream of exon 3 (exon 3a) are the endogenous equivalents of a previously described in vitro construct whose putative protein is composed of only the two N-terminal CARDs. This protein form (NOD2-C2) activates NF-κB independently of an MDP stimulus and is a potential regulator of NOD2 signaling.

The innate immune system uses several molecules that sense pathogen-associated molecular patterns (PAMPs), including the Toll-like, RIG-I (retinoic acid-inducible gene I)-like, and NOD (nucleotide-binding and oligomerization domain)-like receptors (NLRs), to trigger a protective response against intracellular danger signals, e.g. cytoinvasive pathogens. The NLR family consists of more than 20 related members defined by a tripartite structure consisting of: (i) a variable N-terminal protein-protein interaction domain, defined by the caspase recruitment domain (CARD), pyrin domain (PYD), or the baculovirus inhibitor domain (BIR); (ii) a centrally located NOD domain facilitating self-oligomerization during activation [1]; and (iii) a C-terminal leucine-rich repeat (LRR) responsible for binding/detection of PAMPs. The N-terminal effector-binding domains are essential elements of the NLRs to elicit a signal subsequent to NLR activation. In the case of NOD1 and NOD2 (CARD15), the N-
Transit Time and Charge Correlations of Single Photoelectron Events in R7081 PMTs
Florian Kaether, Conradin Langbrandtner
Physics, 2012, DOI: 10.1088/1748-0221/7/09/P09002
Abstract: During the calibration phase of the photomultiplier tubes (PMTs) for the Double Chooz experiment, the PMT response to light of single-photoelectron (SPE) intensity was analysed. With our setup we were able to measure the combined transit-time and charge response of the PMT, and could therefore decompose and analyse all physical effects influencing the PMT signal. Based on this analysis, charge- and time-correlated probability density functions were developed to include the PMT response in a Monte Carlo simulation.
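The abstract does not spell out the functional form of these probability density functions. Purely as an illustration of the idea, the sketch below builds a toy SPE model in which the transit time is a mixture of a prompt Gaussian peak and a small delayed late-pulse component, and the charge is drawn from a Gaussian whose mean depends on which component fired; every parameter value and the mixture form itself are assumptions for illustration, not the Double Chooz parametrization.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy single-photoelectron (SPE) response model (illustrative values only):
# a prompt Gaussian transit-time peak plus a small delayed late-pulse component.
P_LATE = 0.05                    # assumed late-pulse probability
T_PROMPT, S_PROMPT = 0.0, 1.5    # ns: assumed prompt peak position and width
T_LATE, S_LATE = 35.0, 5.0       # ns: assumed late-pulse delay and width
Q_PROMPT, Q_LATE, S_Q = 1.0, 0.8, 0.35  # PE: assumed mean charges and spread

def sample_spe(n):
    """Draw n correlated (transit time, charge) pairs from the toy model."""
    is_late = rng.random(n) < P_LATE
    t = np.where(is_late,
                 rng.normal(T_LATE, S_LATE, n),
                 rng.normal(T_PROMPT, S_PROMPT, n))
    q = np.where(is_late,
                 rng.normal(Q_LATE, S_Q, n),
                 rng.normal(Q_PROMPT, S_Q, n))
    return t, np.clip(q, 0.0, None)  # charge cannot go negative

t, q = sample_spe(100_000)
print(f"mean transit time: {t.mean():.2f} ns, mean charge: {q.mean():.2f} PE")
```

Sampling from such a joint model is exactly how a time- and charge-correlated PMT response would be fed into a Monte Carlo detector simulation.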
Establishing Payment Hubs—Unwind the Spaghetti?
Christoph Markert
American Journal of Industrial and Business Management (AJIBM), 2014, DOI: 10.4236/ajibm.2014.44024
Abstract: Banks and financial services providers face an increasingly competitive business in the retail banking as well as the corporate market. Increasing productivity and efficiency by decreasing operational costs is very often a milestone on the strategic business roadmap of a bank or financial services provider. The payment area is usually seen as a cost-intensive but necessary part of the business and information technology (IT) landscape. Many banks and financial services providers still follow a best-of-breed approach within the payment systems landscape, which results in high operational and maintenance costs because different payment processing platforms serve different business purposes. The establishment of a single, globally centralized payment hub can be the solution not only for ending a heterogeneous payment processing landscape but also for supporting the strategic management roadmap: it decreases system complexity, increases the efficiency of the payment platforms, and thus decreases operational and maintenance IT costs. Furthermore, it can help banks establish a far more flexible technological implementation approach for an entire core banking transformation program. This paper analyzes the challenges and issues banks and financial services providers face when establishing payment hubs in their enterprise system landscape, from a management as well as an IT point of view.
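As a toy illustration of the hub idea described above (not taken from the paper), the sketch below contrasts the best-of-breed pattern of one platform per product with a single hub that routes every payment type through one shared engine; all class and scheme names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Payment:
    scheme: str    # e.g. "SEPA" or "SWIFT" (hypothetical labels)
    amount: float
    currency: str

class PaymentHub:
    """Toy centralized hub: one shared pipeline, pluggable per-scheme adapters."""

    def __init__(self):
        self._adapters = {}

    def register(self, scheme, adapter):
        self._adapters[scheme] = adapter

    def process(self, payment: Payment) -> str:
        # Validation, routing, and booking live here once, instead of being
        # re-implemented in every best-of-breed silo.
        adapter = self._adapters.get(payment.scheme)
        if adapter is None:
            raise ValueError(f"no adapter for scheme {payment.scheme}")
        return adapter(payment)

hub = PaymentHub()
hub.register("SEPA", lambda p: f"SEPA credit transfer: {p.amount} {p.currency}")
hub.register("SWIFT", lambda p: f"SWIFT MT103: {p.amount} {p.currency}")
print(hub.process(Payment("SEPA", 250.0, "EUR")))
```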

Determination of Material Properties like Permittivity and Density with Microwaves
Christoph Sklarczyk
Journal of Modern Physics (JMP), 2014, DOI: 10.4236/jmp.2014.56043
Abstract: With the help of electromagnetic waves in the deci-, centi-, and millimeter-wave range (microwaves), it is possible to determine properties of non-metallic objects, such as permittivity or density, nondestructively and, if necessary, without contact. Depending on the type of test object, the measurement can be carried out either with low-cost narrowband devices and sensors or with more expensive wideband ones. In most cases, obtaining the characteristic value requires calibrating the test device with the help of reference materials. It is advisable to maintain a constant distance (lift-off or stand-off) between the antenna of the sensor and the test object. The paper deals with the characterization of asphalt, especially the determination of its density.
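The calibration against reference materials can be pictured with a minimal sketch (not the authors' procedure): fit a simple linear map from a narrowband sensor reading to permittivity using references of known permittivity measured at a fixed lift-off, then invert it for an unknown sample. All readings and reference values below are invented for illustration.

```python
import numpy as np

# Hypothetical calibration data: sensor readings (e.g. reflection amplitude,
# arbitrary units) for reference materials of known relative permittivity,
# all measured at the same fixed lift-off distance.
readings_ref = np.array([0.12, 0.31, 0.47, 0.66])  # assumed readings
eps_ref = np.array([2.1, 3.5, 4.8, 6.4])           # assumed permittivities

# Least-squares linear calibration: eps ≈ a * reading + b.
a, b = np.polyfit(readings_ref, eps_ref, deg=1)

def permittivity(reading):
    """Map a sensor reading to relative permittivity via the calibration line."""
    return a * reading + b

print(f"estimated permittivity for reading 0.40: {permittivity(0.40):.2f}")
```

In practice the map need not be linear; the point is only that a constant stand-off and known references turn a raw sensor reading into a material property.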

Photon Structure Function Revisited
Christoph Berger
Journal of Modern Physics (JMP), 2015, DOI: 10.4236/jmp.2015.68107
Abstract: The flux of papers from electron-positron colliders containing data on the photon structure function F₂^γ(x, Q²) ended naturally around 2005. It is thus timely to review the theoretical basis and confront the predictions with a summary of the experimental results. The discussion focuses on the increase of the structure function with x (for x away from the boundaries) and its rise with Q², both characteristics being dramatically different from hadronic structure functions. The agreement of the experimental observations with the theoretical calculations is a striking success of QCD. It also allows a new determination of the QCD coupling constant α_s, which corresponds very well to the values quoted in the literature.
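For orientation, the leading-order box-diagram (parton model) expression for a quark of charge e_q and mass m_q, valid for Q² ≫ m_q², displays both features named in the abstract: the roughly linear growth with ln Q² and the rise with x away from the boundaries. This standard textbook formula is added here for context and is not quoted from the review itself.

```latex
% LO box-diagram photon structure function for one quark flavour,
% with W^2 = Q^2 (1 - x)/x and colour factor N_c = 3:
F_2^{\gamma}(x, Q^2) \;=\; N_c\, \frac{\alpha}{\pi}\, e_q^4\, x
  \left\{ \left[ x^2 + (1 - x)^2 \right] \ln \frac{W^2}{m_q^2}
  \;+\; 8x(1 - x) \;-\; 1 \right\}.
```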
Reanalysis of the GALLEX solar neutrino flux and source experiments
F. Kaether, W. Hampel, G. Heusser, J. Kiko, T. Kirsten
Physics, 2010, DOI: 10.1016/j.physletb.2010.01.030
Abstract: After the completion of the gallium solar neutrino experiments at the Laboratori Nazionali del Gran Sasso (GALLEX: 1991-1997; GNO: 1998-2003), we have retrospectively updated the GALLEX results with the help of new technical data that were impossible to acquire, for reasons of principle, before the completion of the low-rate measurement phase (that is, before the end of the GNO solar runs). Subsequent high-rate experiments have allowed the calibration of absolute internal counter efficiencies and of an advanced pulse-shape analysis for counter background discrimination. The updated overall result for GALLEX (only) is (73.4 +7.1 −7.3) SNU. This is 5.3% below the old value of (77.5 +7.5 −7.8) SNU (PLB 447 (1999) 127-133), with a substantially reduced error. A similar reduction is obtained from the reanalysis of the 51Cr neutrino source experiments of 1994/1995.
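The quoted 5.3% shift follows directly from the two central values:

```latex
\frac{77.5\,\mathrm{SNU} - 73.4\,\mathrm{SNU}}{77.5\,\mathrm{SNU}}
\;=\; \frac{4.1}{77.5} \;\approx\; 5.3\% .
```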
Optimal Costly Information Gathering in Public Service Provision
Paul Geertsema, Christoph Schumacher
Theoretical Economics Letters (TEL), 2012, DOI: 10.4236/tel.2012.23060
Abstract: Imperfect information regarding the true needs of recipients is a common problem for governmental or not-for-profit service providers. This can lead to potentially dangerous under-provision or wasteful over-provision of services. We provide a method for optimally improving a service provider's information regarding true client need through costly information gathering. Our contribution is to allow providers to endogenously and optimally choose the intensity of information gathering. Providers do so by specifying the level of correlation between observed and true recipient need, subject to an arbitrary cost function over the specified correlation. We derive the conditions that characterize the choice of optimal correlation for providers with quadratic utility. Using a realistic exponential correlation cost function, we show that there exists a critical value of true client need variance below which it is never optimal to engage in information gathering. Further, for true client need variance above this critical level, the optimal correlation always exceeds 0.5. Our findings have a wide range of policy implications in areas such as health care, social welfare, and even counter-terrorism.
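The abstract does not reproduce the model, but the mechanism it describes can be made concrete with a small sketch: a provider chooses the correlation ρ between observed and true need so as to minimize the residual quadratic loss (1 − ρ²)σ² plus an exponential information-gathering cost c₀(e^{kρ} − 1). Both functional forms and all constants below are assumptions for illustration, not the authors' specification.

```python
import numpy as np

def optimal_rho(sigma2, c0=0.05, k=4.0):
    """Grid-search the correlation rho minimizing residual loss plus info cost.

    Residual quadratic loss: (1 - rho^2) * sigma2     (assumed form)
    Information cost:        c0 * (exp(k * rho) - 1)  (assumed form)
    """
    grid = np.linspace(0.0, 0.999, 2000)
    total = (1.0 - grid**2) * sigma2 + c0 * (np.exp(k * grid) - 1.0)
    return grid[np.argmin(total)]

# Sweep the variance of true client need: below a critical value the optimum
# stays at rho = 0 (no information gathering); above it, the optimal
# correlation jumps to a strictly positive level.
for sigma2 in (0.1, 0.5, 1.0, 2.0, 5.0):
    print(f"sigma^2 = {sigma2:4.1f} -> optimal rho = {optimal_rho(sigma2):.3f}")
```

Even this toy version reproduces the qualitative findings: a critical need variance below which no information is gathered, and an optimal correlation well above 0.5 once gathering pays off.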
Relative importance of different physical processes on upper crustal specific heat flow in the Eifel-Maas region, Central Europe and ramifications for the production of geothermal energy
Lydia Dijkshoorn, Christoph Clauser
Natural Science (NS), 2013, DOI: 10.4236/ns.2013.52A039
Abstract: We study the recent upper crustal heat flow variations caused by long-term physical processes such as paleoclimate, erosion, sedimentation, and mantle plume upwelling. As specific heat flow is a common lower boundary condition in many models of heat and fluid flow in the Earth's crust, we quantify its long-term transient variation caused by paleoclimate, erosion or sedimentation, mantle plume upwelling, and deep groundwater flow. The studied area extends between the Eifel mountains and the Maas river in Central Europe. The total variation due to these processes in our study area amounts to 20 mW/m², about 30% of the present-day specific heat flow in the region.
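One textbook example of such a transient (a conduction estimate, not the paper's numerical model): a step change ΔT in surface temperature applied a time t before present perturbs the surface specific heat flow of a conductive half-space by Δq = kΔT/√(πκt). The sketch below evaluates this for rough post-glacial values; all numbers are assumptions for illustration.

```python
import math

# Surface heat-flow perturbation from a surface-temperature step Delta_T
# applied a time t before present (conductive half-space):
#     Delta_q = k * Delta_T / sqrt(pi * kappa * t)
k = 3.0          # W/(m K): assumed conductivity of upper-crustal rock
kappa = 1.0e-6   # m^2/s:   assumed thermal diffusivity
dT = 8.0         # K:       assumed post-glacial surface warming
t = 10_000 * 365.25 * 24 * 3600   # s: assumed 10 ka since the warming

dq = k * dT / math.sqrt(math.pi * kappa * t)
print(f"paleoclimatic heat-flow perturbation: {dq * 1e3:.0f} mW/m^2")
# ~24 mW/m^2, i.e. the same order as the 20 mW/m^2 total quoted above.
```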

A Methodology to Assess the Safety of Aircraft Operations When Aerodrome Obstacle Standards Cannot Be Met
Hartmut Fricke, Christoph Thiel
Open Journal of Applied Sciences (OJAppS), 2015, DOI: 10.4236/ojapps.2015.52007
Abstract: When aerodrome obstacle standards cannot be met as a result of urban or technical development, an aeronautical study can be carried out, with the permission of EASA in conjunction with ICAO, to prove how aircraft can achieve an equivalent level of safety. Currently, however, no detailed guidance for this procedure exists. This paper proposes such a safety assessment methodology in order to evaluate obstacle clearance violations around airports. The method has already been applied to a safety case at Frankfurt Airport, where a tower 4 km from threshold 25R severely violates the obstacle limitation surfaces. The model data refer to a take-off and landing performance model (TLPM) that precisely computes aircraft trajectories in ground proximity for both standard and engine-out conditions. The generated tracks are used to estimate collision risk incrementally, considering EASA/FAA, EU-OPS, and ICAO clearance criteria. Normal operations are assessed with a probabilistic analysis of empirical take-off/landing track data, generating the local actual navigation performance (ANP) on site. Integrating the ANP yields the collision risk between an aircraft and any obstacle. The obstacle is tested for clearance within a "5-step plan" against all performance requirements for landing climb and take-off climb. The methodology thereby delivers a comprehensive risk picture: the presented safety case for Frankfurt Airport showed an equivalent safety level despite the violation of standards. The collision risk during both normal and degraded-performance operations was found to be within ICAO Collision Risk Model (CRM) limits, requiring only limited risk mitigation measures. The presented work should complement ICAO Doc 9774 Appendix 3.
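The ANP-to-collision-risk step can be caricatured in a few lines (the real ICAO CRM is far more detailed): model the lateral deviation from the nominal track as a Gaussian actual navigation performance distribution and integrate it over the lateral extent of the obstacle. All numbers below are invented for illustration.

```python
from math import erf, sqrt

def gauss_cdf(x, mu=0.0, sigma=1.0):
    """Cumulative distribution function of a normal distribution."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def lateral_hit_probability(obstacle_left, obstacle_right, sigma_anp):
    """P(lateral deviation falls within the obstacle's lateral extent).

    The deviation from the nominal track is modeled as N(0, sigma_anp^2),
    a common simplification of actual navigation performance (ANP).
    """
    return (gauss_cdf(obstacle_right, 0.0, sigma_anp)
            - gauss_cdf(obstacle_left, 0.0, sigma_anp))

# Hypothetical obstacle spanning 280 m to 320 m right of track, ANP sigma 150 m.
p = lateral_hit_probability(280.0, 320.0, 150.0)
print(f"per-passage lateral overlap probability: {p:.2e}")
```

A full assessment would combine such geometric overlap terms with vertical clearance statistics and movement counts, which is what makes the 5-step evaluation against the different clearance criteria necessary.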
The Planck Length and the Constancy of the Speed of Light in Five Dimensional Spacetime Parametrized with Two Time Coordinates
Christoph Köhn
Journal of High Energy Physics, Gravitation and Cosmology (JHEPGC), 2017, DOI: 10.4236/jhepgc.2017.34048
Abstract: In relativity and quantum field theory, the vacuum speed of light is assumed to be constant, and the range of validity of general relativity is determined by the Planck length. However, there has been no convincing theory explaining the constancy of the speed of light. In this paper, we assume a five-dimensional spacetime with three spatial dimensions and two local time coordinates, which gives us a hint about the constancy of the speed of light. By decomposing the five-dimensional spacetime vector into four-dimensional vectors for each time dimension and by minimizing the resulting action, we observe, for a certain class of additional time dimensions, the existence of a minimal length scale, which we identify as the Planck scale. We derive an expression for the speed of light as a function of space and time and observe the constancy of the vacuum speed of light in the observable universe.
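The abstract does not give the explicit metric; purely to fix ideas, a five-dimensional line element with the stated signature (three space and two time directions) could take the form below, with c₁ and c₂ the limiting speeds attached to the two time coordinates and the free-particle action obtained by minimization as usual. This form is an illustrative guess, not the paper's parametrization.

```latex
% Illustrative two-time line element (not the paper's specific ansatz):
ds^2 \;=\; c_1^2\, dt_1^2 \;+\; c_2^2\, dt_2^2 \;-\; dx^2 \;-\; dy^2 \;-\; dz^2 ,
\qquad
S \;=\; -m \int ds .
```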