url
stringlengths
13
4.35k
tag
stringclasses
1 value
text
stringlengths
109
628k
file_path
stringlengths
109
155
dump
stringclasses
96 values
file_size_in_byte
int64
112
630k
line_count
int64
1
3.76k
https://docs.collartoken.com/collarquest/collarquest/gameplay/satchel
code
DRAFT COLLARQUEST DOCS - SUBJECT TO CHANGE WITHOUT NOTICE. The Sparc-e Automated Transportation & Conveyance Humanoid ELement (SATCHEL) will be your robot's remote presence in CollarQuest Ariomont (Land) gameplay. If you have a wallet with a SPARC-E, you will have a SATCHEL; the DAO may vote to charge for SATCHEL mods. SATCHEL - Sample Pros and cons of this humanoid robot: a SATCHEL travels fastest with no SPARC-Es deployed and slowest with all 3 SPARC-Es deployed; SPARC-Es can find hidden items while deployed; and because SPARC-Es need not be deployed all or none, there are 4 levels of options, each with its own cost/benefit trade-off between travel strategies. We will have other modes of travel to let SATCHELs get around the large land play area for quests. SATCHEL - schematic/design development level Join the conversation https://discord.gg/collarcrew
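The deploy-count trade-off described above can be sketched in a few lines. All numbers here are hypothetical — the draft docs only say "fastest with 0 deployed, slowest with all 3" — so treat this as an illustration of the 4 strategy levels, not actual game balance.

```python
# Hypothetical model of the SATCHEL deployment trade-off: travel speed
# drops with each SPARC-E deployed, while the chance of spotting hidden
# items rises. The multipliers are invented for illustration.

def satchel_travel_speed(deployed: int, base_speed: float = 10.0) -> float:
    """Speed falls with each SPARC-E deployed (0 = fastest, 3 = slowest)."""
    if not 0 <= deployed <= 3:
        raise ValueError("a SATCHEL carries at most 3 SPARC-Es")
    return base_speed * (1 - 0.2 * deployed)

def hidden_item_chance(deployed: int) -> float:
    """Only deployed SPARC-Es can find hidden items."""
    return 0.15 * deployed

# The 4 strategy levels mentioned above:
for n in range(4):
    print(f"{n} deployed: speed {satchel_travel_speed(n):.1f}, "
          f"hidden-item chance {hidden_item_chance(n):.0%}")
```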
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476442.30/warc/CC-MAIN-20240304101406-20240304131406-00144.warc.gz
CC-MAIN-2024-10
878
6
https://www.simula.no/people/magnej?f%5B0%5D=biblio_year%3A0&f%5B1%5D=field_research_area%3A6&f%5B2%5D=field_publication_state%3A18&f%5B3%5D=biblio_year%3A2010&mefibs-form-filter-exposed-sort-sort_by=biblio_primary_author_name
code
Simula in the media Dr. Scient. in Informatics Institute of Informatics, University of Oslo Norwegian digitisation council (Digitaliseringsrådet) The Norwegian Government (Regjeringen) Impact paper award (most influential paper) Journal of Systems and Software 83 (2010): 1039-1050. Information and Software Technology 52 (2010): 506-516. Journal of Systems and Software 83 (2010): 29-36. In Keynote at: Norsk informatikk-konferanse (NIK), 2010. In Invited presentation at the Omar Dengo fundacion seminar: Research, Software Development, and Strategic Investments, Costa Rica, 2010. In Presentation at CIO Forum (IDG), 2010. In Invited presentation at the Omar Dengo fundacion seminar: Software Engineering and Emerging Markets, Costa Rica, 2010. In Presentation at Prepare's seminar for the IT-industry (and at an internal Bearing Point seminar), 2010. In Presentation at TrygVesta seminar, 2010. In Seminar at University of Auckland, 2010.
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703565541.79/warc/CC-MAIN-20210125092143-20210125122143-00482.warc.gz
CC-MAIN-2021-04
962
18
https://github.com/CSSEGISandData/COVID-19/commit/0cea9b2179306618bd7917798819ebf6608d67de
code
I think there are some issues with this update: Also, there is a duplicate entry for Taiwan as well. The updated version includes an entry "Taiwan, Taipei and environs" which is inconsistent with the previous records, which were using "Taiwan, Taiwan". The country renames produced bad data. We have two records: ,Republic of Korea,36.0,128.0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,54 The old name doesn't have the latest data and the new one has all zeros except the last. The same goes for the other renamed records. Thank you for adding state-level US data. To add to what @aatishb said: all US cities are missing data for 3/10/20. This inconsistency might be coming from the latest daily report. Might be changing sources. There's an issue open for this: #405 US is still 605...... I'm assuming the Hong Kong rename has obliterated Hong Kong from the "Total Confirmed" section of the website? It looks like there's some inconsistency between state-level and county-level data in the US. Take Washington as an example - the new 'Washington' state classification shows 267 cases for 10-Mar, but from county-level data I only get to 162. In addition to @JBrooks137's comments, having double data for state and city/county is confusing. No other countries have double data like this. Please review. First, let me say that I really, really appreciate this data set. I'm sure I represent a large number of academics and software/data-savvy people when I say that this has been an invaluable resource. That said, I think it behooves you to maintain as much forwards and backwards compatibility as possible. If the structure of the data is going to change, it's much better to give some advance warning and preserve backwards compatibility in existing files. 
If a format change is absolutely necessary, you can mark the old file deprecated and create a new file with a new name and new notation. The current COVID-19/csse_covid_19_data/csse_covid_19_time_series/time_series_19*.csv files are severely broken going both backwards and forwards. Please don't break the who_covid_19_situation_reports/who_covid_19_sit_rep_time_series/who_covid_19_sit_rep_time_series.csv without considering some of the steps I've outlined above. Thanks! @danslee You have a really great point. But given the fact that the team working on this might not have a strong CS/Data Science background, I'm not sure whether they have the capacity to maintain this repo with these compatibilities. Having said that -- I think a better way is to either help them with this repository or create a fork/separate repo to support better usability. I am not down-playing their contribution; I do think they have provided us a great resource, but I think people have different priorities and I would really like them to focus on the correctness and speedy updates of the numbers. Just my two cents. @eugene-yang I'd love to help out, but the only data I can see is what they have checked into the repo. I am working on some scripts which will unify the time-series csv files with unified naming schemes and such, but have run into what are clearly some doubly entered data points around 2020-03-10 and 03-11, which brings the integrity of the entire file into question. Hopefully, the morning will bring some order to the data chaos.
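The two data problems the commenters describe can be checked mechanically. This is a sketch, assuming the JHU-style column layout visible in the quoted row (Province/State, Country/Region, Lat, Long, then one column per date); the function name is my own.

```python
import csv
from collections import Counter

def find_suspect_rows(lines):
    """Flag two of the problems reported above in a JHU-style time
    series: duplicate (Province/State, Country/Region) keys, and rows
    that are all zeros except the final date column -- the signature of
    a renamed region that lost its history."""
    reader = csv.reader(lines)
    next(reader)                      # skip the header row
    seen, ghosts = Counter(), []
    for row in reader:
        key = (row[0], row[1])        # Province/State, Country/Region
        seen[key] += 1
        counts = row[4:]              # date columns follow Lat/Long
        if counts and all(c in ("", "0") for c in counts[:-1]) \
                and counts[-1] not in ("", "0"):
            ghosts.append(key)
    dupes = [k for k, n in seen.items() if n > 1]
    return dupes, ghosts
```

Running it over the time-series file would surface both the "Republic of Korea" rename ghost and the duplicated Taiwan entries without eyeballing thousands of rows.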
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572089.53/warc/CC-MAIN-20220814234405-20220815024405-00787.warc.gz
CC-MAIN-2022-33
3,505
25
https://github.com/kubedb/issues/issues/671
code
Helm Chart catalog fails to be installed - validators.kubedb.com/v1alpha1: the server is currently unable to handle the request #671 The following installation of the kubedb chart is failing on k8s or ocp and reports an error Helm version used: v2.15.0 REMARK: We don't have this issue using version Digging slightly further and comparing to #574, I am seeing Though of course that just implies the pod is never going ready, right? And there is nothing really interesting there, log-wise. And indeed the probes are failing with TLS errors :( Still there with: Kubernetes on AKS: The only solution for now is to kill the cluster and make a new one. They should definitely figure out how to make the healthchecks reliable; but for the record, they do provide a way to scrub stuff without resorting to a cluster recreate: This will leave the helm releases behind (but the resources gone) - so tack on a We are having the same problem. Running K8s 1.16.2 and kubedb version 0.12.0 when trying to change the deployment for redis: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference). The logging of the failing operator shows the following: goroutine 1268 [running]:
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00549.warc.gz
CC-MAIN-2020-05
1,367
19
https://training.incf.org/search?f%5B0%5D=difficulty_level%3Aintermediate&f%5B1%5D=lesson_type%3A26&f%5B2%5D=lesson_type%3A30&f%5B3%5D=topics%3A25&f%5B4%5D=topics%3A36&f%5B5%5D=topics%3A38&f%5B6%5D=topics%3A47&f%5B7%5D=topics%3A57&f%5B8%5D=topics%3A60&f%5B9%5D=topics%3A66&f%5B10%5D=topics%3A73&f%5B11%5D=topics%3A196&f%5B12%5D=topics%3A248&f%5B13%5D=topics%3A283&amp%3Bf%5B1%5D=topics%3A61
code
In this lesson, users will learn about human brain signals as measured by electroencephalography (EEG), as well as associated neural signatures such as steady state visually evoked potentials (SSVEPs) and alpha oscillations. This talk gives an overview of the Human Brain Project, a 10-year endeavour putting in place a cutting-edge research infrastructure that will allow scientific and industrial researchers to advance our knowledge in the fields of neuroscience, computing, and brain-related medicine. This lecture gives an introduction to the European Academy of Neurology, its recent achievements and ambitions. This lecture discusses the importance of and need for data sharing in clinical neuroscience. This lecture gives insights into the Medical Informatics Platform's current and future data privacy model. This lecture gives an overview of the European Health Dataspace. This is a continuation of the talk on the cellular mechanisms of neuronal communication, this time at the level of brain microcircuits and associated global signals like those measurable by electroencephalography (EEG). This lecture also discusses EEG biomarkers in mental health disorders, and how those cortical signatures may be simulated digitally. This lesson describes the principles underlying functional magnetic resonance imaging (fMRI), diffusion-weighted imaging (DWI), tractography, and parcellation. These tools and concepts are explained in a broader context of neural connectivity and mental health. Introduction to the Brain Imaging Data Structure (BIDS): a standard for organizing human neuroimaging datasets. This lecture was part of the 2018 Neurohackademy, a 2-week hands-on summer institute in neuroimaging and data science held at the University of Washington eScience Institute. Tutorial on collaborating with Git and GitHub. 
This tutorial was part of the 2019 Neurohackademy, a 2-week hands-on summer institute in neuroimaging and data science held at the University of Washington eScience Institute. This lecture and tutorial focuses on measuring human functional brain networks. The lecture and tutorial were part of the 2019 Neurohackademy, a 2-week hands-on summer institute in neuroimaging and data science held at the University of Washington eScience Institute. Next generation science with Jupyter. This lecture was part of the 2019 Neurohackademy, a 2-week hands-on summer institute in neuroimaging and data science held at the University of Washington eScience Institute. This lecture introduces you to the basics of the Amazon Web Services public cloud. It covers the fundamentals of cloud computing and goes through both the motivation and the process involved in moving your research computing to the cloud. This lecture was part of the 2018 Neurohackademy, a 2-week hands-on summer institute in neuroimaging and data science held at the University of Washington eScience Institute. This lecture on generating TVB ready imaging data by Paul Triebkorn is part of the TVB Node 10 series, a 4 day workshop dedicated to learning about The Virtual Brain, brain imaging, brain simulation, personalised brain models, TVB use cases, etc. TVB is a full brain simulation platform. This lecture focuses on ontologies for clinical neurosciences. This talk presents state-of-the-art methods for ensuring data privacy with a particular focus on medical data sharing across multiple organizations. This presentation discusses the impact of data sharing in stroke. This talk presents an overview of the potential for data federation in stroke research. The Medical Informatics Platform (MIP) is a platform providing federated analytics for diagnosis and research in clinical neuroscience research. 
The federated analytics is possible thanks to a distributed engine that executes computations and transfers information between the members of the federation (hospital nodes). In this talk the speaker describes the process of designing and implementing new analytical tools, i.e. statistical and machine learning algorithms. Mr. Sakellariou further describes the environment in which these federated algorithms run, the challenges and the available tools, the principles that guide its design, and the general methodology followed for each new algorithm. One of the most important challenges faced is designing these tools in a way that does not compromise the privacy of the clinical data involved. The speaker shows how to address the main questions when designing such algorithms: how to decompose and distribute the computations, and what kind of information to exchange between nodes, in order to comply with the privacy constraint mentioned above. Finally, the subject of validating these federated algorithms is briefly touched upon. This lecture discusses risk-based anonymization approaches for medical research.
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224650264.9/warc/CC-MAIN-20230604193207-20230604223207-00638.warc.gz
CC-MAIN-2023-23
4,841
20
http://docs.oracle.com/cd/E11223_01/doc.910/e11210/lof.htm
code
List of Figures 1-1 Data Flow During Full Reconciliation 1-2 Data Flow During Incremental Reconciliation 2-1 Dialog Box Displayed on Running the SAP JCo Test 4-1 Attribute Details for Attribute Mapping B-1 Part of a Sample IDoc
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170253.67/warc/CC-MAIN-20170219104610-00559-ip-10-171-10-108.ec2.internal.warc.gz
CC-MAIN-2017-09
323
7
https://forums.unrealengine.com/t/why-cant-i-get-my-game-below-15mb-its-48mb-but-my-content-is-1-98mb/90956
code
I’ve followed the documentation to a T. I’ve disabled plugins, unchecked lighting, specified maps, don’t include engine content in my build, etc. But my game packages out to be 48MB and takes up more than that once it’s been installed. I’m aware that by changing from Development to Shipping I’ll be removing about “13%” of my package size, but I’m just building a 2D game with very basic pixel art. Why is my game so large? I’m not new to 2D, and in my experience 12MB is what I ended up with in a full polished game with music, sound effects, 48+ pixel characters with several animations, full HD menu backgrounds, a couple of fonts, etc. This is barely more than a prototype. I’ve got a single texture that I’ve turned into a sprite. That’s my content. What’s going on? Edit: Installed size is 152MB and that’s using the Android (ATC) build out (which I believe means that I’m only packaging and including ATC textures). I’m currently packaging out a nativized version of my game. It took ages as expected (JUST finished, humorously enough) and when it installs, assuming it doesn’t crash somehow, I’ll see what changing it to a nativized non-OBB apk does. If the nativization doesn’t crash it, I’m pretty sure the huge file size might. But I figure if the Match 3 project could fit into an .apk, I’ll be fine. Edit 2: 164MB installed; package size increased by about 2MB as well.
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780053493.41/warc/CC-MAIN-20210916094919-20210916124919-00156.warc.gz
CC-MAIN-2021-39
1,418
5
https://help.safariportal.app/hc/en-us/articles/360061026354-What-happens-after-I-publish-a-content-page-
code
Once you publish a content page, it is sent over to the Safari Portal team for approval. You should receive a message from us with an approval or a request for edits within 48 hours. While the page is being approved, you cannot make new changes to it. Once the content has been approved, it becomes public and is live site-wide. Please keep in mind: future changes made to a page do not have to go through the same approval process, so changes can be seen in real time across the entire platform.
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949689.58/warc/CC-MAIN-20230331210803-20230401000803-00179.warc.gz
CC-MAIN-2023-14
505
2
https://github.com/dvuckovic/BusPlus
code
How it works It works by sending USSD (MMI) queries to the info service. The app streamlines these queries by organizing stations and the lines that use them along the way. Of course, vehicle locations are coarse and the system can only show how many bus stops away from you (your current bus stop) a vehicle is. Every bus stop has a unique code which is used as an input for the service. This service is available only on three local mobile networks in Serbia, and is charged accordingly (1.8 RSD per query since September 16th 2012). Unfortunately, there is still no USSD API for Android in the works, but I managed to execute the queries by raising an Intent to the system dialer app. The BusPlus app has three views and can be used to make queries in several ways: - manual code entry (if you know it :) - search by station name (not easy, because most station names are duplicated for both directions) - map of the city with your current location and nearby stations - ability to plot station locations on a map - list of favorites with manual entry and entry from database The app also supports two locales (Serbian and English), which can be switched in the app's Settings menu. Several new features have been squeezed in over time (mostly from comments on the Play Store): - more complete station database - more precise locations for most stations - new line database (with stations in both directions) - direct launcher shortcuts - custom tab positions - satellite map view - renaming of favorites - custom color launcher icons - support for both Cupcake (1.5) and ICS/JB (4.x) - option to move the app to SD card (2.2+) - home screen widget with balance - dismissible charge warning before each query - suburban stations & lines - Tasker integration - ICS API Take a look at the app in action: The app can be downloaded from the Google Play Store as a free app. Feel free to visit its page if you want to check it out before downloading: The link is safe for Android devices too, because it can be opened in the Market app. 
This project was open to optional donations, but due to an absolute lack of interest it isn't anymore. I will continue to support and update the app in my free time. The app has been written in Java for Android, and compiled using the official Android SDK. Note: Since this app uses Google Maps MapView, you will need your own API key (just change it in <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="fill_parent" android:layout_height="fill_parent" android:orientation="vertical" > <com.dvuckovic.busplus.MyMapView android:id="@+id/mapview" android:layout_width="fill_parent" android:layout_height="0dip" android:layout_weight="1" android:apiKey="-----------YOUR_API_KEY_HERE-----------" android:clickable="true" /> ... Source code is released under the WTFPL license.
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125949489.63/warc/CC-MAIN-20180427060505-20180427080505-00142.warc.gz
CC-MAIN-2018-17
2,836
34
http://dot-bit.org/Tasks-DNSTools
code
From Namecoin DNS Register and manage full configuration of your domains (ns forwarding, subdomains, ip, ipv6, alias, etc) - NamecoinToBind is a php script that generates bind zones for .bit domains from the namecoin blockchain by using the name_scan rpc command. - Status : working - Last updated: 2011-09-19 - Percentage of completion: 50% - Sources : https://github.com/khalahan/NamecoinToBind - Name: khalahan - Supports only initial syntax, not the new domain spec. Read a little more in the documentation. - A new (rewritten) version that will support the new domain spec is on the road but far from finished (better resource management and cache usage). Contact me if you are interested. - A running namecoin accepting rpc commands - A bind daemon
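The core transformation such a generator performs can be sketched briefly. This is a hedged illustration, not NamecoinToBind's actual PHP code: the record shape mimics what a `name_scan` RPC call returns for a `d/` name, and only the simple `ip`/`ipv6`/`map` fields of the early domain syntax are handled, mirroring the "initial syntax only" limitation noted above.

```python
import json

def namecoin_to_bind(record):
    """Turn one name_scan-style record, e.g.
    {"name": "d/example", "value": '{"ip": "1.2.3.4"}'},
    into bind zone lines for the .bit TLD. Nested subdomain specs and
    the rest of the domain spec are deliberately ignored in this sketch."""
    domain = record["name"].removeprefix("d/") + ".bit."
    value = json.loads(record["value"])
    lines = []
    if "ip" in value:
        lines.append(f"{domain}\tIN\tA\t{value['ip']}")
    if "ipv6" in value:
        lines.append(f"{domain}\tIN\tAAAA\t{value['ipv6']}")
    # "map" holds subdomains; treating each value as a plain IPv4 address
    # is a simplification (the real spec allows nested objects).
    for sub, ip in value.get("map", {}).items():
        lines.append(f"{sub}.{domain}\tIN\tA\t{ip}")
    return lines
```

A real generator would loop over every record returned by the `name_scan` RPC command and append the lines to a generated zone file that the bind daemon then serves.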
s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398446300.49/warc/CC-MAIN-20151124205406-00272-ip-10-71-132-137.ec2.internal.warc.gz
CC-MAIN-2015-48
754
12
https://forums.opensuse.org/t/13-2-new-install-cant-connect-to-internet-sheeesh/106375
code
Just installed SUSE 13.2 64-bit, with the KDE front end. I have a Huawei Mobile Broadband dongle, model 173 I think. It works fine in Mint 15, 17, Windows XP, Vista, and Win7. In SUSE I enter all of the fields in the network manager (GSM – Sri Lanka – Mobitel … ) but when I’m finished the device doesn’t show up in the list. I tried a different dongle, an earlier model E156. This time Dolphin saw the dongle and its extra memory, but still it doesn’t appear in network manager. Had a similar problem with Ubuntu 14.04 and Mint 17.1 after an update – which took away GSM connectivity. I had to revert to another Mint 17 installation in order to connect. The solution in Mint 17.1 was to download and install some legacy modem manager. Using it I have to go through 4 or more screens entering data just to make a connection. So I quit 17.1 entirely. Could somebody explain an easy way to get SUSE to connect to a GSM network? --so I can finish the setup and start working. I thought SUSE was a mature product … but it’s broken out of the box!!! These are very common GSM dongles in Asia, where we have few hardwired connections >:(
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100650.21/warc/CC-MAIN-20231207054219-20231207084219-00763.warc.gz
CC-MAIN-2023-50
1,148
6
https://softwareengineering.stackexchange.com/questions/284193/truncating-html-content-at-specific-content-blocks
code
I have HTML content in my DB and I would like to present a list of these individual items, but truncate each of them so they're not fully displayed. I would like to keep truncated items human-meaningful, so I don't want to just cut content at some specific character index; as my content is HTML, that index may fall in the middle of an HTML tag, for what it's worth. I would like to truncate at specific meaningful breakpoints like the ends of block elements - be it paragraphs, block quotes, code blocks, lists, list items, etc. This gives the reader a semantically complete excerpt of the whole content. My implementation should therefore be called like: string truncateNear(string HTMLContent, int closestToIndex); where I provide some proximity index near which the function should search for the closest HTML block end and return the content up to it. How would you go about truncating HTML content in such a parsable way that it would: - output valid HTML - scale well performance-wise - maybe allow searching for the first image in the content and placing it right after the text in the trimmed content Would it be better to transform the HTML to some other format first and then manipulate that instead, if it were faster and easier to manipulate?
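One answer to the truncateNear signature sketched in the question: re-serialise the HTML with a streaming parser and remember every output offset at which no tags are open, since those are exactly the points where a cut leaves the fragment well-formed. This is a minimal stdlib sketch (function and class names are mine); it assumes reasonably well-nested input and normalises character references, which a production version would preserve.

```python
from html.parser import HTMLParser

VOID = {"br", "hr", "img", "input", "link", "meta"}  # tags with no close

class _Truncator(HTMLParser):
    """Re-serialise the input, recording every output offset where all
    opened tags have been closed (ends of top-level paragraphs, lists,
    block quotes, ...). Cutting at such an offset yields valid HTML."""

    def __init__(self):
        super().__init__()
        self.out = []          # serialised pieces
        self.stack = []        # currently open tags
        self.cut_points = [0]  # safe offsets to cut at

    def _emit(self, piece):
        self.out.append(piece)
        if not self.stack:     # everything opened has been closed
            self.cut_points.append(sum(len(p) for p in self.out))

    def handle_starttag(self, tag, attrs):
        if tag not in VOID:
            self.stack.append(tag)
        attr_s = "".join(f' {k}="{v}"' if v is not None else f" {k}"
                         for k, v in attrs)
        self.out.append(f"<{tag}{attr_s}>")

    def handle_endtag(self, tag):
        if self.stack and self.stack[-1] == tag:
            self.stack.pop()
        self._emit(f"</{tag}>")

    def handle_data(self, data):
        self._emit(data)

def truncate_near(html_content: str, closest_to_index: int) -> str:
    """truncateNear(HTMLContent, closestToIndex) from the question:
    cut at the safe block boundary nearest the requested index."""
    parser = _Truncator()
    parser.feed(html_content)
    parser.close()
    cut = min(parser.cut_points, key=lambda c: abs(c - closest_to_index))
    return "".join(parser.out)[:cut]
```

For example, `truncate_near("<p>one</p><p>two</p><p>three</p>", 12)` keeps only the first paragraph, because the end of `</p>` at offset 10 is the block boundary nearest index 12. A single pass over the input also answers the performance concern.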
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818067.32/warc/CC-MAIN-20240421225303-20240422015303-00456.warc.gz
CC-MAIN-2024-18
1,257
10
http://security.stackexchange.com/questions/tagged/corporate-policy?sort=unanswered&pagesize=15
code
My compliance manager recently told me that Lync IMs should be treated as e-mail for compliance purposes. This also made me realize that our other email policies (encrypted device, pin required, ... From what I can tell, Apple Watch apps act like a remote control to a nearby iPhone using Bluetooth or BLE. Conversely, Android watches have the ability to run full applications, and therefore ... I recently helped a new person at our company. We use N-Able to connect to machines, which has prompt/notification turned off by default. I've never considered it extensively, though it crossed my ... I want to blacklist all external storage devices and only allow a specific brand of device such as SanDisk. I had blacklisted the external storage devices by using USBSTOR* and whitelisted all the ...
s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375093899.18/warc/CC-MAIN-20150627031813-00092-ip-10-179-60-89.ec2.internal.warc.gz
CC-MAIN-2015-27
798
4
https://researchwith.njit.edu/en/publications/minimum-effort-driven-dynamic-faceted-search-in-structured-databa
code
In this paper, we propose minimum-effort driven navigational techniques for enterprise database systems based on the faceted search paradigm. Our proposed techniques dynamically suggest facets for drilling down into the database such that the cost of navigation is minimized. At every step, the system asks the user a question or a set of questions on different facets and depending on the user response, dynamically fetches the next most promising set of facets, and the process repeats. Facets are selected based on their ability to rapidly drill down to the most promising tuples, as well as on the ability of the user to provide desired values for them. Our facet selection algorithms also work in conjunction with any ranked retrieval model where a ranking function imposes a bias over the user preferences for the selected tuples. Our methods are principled as well as efficient, and our experimental study validates their effectiveness on several application scenarios.
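The facet-selection loop the abstract describes can be illustrated with a toy greedy heuristic. To be clear, this is not the paper's actual cost model — the names and the plain-entropy criterion are my own simplification; the paper additionally weighs the user's ability to answer each facet and a ranking-induced bias over tuples.

```python
import math
from collections import Counter

def best_facet(tuples, facets):
    """Greedy step in the spirit of the paper: suggest the facet
    (attribute) whose value distribution best splits the remaining
    tuples, i.e. has maximal entropy, so each user answer eliminates
    as many tuples as possible."""
    def entropy(attr):
        counts = Counter(t[attr] for t in tuples)
        n = len(tuples)
        return -sum(c / n * math.log2(c / n) for c in counts.values())
    return max(facets, key=entropy)

def drill_down(tuples, attr, answer):
    """Keep only the tuples consistent with the user's answer,
    then the process repeats with the remaining facets."""
    return [t for t in tuples if t[attr] == answer]
```

Here a facet on which every tuple agrees has entropy 0 (asking about it teaches nothing), while a facet that partitions the tuples evenly scores highest.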
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817398.21/warc/CC-MAIN-20240419110125-20240419140125-00293.warc.gz
CC-MAIN-2024-18
976
1
http://best-binary-try-options-platform.pw/option-pricing-models-and-volatility-using-excel-vba-cd-files-9062.php
code
Option pricing models and volatility using excel-vba cd files First make sure that IQFeed is open. You can either download daily or intraday data. Note the period parameter. It can take any of the following values: QuantTools makes the process of managing and storing tick market data easy. You just set up storage parameters and you are ready to go. The parameters are where, since what date, and which symbols you would like to be stored. At any time you can add more symbols, and if they are not present in storage, QuantTools tries to get the data from the specified start date. The code below will save the data in the following directory: There is one sub folder per instrument and the data is saved in. You can also store data between specific dates. In the example below, I first retrieve the data stored above, then select the first price observations and finally draw the chart. Two things to notice: You can refer to the Examples section on the QuantTools website. Overall I find the package extremely useful and well documented. The only missing bit is the live feed between R and IQFeed, which would make the package a real end-to-end solution. A few months ago a reader pointed me to this new way of connecting R and Excel. At the time of writing the current version of BERT is 1. Ultimately I have a single Excel file gathering all the necessary tasks to manage my portfolio: In the next sections I present the prerequisites to develop such an approach and a step by step guide that explains how BERT could be used for simply passing data from R to Excel with minimal VBA code. Once the installation has completed you should have a new Add-Ins menu in Excel with the buttons as shown below. This is what we want to retrieve in Excel. Save this in a file called myRCode.R (any other name is fine) in a directory of your choice. In this file paste the following code. Then save and close the file functions. Create and save a file called myFile. 
This is a macro-enabled file that you save in the directory of your choice. Once the file is saved, close it. Once the file is open, paste the below code. You should see something like this. Paste the code below in the newly created module. You should see something like the below appearing. From my perspective, the interest of such an approach is the ability to glue together R and Excel, obviously, but also to include, via XML and batch, pieces of code from Python, SQL and more. This is exactly what I needed. Making the most of the out of sample data August 19, , 9: Then a comparison of the in and out of sample data helps to decide whether the model is robust enough. This post aims at going a step further and provides a statistical method to decide whether the out of sample data is in line with what was created in sample. There is a non-parametric statistical test that does exactly this: Using the Kruskal-Wallis Test, we can decide whether the population distributions are identical without assuming them to follow the normal distribution. Other tests of the same nature exist that could fit into this framework. Then I tested each in sample subset against the out of sample data and I recorded the p-values. This process creates not a single p-value for the Kruskal-Wallis test but a distribution of p-values, making the analysis more robust. As usual, what is presented in this post is a toy example that only scratches the surface of the problem and should be tailored to individual needs. As usual with those things, just a kind reminder: This is a very first version of the project so do not expect perfection, but hopefully it will get better over time. Please report any comment, suggestion, bug etc… to: Doing quantitative research implies a lot of data crunching and one needs clean and reliable data to achieve this. What is really needed is clean data that is easily accessible even without an internet connection. 
The most efficient way to do this for me has been to maintain a set of csv files. I have one csv file per instrument and each file is named after the instrument it contains. The reason I do so is twofold: Simple yet very efficient so far. The process is summarized in the chart below. In everything that follows, I assume that data is coming from Yahoo. The code will have to be amended for data from Google, Quandl etc… In addition I present the process of updating daily price data. The code below is placed on a. Note that I added an output file updateLog. The process above is extremely simple because it only describes how to update daily price data.
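The Kruskal-Wallis comparison of in-sample and out-of-sample data mentioned above is small enough to write out. This is a stdlib-only sketch of the H statistic (the tie-correction factor is omitted for brevity; in practice one would just call the test from an R or scipy library as the post does).

```python
from itertools import chain

def kruskal_wallis_h(*groups):
    """H statistic of the Kruskal-Wallis test: rank all observations
    together, then compare the groups' mean ranks. Ties receive the
    average rank. Compare H against a chi-squared quantile with
    k - 1 degrees of freedom to get a p-value."""
    pooled = sorted(chain(*groups))
    # average rank for each distinct value (handles ties)
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        avg = (i + 1 + j) / 2          # mean of ranks i+1 .. j
        for k in range(i, j):
            ranks[pooled[k]] = avg
        i = j
    n = len(pooled)
    rank_term = sum(sum(ranks[x] for x in g) ** 2 / len(g) for g in groups)
    return 12 / (n * (n + 1)) * rank_term - 3 * (n + 1)
```

Identically distributed samples give H near 0, while disjoint samples like (1, 2, 3) against (4, 5, 6) give a large H — which is exactly how comparing each in-sample subset against the out-of-sample data yields the distribution of p-values described in the post.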
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376824115.18/warc/CC-MAIN-20181212181507-20181212203007-00476.warc.gz
CC-MAIN-2018-51
4,927
16
https://bugs.ruby-lang.org/users/16806
code
alanwu (Alan Wu) - Login: alanwu - Registered on: 10/28/2018 - Last connection: 05/02/2021 - 12:46 AM Ruby master Bug #17843: Ruby on Rails error[BUG] Segmentation fault at 0x0000000000000110 ruby 3.0.1p64 (2021-04-05 revision 0fb782ee38) [x86_64-darwin15] (#42110) - Apologies about my misleading statement about support for the OS. It doesn't seem to be receiving security updates an... - 11:11 PM Ruby master Bug #17843: Ruby on Rails error[BUG] Segmentation fault at 0x0000000000000110 ruby 3.0.1p64 (2021-04-05 revision 0fb782ee38) [x86_64-darwin15] (#42110) - OS X El Capitan is no longer supported by Apple. I'm not sure about Ruby's policy for trying to support OSes that the... - 02:17 AM Ruby master Revision dee58d7a (git): Add back checks for empty kw splat with tests (#4405) - This reverts commit a224ce8150f2bc687cf79eb415c931d87a4cd247. Turns out the checks are needed to handle splatting an ... - 11:37 PM Ruby master Revision a224ce81 (git): Remove unnecessary checks for empty kw splat - These two checks are surrounded by an if that ensures the call site is not a kw splat call site. - 05:33 PM Ruby master Bug #17822 (Open): Inconsistent visibility behavior with refinements - Running the following script, case 0 raises `NoMethodError` for privacy violation, while all other cases print `:ref... - 11:26 PM Ruby master Revision eb4e3206 (git): Man page: correct defaults for RUBY_THREAD_VM_STACK_SIZE - See RUBY_VM_THREAD_VM_STACK_SIZE in vm_core.h. - 03:25 AM Ruby master Bug #17806 (Open): Bad interaction between method cache, prepend, and refinements - I'm running into a couple of issues with Ruby 3's new method cache and The first script raises `Syst... - 12:15 AM Ruby master Bug #17573: Crashes in profiling tools when signals arrive in non-Ruby threads - > Ah, OK. This issue doesn't expose on recent Linux system. I can somewhat reliably repro by running `ruby --jit r... 
- 07:03 PM Ruby master Revision b9908ea6 (git): Make a few functions static - 11:26 PM Ruby master Bug #17540: A segfault due to Clang/LLVM optimization on 32-bit ARM Linux - If this fixes the Ruby crash too, maybe we should put `-fno-strict-aliasing` as a default compilation option like the...
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988753.97/warc/CC-MAIN-20210506114045-20210506144045-00443.warc.gz
CC-MAIN-2021-21
2,224
31
https://discuss.fogcreek.com/CityDesk/3533.html
code
Req: Regular expression support in CityScript I am a new user, evaluating CityDesk. I have server-side scripts in place to scan a bunch of files and build a dynamic menu. I was looking at CityScript and I can do it using a keyword field. What would really be nice is if I could use perl-style regular expressions for the various conditions, like file or folder name. Adriaan van den Brand Why not use keywords: That is what I am doing now, using the keyword field. I just want perl-style regular expressions. It is a lot more flexible. On a completely unrelated topic, the perl regex engine is being completely revamped. VBScript includes regular expressions Much as I hate to talk about unimplemented features :/
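The thread never shows CityScript gaining this feature, but the idea the poster describes — selecting files by a perl-style pattern on their names instead of by hand-tagged keywords — can be sketched outside CityDesk. A minimal sketch in Python, whose `re` module uses a broadly perl-style syntax; the file names below are invented for illustration:

```python
import re

# Hypothetical article paths standing in for CityDesk files; names are invented.
files = ["news/2003-01-intro.html", "news/2003-02-update.html", "docs/manual.html"]

# A perl-style pattern selecting 2003 items in the "news" folder by name alone.
pattern = re.compile(r"^news/2003-\d{2}-.*\.html$")
menu_items = [f for f in files if pattern.match(f)]
print(menu_items)  # only the two news files match
```

A keyword field requires tagging every file by hand; a pattern like this derives the selection from the names themselves, which is the flexibility the poster is asking for.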
s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257243.19/warc/CC-MAIN-20190523103802-20190523125802-00380.warc.gz
CC-MAIN-2019-22
728
9
https://www.knackelibang.com/
code
SIGN UP FOR EARLY BETA ACCESS As soon as we have a beta for you to play, we'll let you know! WISHLIST ON STEAM A surreal co-op first person shooter game where the only thing that matters is following the next set of instructions. Enemies will flood the scene, your weapons will break, more instructions will come, and then repeat. Aspire to not die.
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224657720.82/warc/CC-MAIN-20230610131939-20230610161939-00545.warc.gz
CC-MAIN-2023-23
349
5
https://throwexceptions.com/php-authentication-required-packagist-org-laravel-installation-throwexceptions.html
code
I’m using Ubuntu 16.04 and trying to install Laravel (any version). Actually I cloned the Laravel project from GitHub (https://github.com/laravel/laravel). After cloning I’m running the command as below: root:/var/www/html/laravel$ composer install Loading composer repositories with package information Updating dependencies (including require-dev) Authentication required (packagist.org): Username: This is the issue I’m facing; I don’t know what username I have to give, or why it’s asking for authentication. And if I run composer diagnose I get this output: composer diagnose Checking composer.json: OK Checking platform settings: OK Checking git settings: OK Checking http connectivity to packagist: Authentication required (packagist.org): Username: Any suggestions, or can anyone tell me what I’m missing here? composer config --global repo.packagist composer https://packagist.org and then try again. This should prevent it from using the http protocol and force https, which might fix it in case you have a bad proxy in the way.
In my case, solved the issue as below: $ composer diagnose Checking composer.json: OK Checking platform settings: OK Checking git settings: OK Checking http connectivity to packagist: OK Checking https connectivity to packagist: OK Checking github.com rate limit: OK Checking disk free space: OK Checking pubkeys: FAIL Missing pubkey for tags verification Missing pubkey for dev verification Run composer self-update --update-keys to set them up Checking composer version: WARNING You are not running the latest stable version, run `composer self-update` to update (1.6.3 => 1.7.2) Composer version: 1.6.3 PHP version: 7.2.8 PHP binary path: /usr/local/Cellar/php/7.2.8/bin/php Then I ran $ composer self-update --update-keys Open https://composer.github.io/pubkeys.html to find the latest keys Enter Dev / Snapshot Public Key (including lines with -----): [copy and paste the dev pub key] Enter Tags Public Key (including lines with -----): [copy and paste the tags pub key] Then again, I ran $ composer self-update However, while installing the package, it still showed: Authentication required (repo.packagist.org): Username: After providing my username and password for my packagist.org account and having my credential stored in /Users/xxx/.composer/auth.json, the issue was resolved.
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704804187.81/warc/CC-MAIN-20210126233034-20210127023034-00115.warc.gz
CC-MAIN-2021-04
2,328
17
http://www.rcgroups.com/forums/printthread.php?t=61636
code
For sale: Radio, servos, etc. from a DHC-2 Beaver :cool: Hello, I'm interested in selling the parts from my DHC-2 Beaver. All the parts are just about new, the plane was only used for 10 short flights. I'll sell them separately or altogether. Focus 3 AM radio 72 MHz & 3-channel receiver (2) HS-55 servos GWS GS400 15-amp speed control with brake (2) 7.2 V 600 mAh NiCad batteries Anyone interested please make an offer by emailing me at: (any amount will be considered) parts for sale sent you Email and PM. :eek: Sold already! Thank you everyone! :)
s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644065534.46/warc/CC-MAIN-20150827025425-00252-ip-10-171-96-226.ec2.internal.warc.gz
CC-MAIN-2015-35
592
12
https://training.epam.uz/en/training/3473
code
Do you wish to learn more about the DevOps engineer profession and try your skills in this direction? Then we invite you to participate in the DevOps Essentials self-study program — a course for those who possess basic skills in computer networking and wish to grow their knowledge of this sphere. Self-paced courses from EPAM are available on the EPAM Learn educational platform. This course consists of video lectures, hands-on assignments, and knowledge tests to help you enhance your skills. It starts as soon as you register on this page: the course has no fixed start/end dates. The participants will be able to study the provided materials at a convenient pace. Anyone is encouraged to register for the course since no practical experience or technical skills are required — yet, it would be easier to master the program if you have a basic understanding of the OOP principles. After the successful completion of the course, you will receive a certificate from EPAM and further course recommendations for career development in this direction. You will learn the basics of computer networks, including the OSI model, the TCP/IP stack, and common protocols. The course includes topics on Linux, package management, and the command line. In the Bash module, we will review running commands, writing scripts, variables, conditional statements, and logical operations. You will learn how to use Git to work with GitHub, and in the Python section, we will take a detailed look at the virtual environment, working with data types, lists, sets, and dictionaries. This program format does not allow you to continue your studies in the EPAM Laboratory. Its goal is to provide the participants with essential, relevant, and topical knowledge for a career start. Among the other advantages of the course: Recommended materials for the course preparation: Have any questions? Contact us
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476464.74/warc/CC-MAIN-20240304165127-20240304195127-00287.warc.gz
CC-MAIN-2024-10
1,878
7
https://www.webmasterworld.com/apache/4280698.htm
code
joined:Jan 18, 2008 I am having an issue where we originally had a site up and running using the default domain that the host provided us. Yesterday I began to point our main example.com domain to the new webhost. Since the switch began, when I try to visit the site in any browser I get a prompt telling me: "You have chosen to open which is a: application/x-httpd-php what should firefox do with this file?" I have tried restarting both apache and the server. I used these .htaccess directives (they worked before the nameserver change) to get the site to stop doing this when it was the default domain, but now they seem to have no effect: DirectoryIndex index.htm index.html index.php # Use PHP5 as default AddHandler application/x-httpd-php .php AddType application/x-httpd-php .php In my httpd.conf file I have the following: allow from all Any ideas? Thanks so much for any help.
s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721141.89/warc/CC-MAIN-20161020183841-00460-ip-10-171-6-4.ec2.internal.warc.gz
CC-MAIN-2016-44
875
14
http://lfoster18.weebly.com/blag
code
Over the past couple of weeks, I've worked really hard trying to soak up as much information as I can and make the most out of this experience. It has been said that CERN was built next to the mountains so that its physicists could unwind there. In order to test this, I went with some friends to the Albert 1er hut above Chamonix over the weekend. Here are some pictures from our adventure: The LHC began collisions on the 23rd of May and I was in the ATLAS control room when they started. This is really exciting because the LHC has not collided protons for several months during an upgrade period. Hopefully with an upgraded machine we will be able to find hints at new physics! I built a jet clustering algorithm based on the anti-kt model used in professional analyses here at CERN. Mine is, of course, more rudimentary but was still quite fun to make. The histogram attached here shows the distribution of the algorithm's results for number of jets. The input was 10,000 events from the MC jet program I built earlier with two jets sent in opposite directions. The histogram clearly shows a peak at 2, which implies that the algorithm is working. Today and yesterday I have been working on a toy model of a parton jet. These jets are one of the most common structures that result from collisions at the LHC here at CERN. These jets form when quarks and gluons are separated by high energy collisions. Because these particles are held so tightly together, the amount of energy required to separate them is enough to actually form new particles to join up again. The model I made uses Monte Carlo, a computational technique used to model systems with inherent randomness. This is especially useful in quantum systems due to their probabilistic nature. The picture here is an output from the model I made. The guide I used for this project can be found here: Today I got to learn about various decay structures of W and Z bosons and how they show up in the ATLAS detector at CERN.
This image shows a ZZ decaying into a muon/antimuon pair and an electron/antielectron pair. The muons pass all the way through the detector and are colored orange here while the electrons are seen through the large energy deposits (yellow blobs) left in the electromagnetic calorimeter (green). In the second picture a filter is applied to get rid of lower energy particles that could have been from earlier collisions or glancing blows. This makes the paths the particles take in the detector much more obvious. If you want to download and play with the software yourself, check out these links:
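The toy two-jet Monte Carlo and the "peak at 2" histogram described above can be illustrated with a much simpler stand-in than anti-kt: throw Gaussian-smeared particle angles around two back-to-back axes, then greedily group particles by angular distance. This is an invented sketch, not the author's actual code; particle counts, spread, and the clustering radius are all arbitrary choices:

```python
import math
import random

def toy_event(n_per_jet=10, spread=0.1, rng=random):
    """Generate particle angles for two back-to-back jets with Gaussian smearing."""
    axes = [0.0, math.pi]
    return [rng.gauss(axis, spread) for axis in axes for _ in range(n_per_jet)]

def cluster(angles, radius=0.5):
    """Greedy angular clustering: each particle joins the first jet within `radius`."""
    jets = []      # running-mean axis angle of each reconstructed jet
    members = []   # particles assigned to each jet
    for a in angles:
        for i, axis in enumerate(jets):
            # Wrap-safe angular difference in (-pi, pi].
            if abs(math.atan2(math.sin(a - axis), math.cos(a - axis))) < radius:
                members[i].append(a)
                jets[i] = sum(members[i]) / len(members[i])
                break
        else:
            jets.append(a)
            members.append([a])
    return jets

rng = random.Random(42)
counts = [len(cluster(toy_event(rng=rng))) for _ in range(1000)]
# Almost every event reconstructs exactly 2 jets, echoing the histogram's peak at 2.
print(counts.count(2) / len(counts))
```

With the spread well below the clustering radius, splitting or merging jets is a many-sigma fluctuation, which is why the distribution piles up at 2.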
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578529813.23/warc/CC-MAIN-20190420120902-20190420142902-00463.warc.gz
CC-MAIN-2019-18
2,580
5
http://wagenknecht.org/blog/category/infrastructure
code
After an uptime of 476 days on my old server I started migration to a new system. The new system is a little bit faster, offers more memory and hard disc mirroring. Unfortunately, it’s also slightly more expensive but it’s worthwhile. So please apologize any 404s, 403s or email bounces that might happen. I am trying hard to avoid any outage. I started preparing the server two weeks ago and updated the DNS entries for all website domains today. Mail is already handled by the new system and so far it’s working fine. There is a bug request for planet.eclipse.org. Unfortunately, the webmaster is pretty busy right now with the infrastructure migration and won’t have a chance to look at this before May. Thus, I decided to create http://planet-eclipse.org/. If you’re an Eclipse hacker or a committer just send me a mail with a link to your blog or to the feed of the Eclipse related category and I’ll add it to Planet Eclipse. BTW, design proposals are also welcome :mrgreen:. Uhm. Some update to my Debian server must have broken my WordPress. Now it’s working again. Sorry. Tempus Fugit | TxFx.net | WordPress Smilies Mark Jaquith has compiled a list of available WordPress smilies. Thank you Mark!!! There is a German WordPress community. Quite useful links. My next task related to my weblog is localization. I found an easy way to change the WordPress language on the fly. Needs a little bit of coding to finalize it and some nice rewrite rules for Apache. Hopefully next weekend. BTW, this Sunday is my birthday!
s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701145519.33/warc/CC-MAIN-20160205193905-00308-ip-10-236-182-209.ec2.internal.warc.gz
CC-MAIN-2016-07
1,536
11
https://www.techzine.eu/news/devops/94953/atlassian-updates-jiras-open-devops-solution/
code
Atlassian has updated Jira’s Open DevOps solution. New functionality should improve team communications while providing insight into the software development cycle. Last year, Atlassian expanded Jira with Open DevOps, a solution that brings the development tooling of Atlassian and partners to a central environment. Open DevOps allows developers to use the tools of their preference in Jira, the company’s flagship project management solution. Open DevOps now features new functionality to give developers an improved overview of their tooling. Developers reportedly use an average of 25 different tools for their development chain. Atlassian estimates that 10 percent of their time is spent on insight and maintenance. According to the company, developers need an all-in-one solution to cut back on the latter. Atlassian added new options in five areas. The foundation is an administrative toolchain page, which provides a central location for creating, managing and visualizing all tools used. The page allows developers to discover integrations for tools, discover and address toolchain gaps and visualize how development work moves through various tools. The toolchain page supports integrations for all phases of the development process. Developers can apply automation throughout the development lifecycle. This should improve inter-developer communications. Updates made are automatically communicated to developers. Furthermore, insights have been improved. Code repositories like Bitbucket, GitLab and GitHub can now be brought into a single code view. From there, Atlassian provides insights into teams’ work plans and the option to optimize them. Another new feature is the release tab. This helps development teams coordinate release management. Information can be pulled from various source control management tools, CI/CD pipelines and feature flags, allowing development teams to see what’s needed to push unpublished software.
Once code is deployed into production environments, the new Compass development hub — which has been available in beta for several months — provides developers with insights into the health status of the software. The CheckOps functionality checks the health status of production components. Lastly, the Open DevOps tool now features a dark mode. This functionality will be rolled out in 2023.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100290.24/warc/CC-MAIN-20231201151933-20231201181933-00106.warc.gz
CC-MAIN-2023-50
2,355
8
https://www.udemy.com/user/aditya-joshi-9/
code
I'm a Software Engineer versed in Hyperledger Fabric, Hyperledger Besu, Quorum, Ethereum, R3 Corda, Elastic Search, Serverless, Web and Mobile App development, Cloud computing, IoT (Internet of Things), and System Design. My interest also lies in Artificial Intelligence and Machine Learning. I am an expert in prototyping and giving life to ideas. I am a Certified Hyperledger Fabric Administrator (CHFA). Currently, I am working on Blockchain, Hyperledger Fabric, Hyperledger Besu, Private Ethereum, Docker, Kubernetes, etc. I believe that technology is at its best when it’s simply invisible, when it just works. I like creating products that look simple and clear from the outside, while having a high potential of technology inside. If you feel I am doing a good job, please consider endorsing my skills on LinkedIn (@adityajoshi12)
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710974.36/warc/CC-MAIN-20221204140455-20221204170455-00424.warc.gz
CC-MAIN-2022-49
842
4
http://shauninman.com/blog
code
iOS Sticky :hover Fix Tweeted this a while back but every once in a while someone asks for this link so, for posterity! Retro Game Crunch: Primer Less than a week to go on the Kickstarter! One of the things we’ve been doing to draw attention to the dev journal aspect of the project is this Primer series. So far I’ve covered tools, getting started with Flixel, loading Tiled levels, basic player controls and camera, collectibles, and threats. Interview on The Industry Conor O’Driscoll wants to know where I’ve been, where I am, and where I’m going. So we talk apps, games, and collaboration. And bed head. Jimjam Jimjam Panic My submission to the Sworcery AV Jam answering soundtrack composer Jim Guthrie’s call. Lots of beautiful work in there.
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700984410/warc/CC-MAIN-20130516104304-00021-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
760
8
https://neurocar.pl/nsproduct/pro-ncar-tech-park-02/
code
PARK technology is used to determine occupancy for parking lots. Based on data from sensors (e.g., cameras, magnetic field sensors, inductive loops, lidar), using statistical models and machine learning, the technology makes it possible to determine the number of available parking places in a given area. Such data is transmitted via APIs to host systems (e.g., mobile applications or parking information signs). Unique features of PARK technology, developed by Neurosoft, include the ability to fuse data from different types of sensors into a single, consistent picture of the state of a parking lot, the ability to compensate for errors of individual sensors, and the ability to predict the number of free parking spaces in the future, based on past data. As a result, the user receives reliable information about the status of the parking lot in real time.
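Neurosoft's actual fusion and prediction models are statistical/ML and not described here, but the shape of the idea — combine per-sensor occupancy counts using reliability weights, then extrapolate recent occupancy to forecast free spaces — can be sketched as follows. All numbers, weights, and function names are invented for illustration:

```python
def fuse(estimates):
    """Fuse per-sensor occupancy estimates (count, weight) into one value.

    The weights stand in for per-sensor reliability; a real system would learn
    them and also compensate for systematic sensor errors.
    """
    total_weight = sum(w for _, w in estimates)
    return sum(count * w for count, w in estimates) / total_weight

def predict_free(history, capacity, horizon=1):
    """Naive forecast: extrapolate the recent trend in occupied spaces."""
    trend = (history[-1] - history[-3]) / 2 if len(history) >= 3 else 0
    occupied = min(capacity, max(0, history[-1] + trend * horizon))
    return capacity - occupied

# Camera reports 41 cars, magnetic loops report 45; trust the loops a bit more.
occupied_now = fuse([(41, 0.4), (45, 0.6)])
print(round(occupied_now, 1))                      # 43.4 occupied
print(predict_free([38, 40, 43.4], capacity=60))   # rising trend leaves ~13.9 free
```

A production system would replace the hand-set weights and the linear trend with learned models, but the API boundary (fused occupancy in, predicted free spaces out) is the same.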
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474377.60/warc/CC-MAIN-20240223085439-20240223115439-00737.warc.gz
CC-MAIN-2024-10
860
1
https://bluetagpet.com/products/fluffy-fleece-blanket
code
Great fleece to give your pet a great comfortable sleep! Wash Style: Mechanical Wash Material: Fleece + polyester - ☑️ Every time you shop, your purchases will build towards discounts. - ☑️ For every $1 you spend, earn 5 points. - ☑️ Use your points to buy products and earn discounts. Refer friends for even more savings. - ☑️ Free Shipping Worldwide - ☑️ Safe payments via Shopify® and PayPal® - ☑️ 24/7 assistance - ☑️ No hidden fees! - ☑️ 100% Money Back Guarantee! - ☑️ Tracking number for every order - ☑️ 2-3 week delivery Soft and comfortable, but it does not grip linoleum and slides around. I chose a size that is a little small, but the material is ultra soft! I'm still happy with my purchase!
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735939.26/warc/CC-MAIN-20200805094821-20200805124821-00107.warc.gz
CC-MAIN-2020-34
723
15
http://shop.nordstrom.com/s/becca-animal-instincts-tankini-top-plus/3292868?origin=sizechart&tn=Sizeandfit_popup
code
Watery hues charm the snakeskin print of a tankini top designed to barely skim the figure. The single shoulder strap, accented with a trio of filigree beads, pulls soft gathers into the fabric that overlays the bust.
s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398444139.37/warc/CC-MAIN-20151124205404-00232-ip-10-71-132-137.ec2.internal.warc.gz
CC-MAIN-2015-48
498
6
http://privateerpressforums.com/showthread.php?104084-Significance-of-quot-intervening-models-quot&p=1393417&viewfull=1
code
How are "intervening models" relevant to game play? Primal p.43 defines what an intervening model is, but doesn't explain why it is important. The only reference to it is under the four requirements for Line of Sight on the same page, but it seems unnecessary to that section. "3. The line must not pass over the base of an intervening model that has a base size equal to or larger than Model B." It seems like "an intervening model" could simply be replaced with "any model" and still maintain the exact same effect. Why the definition? Is it significant to anything else? I can see its significance for Slam attacks, but is there something else I'm missing? I thought I read a line in the rules about intervening models sometimes having significance to shooting, but I didn't note the page of that reference and never ran across anything that suggests an intervening model is at all significant to shooting unless EVERY line from Model A to Model B crosses the base of the intervening model... in other words the target is completely behind the intervening model.
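Rule 3 quoted above is ultimately a geometry test: does the sight line pass over a base? A minimal 2D sketch of that single check, treating bases as circles and leaving out the base-size comparison and every other LOS requirement; coordinates and radii are invented:

```python
import math

def crosses_base(a, b, center, radius):
    """True if the segment a->b passes over a circular base (rule 3 sketch only)."""
    (ax, ay), (bx, by), (cx, cy) = a, b, center
    dx, dy = bx - ax, by - ay
    # Parameter of the point on the segment closest to the base's center,
    # clamped to [0, 1] so we stay on the segment itself.
    t = max(0.0, min(1.0, ((cx - ax) * dx + (cy - ay) * dy) / (dx * dx + dy * dy)))
    px, py = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy) < radius

# A base sitting on the sight line blocks it; one off to the side does not.
print(crosses_base((0, 0), (10, 0), (5, 0), 1.5))   # True
print(crosses_base((0, 0), (10, 0), (5, 4), 1.5))   # False
```

The full rule would additionally compare the intervening model's base size against Model B's, which is exactly the distinction the poster is asking about.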
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703057881/warc/CC-MAIN-20130516111737-00003-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
1,064
4
https://astucesdivi.com/en/projet-divi/
code
What is a "Divi Project"? When and why use them? Many of you have asked me this. And many of you also tell me that there are not many results on this subject on Google... Maybe because those who have been using WordPress for a long time have understood their usefulness? Or maybe because they are "useless"? (LOL!) I will tell you more in this article... Announcement: this article contains affiliate links that you will easily recognise. The classic links are in purple and sponsored links are in pink. 1 - What is a Divi Project? A "Divi Project" is a type of dynamic publication that behaves in the same way as Articles. This means that a Divi Project can be assigned Categories and Tags - which give rise to an archive page - as opposed to Pages which are so-called "static" publications. Like Articles, each time you publish a Divi Project, it will be dynamically displayed on the Projects page at the top of the list. The Projects page then behaves in the same way as a Blog page. In fact, a "Divi Project" is nothing more than a "Custom Post Type" (CPT). Need to master Divi? Discover my training which will guide you step by step in the understanding and use of Divi! Learn more about Divi training. 2 - What is a Custom Post Type (CPT)? A Custom Post Type, or CPT, is a custom publication type. WordPress comes with 2 publication types by default: posts and pages. What do you do when you need to publish content that has nothing to do with blog posts or site pages? Well, that's when you need to create a new publication type: a new Custom Post Type. Thanks to this new type of content, you will be able to publish your specific content. However, it is not so easy to create a new type of content, especially if you are just getting started with WordPress and Divi! This requires custom development and/or the use of a specialised extension: Custom Post Type UI (CPTUI). To learn more, you can read this tutorial that I published on WP Formation and which proposes using ACF with CPTUI.
This is why the Divi theme natively embeds a custom publication type that is neither a Page nor an Article: it is a Project. With Divi Projects, you don't need to code a new CPT within your site because it is already present and active. Did you know that you can test Divi for free? Go to this page and click on "TRY IT FOR FREE". 3 - When should the "Project" publication type be used? This is the question I am often asked: what are Divi Projects for and when should I use them? My answer is going to be simple: if you don't feel the need to use a custom publication type, then you certainly don't need to use Divi Projects. However, depending on the needs of your site, Divi Projects can be useful in many cases... This type of custom publication is called "Project" to refer to a web project. I imagine that Divi was originally launched to cater to a market of web freelancers: webmasters, SEO agencies, freelancers, graphic designers, etc. So these users could use Divi Projects to display their work, the so-called "Portfolio". But let's not close our eyes: just because this Custom Post Type is called "Project" doesn't mean it can't be used for other purposes! Here are a few examples of use: - A real estate site: you can use your Pages for the structure of the site, Articles for real estate advice and Projects to display the latest real estate ads. - A food blog: Pages are used to create the structure, Articles are used to publish recipes and Projects can promote culinary events. - A makeover coaching site: Pages for structure (again!), Articles for makeover advice and Projects to create a portfolio of makeovers already done. I'll stop here for the examples because I'm sure you've understood! But I also believe that these examples may raise another question: why use a Divi Project when I could very well mix my tips and my completed projects within the blog by using Categories to separate the themes? This time it is up to you to answer this question!
It's all about the structure of the site. And beware, the structure of a site has a lot to do with your SEO! So think carefully. A little hint to make your choice: if you think you will publish 4 projects and they will remain 4 "forever", then these Projects deserve to be published in Pages. If, on the other hand, you are thinking of publishing Projects on a regular basis and they have nothing to do with your blog posts, then you have probably made the right choice. 4 - How to use Divi Projects? A Divi Project is created in the same way as an article or a page. Go to the Projects > All Projects tab to find the list of projects already published or waiting to be published. You can add a new Divi Project by clicking on "Add new". When creating a Divi Project, you will find the same interface as for an article... You will be able to: - Enter a project title - Choose between the use of the Divi Builder or the default formatting (Gutenberg). - Add a category - Add a label (optional) - Add a highlighted image. - Enter and optimise SEO information depending on the extension you use. - Save draft, publish There is nothing complicated about creating, publishing or using a Divi Project... Finally, you can display your projects in this way: - Add/create a page dedicated to your projects, e.g. a Portfolio page (or a "Real Estate Sales" page, an "Our Projects" page, etc.) and activate the Divi Builder. - Insert a Portfolio module or a Filterable Portfolio module. - Set up the content of the module (categories, elements, etc.) - Set the Style options. The Grid template is often nicer. - Check the rendering of the Divi Projects display and continue building your layout (Layout). Don't delay! Discover the Divi theme here!
Indeed, the style options of the Portfolio module are simple and you don't have much choice. It is possible that your images are cropped, for example... I then propose a list of resources to go further in the design of the Divi Portfolio module, the one that displays your Divi Projects: - The official documentation for the Divi Portfolio Module. - How to get square images in the Portfolio module. - How to remove cropping from images in the Portfolio module. - How to change the size of Divi images. - Display the Portfolio module in 3 columns. - 9 examples of Divi Project presentations. So, with this list of resources, you can create a great Portfolio page!
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473524.88/warc/CC-MAIN-20240221170215-20240221200215-00278.warc.gz
CC-MAIN-2024-10
6,526
62
https://skmullen.wordpress.com/2012/10/01/only-10-of-email-legitimate/
code
At work we use a pair of Cisco Ironport Email Security appliances. In September, only 10% of the email messages we received were legitimate. During September, we received over 3.5 million email messages and, of those, only 352 thousand, or 10%, were delivered to staff mail boxes. The rest were spam or contained malicious content and were blocked. We sent over 194 thousand email messages during the month. I can’t imagine how we would cope without a spam filter.
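As a sanity check, the stated ratio works out from the post's own figures (the numbers come from the post; the formatting is ours):

```python
# Figures from the post: ~3.5 million messages received, ~352 thousand delivered.
received = 3_500_000
delivered = 352_000
legit_share = delivered / received
print(f"{legit_share:.1%} of incoming mail was legitimate")  # ~10%, as stated
```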
s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647475.66/warc/CC-MAIN-20180320130952-20180320150952-00598.warc.gz
CC-MAIN-2018-13
465
3
https://lists.debian.org/debian-devel-games/2008/11/msg00029.html
code
Re: Git vs SVN Eddy Petrișor wrote: Understood and thanks. I guess what I'm really struggling with is that I just don't know git well enough yet to make an informed decision about it's I need a mentor! :) What would you like to know? Maybe we should continue this conversation on IRC. I am confident that you'll find KiBi, myself and Rhonda good enough to explain/tutor your git usage. Still, I recommend you try to follow one of the multiple git tutorials out there and try to stick with git for a week or so. If properly done, I'm sure you'll fall in love. I have already imported one into Git (gtkatlantic) so I've used the wiki and know enough to be dangerous. But, for example, some of the ones I'm looking at like xbl (one of Joey Hess's packages) is already in his git repo so I pulled that. But can I "easily" add the pristine-tar (or even upstream) crap somehow and then get it into ours? I can even get git-buildpackage to run on it.
s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987750110.78/warc/CC-MAIN-20191020233245-20191021020745-00481.warc.gz
CC-MAIN-2019-43
944
17
https://community.filemaker.com/thread/116862
code
I have a table called coverage and another one called coverageReport In the table coverage the user can enter lots of data, but for the purposes of this they enter date and region (coverage::date and coverage::region). The two tables are linked by date: coverageReport has a start date and an end date, and they are linked when coverage::date is between coverageReport::startdate and CoverageReport::Enddate. coverage::region is a drop down box with 7 possible entries. For the purposes of this let's say A, B, C, D, E, F, G. I would like to do a pie chart in coverageReport for coverage::region How can I do this?
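In FileMaker the pie chart itself is configured in Layout mode, but the data behind it is just a count of coverage records per region that fall inside the report's date range. Outside FileMaker, that aggregation can be sketched in Python; the records and dates below are invented for illustration:

```python
from collections import Counter
from datetime import date

# Hypothetical coverage records: (date, region), mirroring
# coverage::date and coverage::region.
coverage = [
    (date(2019, 3, 1), "A"), (date(2019, 3, 2), "A"),
    (date(2019, 3, 5), "C"), (date(2019, 4, 1), "B"),
]

# The coverageReport date range (coverageReport::startdate / ::Enddate).
start, end = date(2019, 3, 1), date(2019, 3, 31)

# One pie slice per region: count the in-range records.
slices = Counter(region for d, region in coverage if start <= d <= end)
print(slices)  # A=2, C=1; the April record for B falls outside the range
```

Each count becomes one slice of the chart, with the 7 possible regions A–G as the category field.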
s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912204077.10/warc/CC-MAIN-20190325153323-20190325175323-00074.warc.gz
CC-MAIN-2019-13
605
6
https://statisfaction.wordpress.com/2011/11/13/seminar-on-monte-carlo-methods-next-tuesday-in-paris/
code
Seminar on Monte Carlo methods next Tuesday in Paris A quick post on a one-day seminar on Monte Carlo methods for inverse problems in image and signal processing, that will take place at Telecom ParisTech on Tuesday, November 15th. Details and abstracts are on the seminar’s webpage: (for English-reading people, here is a google translated version). The seminar is organised by Gersende Fort, from Telecom and CNRS, and the program looks very interesting; the topics are varied and fairly methodological. The webpage is in French but I think the talks are going to be in English, since there will be English-speaking people in the audience. I’m very happy to participate by presenting the Parallel Adaptive Wang Landau algorithm I’ve been blogging about lately, and Christian Robert is going to present our parallel Independent Metropolis-Hastings paper, so I can’t wait to get more feedback on both. See you on Tuesday?
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218191405.12/warc/CC-MAIN-20170322212951-00525-ip-10-233-31-227.ec2.internal.warc.gz
CC-MAIN-2017-13
932
4
https://www.behaviorsupport.org/betterhelp-case-study/
code
To get going with BetterHelp, individuals can register for an account. BetterHelp is a popular online counseling platform that connects individuals with licensed therapists and counselors. The platform offers a variety of mental health services, including individual therapy, couples therapy, and family therapy, as well as services for specific problems such as depression, anxiety, and trauma. Here is a more detailed look at BetterHelp and its services: BetterHelp was founded in 2013 by Alon Matas and Danny Bragonier. The business was created with the goal of making mental health treatment more convenient and accessible for individuals who may not have the time or resources to attend in-person therapy sessions. BetterHelp has since grown to become one of the largest and most popular online therapy platforms in the world, with millions of users in over 170 countries. How it Functions: BetterHelp offers a variety of mental health services through its platform, consisting of individual therapy, couples therapy, and family therapy. The company also provides services for particular problems such as trauma, depression, and anxiety. Registration happens on the business’s website. The sign-up procedure includes finishing a brief survey about your mental health history and goals for treatment. Based on your answers, BetterHelp will recommend a therapist or counselor who is best suited to meet your needs. Once you are matched with a therapist, you can begin working with them through the BetterHelp platform. This generally involves scheduling weekly online sessions with your therapist through the platform’s messaging and video chat functions. In addition to these scheduled sessions, you can also message your therapist between sessions if you have any concerns or issues. Benefits of BetterHelp: There are several benefits to using BetterHelp for mental health treatment.
Among the primary benefits is convenience. With BetterHelp, you can access therapy from anywhere at any time, as long as you have an internet connection. This can be especially useful for individuals who live in rural areas or who have busy schedules that make it hard to attend in-person therapy sessions. Another advantage of BetterHelp is that it provides a wide range of therapy options. Whether you are looking for individual therapy, couples therapy, or therapy for a specific issue such as depression or anxiety, BetterHelp has therapists who can help. This can be especially helpful for people who may not have access to a wide range of mental health professionals in their local area. In addition to the convenience and range of treatment options, BetterHelp also provides a variety of tools and resources to support your mental health journey. These resources include articles, quizzes, and other educational materials that can help you better understand your mental health and how to manage it. Criticisms of BetterHelp: One criticism of BetterHelp is that it is not a replacement for in-person therapy. While online therapy can be effective, it may not be appropriate for everybody. Individuals with severe mental health concerns or those in crisis might need more intensive treatment than what is offered through an online platform. Another criticism of BetterHelp is that it is more costly than some other treatment options. While the company provides a sliding scale for its fees, some individuals may still find it hard to afford. Overall, BetterHelp is a popular and practical option for individuals seeking mental health treatment. The platform offers a wide range of treatment options and resources to support your mental health journey.
While it is not a replacement for in-person treatment and may be more costly than some other options, it can be a helpful tool for people who wish to access therapy from the comfort of their own home. Therapy, also known as psychotherapy or counseling, is a form of treatment that involves talking with a mental health professional to address mental health problems or concerns. Therapy can be an effective treatment for a wide range of emotional problems and mental illnesses, and it can also be useful for individuals who are dealing with difficult life events or challenges. There are many different kinds of therapy, each with its own approach and techniques. Some common types of therapy include cognitive-behavioral therapy (CBT), dialectical behavior therapy (DBT), and interpersonal therapy (IPT). No matter what type of therapy is used, the goal is to help individuals better understand their behaviors, feelings, and thoughts, and to develop coping strategies and skills to manage their mental health. Among the primary advantages of therapy is that it provides a supportive and safe space for individuals to discuss their thoughts and feelings. This can be especially valuable for those who might not have a supportive network of friends or family to turn to. Therapy can also help people recognize and work through the underlying causes of their mental health issues, which can be an essential step in the recovery process. In addition to offering support and understanding, therapy can also help people develop coping strategies and skills to manage their mental health. This can include learning how to manage stress, communicate effectively, and make healthier choices.
Therapy can be an effective treatment for a wide variety of mental illnesses and psychological issues, including post-traumatic stress disorder (PTSD), substance abuse disorders, and grief and loss. Therapy can be particularly helpful for individuals who are dealing with difficult life events or challenges, such as a divorce, job loss, or the death of a loved one. It can offer a space to process and work through these difficulties, and to develop coping strategies to manage the associated stress and emotions. While therapy can be an effective treatment option, it is important to keep in mind that it is not a quick fix. It requires time, effort, and commitment from both the therapist and the individual seeking treatment. It might take several sessions before an individual begins to see progress, and it is important to be patient and consistent in attending therapy sessions. In general, therapy can be a valuable tool for people seeking to improve their mental health and well-being. It provides a supportive and understanding space to discuss feelings and thoughts, and can help individuals develop coping techniques and skills to manage their mental health. Online therapy platforms like BetterHelp and Talkspace offer a flexible and convenient alternative to in-person therapy. Both BetterHelp and Talkspace allow people to get therapy remotely, using video, phone, chat, or messaging to communicate with their therapist. This can be particularly helpful for people who have busy schedules, limited access to mental health services, or mobility concerns that make it hard to attend in-person therapy sessions. When deciding which platform is right for you, there are some essential differences between BetterHelp and Talkspace that you might want to consider. A few of the factors to think about include:
Cost: The expense of BetterHelp and Talkspace varies depending on the plan you choose and the length of your commitment. BetterHelp uses a weekly plan, a monthly strategy, and a quarterly strategy, while Talkspace uses a yearly plan and a month-to-month plan. It’s worth noting that these rates are for standard treatment sessions. Therapy modalities: Both BetterHelp and Talkspace provide a range of therapy techniques, consisting of cognitive behavior modification (CBT), dialectical behavior modification (DBT), and acceptance and dedication treatment (ACT). However, BetterHelp offers a wider series of treatment methods and techniques, consisting of integrative treatment, holistic treatment, and alternative treatment. Therapist accessibility: Both BetterHelp and Talkspace have a large network of certified therapists, however the availability of therapists might differ depending upon your place and the time of day. BetterHelp uses a therapist directory that enables you to look for therapists based upon their specialties, credentials, and schedule, while Talkspace permits you to ask for a specific therapist if they are not readily available. Resources and features: Both BetterHelp and Talkspace offer a range of functions and resources to support their clients, including therapy sessions, self-care tools, and access to mental health resources. However, the particular functions and resources provided by each platform may differ, so it deserves comparing the two to see which one fulfills your requirements. Ultimately, the option in between BetterHelp and Talkspace (or any other online treatment platform) will depend upon your private requirements and preferences. Both platforms have their own strengths and limitations, and it is necessary to do your research and consider your choices prior to making a decision. BetterHelp is an online therapy platform that connects individuals with certified therapists for remote treatment sessions. 
BetterHelp was founded in 2013 and has actually because grown to become one of the biggest and most popular online treatment platforms, with over 1 million customers served. BetterHelp offers a variety of therapy techniques, including cognitive behavioral therapy (CBT), dialectical behavior therapy (DBT), approval and dedication treatment (ACT), and more. Therapists on the platform are licensed experts who have undergone extensive training and are experienced in providing online treatment. BetterHelp therapy sessions are held using a variety of formats, consisting of video, messaging, chat, and phone. This enables clients to select the format that works best for them and their schedule. Treatment sessions are normally held at a frequency and duration that is agreed upon by the client and their therapist. Some individuals select to have weekly sessions, while others may have sessions less often. BetterHelp is a flexible and hassle-free option for people who are looking for treatment however might not have the time or resources to attend in-person treatment sessions. It can be particularly valuable for individuals who have hectic schedules, live in backwoods with limited access to mental health services, or have mobility concerns that make it tough to attend in-person treatment sessions. In addition to treatment sessions, BetterHelp also uses a range of resources and assistance to help customers enhance their psychological health and well-being. These resources may include self-care tools, psychological health posts, and access to a community of customers and therapists. BetterHelp has a 100% fulfillment assurance, so if a client is not delighted with their therapy experience, they can request a new therapist or cancel their subscription at any time. BetterHelp is not covered by insurance, so customers are responsible for spending for their therapy sessions out of pocket. 
However, some individuals might be able to use their versatile spending account (FSA) or health savings account (HSA) to spend for BetterHelp services. Overall, BetterHelp is a trusted and reliable online therapy platform that has actually helped many individuals enhance their mental health and wellness. If you are thinking about seeking therapy however are not able to participate in in-person sessions, BetterHelp might be a great choice to consider. BetterHelp is an online treatment platform that connects individuals with licensed therapists for remote treatment sessions. BetterHelp was founded in 2013 with the goal of making mental health services more available and practical for people who may not have the time or resources to attend in-person treatment sessions. Today, BetterHelp is among the largest and most popular online treatment platforms, with over 1 million customers served. BetterHelp offers a variety of treatment modalities, including cognitive behavioral therapy (CBT), dialectical habits treatment (DBT), approval and dedication therapy (ACT), and more. CBT is a commonly used and evidence-based therapy technique that focuses on assisting individuals identify and alter unfavorable patterns of thinking and behavior. Therapists on the BetterHelp platform are certified experts who have undergone extensive training and are experienced in supplying online therapy. All therapists on the platform are needed to hold a master’s degree or higher in a psychological health-related field and to be licensed in their state of practice. BetterHelp therapists come from a variety of specializeds and backgrounds, so clients can select a therapist who is a great suitable for their needs and preferences. BetterHelp treatment sessions are held using a range of formats, consisting of video, chat, messaging, and phone. Treatment sessions are generally held at a frequency and duration that is agreed upon by the client and their therapist. 
Among the crucial advantages of BetterHelp is the benefit and flexibility it provides. With BetterHelp, clients can have therapy sessions from the convenience of their own home, at a time that is convenient for them. This can be specifically useful for people who have hectic schedules, live in rural areas with minimal access to psychological health services, or have movement problems that make it challenging to participate in in-person treatment sessions. In addition to treatment sessions, BetterHelp likewise uses a range of resources and support to help clients improve their psychological health and wellness. These resources may consist of self-care tools, psychological health short articles, and access to a community of clients and therapists. BetterHelp has a big network of therapists and clients, and many individuals discover it valuable to share their experiences and support one another. BetterHelp has a 100% fulfillment warranty, so if a customer is not happy with their therapy experience, they can ask for a brand-new therapist or cancel their membership at any time. BetterHelp is not covered by insurance, so customers are responsible for paying for their therapy sessions expense. Some individuals. Talkspace is an online therapy platform that connects individuals with licensed therapists for remote treatment sessions. Talkspace was founded in 2012 with the goal of making psychological health services more available and hassle-free for individuals who might not have the time or resources to go to in-person treatment sessions. Today, Talkspace is a leading online therapy platform, with over 1 million customers served. Talkspace uses a range of treatment methods, including cognitive behavioral therapy (CBT), dialectical habits treatment (DBT), approval and commitment therapy (ACT), and more. CBT is an extensively utilized and evidence-based treatment method that focuses on helping people recognize and change negative patterns of thinking and behavior. 
Therapists on the Talkspace platform are certified experts who have actually gone through extensive training and are experienced in supplying online therapy. All therapists on the platform are needed to hold a master’s degree or greater in a mental health-related field and to be accredited in their state of practice. Talkspace therapists come from a range of specializeds and backgrounds, so clients can pick a therapist who is an excellent fit for their needs and choices. Talkspace treatment sessions are held using a variety of formats, consisting of video, messaging, chat, and phone. This permits clients to pick the format that works best for them and their schedule. Treatment sessions are usually held at a frequency and duration that is agreed upon by the client and their therapist. Some people pick to have weekly sessions, while others may have sessions less regularly. here are some common concerns that individuals might have when starting therapy:. Preconception: Many individuals stress that looking for therapy suggests that they are “weak” or “crazy.” This is a typical misconception. Looking for therapy is a brave and proactive action that many people take to enhance their psychological health and well-being. It takes nerve to admit that you need help and to deal with enhancing yourself. Expense: For some people, the cost of treatment may be a concern. Therapy can be pricey, particularly if it is not covered by insurance. There are alternatives readily available for people who are unable to manage therapy. Some therapists use a sliding scale charge based upon earnings, and some online therapy platforms offer more budget-friendly rates. Time: Some people might worry that they do not have enough time to devote to therapy. Many therapists offer flexible schedules and enable customers to select the frequency and period of their sessions. 
Online therapy platforms like BetterHelp and Talkspace offer a lot more versatility, as clients can have therapy sessions from the comfort of their own house at a time that is convenient for them. Finding the best therapist: It is very important to find a therapist who is an excellent fit for your needs and preferences. Some people may stress over discovering a therapist who comprehends their special challenges and viewpoints. Nevertheless, many therapy platforms offer directories or matching services that can assist you discover a therapist who is an excellent suitable for you. Opening: Some people may feel distressed about sharing individual details with a therapist. It’s natural to feel nervous about opening to someone you do not understand, but it is necessary to keep in mind that therapists are trained to be non-judgmental and to keep privacy. Your therapist is there to support you and to assist you resolve your difficulties, not to evaluate you. Not knowing what to expect: It’s regular to feel not sure about what to anticipate from treatment. Every therapist and client relationship is special, and the specifics of your therapy experience will depend on your individual needs and objectives. Many therapy includes discussing your ideas, feelings, and experiences with your therapist and working together to develop techniques for enhancing your psychological health and well-being. Fear of change: Change can be frightening, and some individuals might fret about what will take place if they.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817650.14/warc/CC-MAIN-20240420122043-20240420152043-00162.warc.gz
CC-MAIN-2024-18
18,805
60
https://www.experts-exchange.com/questions/28487393/MS-Access-2013.html
code
I have a report that is based solely on an SQL statement in the record source. It is very efficient; however, there are certain needs that have to be addressed for one particular customer. In the report there are a number of calculated fields that produce the total price for each number of items, and an invoice total. There is a textbox control that displays a customer number; it is designated as Field79 in the property sheet, with a control source of TrxCust. There are several hidden textbox controls that calculate the invoice total. They are as follows - Field 102 l:","") with one adjacent to that named "InvoiceTotal". What I need to do is add $20.00 to the InvoiceTotal when the Field79 "TrxCust" is 27. There is no designated format for Field79 - so I cannot tell you if it is number or text - and Decimal Places is set to Auto. I have tried different variations of code in the On Load event, like If Me.Field79.Value = 27 Then Me.InvoiceTotal.Value = InvoiceTotal + Text127.Value, with and without .Value added, and I am lost. I've attached the SQL code that pulls the info for this invoice report. Any help would be greatly appreciated.
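The rule being asked for (add $20.00 to the invoice total whenever TrxCust is 27) is a single conditional. Here is a hedged Python sketch of just that logic, with the control names taken from the post; in Access itself this would live in the report's event code or a calculated control source, and the text-vs-number ambiguity is handled by comparing after conversion:

```python
def adjusted_invoice_total(invoice_total: float, trx_cust) -> float:
    """Add the $20.00 surcharge when the customer number is 27.

    The poster is unsure whether TrxCust is stored as a number or as
    text, so normalize it to a string before comparing.
    """
    if str(trx_cust).strip() == "27":
        return invoice_total + 20.00
    return invoice_total
```

With either storage type the surcharge applies: `adjusted_invoice_total(100.0, 27)` and `adjusted_invoice_total(100.0, "27")` both yield 120.0, while any other customer number leaves the total unchanged.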
s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221215261.83/warc/CC-MAIN-20180819165038-20180819185038-00680.warc.gz
CC-MAIN-2018-34
1,146
10
http://www.guildwars2guru.com/topic/60871-repetition-how-guild-wars-2-is-not-your-average-mmorpg/page-3
code
I feel like I'll be repeating points so I'm not going to go quote by quote this time. First off, you're still misusing dynamic. Events do not need to repeat in order to be dynamic. They don't. Let me show you how even the events in GW2 were defined. None of this has to do with it happening again. Zero. Now that we have that out of the way, I can reiterate the same bandit point you did except with Witcher 2 phrasing. Saying no other games have DEs is really weird since it's the whole point of RPGs in the first place. Let me give you an example. OK, I see why we don't seem to be on the same page: you're going with the wiki description, but it seems it's wrong! (Keep in mind the wiki is populated by players.) The official description is: "Dynamic events change and evolve in response to how you interact with them, leaving lasting effects in the game world." https://www.guildwar...dynamic-events/ And I think "change and evolve" are key, so much so that if we check the definition of the word dynamic http://www.definitio...inition/dynamic "characterized by or producing change or progression" Now this is probably the tricky part: if change or progression happens just once, does that make it dynamic? In my opinion it doesn't; it seems in your opinion it does. This is actually an interesting question. My main problem is that the moment the change happens it definitely becomes static. But then that's true of anything that is static: anything static still had a change happening to it, bringing it into existence, but it's still considered static. For example, a wall is static, yet while it was being built it was changing, becoming larger and larger. You don't consider the wall dynamic, though, just because at one point in time it was changing. That's why in my opinion a progression that happens just once isn't really dynamic but static. And yes, the dynamic events don't have to repeat to be dynamic, but they have to keep changing; it's just that repeating is the only way to achieve that realistically!
Saying otherwise is misuse of the definition. Permanent change is a better function of dynamic events because it's more realistic. Hmm, no, I am afraid I completely disagree that permanent change is more realistic. It's also not realistic how GW2 implements it, with everything repeating even as often as every hour (game time). Ideally it should be something in the middle. If the centaurs are at war with the humans, it's not realistic that they would attack a town only once, or be driven out of their home and never try to take it back. Obviously you're right that it's not realistic they would retry every hour either. The same swamp monster being triggered 50 times within the course of a day cheapens "lasting impact on the world", do you not agree? Lasting impact should be permanent because it makes your choice more weighty. "Oh no, I didn't protect the water supply from the bandits" is never something I think in GW2 because they're going to attempt to do so every half hour. Would you say you're honestly so worried about the townsfolk that get attacked every 30 minutes that you protect the area and never leave it? Of course not, because you know you'd be there forever. Events that repeat every 30 minutes are bound to cheapen the experience of "lasting effects" because they are in fact not lasting. Now you are cheating, though: you're mixing what you know with what you experience. The swamp monster might spawn 24 times in a game day, but you'll most likely only see it once or twice, and probably with quite a gap between the two. They're lasting from your character's point of view. And they're certainly lasting when compared to a regular MMO quest, where change is literally counted in the couple of seconds until the mob you killed respawns. You saying I never played GW2 when you then repeated what I said is weird. I said there were 2 options for each DE; you said "have you even played this game? there's 2 options for each DE!" Uh, yeah.
Yes, there are 16 total options, but it's still just positive and negative. Still no shades of grey. The events I listed above are shades of grey. Henselt is a greedy person, but the alternatives to him being king are placing your faith in a 6-year-old girl, or having the blame focused on all the mages and having them all slaughtered, OR having Temeria be part of Redania and not a free country. Which is best? There's no clear option. However - it's up to you to make that choice. Well, technically I asked if you played the game, I didn't put it as a statement, but anyhow you're cheating again by comparing the choices at hand with the back story. In a dynamic event you can succeed or you can fail, true. But likewise, in each quest you mention all you have are two choices as well. Do I kill Henselt or do I save him? Dynamic events have their back story as well, which is what you're calling shades of grey. Bandits aren't just trying to poison the water for fun or to give you something to do; they have an agenda. More than that, saving the water supply is just 1 of 4 interconnected dynamic event chains happening in the area. It's not just an assault on the water supply: they also assault the mill to the east in order to take over the store at the bottom of it. They assault the farms to the north in order to steal the livestock, and at the same time they have a base of operations with their leader. So technically you do have the same shades of grey. Do I ignore the poisoning and go deal with the leader? Do I ignore the poisoning and go stop the bandits from taking control of the food supply? Do I ignore the poisoning and go save the farmers and their livestock? Now keep in mind that dynamic events are more equivalent to side quests rather than the main story line, which is full of these choices!
The only reason they are repeated is because it's an MMO. The fact that they are repeated has no bearing on them being "dynamic" or not. Are we understood on that? Well, no, for reasons explained further up. That's because people don't really give a crap about the event; they just want the reward. Yes, you lose out on 3/4 of the event XP from the chain, but who cares? While you're not doing those events, you can do other things that give roughly as much XP. If that's how you enjoy playing, sure, go ahead! I am just pointing out why I am doing no crafting and am level 54 while just starting the level 45 area!
s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823630.63/warc/CC-MAIN-20171020025810-20171020045810-00811.warc.gz
CC-MAIN-2017-43
6,497
24
https://www.grad.ubc.ca/researcher/17320-berman
code
Developing primary care systems
“Resource Tracking and Management” (RTM)
Improving health care financing mechanisms
Relevant Degree Programs
Open Research Positions
This list of possible research projects is non-exhaustive. It only shows positions that are specifically advertised on the G+PS website.
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988802.93/warc/CC-MAIN-20210507181103-20210507211103-00020.warc.gz
CC-MAIN-2021-21
305
6
https://android.stackexchange.com/questions/71510/16gb-memory-card-ive-used-2gb-and-yet-have-14gb-of-free-space-how-can-i-use-th
code
I'm using my 16 GB memory card to store all my apps on, etc., as my internal memory is too full, so I changed my SD card to default storage. But now, as I have been using it, it says storage space is getting low... but when I check it I still have 14 GB of total space I can use, and it won't let me... that's heaps of space I can use. I've tried moving apps from SD to phone, but internal is too full so I can't... I don't see why I need to do that anyway, as I've only used up 2 GB on SD and yet have 14 GB I can use. Why is it saying low on storage space on my SD? And how can I go about using the other 14 GB? This may be expected, depending on what phone you are using. The original standard of SD card readers had a 2 GB limit on them. These readers could read cards that had more space, but only had enough memory registers to read the first 2 GB of them. Sometimes a firmware update can increase this from 2 GB to 4 GB, but it's extremely rare. From the Wikipedia article on the SD standard: Capacity: SDSC (SD): 1 MB to 2 GB, some 4 GB available. SDHC: >2 GB to 32 GB. SDXC: >32 GB to 2 TB, some 32 GB available.
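The capacity tiers quoted from the answer map directly onto a small lookup. Here is a Python sketch of that mapping, using only the thresholds listed above (2 TB taken as the SDXC ceiling):

```python
def sd_family(capacity_gb: float) -> str:
    """Classify a card by the capacity tiers quoted above."""
    if capacity_gb <= 2:
        return "SDSC"          # original SD standard, up to 2 GB
    if capacity_gb <= 32:
        return "SDHC"          # >2 GB to 32 GB
    if capacity_gb <= 2048:    # 2048 GB = 2 TB
        return "SDXC"          # >32 GB to 2 TB
    raise ValueError("capacity beyond the SDXC range")

# The asker's 16 GB card is SDHC, which is why a reader built to the
# original 2 GB SDSC design cannot address all of it.
```

Running `sd_family(16)` classifies the asker's card as SDHC, outside what a 2 GB-era reader can fully address.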
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100739.50/warc/CC-MAIN-20231208081124-20231208111124-00691.warc.gz
CC-MAIN-2023-50
1,122
4
https://www.wellbeingintlstudiesrepository.org/acwp_vsm/46/
code
Galathea intermedia is common, but cryptic, on Clyde maerl deposits where it lives in small groups of mixed sex and age, sharing shelters (typically dead Dosinia shells) to avoid predation. Its appearance is marked by six iridescent blue spots which may play an important role in intra- or interspecific interactions. Hall-Spencer, J. M., Moore, P. G., & Sneddon, L. U. (1999). Observations and possible function of the striking anterior coloration pattern of Galathea intermedia (Crustacea: Decapoda: Anomura). Journal of the Marine Biological Association of the UK, 79(02), 371-372.
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710462.59/warc/CC-MAIN-20221128002256-20221128032256-00265.warc.gz
CC-MAIN-2022-49
584
2
http://navarrofamilytree.com/
code
Ancestors of Ramon Navarro Welcome to the Navarro family tree website. Navarro is only one part of the tree; it branches into many family lines: Mancias, Valdez, Myrick, Robinson, etc., just to name a few. I hope through this website I can add more branches throughout the tree. Please contact me if you have any information to add to this tree. The Navarro family crest features wolves. Table of Contents Family Group Record for Ramon Navarro Index of Names Send E-mail to [email protected]
s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583509196.33/warc/CC-MAIN-20181015121848-20181015143348-00282.warc.gz
CC-MAIN-2018-43
485
7
https://sayainstitute.org/2023/08/how-to-learn-artificial-intelligence/
code
At its core, AI seeks to empower machines with capabilities that have, until now, been uniquely human. It’s not just about programming a computer to perform tasks; it’s about imbuing it with the ability to think, comprehend, and act in ways reminiscent of human cognition. From the nuances of understanding human languages and perceiving emotions in speech, to solving intricate problems that have stumped experts for years, AI’s ambit is as vast as it is profound. As AI continues its meteoric rise, shaping industries and redefining paradigms, it has garnered unprecedented attention. Professionals, students, and technophiles around the world are eager to delve into the intricacies of this field, keen to harness its potential and shape the future. The surge in interest is palpable; universities are overflowing with AI-centric courses, startups are sprouting with AI-driven solutions, and established industries are pivoting to integrate AI into their core operations. This article endeavors to be your compass in the expansive ocean of AI. Whether you’re a novice intrigued by the possibilities, a professional looking to pivot into this domain, or simply a curious soul yearning for knowledge, our aim is to illuminate the path for you. Through a deep dive into the principles, applications, challenges, and future prospects of AI, we hope to equip you with a holistic understanding and inspire you to master the nuances of this transformative technology.
Understanding the Basics
1. The Fundamental Concepts of AI
1. Machine Learning (ML)
Machine Learning (ML) stands as one of the most pivotal pillars within the vast architecture of Artificial Intelligence (AI). At its essence, ML revolves around equipping machines with the capability to autonomously learn, evolve, and make decisions based on data, without being explicitly programmed for each specific task. This learning is achieved by continuously processing data and adjusting algorithms to optimize performance.
Think of ML as teaching machines by experience. Just as humans learn and adapt from past experiences, ML-enabled systems refine their knowledge and improve their decision-making prowess with each data set they encounter. The applications of ML are diverse, ranging from predictive analytics in finance to recommendation systems in e-commerce.

2. Neural Networks

Drawing inspiration from the intricate workings of the human brain, neural networks represent a set of algorithms structured to recognize underlying patterns and structures in data. These networks consist of interconnected nodes (analogous to neurons) organized into layers: input, hidden, and output layers. When data is fed into a neural network, it undergoes a series of transformations across these layers, enabling the network to discern intricate patterns. A notable trait of neural networks is their ability to adapt and refine their internal parameters based on the accuracy of their predictions. This characteristic makes them invaluable in tasks such as image and speech recognition, where understanding nuanced patterns is paramount.

3. Deep Learning

Deep Learning, while a subset of Machine Learning, is distinct in its depth and complexity. It employs neural networks, but these are not just any ordinary networks; they are characterized by possessing a multitude of layers, aptly termed ‘deep’ neural networks. Each layer in these networks contributes to a higher level of abstraction and complexity, allowing the system to comprehend even the most subtle patterns in large datasets. Deep Learning has been the driving force behind some of the most groundbreaking advancements in AI in recent years. From enabling real-time object detection in autonomous vehicles to the development of sophisticated chatbots and virtual assistants, the prowess of deep learning is evident.
Its ability to process vast quantities of data and discern patterns that might elude human analysts sets it apart, positioning it at the forefront of AI research and applications.

2. The Mathematical Foundation of AI

1. Linear Algebra

Central to the world of Artificial Intelligence is the study of Linear Algebra, a branch of mathematics that revolves around the intricacies of vectors, matrices, and linear transformations. Vectors and matrices form the bedrock of many AI and Machine Learning algorithms, facilitating data representation, transformation, and computations. For instance, the representation of data sets in machine learning often employs matrices, making operations on this data more systematic and efficient. Additionally, understanding eigenvalues and eigenvectors is paramount, especially when delving into Principal Component Analysis (PCA), a popular dimensionality reduction technique. Grasping these concepts not only aids in optimizing algorithms but also provides deeper insights into the geometric interpretations of data transformations.

2. Statistics and Probability

In the realm of AI, especially in Machine Learning, making decisions based on data is quintessential. This is where Statistics and Probability come into play. These disciplines offer tools and frameworks for understanding and interpreting data distributions, variability, and uncertainty. A solid foundation in statistics will enable AI enthusiasts to engage in significance testing, hypothesis testing, and the estimation of various parameters, ensuring that the decisions and predictions made by AI models are both reliable and valid. Furthermore, concepts like Bayes’ theorem play a crucial role in probabilistic machine learning algorithms and frameworks, accentuating the importance of this mathematical domain in the AI landscape.

3. Calculus

Calculus, particularly the subfields of differentiation and integration, plays a pivotal role in the optimization of AI algorithms.
Many machine learning algorithms, especially those in neural networks and deep learning, rely on optimization techniques like gradient descent to adjust their parameters and improve performance. The essence of these techniques is rooted in the calculus concept of differentiation, which helps determine the direction and magnitude of parameter adjustments. Integration, on the other hand, provides insights into areas under curves, which can be invaluable when understanding probability distributions or when working with continuous data. A firm grasp of calculus is thus indispensable for those aiming to delve deep into the mechanisms of AI algorithms and to fine-tune them for optimal performance.

3. Choosing the Right Programming Language

1. Python

Undoubtedly, when one ventures into the domain of Artificial Intelligence, the name “Python” frequently surfaces as the leading programming language. Python’s unparalleled ascendancy in the AI realm can be attributed to a confluence of factors. Foremost among these is its intuitive syntax, which fosters rapid development and prototyping. The language’s simplicity and readability render it accessible to novices while still retaining the depth and versatility cherished by seasoned developers. Moreover, Python boasts an extensive ecosystem of libraries and frameworks tailored for AI and Machine Learning. Libraries such as TensorFlow and Keras facilitate deep learning model development, while PyTorch offers dynamic computational graphs, making it ideal for research purposes. The vast community support, continuous updates, and extensive resources make Python the first choice for many AI practitioners.

2. R

Emerging as a powerful tool in the domain of data analytics and statistical computing, R has etched a significant mark in the AI community.
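Returning to the calculus discussion above: the gradient-descent idea can be shown in a few self-contained lines. This is an illustrative sketch, not code from any library; the function f(w) = (w - 3)^2 and the learning rate are invented for the example.

```python
# Gradient descent on f(w) = (w - 3)^2, whose derivative is f'(w) = 2 * (w - 3).

def gradient_descent(learning_rate=0.1, steps=100):
    w = 0.0                        # initial guess
    for _ in range(steps):
        grad = 2 * (w - 3)         # differentiation gives direction and magnitude
        w -= learning_rate * grad  # step against the gradient
    return w

print(round(gradient_descent(), 4))  # converges toward the minimum at w = 3
```

Each step moves the parameter against the derivative, which is exactly the role calculus plays in training neural networks, just at a vastly larger scale.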
While it might not be the primary choice for developing neural networks or robotics applications, R’s strength lies in its comprehensive suite of packages for data manipulation, analysis, and visualization. For data scientists who aim to derive insights from complex datasets or statisticians working on intricate statistical models, R offers a rich set of tools. Its capabilities in hypothesis testing, linear regression, and other statistical methods make it an invaluable asset, especially in the preliminary stages of AI projects where data understanding and exploration are paramount.

3. Java and C++

Java and C++ are stalwarts in the world of programming, each with a legacy that spans decades. While they might not be the immediate choices for budding AI developers, their role in AI’s landscape is undeniable. Both languages come into their own when performance is of the essence. Java, with its “write once, run anywhere” philosophy, offers portability, making it a prime choice for applications where platform independence is critical. Its object-oriented nature facilitates the creation of large-scale, modular AI applications. On the other hand, C++ stands out when maximum efficiency is required. Given its proximity to system hardware and its ability to manipulate resources directly, C++ is often chosen for performance-intensive AI applications, like game AI or real-time simulations. Its vast standard library and ability to integrate with other languages make it a robust choice for AI developers who are keen on extracting every ounce of performance from their systems.

4. Navigating the Early Stages of Your AI Journey

1. Start Small

Embarking on the path of Artificial Intelligence, as with any intricate discipline, requires a measured and systematic approach. Before plunging into the complexities of sophisticated AI models and algorithms, it is prudent to lay a robust foundation with simpler projects.
Initiating your journey with modest undertakings, such as developing spam filters, provides an opportunity to understand the fundamental principles of machine learning in a real-world context. Furthermore, building recommendation systems can offer insights into user behavior and data processing. These projects, while elementary, serve a dual purpose: they solidify your theoretical knowledge by facilitating its practical application, and they instill confidence, preparing you for more advanced endeavors.

2. Engage in Competitions

Competing in AI and data science challenges presents an unparalleled opportunity to not only hone one’s skills but also to witness the state-of-the-art methodologies in action. Platforms like Kaggle have burgeoned as hubs for aspirants and experts alike, hosting competitions that span a gamut of AI domains. Participating in these contests propels you into real-world scenarios, challenging you to devise innovative solutions to contemporary problems. Moreover, the community-centric nature of such platforms fosters learning: one can review solutions from global peers, partake in discussions, and assimilate diverse techniques and strategies. These competitions often serve as a crucible, refining skills and augmenting knowledge.

3. Collaborate

The multifaceted nature of AI renders it a field ripe for collaboration. Engaging in group projects, be it by joining existing teams or initiating new ventures, can be immensely enriching. Such collaborations offer exposure to a spectrum of perspectives, methodologies, and problem-solving techniques. While individual study allows for deep introspection, collaborative endeavors often lead to the synthesis of diverse ideas, culminating in innovative solutions. Working with peers can also shed light on one’s strengths and areas for improvement, fostering both personal and professional growth.
Furthermore, collaboration paves the way for networking, enabling connections with like-minded individuals and professionals, which can be invaluable in the ever-evolving landscape of AI.

Deepening Your Knowledge

1. Specialize

As one delves deeper into the realm of Artificial Intelligence, it becomes abundantly clear that its expanse is both wide and deep. The sheer volume of topics and methodologies can be overwhelming, thereby underscoring the importance of specialization. After gaining a firm footing in the foundational concepts of AI, it is advisable to hone in on a specific area that resonates with one’s interests and career aspirations. Areas such as Natural Language Processing (NLP) focus on enabling machines to comprehend and generate human languages, a skill set vital in applications ranging from chatbots to sentiment analysis. Computer Vision, another significant area, is dedicated to granting machines the capability to interpret and make decisions based on visual data, finding applications in facial recognition, medical imaging, and autonomous vehicles. Meanwhile, Reinforcement Learning—a domain where agents are trained to make sequences of decisions by rewarding desired behaviors—holds promise in areas like robotics and game AI. Specializing allows for depth of knowledge, making you an expert in a niche area and increasing your value in the job market.

2. Stay Updated

The dynamism of the AI sector is both its most exhilarating and challenging feature. Breakthroughs, novel methodologies, and transformative applications emerge at a staggering pace. To remain relevant and effective in the AI domain, continuous learning and adaptation are non-negotiable. Engaging with the latest research papers provides insights into the cutting-edge techniques and theoretical advancements in the field. Articles, often a more accessible medium, can distill complex topics into digestible formats, ensuring one stays abreast of the latest trends.
Furthermore, attending seminars or webinars facilitates direct interaction with industry experts and researchers, often leading to stimulating discussions, networking opportunities, and exposure to diverse viewpoints. In the digital age, countless online platforms, forums, and communities are dedicated to AI advancements, making it easier than ever to stay informed.

3. Consider Formal Education

While self-study and hands-on projects play a pivotal role in AI proficiency, there’s undeniable value in structured, formal education. Numerous esteemed institutions globally offer Masters and Ph.D. programs tailored to Artificial Intelligence and its sub-domains. Enrolling in such programs provides multiple advantages. Firstly, they offer a curated curriculum, designed by industry and academic experts, ensuring a comprehensive and rigorous exploration of AI. Secondly, they provide access to seasoned professors, research opportunities, and state-of-the-art labs and resources. Finally, formal degrees often open doors to prestigious job roles, research positions, and collaborations that might be challenging to access otherwise. While not a necessity, a formal education in AI can act as a significant catalyst for one’s career trajectory and intellectual growth.

Further Online Resources and References

- Coursera: A platform offering courses from top universities on AI and related topics.
- Kaggle: A platform for data science competitions, datasets, and community knowledge sharing.
- MIT OpenCourseWare – Introduction to Deep Learning: A free course from MIT, providing a deep dive into deep learning.
- Google’s Machine Learning Crash Course: A free course from Google, covering the basics of machine learning using TensorFlow.
- Stanford’s CS229 (Machine Learning): Course materials, lectures, and problem sets for Stanford University’s seminal machine learning course.
The journey through Artificial Intelligence is undeniably intricate, demanding a confluence of dedication, continual learning, and strategic choices. As one navigates this vast domain, the path of specialization offers depth, continuous updating ensures relevance, and formal education can provide a comprehensive scaffold to one’s expertise. Yet, amidst all these strategic pursuits, the essence of AI remains in its transformative potential—a potential that transcends industries and reshapes our understanding of what machines can achieve. As aspirants and professionals in this field, our endeavors are not just about mastering a technology, but about pioneering a future where human intelligence and artificial prowess harmoniously coalesce. The future of AI beckons with promise, and the onus is on us to harness its vast potential responsibly and innovatively.

With a passion for AI and its transformative power, Mandi brings a fresh perspective to the world of technology and education. Through her insightful writing and editorial prowess, she inspires readers to embrace the potential of AI and shape a future where innovation knows no bounds. Join her on this exhilarating journey as she navigates the realms of AI and education, paving the way for a brighter tomorrow.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510130.53/warc/CC-MAIN-20230926011608-20230926041608-00190.warc.gz
CC-MAIN-2023-40
16,556
63
http://lazypython.blogspot.com/2009/11/filing-good-ticket.html
code
- Search for a ticket before filing a new one. Django's trac, for example, has at least 10 tickets describing "Decoupling urls in the tutorial, part 3". These have all been wontfixed (or closed as a duplicate of one of the others). Each time one of these is filed it takes time for someone to read through it, write up an appropriate closing message, and close it. Of course, the creator of the ticket also invested time in filing the ticket. Unfortunately, for both parties this is time that could be better spent doing just about anything else, as the ticket has been decisively dealt with plenty of times.
- On a related note, please don't reopen a ticket that's been closed before. This one depends more on the policy of the project; in Django's case the policy is that once a ticket has been closed by a core developer the appropriate next step is to start a discussion on the development mailing list. Again this results in some wasted time for everyone, which sucks.
- Read the contributing documentation. Not every project has something like this, but when a project does it's definitely the right starting point. It will hopefully contain useful general bits of knowledge (like what I'm trying to put here) as well as project-specific details: what the processes are, how to dispute a decision, how to check the status of a patch, etc.
- Provide a minimal test case. If I see a ticket whose description involves a 30-field model, it drops a few rungs on my TODO list. Large blocks of code like this take more time to wrap one's head around, and most of it will be superfluous. If I see just a few lines of code it takes way less time to understand, and it will be easier to spot the origin of the problem. As an extension to this, if the test case comes in the form of a patch to Django's test suite it becomes even easier for a developer to dive into the problem.
- Don't file a ticket advocating a major feature or sweeping change.
Pretty much, if it's going to require a discussion the right place to start is the mailing list. Trac is lousy at facilitating discussions; mailing lists are designed explicitly for that purpose. A discussion on the mailing list can more clearly outline what needs to happen, and it may turn out that several tickets are needed. For example, filing a ticket saying "Add CouchDB support to the ORM" is pretty useless: it requires a huge amount of underlying changes to make it even possible, and after that a database backend can live external to Django, so there are plenty of design decisions to go around. These are some of the issues I've found to be most pressing while reviewing tickets for Django. I realize they are mostly in the "don't" category, but filing a good ticket can sometimes be as simple as clearly stating what the problem is and how to reproduce it.
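In the spirit of the "minimal test case" point above, here is a hedged sketch of what such a reduction might look like, written as a plain unittest. A real Django patch would subclass django.test.TestCase and slot into the project's test suite; the slugify-like function here is a hypothetical stand-in for whatever code the ticket targets.

```python
import unittest


def slugify(value):
    # Hypothetical function under test, standing in for the code a ticket is about.
    return value.strip().lower().replace(" ", "-")


class RegressionTest(unittest.TestCase):
    def test_minimal_reproduction(self):
        # A few lines that pin down the reported behavior, nothing more.
        self.assertEqual(slugify("  Hello World "), "hello-world")


if __name__ == "__main__":
    unittest.main(argv=["minimal"], exit=False)
```

A reviewer can read this in seconds, run it against the current code, and see exactly which behavior the ticket is about.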
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886104681.22/warc/CC-MAIN-20170818140908-20170818160908-00447.warc.gz
CC-MAIN-2017-34
2,809
6
https://www.planethoster.com/en/HybridCloud-Servers
code
Several tools are available and our team is in charge of configuration and optimization. Take advantage of PlanetHoster's proactivity and be protected from security breaches before it's too late. For maximum reliability, with state-of-the-art systems, we offer 24/7 monitoring to detect attacks, software failures, hardware malfunctions and potential overloads.

LiteSpeed Enterprise is a high-performance web server that offers superior performance over the Apache web server. Indeed, a website or web application hosted on a LiteSpeed web server is nearly 9x faster than one hosted on a third-party web server. Plugins are offered for: WordPress, Joomla!, Prestashop, OpenCart, Drupal, XenForo, Magento, MediaWiki.

Because you deserve more, HybridCloud has options to create your own personalized server. Just configure your server according to your needs. The current configuration allows you to benefit from 0 free domains! With the purchase of a HybridCloud you get a free domain on your next order for every "2 CPU, 4 GB RAM, 60 GB" level. The offer is available for annual or longer billing cycles.

For example:
2 CPU, 4 GB RAM, 60 GB = 1 Free Domain name
6 CPU, 8 GB RAM, 120 GB = 2 Free Domain names
8 CPU, 12 GB RAM, 180 GB = 3 Free Domain names
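Read literally, the rule above grants one free domain per complete "2 CPU / 4 GB RAM / 60 GB" tier. A small sketch of that arithmetic (my interpretation of the offer, not PlanetHoster code):

```python
# One free domain per complete 2-CPU / 4-GB-RAM / 60-GB tier (assumed reading).

def free_domains(cpu, ram_gb, storage_gb):
    return min(cpu // 2, ram_gb // 4, storage_gb // 60)

print(free_domains(2, 4, 60))    # → 1
print(free_domains(6, 8, 120))   # → 2
print(free_domains(8, 12, 180))  # → 3
```

The `min` reflects that all three resources must reach a tier for the extra domain to apply, which matches the three examples listed above.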
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711126.30/warc/CC-MAIN-20221207021130-20221207051130-00798.warc.gz
CC-MAIN-2022-49
1,254
11
https://www.luxiajewelry.com/product-page/the-magnificent-star-hairpiece
code
This one-of-a-kind hairpiece is silver and covered in Swarovski crystals. It also features silver cut flowers! So unique, only for unique brides.
- Only one available
- Swarovski Crystals
We recommend storing it in its box while not in use, and avoiding contact with water, perfume, hairspray, and other chemicals.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679101779.95/warc/CC-MAIN-20231210092457-20231210122457-00693.warc.gz
CC-MAIN-2023-50
313
5
https://www.experts-exchange.com/questions/22876045/Encryption-By-Certificate-and-Security.html
code
I'm using SQL Server's built-in encryption for some of my data. Specifically I'm going to use certificates because they seem easier to migrate from one server to another. Fine, no problem, I pretty much understand it all, and anything I don't understand I can figure out. Here is something I'm not understanding. Let's say someone breaks into my DB server and then breaks into one of my DBs. They go into a table, see encrypted credit card data and then start fishing through my stored procedures. They find the SPs I use to encrypt and decrypt the CC numbers. Now they know exactly what cert I use to do this with. Since they are already in my DB, all they need to do is run my decrypt SP, and they have all those CC numbers. WHAT AM I MISSING? I know something in my logic is wrong or else encryption wouldn't really be secure at all. Explain to me how I need to set this up to stop what I'm talking about from happening, or explain to me why what I'm talking about won't happen.
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084892892.86/warc/CC-MAIN-20180124010853-20180124030853-00700.warc.gz
CC-MAIN-2018-05
988
2
http://forums.autodesk.com/t5/net/vb-net-2012-error/td-p/3810784/page/2?nobounce=
code
If you are using VS2012, each new project is set to use Framework 4 (or 4.5). If you need to develop based on AutoCAD 2010, your project has to be set to Framework 3.5. Which Framework is set in your VS project? - alfred -

Sorry for the late reply. I am already using .NET Framework 3.5, and when I use Attach to Process nothing happens, and my command isn't defined. I need to fix the message when I run my program; I think Attach to Process is not useful for me.

Did you try Norman's suggestion about Nero products on the test machines? It seems to be the more probable cause of that message. I also found this: http://www.planetdvd.net/board/index.php/mv/tree/1

Hi, Gaston! amremad_world solved his problem. It was associated with a violation of the rules of this forum, for which he apologized.
s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500832052.6/warc/CC-MAIN-20140820021352-00058-ip-10-180-136-8.ec2.internal.warc.gz
CC-MAIN-2014-35
790
8
https://community.intel.com/t5/Media-Intel-oneAPI-Video/Intel-GPA-boosts-Media-SDK-samples/td-p/1005952
code
I have modified the sample video conference (sample_videoconf) to work with DX11. I have added a basic calculation which records the duration taken to encode the whole file and consequently the maximum fps that the system can encode. When I launch it with an input YUV file that has a resolution of 1280x768 with DX11 acceleration and surfaces, I obtain a score of 36 fps. I tried multiple times and it's always around this value (+/- 1 fps). I then downloaded and installed Intel GPA. When I launch my application through the performance analyzer tool of GPA with exactly the same parameters (DX11 acceleration and surfaces, same input file), I systematically obtain 45 fps, which is 25% faster! Any idea? Does GPA boost the GPU frequency? How can I achieve the same result without GPA?

CPU: Atom E3845
RAM: 4 GB
SSD: Intel 525 120 GB
OS: Windows 8.1 32 bits

Thanks for the question. Based on what you have explained, this is the most likely cause: the GPA tool execution is increasing the load on the system, therefore entering turbo mode (highest frequency). This can directly impact the performance of the application. You can check this behavior by monitoring the CPU-GPU load and frequencies. If your experimentation proves otherwise, please share the results and some more details of your experiment so that we can reproduce the behavior on our end. One more thing: when computing the performance (fps), please make sure you do not account for the file I/O operations (reading YUV and writing the bit stream). Also, the samples are intended to show a framework for developing applications using Media SDK, and are not for performance measurements. The tutorials, on the other hand, are simpler implementations and you can add your video-conferencing-specific code to them.
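The reply's point about excluding file I/O from the fps computation can be sketched as follows: time only the encode call, not the frame reads or bit-stream writes. This is a generic Python illustration; `encode_frame` is a placeholder standing in for the real encoder call, not a Media SDK API.

```python
import time

def encode_frame(frame):
    # Placeholder for the real encoder call in the pipeline.
    return frame

def measure_encode_fps(frames):
    """Return frames per second, counting only time spent inside the encoder."""
    encode_time = 0.0
    count = 0
    for frame in frames:               # obtaining frames (file I/O) is NOT timed
        start = time.perf_counter()
        encode_frame(frame)            # only this call contributes to the timing
        encode_time += time.perf_counter() - start
        count += 1
    return count / encode_time if encode_time > 0 else float("inf")

fps = measure_encode_fps(range(100))
```

Timing the whole loop instead (reads and writes included) would understate the encoder's throughput, which could account for part of the discrepancy the original poster observed.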
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662545875.39/warc/CC-MAIN-20220522160113-20220522190113-00431.warc.gz
CC-MAIN-2022-21
1,802
12
https://www.identityserver.com/documentation/adminui/DynamicAuthentication/First_Steps_Adding_Dynamic_Provider/
code
Setting up a Dynamic Provider

To set up a new dynamic provider with AdminUI you will need to click the Dynamic Authentication button in the nav bar and then click the "Add" button.

Step 1: Select Type

First you will be presented with a set of options for the type of dynamic provider you will be adding. OIDC will already be selected by default.

OIDC (OpenID Connect): A provider that uses OIDC.
SAML (SAML2P Provider): A provider that uses SAML.

Step 2: Scheme and display name

Here we set up general information about the dynamic provider; the values you specify are detailed below:

Scheme: String that uniquely identifies this provider
Display Name: Display name used as a user-friendly name for the provider

For an OIDC provider continue on this link:
Or for a SAML provider continue on this link:
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816863.40/warc/CC-MAIN-20240414002233-20240414032233-00820.warc.gz
CC-MAIN-2024-18
775
13
https://crossing-tech.com/2015/09/24/ictjournal-published-an-article-about-new-programming-languages/
code
IT development is becoming more complex and time-consuming. A new generation of specific languages and tools helps to manage complexity and diversity during development. Christophe Pache, Software Engineer at Crossing-Tech, talks about programming languages and their specificities. He also mentions Babel, a specific language for integration created by Crossing-Tech. Read the article published in the Swiss magazine ICTjournal (French version).
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585321.65/warc/CC-MAIN-20211020121220-20211020151220-00326.warc.gz
CC-MAIN-2021-43
443
2
http://lists.lunar-linux.org/pipermail/lunar/2010-July/008385.html
code
duncan at thermal.esa.int Fri Jul 23 10:38:21 CEST 2010

I'm at work at the moment and don't really have time to spend on this, so please reply to the lunar mailing list where others maybe can.

>> * can you configure eth0 manually using ifconfig?
> maybe I can but I don't have a clue how.

ifconfig eth0 10.0.0.1 # or other unused/private IP address

see man ifconfig for details

>> * if using 'lnet' to configure eth0, make sure that it really wrote
>> the correct things back to /etc/config.d/network/eth0
> Yes, when I choose as manager ifplugd then I get a ipv6 adress but
> no dns info. On booting I see a message : failed to set gateway

if you use dhcpcd then gateway and dns stuff should be set for you. if using manual configuration, you need to supply these addresses.

>> * is 8139cp enabled as a module in the kernel?
> Yes, I doublechecked that

sorry, but I don't have this card, so don't know the current status, but a web search shows old problems with an 8139cp and 8139too conflict. does that help?

>> * any clues in the /var/log files?
> Not that I can find it.

Running lspci and ifconfig might give you some more clues. I'm afraid I have no time to help further.
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057508.83/warc/CC-MAIN-20210924080328-20210924110328-00503.warc.gz
CC-MAIN-2021-39
1,207
24
http://forums.toucharcade.com/showthread.php?t=108016
code
Turns out I'm very OCD (surprise) and delete games off of GC whenever I remove them off my iPod. It's not a big issue, but I was wondering if anyone else's GC keeps certain games even though they chose to remove the game from GC upon deleting a game. Archetype, Doodle Army 2, The Blocks Cometh, and Glow Hockey 2 are a few I remember off the top of my head with this problem. I've tried syncing them and deleting them over and over again hoping that would work, but it's never to any avail. Overall, I haven't seen over ten games with this problem and currently have over 150 games on my GC..so like I said, it's not too big of a deal, just annoying for me. Thanks for any feedback.
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125947957.81/warc/CC-MAIN-20180425193720-20180425213720-00539.warc.gz
CC-MAIN-2018-17
683
1
http://www.webdeveloper.com/forum/showthread.php?226933-wordpress-website-with-file-upload-download-option&goto=nextnewest
code
I need to design a song contest site similar to http://www.soundclick.com/genres/cha...re=Alternative. I am a noob and have designed a few sites with Joomla. I am not sure how I can integrate a poll with an mp3 player tied to the poll. I welcome any ideas, and if I need to learn another way to do this, like Flash or PHP, I am up for the challenge.

Since the site requires user interaction, you will need to use some form of server-side language such as PHP, ASP.NET or others. Flash can also do it, but it would be a lot more difficult and would still require server-side processing. Unfortunately, unless you can find a dedicated CMS to deal with the problem, your best bet is to learn one of the server-side programming languages above (or any others) and see what you can come up with.
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697442043/warc/CC-MAIN-20130516094402-00043-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
795
5
https://sharemylesson.com/standards/kentucky-doe/ky.3.g.1
code
Classify polygons by attributes. No resources have been tagged as aligned with this standard. Recognize and classify polygons based on the number of sides and vertices (triangles, quadrilaterals, pentagons and hexagons). Recognize and classify quadrilaterals (rectangles, squares, parallelograms, rhombuses, trapezoids) by side lengths and understanding shapes in different categories may share attributes and the shared attributes can define a larger category. Identify shapes that do not belong to a given category or subcategory.
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107892710.59/warc/CC-MAIN-20201026234045-20201027024045-00083.warc.gz
CC-MAIN-2020-45
532
5
https://www.magicmadhouse.co.uk/stonemaier-games-wingspan-p299787
code
You are bird enthusiasts—researchers, bird watchers, ornithologists, and collectors—seeking to discover and attract the best birds to your network of wildlife preserves. Each bird extends a chain of powerful combinations in one of your habitats (actions). These habitats focus on several key aspects of growth:

Delivery Methods - Domestic

You can find more details on our delivery and returns policies, and the delivery methods available, here.
s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669183.52/warc/CC-MAIN-20191117142350-20191117170350-00425.warc.gz
CC-MAIN-2019-47
448
3
https://skin-tracker.com/fortnite/set/dino-guard
code
Hunting the competition to extinction. Rough around the edges. Sink your teeth into victory. Take a bite out of the competition. Saur into battle. Show your style. One for the ages. Eating plants and takin names So far there are 5 Legendary, 2 Epic, 3 Uncommon, 2 Rare Cosmetic Items in this Skin Set. Fortnite can add more skins at any time.
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400201699.38/warc/CC-MAIN-20200921112601-20200921142601-00319.warc.gz
CC-MAIN-2020-40
342
9
https://hackernoon.com/making-a-case-for-domain-modeling-17cf47030732
code
1st-hand & in-depth info about Alibaba's tech innovation in AI, Big Data, & Computer Engineering Smart use of domain modeling by software developers and architects can make complex applications more scalable — Best practice from the Alibaba Tech Team Domain modeling is one of two basic approaches to application design. On a basic level, domain modeling can be understood by taking “domain” to mean the sum total of the knowledge of the business and “modeling” to mean an object-oriented abstraction of the business logic. After constructing a domain model consisting of different objects based on the business requirements, the architect then writes code to carry out the business logic between the objects. The other basic approach is transaction script. Simple, straightforward, and easy to use, it conceives of business logic as a series of procedures and sub-procedures. (Beyond this, there are hybrid approaches such as command query responsibility segregation (CQRS), which is based on the notion of using different models for reading and updating data). No one-size-fits-all solution exists among these options. The most suitable approach depends on the complexity of the business requirements. For example, using the domain model for a query-and-report process is unnecessarily complicated. In cases like this, it helps to recognize simplicity for what it is, get rid of the domain layers, and gain direct access to the infrastructure layer. However, just as defaulting to a domain modeling approach risks over-engineering in simple cases, attempting to approach inherently complex business scenarios using a simple transaction script is also a mistake. Transaction script tends to make a mess of the code when dealing with complex scenarios, resulting in an exponential increase in the corrosion and complexity of the system as the application develops. 
Domain modeling offers the option to build more robust, scalable, and user-friendly applications when dealing with complex business logic. The issue, then, is identifying when to use this approach, and learning how to apply it effectively. The object-based nature of domain modeling can help the architect govern the development of an application more easily. It insists on the cohesiveness and reusability of objects, and encapsulates the business logic more intuitively. This can be demonstrated with a use case. Consider the following example of a bank transfer carried out based variously on transaction script and domain modeling. This is a classic example often used to compare these two approaches, for example in this blog by Lorenzo Dee. In the following example, the business logic of a money transfer between two bank accounts is written in the implementation of MoneyTransferService, where Account consists only of data structures with getters and setters. This is what we call an anemic domain model. If this code looks familiar, it is because most existing systems are written in this way. Typically, the architect would start with a requirements review, draw some unified modeling language (UML) diagrams to conclude the design, and then start coding as above. The architect doesn't have much to think about beyond the end goal of the code. On the surface, this looks like an easy — even trivial — approach capable of successfully implementing the required function. This is why many developers perceive writing business code as a tedious menial task. But it needn't be this way. Transaction script is far from an ideal approach in this case and many others, because it offers little beyond a quick fix to the problem at hand. Domain modeling, on the other hand, is a far more holistic approach, and when used properly it results in much more extensible, maintainable code.
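The article's original code listing was not preserved in this extraction. A minimal sketch of the transaction-script, anemic-domain-model style it describes might look like this (Python for brevity; the original used Java, and all names here are illustrative):

```python
class Account:
    """Anemic model: nothing but data, the moral equivalent of getters/setters."""
    def __init__(self, balance):
        self.balance = balance

class MoneyTransferService:
    """Transaction script: all business logic lives in the service, not the objects."""
    def transfer(self, from_account, to_account, amount):
        # Validation and overdraft rules are inlined as if/else checks,
        # so the business concepts stay implicit in the control flow.
        if amount <= 0:
            raise ValueError("amount must be positive")
        if from_account.balance - amount < 0:
            raise ValueError("insufficient funds")
        from_account.balance -= amount
        to_account.balance += amount

a, b = Account(100), Account(10)
MoneyTransferService().transfer(a, b, 30)
print(a.balance, b.balance)  # 70 40
```

Note how the overdraft rule is just one more branch in the procedure; as rules accumulate, this method grows into the "stack of if-else statements" the article warns about.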
Though typically thought of as a domain-level approach, domain modeling can also be employed at the infrastructure level. The other advantage for application developers is that a domain-driven development (DDD) approach — far from being tedious and mundane — requires the developer to exercise their powers of abstraction and modeling skills. These are essential skills that must be developed in order to progress to an application architect, who works at the domain level where the business is typically much more complex. Using ubiquitous language allows an architect using a domain modeling approach to make the implicit business logic more explicit, making it possible to govern the complexity. Additionally, under DDD, the entity Account contains behaviors and business logic, such as debit() and credit() methods, in addition to account attributes. The OverdraftPolicy is also abstracted from an Enum into an object of its own, with its business rules implemented as a strategy pattern. Meanwhile, the domain service only needs to call domain objects to complete the business logic. This DDD refactoring redistributes the logic of the transaction script across three well-defined objects: the domain service, the domain entity, and the OverdraftPolicy. To summarize, the domain modeling approach offers benefits over transaction script for complex business scenarios because it is object-oriented and makes business semantics explicit. All Account-related operations are encapsulated in the Account entity with a higher level of cohesiveness and reusability. The OverdraftPolicy adopts a strategy pattern, which is a typical application of polymorphism and improves the scalability of the code.

Explicit business semantics:

· Ubiquitous language

Having a ubiquitous language shared among the development team is advantageous for communication, coding, drafting, writing, and speaking.
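A sketch of the refactored shape described above — behavior moved into the Account entity and the overdraft rules made explicit as strategy objects (again Python for brevity; names are illustrative, not the article's actual listing):

```python
from abc import ABC, abstractmethod

class OverdraftPolicy(ABC):
    """Strategy: each overdraft rule becomes an explicit, named domain concept."""
    @abstractmethod
    def check(self, balance, amount): ...

class NoOverdraft(OverdraftPolicy):
    def check(self, balance, amount):
        if balance - amount < 0:
            raise ValueError("insufficient funds")

class LimitedOverdraft(OverdraftPolicy):
    def __init__(self, limit):
        self.limit = limit
    def check(self, balance, amount):
        if balance - amount < -self.limit:
            raise ValueError("overdraft limit exceeded")

class Account:
    """Rich entity: owns its invariants and behavior (debit/credit)."""
    def __init__(self, balance, overdraft_policy):
        self.balance = balance
        self.overdraft_policy = overdraft_policy

    def debit(self, amount):
        self.overdraft_policy.check(self.balance, amount)
        self.balance -= amount

    def credit(self, amount):
        self.balance += amount

class MoneyTransferService:
    """Domain service: only coordinates domain objects."""
    def transfer(self, source, target, amount):
        source.debit(amount)
        target.credit(amount)

a = Account(100, LimitedOverdraft(50))
b = Account(0, NoOverdraft())
MoneyTransferService().transfer(a, b, 120)
print(a.balance, b.balance)  # -20 120
```

Swapping overdraft behavior now means passing a different policy object, with no change to Account or the service — the scalability benefit the strategy pattern is credited with.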
It is best practice to ensure that all important domain concepts, such as account, transfer, and overdraftpolicy, have a consistent name that is used in all contexts, from day-to-day discussions to the product requirements document (PRD). This can significantly improve code readability and cut cognitive load.

· Explicit logic

Domain modeling extracts the implicit business logic from what would be a stack of 'if-else' statements in transaction script and uses the ubiquitous language to name, code, and extend this logic, turning it into an explicit concept. For example, the meaning of 'overdraftpolicy' written in transaction script is completely buried in the code logic, leaving an uninitiated reader rather confused. Domain modeling, on the other hand, abstracts this logic using a strategy pattern to ensure better readability and scalability.

Faced with the huge quantity of works available on the subject of domain modeling — and tedious methodologies such as in-depth syntactic analysis — getting started can seem daunting. Instead of delving into domain modeling theory, the Alibaba tech team have found that it helps to remember two basic principles:

· Know your domain

Building a good model starts with having a solid understanding of the business. Without this, syntactic analysis alone will not produce a good model.

· Start from the basics and elaborate later

While it is important to learn as much about your domain as possible, sometimes the architect needs to start modeling based on incomplete knowledge. In these cases, it makes sense to start with the basics, build a simple model, and adjust it later as required.

With these in mind, it makes sense to take an iterative approach to domain modeling. First, grasp some core concepts before writing the code and running it. If it goes well, there is no need to adjust the model. If there are problems, the next step is to make adjustments. As the architect's understanding of the business builds, the iterations continue.
The first iteration: building a simple model

Building a basic model starts with extracting the nouns and verbs from the user story to identify the key objects, attributes, patterns, and relationships. Let's explore how this would work with the example of a job agency. A typical user story would be as follows: "Tom is looking for a job through an agency and the agency asks him to leave his number so he can be informed about any job opportunities."

The key nouns in this story are most likely the domain objects we need:

· Tom is the jobseeker
· Phone number is the jobseeker's attribute
· Agency refers to two key objects — the company and its employee
· Job opportunity is another key domain object

The verb inform indicates that the Observer pattern would be most appropriate in this case. Now, let's consider the relationships among these domain objects. There is a many-to-many (M2M) relationship between jobseekers and job opportunities:

· A jobseeker can have multiple job opportunities
· A job opportunity can be applied for by multiple jobseekers

Meanwhile, there is a one-to-many (O2M) relationship between agency and employee, since the agency can employ multiple employees. This model is now nearly complete, but many real business scenarios call for far more complexity than this in the first iteration. For example, not all nouns are domain objects — they may be attributes as well. That is why solutions must be developed on a case-by-case basis, with a solid understanding of the business. Doing this successfully requires strong abstraction abilities and plenty of modeling experience. For example, "price" and "storage" are typically attributes of "orders" and "goods." However, in complex business scenarios price calculations and inventory deductions can be extremely complex. In Alibaba's e-commerce business, they are so complex that price and inventory each constitute an entire domain.
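The nouns and verbs above translate into a first-cut model fairly directly. A minimal sketch (Python; class and field names are taken from the story, the Observer wiring is illustrative):

```python
class JobSeeker:
    """Noun 'Tom' -> entity; 'phone number' -> attribute."""
    def __init__(self, name, phone):
        self.name = name
        self.phone = phone
        self.notifications = []

    def notify(self, opportunity):
        # Verb 'inform': the seeker is an observer of opportunities.
        self.notifications.append(opportunity.title)

class JobOpportunity:
    """M2M with jobseekers; publishing notifies all subscribed seekers."""
    def __init__(self, title):
        self.title = title
        self.subscribers = []

    def subscribe(self, seeker):
        self.subscribers.append(seeker)

    def publish(self):
        for seeker in self.subscribers:
            seeker.notify(self)

tom = JobSeeker("Tom", "555-0100")
job = JobOpportunity("Backend engineer")
job.subscribe(tom)
job.publish()
print(tom.notifications)  # ['Backend engineer']
```

The agency/employee O2M relationship is omitted here; it would follow the same pattern of one entity holding a collection of the other.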
That said, modeling is rarely a one-time effort, and an architect’s picture of the system becomes more comprehensive as their understanding of the business evolves. Iterations and refactoring are an inevitable part of the modeling process. After building a basic model in the first iteration, subsequent work will normally focus on the unification and evolution of the model. This means improving the cohesiveness of the model and expanding it to accommodate evolving business requirements. In the parable of the blind men and an elephant, several blind men come to different conclusions of what an elephant is — a snake, tree trunk, or fan — based on what they have learnt by touching its trunk, legs or ears. Similarly, different architects will have different perceptions of the same business depending on their level of experience and knowledge. Unifying a domain model involves integrating complementary abstractions to refine the model while removing inaccurate or mistaken abstractions. To revisit the example of the parable, two blind men may touch an elephant’s trunk and separately conclude that it is a snake and a fire hose. Each of these abstractions identify different attributes of the trunk — just like the elephant’s trunk, a snake is alive and moving, while a fire hose sprays water — but neither is complete. A new abstraction is needed that combines all of these attributes. Meanwhile, the new abstraction must also exclude any incorrect attributes and behaviors that came with the original abstractions, such as fangs (snake) or the need to be rolled up and stored on the fire engine (fire hose). Businesses are constantly changing, meaning that model elaboration is an ongoing process. As a business grows in scope and complexity, one challenge for the architect is to keep their understanding up-to-date and amend the model accordingly. Meanwhile, the architect’s understanding of the existing business is constantly growing. 
This raises the possibility of carrying out code refactoring each time they have a breakthrough in understanding. Often, a series of quick changes can result in a much more practical model that is better suited to users’ needs. Keeping up with an evolving business and one’s own evolving understanding takes both confidence and competence. Confidence is required to commit to refactoring a project on a tight schedule, while competence is required to ensure the refactoring does not undermine the existing business logic. Therefore, continuous integration (CI) is a must-have for evolution. We can explore the idea of model evolution further by revisiting the bank transfer example from earlier. Suppose the bank’s business evolves to support different transfer channels — cash, credit card, mobile payment, Bitcoin, and others — each with its own constraints. Suppose that it also evolves to support transfers from a single debit account to multiple credit accounts. In this case, it would no longer be appropriate to use only one transfer (from Account, to Account). A better approach would be to abstract a specific domain object “Transaction” to better reflect the business logic. This evolution process is illustrated in the following figure. Understanding domain services: The bank transaction example above raises the important but tricky concept of domain services. Simply put, some actions in the domain are verbs, yet they do not belong to any object. They represent an important action in the domain that can neither be neglected nor incorporated into another entity or value object. When such an action is identified in the domain, the best practice is to declare it as a service. Services do not have a built-in status, and do no more than provide relevant functions for the domain. A service is typically named after an activity instead of an entity. 
In the case of the bank transfer, the action of "transfer" is an important domain concept; however, it does not belong to any account entity, because it occurs between two accounts and it would be unnatural to attach it to either the debit account or the credit account. In this case, using a MoneyTransferDomainService is the best approach. To summarize, to qualify as a domain service a concept must satisfy the following three criteria:

1. The operation represents a domain concept which does not naturally belong to any entity or value object.
2. The operation involves other objects in the domain.
3. The operation is stateless.

Domain services and the domain layer

A system generally has three main layers: the application layer, the domain layer, and the infrastructure layer. Services exist on the application and domain layers, which raises the question of which services should exist on which layer. A helpful rule to consider when making this decision is:

· If the operation belongs to the application layer in principle, it should be placed on that layer.
· If the operation involves domain objects and provides a service for the domain, then it belongs to the domain layer.

In other words, any actions involving important domain concepts should be placed on the domain layer. Other technical code that does not concern domain logic should be placed on the application layer. This might include parameter resolving, context assembly, domain service calling, messaging, etc. The following figure suggests how to partition services into layers in the case of a bank transfer. One final thing to note is that a good domain model reduces the complexity of the application and can therefore support elegant visualization and configuration. This helps stakeholders gain a straightforward understanding of the system and learn how to configure it — especially non-technical staff and clients.
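The layering rule can be pictured with a small sketch: the domain service holds the stateless domain action "transfer", while an application-layer service handles lookups and messaging (Python; all names and the notifier mechanism are illustrative assumptions, not the article's code):

```python
class Account:
    def __init__(self, balance):
        self.balance = balance
    def debit(self, amount):
        if self.balance < amount:
            raise ValueError("insufficient funds")
        self.balance -= amount
    def credit(self, amount):
        self.balance += amount

class MoneyTransferDomainService:
    """Domain layer. Stateless: 'transfer' belongs to no single Account."""
    def transfer(self, source, target, amount):
        source.debit(amount)
        target.credit(amount)

class MoneyTransferAppService:
    """Application layer: parameter resolving, lookups, messaging.
    No domain rules live here."""
    def __init__(self, accounts, domain_service, notifier):
        self.accounts = accounts            # repository-like lookup
        self.domain_service = domain_service
        self.notifier = notifier            # messaging hook

    def transfer(self, source_id, target_id, amount):
        source = self.accounts[source_id]
        target = self.accounts[target_id]
        self.domain_service.transfer(source, target, amount)
        self.notifier(f"transferred {amount} from {source_id} to {target_id}")

msgs = []
accounts = {"A": Account(100), "B": Account(0)}
svc = MoneyTransferAppService(accounts, MoneyTransferDomainService(), msgs.append)
svc.transfer("A", "B", 40)
print(accounts["A"].balance, accounts["B"].balance)  # 60 40
```

Note that the domain service satisfies all three criteria above: it names a domain concept, it coordinates other domain objects, and it keeps no state of its own.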
Ultimately, this offers a code-free solution, which is the main selling point of software as a service (SaaS). However, visualization and configuration inevitably introduce extra complexity to the system, so architects are advised to exercise caution when using them. It is best practice to keep the coupling between the visualization and configuration logic and the business logic itself to a minimum. Otherwise, it may undermine the existing architecture and make things even more complicated.

(Original article by Zhang Jianfei 张建飞)
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178350717.8/warc/CC-MAIN-20210225041034-20210225071034-00081.warc.gz
CC-MAIN-2021-10
16,151
79
https://discussions.qualys.com/thread/14934-ocsp-error-exception-null
code
From time to time I scan our site (https://winkel.tilburg.nl) with the SSL Server Test and everything is fine. For the past week or two, however, I have noticed the following error. So I ran the OCSP revocation check as Ivan explained in http://blog.ivanristic.com/2014/02/checking-ocsp-revocation-using-openssl.html:

E:\tools\apache24\bin>openssl ocsp -issuer c:/intermediate.crt -cert c:/server.crt -url http://ocsp2.managedpki.com -CAfile c:/root.crt
Response verify OK
This Update: Apr 3 13:53:12 2015 GMT

I was wondering why the SSL Server Test is showing the above error if the OCSP revocation check is OK?
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347406365.40/warc/CC-MAIN-20200529183529-20200529213529-00563.warc.gz
CC-MAIN-2020-24
610
7
https://chulhong.github.io/
code
I am currently working as a Principal Research Scientist at Nokia Bell Labs in Cambridge, UK and lead the Device Systems team. In my team, we work on building multi-device systems to support collaborative and interactive services. With an unprecedented rise of on/near-body devices, it is common today to find ourselves surrounded by multiple sensory devices. We explore system challenges and issues to enable multi-device, multi-modal, and multi-sensory functionalities on these devices, thereby offering exciting opportunities for accurate, robust and seamless edge intelligence. My research interests include mobile and embedded systems, edge intelligence, tiny ML, Internet of Things (IoT), and social and culture computing. I enjoy building real, working systems and applications and like to collaborate with other domain experts for interdisciplinary research. Come work with us for exciting device research!! We have several positions (postdoc, research scientist, and tech lead) available in our Cambridge Lab. I will be at Slush to demonstrate how we bring cloud-scale machine learning and MLOps to software-defined cameras and IoT devices. Our paper about on-body microphone collaboration was conditionally accepted to ACM HotMobile 2023. Kudos to our awesome intern, Bhawana! Our paper about resource characterisation of the MAX78000 was awarded the Best Paper Award at ACM AIChallengeIoT 2022. Kudos to our awesome interns, Hyunjong, Lei and Arthur! Two papers were presented at ACM UbiComp 2022: Lingo (a hyper-local conversational agent) and ColloSSL (collaborative self-supervised learning). Our paper about the multi-device and multi-modal dataset for human energy expenditure estimation is published in Nature Scientific Data. I am serving on the Editorial Board of IEEE Pervasive Computing. I am serving as a workshop co-chair for ACM UbiComp/ISWC 2022. I am serving as a program co-chair for ACM IASA 2022.
I am serving as a program committee member for ACM MobiSys 2022. I am serving as a program committee member for ACM MobiCom 2022. Intelligent cameras such as video doorbells and CCTV are abundant today, yet only used for a single-purpose, privacy-invasive and bandwidth-heavy streaming. We have developed a software solution that transforms intelligent cameras with automated machine learning operations (MLOps), enabling them to provide a range of services, including traffic flow, pedestrian analysis, asset tracking, or even facial recognition. Multiple intelligent devices on and around us are on the rise and open up an exciting opportunity to leverage redundancy of sensory signals and computing resources. We are building multi-device systems to make an inference of ML models accurate, robust, and efficient at the deployment time so that applications can benefit from such multiplicity and boost the runtime performance of deployed ML models without model retraining and engineering. The embedded accelerators promise model inferences with 100x improved speed and energy efficiency. In reality, however, this acceleration comes at the expense of extreme tight coupling, preset configurations, and obscure memory management. We challenge these limitations and uncover immediate opportunities for software acceleration to transform the on-chip intelligence. A multi-device and multi-modal dataset collected from 17 participants with 8 wearable devices placed on 4 body positions. A Multi-modal Dataset for Modeling Mental Fatigue and Fatigability, containing 13 hours of sensor data collected over 36 sessions from 14 sensors on four wearable devices. Ambient acoustic context dataset for building responsive, context-augmented voice assistants, containing 57,000 1-second segments for activities that occur in a workplace setting. Battery usage data from 17 Android Wear smartwatch users over a period of about 3 weeks. 
Cocoon: On-body Microphone Collaboration for Spatial Awareness— Bhawana Chhaglani from University of Massachusetts Amherst in 2022 Exploring Model Inference over Distributed Ultra-low Power DNN Accelerators— Prerna Khanna from Stony Brook University in 2022 Ultra-low Power DNN Accelerators for IoT: Resource Characterisation of the MAX78000— Hyunjong Lee from KAIST in 2022 A Multi-device and Multi-modal Dataset for Human Energy Expenditure Estimation using Wearable Devices— Shkurta Gashi from USI in 2021 Cross-camera Collaboration for Video Analytics on Distributed Smart Cameras— Juheon Yi from Seoul National University in 2021 SleepGAN: Towards Personalized Sleep Therapy Music— Jing Yang from ETH Zurich in 2021 FatigueSet: A Multi-modal Dataset for Modeling Mental Fatigue and Fatigability— Manasa Kalanadhabhatta from University of Massachusetts Amherst in 2021 Coordinating Multi-tenant Models on Heterogeneous Processors using Reinforcement Learning— Jaewon Choi from Yonsei University in 2021 Modelling Mental Stress using Smartwatch and Smart Earbuds.— Andrea Patane from University of Oxford in 2019 Resource Characterisation of Personal-scale Sensing Models on Edge Accelerators— Mattia Antonini from FBK CREATE-NET and University of Trento in 2019 Resource Characterisation of Personal-scale Sensing Models on Edge Accelerators— Tran Huy Vu from SMU in 2019 Design And Implementation Of Mobile Sensing Applications For Research In Behavioural Understanding— Dmitry Ermilov from Skoltech in 2018 Automatic Smile and Frown Recognition with Kinetic Earables— Seungchul Lee from KAIST in 2018 Resource Characterisation of Wi-Fi Sensing for Occupancy Detection— Zhao Tian from Dartmouth College in 2017
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224648000.54/warc/CC-MAIN-20230601175345-20230601205345-00416.warc.gz
CC-MAIN-2023-23
5,657
35
http://androidforums.com/2585796-post12.html
code
Originally Posted by alostpacket: "the app part would be the easy part, building the hardware to attach to the camera and create a wifi AP is the hard part."

Yeah, this is true. The issue is transmitting high data rates over Wi-Fi. HDMI seems to be an order of magnitude higher than Wi-Fi in throughput. Regardless, it would be cool to start with a low-resolution version -- there would still be value there. The hardware component would need to plug into a mini HDMI port, compress the signal, and stream it over Wi-Fi. Can anyone get more granular as to the specific required components?
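Rough numbers behind the "order of magnitude" claim (these are ballpark assumptions: uncompressed 1080p60 at 24 bits per pixel for the HDMI side, and an optimistic effective 802.11n throughput for the Wi-Fi side):

```python
# Uncompressed 1080p60, 24 bits per pixel
hdmi_bps = 1920 * 1080 * 60 * 24   # ~3.0 Gbit/s of raw video
wifi_bps = 150e6                   # optimistic effective 802.11n throughput

print(hdmi_bps / 1e9)         # ~2.99 (Gbit/s)
print(hdmi_bps / wifi_bps)    # ~20x gap, so on-device compression is mandatory
```

Which is why the hardware component would have to compress before streaming, as suggested above.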
s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414637898477.17/warc/CC-MAIN-20141030025818-00083-ip-10-16-133-185.ec2.internal.warc.gz
CC-MAIN-2014-42
583
4
https://kt.team/en/low-code
code
In the LCDP concept, development is carried out in 3 stages: task – builder – final functionality. Developers don't have to perform boring routine tasks, and in some cases the changes are implemented faster. In the code-first paradigm, the development process is based on the "task – final solution" scheme. The code-first cycle is longer, because any edit goes through the standard development cycle.

User interfaces, business processes, data and integrations in LCDP are visualized, allowing the business to form requests for changes more clearly and have an unambiguous and clear representation (with respect to naming conventions). In the code-first paradigm, the code and user documentation are always separated. Maintaining the documentation requires additional effort, which does not help avoid duplication and ambiguity.

Time of implementation: LCDP is easier to put into operation due to its self-documentability and the ease of unifying logging. With code-first, the launch into operation requires a high level of development culture, both on the part of developers and of the company's management.

Retaining engineering talent: With LCDP, engineers are not involved in routine task execution; they are responsible for reusable builder elements. This allows developers to focus on quality, and the business on the core value. With code-first, developers are responsible for delivering and modifying the final value, so engineers burn out and the business generates fewer changes. Solving these problems requires significant investment in communication culture and complex process development.

Cost of work: With LCDP, the main focus is on creating a new builder or configuring an existing one, whereas edits to the functionality can be made by the customer himself. This reduces the development time and the cost of a ready-made solution. With code-first, the cost of a ready-made solution depends not only on the development, but also on whether further improvements and technical maintenance are required, which immediately drives up the price.

Pimcore is a low-code open-source platform. In addition to PIM and MDM solutions, Pimcore includes CMS, e-Commerce and DAM components that allow you to store all product information.
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224647459.8/warc/CC-MAIN-20230531214247-20230601004247-00559.warc.gz
CC-MAIN-2023-23
2,276
14
https://akyildiz.me/join.html
code
I am looking for motivated PhD students and postdocs. There are a number of funding opportunities -- please get in touch as soon as possible if you are interested in working with me. Aside from standard studentships, you can apply to:

- StatML: EPSRC CDT in Modern Statistics and Statistical Machine Learning at Imperial and Oxford
- Random Systems CDT: Joint Imperial-Oxford CDT for training the next generation of interdisciplinary experts in probability and stochastic analysis
- Through the Martingale Foundation
- Imperial IX, AI in Science fellowships

There are also other postdoctoral opportunities at Imperial. Please get in touch with me if you are interested so we can discuss an application. The photo is of Imperial's South Kensington Campus. It was taken by Dmnk.saman and is reproduced here under the CC BY-SA 4.0 license.
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711368.1/warc/CC-MAIN-20221208215156-20221209005156-00211.warc.gz
CC-MAIN-2022-49
812
8
https://discourse.julialang.org/t/project-nested-environments/72499
code
I have a package with an examples folder inside of it. I want the examples folder to have its own Project.toml with things like Plots that I don’t want as dependencies in the package itself. I created an environment in the examples folder but how do I add to the examples environment the main package itself? I.e. the directory would look like this \SamplePackage\src\SamplePackage.jl \SamplePackage\Project.toml \SamplePackage\examples\examplescript.jl \SamplePackage\examples\Project.toml And then inside examplescript.jl I wanna use the package like this. using SamplePackage using Plots #script code... Where Plots isn’t part of the SamplePackage environment
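One common way to do this (a sketch: `dev` with a relative path registers the parent package by path in the examples environment, so `using SamplePackage` resolves without making Plots a dependency of the package itself):

```julia
# From the Julia REPL started in SamplePackage/examples; press ] for Pkg mode
pkg> activate .
pkg> dev ..          # add SamplePackage itself by path
pkg> add Plots
```

Running `examplescript.jl` with `julia --project=.` from the examples folder then picks up this environment.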
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663019783.90/warc/CC-MAIN-20220528185151-20220528215151-00650.warc.gz
CC-MAIN-2022-21
666
7
https://www.mankier.com/6/fc-solve-board_gen
code
pi-make-microsoft-freecell-board [-t] board-number

or, for make_pysol_freecell_board.py:

make_pysol_freecell_board.py [-t] [-F] [--ms] board-number [game-string]

These programs are command-line programs that can generate the initial board of the games of several popular Solitaire implementations. Those boards can in turn be input to fc-solve by means of a pipeline, or placed inside a file for safe-keeping. make_pysol_freecell_board.py also accepts an optional third argument which indicates the game variant. This argument defaults to Freecell, but is useful for generating the boards of other games. Note that using this flag still requires one to use the "--game" flag of fc-solve, if necessary. make_pysol_freecell_board.py also accepts a flag called -F or --pysolfc that deals the PySolFC boards instead of the classic PySol ones, and one called --ms or -M that deals Microsoft Freecell/Freecell Pro deals even for higher seeds. A common paradigm for using those programs is something like:

bash:~$ pi-make-microsoft-freecell-board -t 11982 | fc-solve -l gi

If the "-t" option is specified, then the 10 cards are printed as "T"'s, instead of "10"'s. While fc-solve can accept either as input, it may prove useful for other solvers or solitaire implementations which do not accept "10"'s. Here is a short description of each program: make_pysol_freecell_board.py is a Python script that generates the boards of the various games of PySol; pi-make-microsoft-freecell-board is a program that generates the boards of Microsoft Freecell and of the Freecell Pro implementation of Freecell. board-number is the board number as a decimal number. game-string is a string describing the game.
Valid strings and their respective games are:

bakers_game - Baker's Game
bakers_dozen - Baker's Dozen
beleaguered_castle - Beleaguered Castle
citadel - Citadel
cruel - Cruel
der_katz - Der Katzenschwantz
die_schlange - Die Schlange
eight_off - Eight Off
fan - Fan
forecell - Forecell
freecell - Freecell (the default)
good_measure - Good Measure
ko_bakers_game - Kings' Only Baker's Game
relaxed_freecell - Relaxed Freecell
relaxed_seahaven - Relaxed Seahaven Towers
seahaven - Seahaven Towers
simple_simon - Simple Simon
streets_and_alleys - Streets and Alleys

Shlomi Fish, <http://www.shlomifish.org/> .

The man pages make_pysol_freecell_board.py(6) and pi-make-microsoft-freecell-board(6) are aliases of fc-solve-board_gen(6).
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100227.61/warc/CC-MAIN-20231130130218-20231130160218-00432.warc.gz
CC-MAIN-2023-50
2,353
34
http://www.utteraccess.com/forum/lofiversion/index.php/t1986824.html
code
I'd approach this by passing a value via the report's OpenArgs property and then, in the On Open event of the report, use a Select Case to identify the argument passed and amend the properties of the controls to display the different formats you want.

DoCmd.OpenReport strSelRpt, acViewPreview, , stLinkcriteria, , strArgs

Where strArgs is 1, 2, 3, etc. Then in the On Open event of the report (e.g.):

Select Case CInt(Me.OpenArgs)
    Case 1
        Me.SomeControl1.ForeColor = vbBlack
        Me.SomeControl1.FontSize = 10
        Me.SomeControl1.FontName = "Arial"
        Me.SomeControl1.FontItalic = True
        Me.SomeControl2.ForeColor = vbBlack
        Me.SomeControl2.FontSize = 10
        Me.SomeControl2.FontName = "Arial"
        Me.SomeControl2.FontItalic = True
    Case 2
        Me.SomeControl1.ForeColor = vbBlue
        Me.SomeControl1.FontSize = 12
        Me.SomeControl1.FontName = "Calibri"
        Me.SomeControl1.FontItalic = False
        Me.SomeControl2.ForeColor = vbBlue
        Me.SomeControl2.FontSize = 12
        Me.SomeControl2.FontName = "Calibri"
        Me.SomeControl2.FontItalic = False
End Select

If you needed different formats for each record printed out on a single report run, then you would need to include the formatting value in the report's recordset and then include the above code in the On Format / On Print events of the section to be formatted; in this case, instead of Select Case CInt(Me.OpenArgs), you would refer to the field name defining the format (e.g. Select Case Me![FormatID]).
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706470784/warc/CC-MAIN-20130516121430-00053-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
1,347
23
https://makosh-shop.ru/validating-textarea-459.html
code
"But doesn't j Query make it easy to write your own validation plugin? In all cases, the form is called That's checking the value's equality with a null String (two single quotes with nothing between them). This particular one is one of the oldest j Query plugins (started in July 2006) and has proved itself in projects all around the world. There is also an article discussing how this plugin fits the bill of the should-be validation solution. Have a look at this example: A single line of j Query to select the form and apply the validation plugin, plus a few annotations on each element to specify the validation rules. You need to place error messages in the DOM and show and hide them when appropriate. You want to react to more than just a submit event, like keyup and blur.
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703557462.87/warc/CC-MAIN-20210124204052-20210124234052-00414.warc.gz
CC-MAIN-2021-04
782
6
https://community.moneris.com/product-forums/f/5/t/1218
code
We are testing the Interac Online workflow. We receive the error "DECLINED * CARD NOT SUPPORTED =NOT APPROVED*NO POS ACCESS" as the response to the final request for purchase completion. We tried the following IDEBIT_TRACK2 value: 3728024906540591206=011211223344550005268051119993326=01121122334455000000453781122255=011211223344550000000000. We get the same error in both the Merchant Test Tool and the Certification Test Tool. Could you please advise? My Store ID is monca04916.
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107904834.82/warc/CC-MAIN-20201029154446-20201029184446-00647.warc.gz
CC-MAIN-2020-45
443
5
http://www.bigthingsconference.com/2016/program/thu-managing-data-science.html
code
from 15:40 to 16:20

It is a very common situation nowadays that Data Science initiatives do not meet expectations or incur costs higher than expected. Most of these inefficiencies can be explained by a lack of common understanding of fundamental laws in Machine Learning. In this talk we will explore how Learning Theory can be interpreted from a business perspective and help to better plan and manage Data Science in an organization.

University College London, Lead Engineer
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964361253.38/warc/CC-MAIN-20211202084644-20211202114644-00173.warc.gz
CC-MAIN-2021-49
477
3
https://www.designmaster.biz/blog/2011/07/one-line-takeoffs/
code
Most items are counted in the takeoff feature for Design Master Electrical inside of areas on a drawing. We intentionally limited the counts to items actually displayed on the drawing rather than in the database. This helps provide you with control over what is being counted. However, there are some items, including what Design Master Electrical collectively calls “one-line devices”, that do not appear on drawings. We had to add a special button to print takeoffs for these devices, because they will not be included in the counts from areas. Interestingly, the one-line device takeoff always generates more interest than the area takeoffs. The breaker totals are what people seem to find most exciting. Each one-line device, be it a panel, transformer, switchboard, or something else, is listed individually in the takeoff. The relevant information for the device is displayed. Below the device, the size and number of breakers on the panel are listed. After all the devices are listed, the total number of breakers in the project is listed. These are broken down by voltage, AIC rating, and size. The last section of the takeoff lists all the feeder lengths. These lengths are separated into fixed and calculated lengths. Fixed lengths are ones that are entered manually by the user. Calculated lengths are ones that are determined based upon the locations of the panels on the drawings.
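The breaker roll-up described above (totals broken down by voltage, AIC rating, and size) amounts to a simple counting pass. A sketch in Python, with an illustrative record layout that is not Design Master's actual data model:

```python
from collections import Counter

def breaker_totals(breakers):
    """Group breaker records by (voltage, AIC rating, size) and count them.

    Each record is a dict; the field names used here are illustrative only.
    """
    totals = Counter()
    for b in breakers:
        totals[(b["voltage"], b["aic"], b["size"])] += 1
    return dict(totals)

# Toy panel schedule: two 20A/10kAIC breakers at 208V and one 30A breaker.
panel = [
    {"voltage": 208, "aic": 10, "size": 20},
    {"voltage": 208, "aic": 10, "size": 20},
    {"voltage": 208, "aic": 10, "size": 30},
]
print(breaker_totals(panel))
```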
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224647810.28/warc/CC-MAIN-20230601110845-20230601140845-00563.warc.gz
CC-MAIN-2023-23
1,398
6
https://lists.debian.org/debian-cd/2004/09/msg00064.html
code
Re: Introducing and asking. [Luís Guilherme Fernandes Pereira] > The result would be a like-standard Debian instalation, with some > differences: the software contained in CD1 and a new option in Tasksel > (marked by default if it's possible) named like "CCUEC Software". I'm not sure if the tasksel method is documented yet. If you want to install the extra software on all installations, you can use the .disk/base_include file on the CD to tell debian-installer to install extra packages.
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945289.9/warc/CC-MAIN-20230324211121-20230325001121-00738.warc.gz
CC-MAIN-2023-14
492
9
https://answers.sap.com/questions/5688413/collective-reservation-processing.html
code
I have one component, say Raw1. I added it to 10 WBS Elements, so I got 10 reservation numbers. I have maintained a plant stock (10 Nos.) for the same. Now I want to issue these 10 components against the reservation numbers on the respective WBS Elements through MB1B. Currently I have to do it 10 times (individually for each reservation). At times my reservation count goes up to 100, so it becomes laborious work. Is there any other way of doing this collectively, so that in a single transaction I will be able to issue the same component to several WBS Elements against several reservations? Your inputs would be very valuable to me. Thanks and Regards,
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949701.56/warc/CC-MAIN-20230401063607-20230401093607-00549.warc.gz
CC-MAIN-2023-14
659
9
https://developers.flur.ee/docs/guides/transactions/7/
code
Deleting a Subject To delete/retract an entire subject, use the _id key along with only "_action": "delete". This deletes (retracts) all of the subject's predicates. In addition, all of the references to that subject anywhere in the ledger are also retracted. [{ "_id": ["person/handle", "jdoe"], "_action": "delete" }] Write a transaction Using the above transaction example, write a transaction deleting the subject with id 12345.
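For the exercise above, the transaction has the same shape, with the integer subject id in place of the two-tuple identity. The sketch below just builds and prints that JSON in Python to show the structure; whether your ledger accepts a raw integer _id this way is an assumption based on the subject-id usage elsewhere in the docs:

```python
import json

# Retract the entire subject whose subject id is 12345 (the id from the exercise).
delete_tx = [{"_id": 12345, "_action": "delete"}]
print(json.dumps(delete_tx, indent=2))
```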
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506399.24/warc/CC-MAIN-20230922102329-20230922132329-00437.warc.gz
CC-MAIN-2023-40
402
5
https://doc.ibexa.co/en/4.5/commerce/storefront/storefront/
code
The Storefront package provides a starter kit for developers. It is a set of components that serves as a basis, which developers can customize and extend to create their own implementation of a web store. Default UI components The Storefront package contains the following default UI components and widgets. You can modify them when you build your own web store.
- Displays a subtotal net value of cart lines, a shipping cost disclaimer, a series of tax values applicable to products in cart, a composition of different taxes, and a total cart value (gross, shipping and taxes included).
- Displays a series of screens that allow buyers to place an order for cart items.
- Enables selecting between currencies, to dynamically change the contents of the product listing page.
- Enables selecting between languages, to change an active language.
- Provides user interface for the login/registration page that enables buyers to access the Product catalog.
- Main cart component: Main UI component of the cart. Displays a list of items selected for purchase and requested cart item quantities. Users can remove individual items.
- Mini cart widget: Consists of a counter that displays a total number of items added to a cart.
- Product category page: Displays products that belong to a specific category.
- Product filters component: Allows for narrowing the list of products displayed in the listing by using different filters, such as product type, availability and price.
- Product listing page: Allows for browsing through products, displays product name, code, price, and image.
- Enables selecting between regions, to dynamically change the contents of the product listing page.
- Search for specific product component: Allows for searching for products, for example on the product listing page.
- Sort products component: Enables sorting products based on different criteria on a product listing page.
To become familiar with a complete set of templates covering all functionalities of a store, visit the vendor/ibexa/storefront/src/bundle/Resources/views/themes/storefront directory of your installation. Customization and permissions For more information about modifying the storefront components, whether by changing their appearance or modifying the underlying logic, see Customize the storefront layout. For more information about overriding the default checkout component, see Customize checkout. For information about roles and permissions that control access to various components of the purchase process, see Permission use cases.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817206.28/warc/CC-MAIN-20240418093630-20240418123630-00894.warc.gz
CC-MAIN-2024-18
2,541
29
https://www.policemag.com/blogs/women-in-law-enforcement/blog/15318379/sexual-harassment-isnt-as-prevalent-today
code
POLICE Magazine asked a few of our female officer bloggers whether they have experienced sexual harassment in a law enforcement agency, and to explain how they handled it. Sgt. Diana Drummey gives her perspective: I have experienced it. Some of it I just ignored; about some of it, I think I was naïve; and now, if it happened: Stand by, Mister. I think it is still out there, but nothing to the extent that it was years ago. Women are much stronger and more educated about this matter, as are men; in the field, [we] get constant training. Heck, we all get constant training, and this is not reserved just for women being harassed. Men can also be harassed. The definition of sexual harassment has changed over the years, and it encompasses many more aspects of behavior in the workplace.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100599.20/warc/CC-MAIN-20231206130723-20231206160723-00705.warc.gz
CC-MAIN-2023-50
776
4
http://www.biglist.com/lists/lists.mulberrytech.com/xsl-list/archives/200708/msg00323.html
code
Subject: Re: [xsl] Ranking Random Nodes from Top to Bottom From: Abel Braaksma <abel.online@xxxxxxxxx> Date: Fri, 17 Aug 2007 15:30:47 +0200 I was thinking of determining the string length from root to a particular node, and storing that as output in the output file using XSLT. Then post-process it in Python to rank the paths into document order, where the lowest number appears on top and the highest number at the bottom. Does that make any sense? Cheers, -- Abel Braaksma
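The Python post-processing step mentioned here could look like the following sketch, assuming the XSLT emits one tab-separated "length, path" record per line (the record format is an assumption, not something fixed by the thread):

```python
def rank_paths(lines):
    """Sort 'length<TAB>path' records ascending, lowest number on top."""
    records = [line.split("\t", 1) for line in lines if line.strip()]
    records.sort(key=lambda r: int(r[0]))
    return [path for _, path in records]

# Hypothetical output lines from the XSLT pass.
sample = [
    "42\t/doc/chapter[2]/para[5]",
    "7\t/doc/title",
    "19\t/doc/chapter[1]",
]
print(rank_paths(sample))
```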
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267867644.88/warc/CC-MAIN-20180625092128-20180625112128-00392.warc.gz
CC-MAIN-2018-26
479
5
http://www.gen.eclipse.co.uk/Software.html
code
- The very helpful site that taught me how to write HTML without turning my brain to toothpaste: W3C. - While working on an old board, I was told that the source files had been lost. This is infuriating, and as a result I had to reverse-engineer the programming data. Soon afterwards I wrote GalDis, which will turn binary or ".JED" files into something like source code for certain devices. - I use a lot of microprocessors in various projects, and since I like to be sure of my hardware before launching complex software, I wrote CSum, which adds checksums to firmware. Note that software downloaded from this site is for use at your own risk. I accept no responsibility for anything untoward that happens. Sources are supplied, so if you are worried you can take a look before running the code.
s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823255.12/warc/CC-MAIN-20171019065335-20171019085335-00154.warc.gz
CC-MAIN-2017-43
799
4
https://modbotshop.com/collections/pivotpi/products/dt-cp-pvtboard-1
code
PivotPi Board, regular price $17.99. The PivotPi is a complete kit to turn anything in your world into a moving robot. The PivotPi is a servo controller for the Raspberry Pi. It can control up to 8 servos, allowing you to make projects that move, grab, dance, swing, wave, and many other motions. You can use the PivotPi to create robots, automate your home, make moving sculptures, and craft animatronics built with the Raspberry Pi. The PivotPi can be programmed using Python, Scratch, and many other programming languages.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476399.55/warc/CC-MAIN-20240303210414-20240304000414-00163.warc.gz
CC-MAIN-2024-10
522
2
https://www.digitalocean.com/community/users/sjcuk
code
Hi - I have an NGINX server on LAMP, Ubuntu 16.04 and have got wordpress working OK. However, when I upload my own .php files to /htdocs, they execute, but no output is sent to the browser. I just see a blank screen. ... Solved it - I just changed the user permissions of the files and folder to www-data: sudo chown -R www-data:www-data /var/www/<domain name>
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655887377.70/warc/CC-MAIN-20200705152852-20200705182852-00534.warc.gz
CC-MAIN-2020-29
360
4
https://www.pokecommunity.com/converse.php?u=53059&u2=134386
code
You begin with choosing a graphics API. I suggest OpenGL 3.x, as it is cross-platform and has really great support. Next, you have to learn matrix algebra. Then you have to use matrix algebra to calculate the 3D projection. Once you have that set up, it's easy. You can use the GLEW library to gain access to OpenGL functions. For the voxels, you can use a simple 3D array to store the data and use a noise generation algorithm to generate their points. After your 3D projection is ready, you can just pass the coordinates directly to the graphics card. If you don't understand what I'm saying or don't know how to apply it, this is proof that it's hard to make a game in C++ if you don't know 100% what you're doing. It's one of the hardest languages. Concepts are much lower level. Do you know anything about voxel cube rendering? Because currently I know nothing about it. College is making us do Tetris. I found a tutorial online, but it's not descriptive enough for a newbie at C++. lol you feel old? I said your birthday was the day AFTER MINE. So I'm older. :p And GM Studio doesn't limit much. You can use any function, even the 3D ones. It only limits the amount of objects and sprites and such you can put in, but the limit is around 15 for each, so it's not bad. You're going to miss out when they add shaders, ogg and a full compiler in a few months!
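The "3D array plus noise" idea from the conversation can be sketched in a few lines of Python; a real implementation would be C++ with OpenGL and a proper noise function (Perlin/simplex) rather than this toy sine-based stand-in:

```python
import math

W, H, D = 8, 8, 8  # toy chunk dimensions

def toy_noise(x, z):
    """Toy height function standing in for real Perlin/simplex noise; in [0, 1]."""
    return (math.sin(x * 0.7) + math.cos(z * 0.9) + 2) / 4

# voxels[x][y][z] is True where the column's noise-derived height covers that cell.
voxels = [[[toy_noise(x, z) * H > y for z in range(D)]
           for y in range(H)]
          for x in range(W)]

solid = sum(v for plane in voxels for row in plane for v in row)
print(f"{solid} solid voxels out of {W * H * D}")
```

Only the solid cells' coordinates would then be turned into cube faces and sent to the GPU.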
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281151.11/warc/CC-MAIN-20170116095121-00554-ip-10-171-10-70.ec2.internal.warc.gz
CC-MAIN-2017-04
1,367
8
https://support.shapediver.com/hc/en-us/articles/360015413211-Component-Geometry-Import
code
Description of the component The ShapeDiverGeometryImport component is a way to let online users upload a CAD file that can be used as an input to the Grasshopper definition. There are two ways to import geometry using this component: - By giving a URL to the URI input parameter (possible locally in Grasshopper and in the online viewer) - By double-clicking on the button generated by the component, after the file is uploaded to the platform (possible only in the online viewer). The button opens a dialog allowing online users to upload their local files. In Grasshopper, the component can only be tested using option 1, both with public online URLs and with files stored locally (using the path of the file on the local machine). The files must be hosted in a publicly accessible online location that allows downloading. Read more about how to store files online and use them in the ShapeDiver viewer. The component outputs a list of geometry objects extracted from the imported file. Double-clicking on this component opens a dialog box with the options to set a maximum file size for the parameter and to enable different file types. Supported file formats At the moment, the ShapeDiver plugin can import two file types: - OBJ (MIME type: application/wavefront-obj) - 3D meshes - DXF (MIME type: application/dxf) - 2D drawings For the DXF format, the following entities are supported (other entities will be ignored): Let us know through the forum if you need to import more geometry formats. Example 1: mesh processing One possible application for importing geometry is to offer a web applet that lets users upload their own files and run them through an online algorithm, therefore providing an Algorithm as a Service. Consider below a basic example definition that lets users upload meshes with a number of holes and simply fills all the holes to output a watertight mesh.
Once online, this definition includes a text box for users to link external CAD files and see the algorithm's results in the 3D viewer. The final definition also includes an export component for downloading the closed output mesh. Example 2: upload floor plans A great use case in the field of architectural product configurators is to allow users to upload floor plans (in the DXF 2D file format, for example) as an input for architectural products. - Component name: ShapediverGeometryImport - Default Nickname: SDGeometryImport
- URI (U): The default URI to import the image from if no file is uploaded.
- Object (O): The imported geometry object(s).
- Material (M): The imported material definition. Not implemented yet.
- Double-click on the component to define which file types are accepted by the component and a maximum size for the uploaded files.
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320299894.32/warc/CC-MAIN-20220129002459-20220129032459-00551.warc.gz
CC-MAIN-2022-05
2,743
25
https://www.avrfreaks.net/forum/volume-pricing-pcbs-100k-units
code
Hi - does anybody have a good feel for how volume pricing of PCBs works? Is it mostly based on technology + board area? i.e., would 2 million of a 2 in^2 PCB cost the same as 1 million of a 4 in^2 PCB? Do you have any suggestions for a high-volume fab? Also, any recommendations for how to give clients budgetary quotes for high-volume PCBs? Any online calculators?
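The area-based intuition in the question can be written down as a toy model; real fab quotes also depend on layer count, finish, panel utilization, and NRE/tooling, so the rate and setup numbers below are purely illustrative:

```python
def toy_board_cost(quantity, area_in2, rate_per_in2=0.01, setup=5000.0):
    """Toy volume-pricing model: fixed setup (NRE) cost plus cost per unit area.

    rate_per_in2 and setup are made-up placeholder numbers, not real fab rates.
    """
    return setup + quantity * area_in2 * rate_per_in2

# Same total board area -> same total price under this pure area model.
a = toy_board_cost(2_000_000, 2.0)
b = toy_board_cost(1_000_000, 4.0)
print(a, b)
```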
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107880519.12/warc/CC-MAIN-20201023014545-20201023044545-00579.warc.gz
CC-MAIN-2020-45
354
3
https://superuser.com/questions/648503/is-high-cpu-temperature-directly-related-to-a-blue-screen
code
First of all I'd like to note that this is not about solving my issue at hand per se; the PC in question will be brought to someone to fix it. It is more a question out of curiosity. So here is the thing, the PC has an i7-920 CPU. It has been working fine for the past years. However, for the past few days the CPU fan has not been spinning (I suspect it is broken), and at the same time I'm getting bluescreens when the CPU temperature reaches above 85 degrees celsius. I am wondering if there is a direct relation between the CPU temperature and Windows blue screening? All I know is that my BIOS is not controlling this; the i7-920 shuts itself down at 105 degrees celsius. So why exactly am I getting a Blue Screen? Any explanation would be highly appreciated, just out of curiosity. P.S. Something that does concern me: several years ago I ran Prime95 for hours on this PC and the temperature would be at a steady 95C, with the fan running obviously. Yet now at 85C it is already blue screening.
s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987834649.58/warc/CC-MAIN-20191023150047-20191023173547-00059.warc.gz
CC-MAIN-2019-43
1,040
6
http://www.wyzant.com/West_Lawn_Chicago_IL_Java_tutors.aspx
code
Oak Park, IL 60301 Senior Software Engineer ...I hold a Master's degree in Computer Science and Statistics. Over the years, I have developed software using various technologies including C, C++, Java, Perl, Python, HTML, JavaScript, jQuery, AngularJS, Oracle, SQL Server, Sybase, MySQL, Informatica, Unix (Solaris)...
s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00517-ip-10-147-4-33.ec2.internal.warc.gz
CC-MAIN-2014-15
319
6
https://pypi.org/project/versjon/
code
Tool for generating a json file with the name and url of versions. What it is: - A tool for linking multiple versions of your project's Sphinx documentation, without the need for special services such as readthedocs.org. - Useful if you build and host your documentation as a static site. How it works: - versjon works by injecting some basic HTML into the generated documentation. Install the versjon tool using pip: python -m pip install versjon Building the docs Build all the different versions of your documentation into a common directory. For example, generating all the docs in the site directory: git checkout 2.0.0 sphinx-build... -D version=2.0.0 ... site/build_2.0.0 ... git checkout 5.1.1 sphinx-build... -D version=5.1.1 ... site/build_5.1.1 versjon will use whatever version is specified in the Sphinx configuration: https://www.sphinx-doc.org/en/master/usage/configuration.html If you have specified the version number in conf.py, you can omit the -D version option to sphinx-build. Run versjon in the common directory - and you are done. By default versjon will generate an index.html file with a redirect to the latest stable version, or if no semver version exists, the first branch, presumably the master. You can disable this behavior with the --no-index option. By default versjon will generate a folder in the documentation root called stable. This folder will contain an index.html with a redirect to the latest stable version. You can disable this behavior with the --no-stable-index option. Built-in templates (injection) versjon ships with a couple of built-in templates that get injected into the generated HTML: - head.html: This template gets injected into the <head> tag of the generated HTML pages. In this template you have access to the following variables: general, build, page. - header.html: This template is injected at the beginning of the <body> tag. In this template you have access to the following variables: general, build, page.
- footer.html: This template is injected at the end of the <body> tag. In this template you have access to the following variables: general, build, page. - index.html: This template is used to generate an index.html in the root folder. In this template you have access to the following variables: general, page. - stable_index.html: This template is used to generate an index.html in the stable folder. In this template you have access to the following variables: general, page. You can provide your own template for generating the version selector etc. The easiest way is probably to copy one of the default HTML templates, e.g., src/templates/footer.html, and adapt it. If you want to "inject" a custom footer, create a file called footer.html and put it somewhere in your project, e.g., mydocs/footer.html, and invoke versjon with the --user_templates argument, e.g.,: versjon will prioritize finding templates in the user's path first. If none is found, it will fall back to the built-in templates. If you want to disable a built-in template, simply create an empty file with the same name as the template you want to disable, e.g., header.html; if the template is empty, no content will be injected. In the templates you can access the information gathered by versjon. Based on this you can generate the needed HTML. The following lists the various variables. - semver: A list of version dicts, each with a name which is a valid semantic version number, and a list of available files in this version. - other: A list of version dicts, each with a name which is not a semantic version number, and a list of available files in this version. Typically the other list will contain branches. - stable: If we have any semantic version releases, the stable version will be the newest release in the semver list. - docs_path: Dictionary mapping versions to the build folder for a version relative to the documentation root.
- current: The current version name. - is_semver: True if the current version follows semantic versioning. - page_root: Relative path to the documentation root from a given HTML page. Concatenating the page_root with a path in docs_path will give a valid relative link from one HTML page to the root folder of a specific version.
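The "latest stable" selection described earlier (newest release among the semver names, ignoring branches) can be sketched like this; versjon's actual implementation may differ:

```python
import re

# X.Y.Z release names; anything else (e.g. "master") is treated as a branch.
SEMVER = re.compile(r"^(\d+)\.(\d+)\.(\d+)$")

def latest_stable(names):
    """Return the highest X.Y.Z name, or None if nothing is semver."""
    releases = [n for n in names if SEMVER.match(n)]
    if not releases:
        return None
    return max(releases, key=lambda n: tuple(map(int, n.split("."))))

print(latest_stable(["2.0.0", "5.1.1", "master"]))
```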
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104432674.76/warc/CC-MAIN-20220704141714-20220704171714-00499.warc.gz
CC-MAIN-2022-27
4,337
38
https://play.google.com/store/apps/details?id=com.pomoteam.app
code
The first and most complete Pomodoro application for working as a team. Here, you can create a team and know when your friends are free to join it. The best part is that the app works 100% in real time. That is, you will know exactly when things are happening. In addition, you can also create reminders and use features in a shared way.
s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267164469.99/warc/CC-MAIN-20180926081614-20180926102014-00461.warc.gz
CC-MAIN-2018-39
343
4
https://www.dokuwiki.org/plugins?plugintag=file
code
Plugins provide a system of extending DokuWiki's features without the need to hack the original code (and to do so again on each update). Below is a list of ready-to-use plugins created by DokuWiki users. The installation can be done automatically by searching for and installing the plugin via the extension manager. A plugin can also be installed manually by putting it into its own folder under lib/plugins/. See the detailed plugin installation instructions. Be sure to read about Plugin Security. If you would like to help translate the plugins into another language, please see the page Localization of plugins. Filter available plugins by type or by using the tag cloud. You can also search within the plugin namespace using the search box. Filter by type - Syntax plugins extend DokuWiki's basic syntax. - Action plugins replace or extend DokuWiki's core functionality - Admin plugins provide extra administration tools - Helper plugins provide functionality shared by other plugins - Render plugins add new export modes or replace the standard XHTML renderer - Remote plugins add methods to the RemoteAPI accessible via web services - Auth plugins add authentication modules - CLI plugins add commands to use at the Command Line Interface Filter by tag Tagged with 'file' (16) filelist Plugin Download Provides a syntax for adding linked and sorted lists of files as selected by wildcard based glob patterns to a wiki page Display File Plugin Download The Display File Plugin displays the content of a specified file on the local system using a displayfile element filelisting Plugin Download Show a searchable and sortable list of media files in the current namespace and subnamespaces. With the dropfiles plugin it allows drag-and-drop upload of media files. openas Plugin Download File utility for moving/renaming files, saving copies, creating new pages from templates PGList Plugin Download List pages or directories inside the current namespace or a selected namespace.
MediaList Plugin Download Show a list of media files (images/archives...) referenced in a given page or stored in a given namespace Source Plugin Download Allows you to include all or part of the contents of another file, with syntax highlighting, into the current page doctree2filelist Plugin Download doctree2filelist provides a wizard to import document trees into DokuWiki PreserveFilenames Plugin Download Preserves the original name of the uploaded media file (letter cases, symbols, etc.) A plugin that allows wiki pages to represent files, and to generate files based on some recipe (rules). This empowers a DokuWiki to become a collaboration tool for project management and content creation. Popularity values are based on data gathered through the popularity plugin – please help to increase accuracy by reporting your data with this plugin. If your needs aren't covered by the existing plugins above, please have a look at our pages on how to create and publish a plugin. Reporting Bugs and Feature Wishes Two short notes: - Please use the issue tracker of the plugin - Provide enough information to reproduce your case Please refer to How to report bugs and request new features in plugins for more info about this topic. Ideas for New Plugins If you are in need of a special feature in DokuWiki but don't have the skills or resources to create your own plugin, you might want to suggest the feature for consideration by the community. To ask for the creation of a new plugin or to discuss plugin ideas, please refer to the Plugin Wishlist Forum. Recent Wishes in the forum: - What are the options for a collapsible tree view for data (2023-12-04 13:00) - Code highlighting with vector graphics elements (SVG?) (2023-12-03 17:07) - country flags icone (2023-11-27 06:12) - File processing workflow (2023-10-25 14:07) - Mini-Banner/Tile Display (2023-10-18 12:43) - Collapsible list? (2023-09-15 03:18) - idea: Markov Chain to predict input?
(2023-09-14 12:51) - how to show a window containing a summary and image with mouseover (2023-09-11 07:40) Further, some closed feature requests, which we won't implement in DokuWiki core, are interesting ideas for plugins: DokuWiki plugin ideas at our GitHub issue tracker. - display section numbers in the page and table of contents by KamarajuKusumanchi (2021-09-08 17:20) - Feature Request: Having possibility to show unused Syntax plugins by fstorck (2020-06-05 09:38) - Integrate MediaManager (lite) into Pageditor by igittigitt (2018-05-15 16:41) - Provide oEmbed tags/endpoint by chrisblech (2016-11-12 16:13) - Spam protection measures by Traumflug (2015-12-04 19:46) - Possibility to use Tinypng when upload image ? by LeDistordu (2015-06-15 11:47)
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679102697.89/warc/CC-MAIN-20231210221943-20231211011943-00498.warc.gz
CC-MAIN-2023-50
4,687
63
https://www.siggraph.org/sites/default/files/SIGGRPAH-Asia-2016-Opentoc_1.html
code
We present an approach for adding directed gaze movements to characters animated using full-body motion capture. Our approach provides a comprehensive authoring solution that automatically infers plausible directed gaze from the captured body motion, provides convenient controls for manual editing, and adds synthetic gaze movements onto the original motion. The foundation of the approach is an abstract representation of gaze behavior as a sequence of gaze shifts and fixations toward targets in the scene. We present methods for automatic inference of this representation by analyzing the head and torso kinematics and scene features. We introduce tools for convenient editing of the gaze sequence and target layout that allow an animator to adjust the gaze behavior without worrying about the details of pose and timing. A synthesis component translates the gaze sequence into coordinated movements of the eyes, head, and torso, and blends these with the original body motion. We evaluate the effectiveness of our inference methods, the efficiency of the authoring process, and the quality of the resulting animation. Marker-based and marker-less optical skeletal motion-capture methods use an outside-in arrangement of cameras placed around a scene, with viewpoints converging on the center. They often create discomfort with marker suits, and their recording volume is severely restricted and often constrained to indoor scenes with controlled backgrounds. Alternative suit-based systems use several inertial measurement units or an exoskeleton to capture motion with an inside-in setup, i.e. without external sensors. This makes capture independent of a confined volume, but requires substantial, often constraining, and hard to set up body instrumentation. 
Therefore, we propose a new method for real-time, marker-less, and egocentric motion capture: estimating the full-body skeleton pose from a lightweight stereo pair of fisheye cameras attached to a helmet or virtual reality headset - an optical inside-in method, so to speak. This allows full-body motion capture in general indoor and outdoor scenes, including crowded scenes with many people nearby, which enables reconstruction in larger-scale activities. Our approach combines the strength of a new generative pose estimation framework for fisheye views with a ConvNet-based body-part detector trained on a large new dataset. It is particularly useful in virtual reality to freely roam and interact, while seeing the fully motion-captured virtual body. Inverse dynamics is an important and challenging problem in human motion modeling, synthesis and simulation, as well as in robotics and biomechanics. Previous solutions to inverse dynamics are often noisy and ambiguous particularly when double stances occur. In this paper, we present a novel inverse dynamics method that accurately reconstructs biomechanically valid contact information, including center of pressure, contact forces, torsional torques and internal joint torques from input kinematic human motion data. Our key idea is to apply statistical modeling techniques to a set of preprocessed human kinematic and dynamic motion data captured by a combination of an optical motion capture system, pressure insoles and force plates. We formulate the data-driven inverse dynamics problem in a maximum a posteriori (MAP) framework by estimating the most likely contact information and internal joint torques that are consistent with input kinematic motion data. We construct a low-dimensional data-driven prior model for contact information and internal joint torques to reduce ambiguity of inverse dynamics for human motion. 
We demonstrate the accuracy of our method on a wide variety of human movements including walking, jumping, running, turning and hopping, and achieve state-of-the-art accuracy in our comparison against alternative methods. In addition, we discuss how to extend the data-driven inverse dynamics framework to motion editing, filtering and motion control. Microscopic crowd simulators rely on models of local interaction (e.g. collision avoidance) to synthesize the individual motion of each virtual agent. The quality of the resulting motions heavily depends on this component, which has significantly improved in the past few years. Recent advances have been in particular due to the introduction of a short-horizon motion prediction strategy that enables anticipated motion adaptation during local interactions among agents. However, the simplicity of prediction techniques of existing models somewhat limits their domain of validity. In this paper, our key objective is to significantly improve the quality of simulations by expanding the applicable range of motion predictions. To this end, we present a novel local interaction algorithm with a new context-aware, probabilistic motion prediction model. By context-aware, we mean that this approach allows crowd simulators to account for many factors, such as the influence of environment layouts or in-progress interactions among agents, and has the ability to simultaneously maintain several possible alternate scenarios for future motions and to cope with uncertainties in sensing and in other agents' motions. Technically, this model introduces "collision probability fields" between agents, efficiently computed through the cumulative application of Warp Operators on a source Intrinsic Field. We demonstrate how this model significantly improves the quality of simulated motions in challenging scenarios, such as dense crowds and complex environments.
Artists routinely use gesture drawings to communicate ideated character poses for storyboarding and other digital media. During subsequent posing of the 3D character models, they use these drawings as a reference, and perform the posing itself using 3D interfaces which require time and expert 3D knowledge to operate. We propose the first method for automatically posing 3D characters directly using gesture drawings as an input, sidestepping the manual 3D posing step. We observe that artists are skilled at quickly and effectively conveying poses using such drawings, and design them to facilitate a single perceptually consistent pose interpretation by viewers. Our algorithm leverages perceptual cues to parse the drawings and recover the artist-intended poses. It takes as input a vector-format rough gesture drawing and a rigged 3D character model, and plausibly poses the character to conform to the depicted pose. No other input is required. Our contribution is two-fold: we first analyze and formulate the pose cues encoded in gesture drawings; we then employ these cues to compute a plausible image space projection of the conveyed pose and to imbue it with depth. Our framework is designed to robustly overcome errors and inaccuracies frequent in typical gesture drawings. We exhibit a wide variety of character models posed by our method created from gesture drawings of complex poses, including poses with occlusions and foreshortening. We validate our approach via result comparisons to artist-posed models generated from the same reference drawings, via studies that confirm that our results agree with viewer perception, and via comparison to algorithmic alternatives. Volumetric micro-appearance models have provided remarkably high-quality renderings, but are highly data intensive and usually require tens of gigabytes in storage. 
When an object is viewed from a distance, the highest level of detail offered by these models is usually unnecessary, but traditional linear downsampling weakens the object's intrinsic shadowing structures and can yield poor accuracy. We introduce a joint optimization of single-scattering albedos and phase functions to accurately downsample heterogeneous and anisotropic media. Our method is built upon scaled phase functions, a new representation combining albedos and (standard) phase functions. We also show that modularity can be exploited to greatly reduce the amortized optimization overhead by allowing multiple synthesized models to share one set of down-sampled parameters. Our optimized parameters generalize well to novel lighting and viewing configurations, and the resulting data sets offer several orders of magnitude storage savings. Several scalable many-light rendering methods have been proposed recently for the efficient computation of global illumination. However, gathering contributions of virtual lights in participating media remains an inefficient and time-consuming task. In this paper, we present a novel sparse sampling and reconstruction method to accelerate the gathering step of the many-light rendering for participating media. Our technique exploits the observation that the scattered lighting is usually locally coherent and of low rank even in heterogeneous media. In particular, we first introduce a matrix formation with light segments as columns and eye ray segments as rows, and formulate the gathering step into a matrix sampling and reconstruction problem. We then propose an adaptive matrix column sampling and completion algorithm to efficiently reconstruct the matrix by only sampling a small number of elements. Experimental results show that our approach greatly improves the performance, and obtains up to one order of magnitude speedup compared with other state-of-the-art methods of many-light rendering for participating media. 
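The low-rank column-sampling idea behind such a reconstruction can be sketched on a synthetic matrix: fully sample a few columns to recover the column space, then complete every column from a handful of row samples. The dimensions, sampling counts, and use of the ground-truth rank below are illustrative simplifications, not the paper's adaptive algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "gathering" matrix: rows = eye-ray segments, columns =
# virtual light segments. Locally coherent scattered lighting makes
# this matrix (approximately) low rank; here it is exactly rank 5.
n_rows, n_cols, rank = 200, 300, 5
M = rng.random((n_rows, rank)) @ rng.random((rank, n_cols))

# 1) Fully sample a small set of columns to span the column space.
col_idx = rng.choice(n_cols, size=20, replace=False)
U, _, _ = np.linalg.svd(M[:, col_idx], full_matrices=False)
basis = U[:, :rank]           # uses the known rank (a sketch shortcut)

# 2) Sample only a few row entries per column and solve a tiny
#    least-squares fit against the basis to complete each column.
row_idx = rng.choice(n_rows, size=15, replace=False)
coeffs, *_ = np.linalg.lstsq(basis[row_idx], M[row_idx, :], rcond=None)
M_rec = basis @ coeffs

print(np.max(np.abs(M_rec - M)))  # near zero for an exactly low-rank M
```

Only 20 full columns plus 15 entries per remaining column are touched, yet the full 200x300 matrix is recovered, which is the source of the speedup when each entry is an expensive light-gathering evaluation.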
We address the challenge of efficiently rendering massive assemblies of grains within a forward path-tracing framework. Previous approaches exist for accelerating high-order scattering for a limited, and static, set of granular materials, often requiring scene-dependent precomputation. We significantly expand the admissible regime of granular materials by considering heterogeneous and dynamic granular mixtures with spatially varying grain concentrations, pack rates, and sizes. Our method supports both procedurally generated grain assemblies and dynamic assemblies authored in off-the-shelf particle simulation tools. The key to our speedup lies in two complementary aggregate scattering approximations which we introduce to jointly accelerate construction of short and long light paths. For low-order scattering, we accelerate path construction using novel grain scattering distribution functions (GSDF) which aggregate intra-grain light transport while retaining important grain-level structure. For high-order scattering, we extend prior work on shell transport functions (STF) to support dynamic, heterogeneous mixtures of grains with varying sizes. We do this without a scene-dependent precomputation and show how this can also be used to accelerate light transport in arbitrary continuous heterogeneous media. Our multi-scale rendering automatically minimizes the usage of explicit path tracing to only the first grain along a light path, or can avoid it completely, when appropriate, by switching to our aggregate transport approximations. We demonstrate our technique on animated scenes containing heterogeneous mixtures of various types of grains that could not previously be rendered efficiently. We also compare to previous work on a simpler class of granular assemblies, reporting significant computation savings, often yielding higher accuracy results. We explore the theory of integration with control variates in the context of rendering. 
Our goal is to optimally combine multiple estimators using their covariances. We focus on two applications, re-rendering and gradient-domain rendering, where we exploit coherence between temporally and spatially adjacent pixels. We propose an image-space (iterative) reconstruction scheme that employs control variates to reduce variance. We show that recent works on scene editing and gradient-domain rendering can be directly formulated as control-variate estimators, despite using seemingly different approaches. In particular, we demonstrate the conceptual equivalence of screened Poisson image reconstruction and our iterative reconstruction scheme. Our composite estimators offer practical and simple solutions that improve upon the current state of the art for the two investigated applications. Wood is an important decorative material prized for its unique appearance. It is commonly rendered using artistically authored 2D color and bump textures, which reproduces color patterns on flat surfaces well. But the dramatic anisotropic specular figure caused by wood fibers, common in curly maple and other species, is harder to achieve. While suitable BRDF models exist, the texture parameter maps for these wood BRDFs are difficult to author---good results have been shown with elaborate measurements for small flat samples, but these models are not much used in practice. Furthermore, mapping 2D image textures onto 3D objects leads to distortion and inconsistencies. Procedural volumetric textures solve these geometric problems, but existing methods produce much lower quality than image textures. This paper aims to bring the best of all these techniques together: we present a comprehensive volumetric simulation of wood appearance, including growth rings, color variation, pores, rays, and growth distortions. The fiber directions required for anisotropic specular figure follow naturally from the distortions. 
Our results rival the quality of textures based on photographs, but with the consistency and convenience of a volumetric model. Our model is modular, with components that are intuitive to control, fast to compute, and require minimal storage. We introduce a co-analysis technique designed for correspondence inference within large shape collections. Such collections are naturally rich in variation, adding ambiguity to the notoriously difficult problem of correspondence computation. We leverage the robustness of correspondences between similar shapes to address the difficulties associated with this problem. In our approach, pairs of similar shapes are extracted from the collection, analyzed and matched in an efficient and reliable manner, culminating in the construction of a network of correspondences that connects the entire collection. The correspondence between any pair of shapes then amounts to a simple propagation along the minimax path between the two shapes in the network. At the heart of our approach is the introduction of a robust, structure-oriented shape matching method. Leveraging the idea of projective analysis, we partition 2D projections of a shape to obtain a set of 1D ordered regions, which are both simple and efficient to match. We lift the matched projections back to the 3D domain to obtain a pairwise shape correspondence. The emphasis given to structural compatibility is a central tool in estimating the reliability and completeness of a computed correspondence, uncovering any non-negligible semantic discrepancies that may exist between shapes. These detected differences are a deciding factor in the establishment of a network aiming to capture local similarities. We demonstrate that the combination of the presented observations into a co-analysis method allows us to establish reliable correspondences among shapes within large collections. 
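Propagation along a minimax path, the path minimizing the worst edge weight, can be computed with a small variant of Dijkstra's algorithm. A sketch on a toy correspondence network, with edge weights standing in for matching unreliability (the graph is invented, not actual shape data):

```python
import heapq

def minimax_path_cost(graph, src, dst):
    """Cost of the path from src to dst that minimizes the maximum edge
    weight encountered. graph: {node: [(neighbor, weight), ...]}."""
    best = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        cost, u = heapq.heappop(heap)
        if u == dst:
            return cost
        if cost > best.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in graph[u]:
            c = max(cost, w)              # path cost = worst edge so far
            if c < best.get(v, float("inf")):
                best[v] = c
                heapq.heappush(heap, (c, v))
    return float("inf")

# Four shapes; weights are pairwise matching unreliabilities.
g = {"A": [("B", 0.9), ("C", 0.2)],
     "B": [("A", 0.9), ("D", 0.1)],
     "C": [("A", 0.2), ("D", 0.3)],
     "D": [("B", 0.1), ("C", 0.3)]}
print(minimax_path_cost(g, "A", "D"))  # 0.3 via A-C-D, not 0.9 via A-B-D
```

Correspondences then flow from A to D through the two reliable matches A-C and C-D, avoiding the unreliable direct-looking route through B.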
We present a technique for parsing widely used furniture assembly instructions, and reconstructing the 3D models of furniture components and their dynamic assembly process. Our technique takes as input a multi-step assembly instruction in a vector graphic format and first groups the vector graphic primitives into semantic elements representing individual furniture parts, mechanical connectors (e.g., screws, bolts and hinges), arrows, visual highlights, and numbers. To reconstruct the dynamic assembly process depicted over multiple steps, our system identifies previously built 3D furniture components when parsing a new step, and uses them to address the challenge of occlusions while generating new 3D components incrementally. With a wide range of examples covering a variety of furniture types, we demonstrate the use of our system to animate the 3D furniture assembly process and, beyond that, semantic-aware furniture editing as well as the fabrication of personalized furniture. We introduce a framework for action-driven evolution of 3D indoor scenes, where the goal is to simulate how scenes are altered by human actions, and specifically, by object placements necessitated by the actions. To this end, we develop an action model with each type of action combining information about one or more human poses, one or more object categories, and spatial configurations of objects belonging to these categories which summarize the object-object and object-human relations for the action. Importantly, all these pieces of information are learned from annotated photos. Correlations between the learned actions are analyzed to guide the construction of an action graph. Starting with an initial 3D scene, we probabilistically sample a sequence of actions from the action graph to drive progressive scene evolution. Each action triggers appropriate object placements, based on object co-occurrences and spatial configurations learned for the action model. 
We show results of our scene evolution that lead to realistic and messy 3D scenes, as well as quantitative evaluations by user studies which compare our method to manual scene creation and state-of-the-art, data-driven methods, in terms of scene plausibility and naturalness. Visualizing changes to indoor scenes is important for many applications. When looking for a new place to live, we want to see how the interior looks not with the current inhabitant's belongings, but with our own furniture. Before purchasing a new sofa, we want to visualize how it would look in our living room. In this paper, we present a system that takes an RGBD scan of an indoor scene and produces a scene model of the empty room, including light emitters, materials, and the geometry of the non-cluttered room. Our system enables realistic rendering not only of the empty room under the original lighting conditions, but also with various scene edits, including adding furniture, changing the material properties of the walls, and relighting. These types of scene edits enable many mixed reality applications in areas such as real estate, furniture retail, and interior design. Our system contains two novel technical contributions: a 3D radiometric calibration process that recovers the appearance of the scene in high dynamic range, and a global-illumination-aware inverse rendering framework that simultaneously recovers reflectance properties of scene surfaces and lighting properties for several light source types, including generalized point and line lights. Recent studies in vision science have shown that blue light in a certain frequency band affects human circadian rhythm and impairs our health. Although applying a light blocker to an image display can block the harmful blue light, it inevitably makes an image look like an aged photo. In this paper, we show that it is possible to reduce harmful blue light while preserving the blue appearance of an image. 
Moreover, we optimize the spectral transmittance profile of blue light blocker based on psychophysical data and develop a color compensation algorithm to minimize color distortion. A prototype using notch filters is built as a proof of concept. Binocular disparity is the main depth cue that makes stereoscopic images appear 3D. However, in many scenarios, the range of depth that can be reproduced by this cue is greatly limited and typically fixed due to constraints imposed by displays. For example, due to the low angular resolution of current automultiscopic screens, they can only reproduce a shallow depth range. In this work, we study the motion parallax cue, which is a relatively strong depth cue, and can be freely reproduced even on a 2D screen without any limits. We exploit the fact that in many practical scenarios, motion parallax provides sufficiently strong depth information that the presence of binocular depth cues can be reduced through aggressive disparity compression. To assess the strength of the effect we conduct psycho-visual experiments that measure the influence of motion parallax on depth perception and relate it to the depth resulting from binocular disparity. Based on the measurements, we propose a joint disparity-parallax computational model that predicts apparent depth resulting from both cues. We demonstrate how this model can be applied in the context of stereo and multiscopic image processing, and propose new disparity manipulation techniques, which first quantify depth obtained from motion parallax, and then adjust binocular disparity information accordingly. This allows us to manipulate the disparity signal according to the strength of motion parallax to improve the overall depth reproduction. This technique is validated in additional experiments. Large 3D model repositories of common objects are now ubiquitous and are increasingly being used in computer graphics and computer vision for both analysis and synthesis tasks. 
However, images of objects in the real world have a richness of appearance that these repositories do not capture, largely because most existing 3D models are untextured. In this work we develop an automated pipeline capable of transporting texture information from images of real objects to 3D models of similar objects. This is a challenging problem, as an object's texture as seen in a photograph is distorted by many factors, including pose, geometry, and illumination. These geometric and photometric distortions must be undone in order to transfer the pure underlying texture to a new object --- the 3D model. Instead of using problematic dense correspondences, we factorize the problem into the reconstruction of a set of base textures (materials) and an illumination model for the object in the image. By exploiting the geometry of the similar 3D model, we reconstruct certain reliable texture regions and correct for the illumination, from which a full texture map can be recovered and applied to the model. Our method allows for large-scale unsupervised production of richly textured 3D models directly from image data, providing high quality virtual objects for 3D scene design or photo editing applications, as well as a wealth of data for training machine learning algorithms for various inference tasks in graphics and vision. We analyze the dense matching problem for Internet scene images based on the fact that commonly only part of images can be matched due to the variation of view angle, motion, objects, etc. We thus propose regional foremost matching to reject outlier matching points while still producing dense high-quality correspondence in the remaining foremost regions. Our system initializes sparse correspondence, propagates matching with model fitting and optimization, and detects foremost regions robustly. 
We apply our method to several applications, including time-lapse sequence generation, Internet photo composition, automatic image morphing, and automatic rephotography. Foveated rendering synthesizes images with progressively less detail outside the eye fixation region, potentially unlocking significant speedups for wide field-of-view displays, such as head mounted displays, where target framerate and resolution are increasing faster than the performance of traditional real-time renderers. To study and improve potential gains, we designed a foveated rendering user study to evaluate the perceptual abilities of human peripheral vision when viewing today's displays. We determined that filtering peripheral regions reduces contrast, inducing a sense of tunnel vision. When applying a postprocess contrast enhancement, subjects tolerated up to 2× larger blur radius before detecting differences from a non-foveated ground truth. After verifying these insights on both desktop and head mounted displays augmented with high-speed gaze-tracking, we designed a perceptual target image to strive for when engineering a production foveated renderer. Given our perceptual target, we designed a practical foveated rendering system that reduces the number of shades by up to 70% and allows coarsened shading up to 30° closer to the fovea than Guenter et al. without introducing perceivable aliasing or blur. We filter both pre- and post-shading to address aliasing from undersampling in the periphery, introduce a novel multiresolution- and saccade-aware temporal antialiasing algorithm, and use contrast enhancement to help recover peripheral details that are resolvable by our eye but degraded by filtering. We validate our system by performing another user study. Frequency analysis shows our system closely matches our perceptual target. Measurements of temporal stability show we obtain quality similar to temporally filtered non-foveated renderings. 
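The basic bookkeeping of a multi-zone foveated renderer, mapping per-pixel eccentricity from the gaze point to a coarsened shading rate, can be sketched as follows. The zone thresholds and pixels-per-degree value are illustrative placeholders, not the measured values from the study above:

```python
import numpy as np

def shading_rate_map(w, h, gaze, ppd=15.0, t1=7.5, t2=30.0):
    """Per-pixel shading coarsening factor for a simple three-zone
    foveated renderer: full rate within t1 degrees of eccentricity,
    half rate up to t2 degrees, quarter rate beyond.
    ppd = display pixels per visual degree (display-dependent)."""
    ys, xs = np.mgrid[0:h, 0:w]
    ecc_deg = np.hypot(xs - gaze[0], ys - gaze[1]) / ppd
    return np.where(ecc_deg < t1, 1, np.where(ecc_deg < t2, 2, 4))

rate = shading_rate_map(1920, 1080, gaze=(960, 540))
# A rate-r region shades 1/r^2 as many samples as full rate.
work = (1.0 / rate.astype(float) ** 2).mean()
print(f"shading work vs. non-foveated: {work:.2f}")
```

Most of the frame falls in the outer zone, so the average shading work drops to a small fraction of the non-foveated baseline; a production system would add the pre/post filtering and contrast enhancement described above to hide the coarsening.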
We introduce Bidirectional Sound Transport (BST), a new algorithm that simulates sound propagation by bidirectional path tracing using multiple importance sampling. Our approach can handle multiple sources in large virtual environments with complex occlusion, and can produce plausible acoustic effects at an interactive rate on a desktop PC. We introduce a new metric based on the signal-to-noise ratio (SNR) of the energy response and use this metric to evaluate the performance of ray-tracing-based acoustic simulation methods. Our formulation exploits temporal coherence in terms of using the resulting sample distribution of the previous frame to guide the sample distribution of the current one. We show that our sample redistribution algorithm converges and better balances between early and late reflections. We evaluate our approach on different benchmarks and demonstrate significant speedup over prior geometric acoustic algorithms. Crumpling a thin sheet produces a characteristic sound, comprised of distinct clicking sounds corresponding to buckling events. We propose a physically based algorithm that automatically synthesizes crumpling sounds for a given thin shell animation. The resulting sound is a superposition of individually synthesized clicking sounds corresponding to visually significant and insignificant buckling events. We identify visually significant buckling events on the dynamically evolving thin surface mesh, and instantiate visually insignificant buckling events via a stochastic model that seeks to mimic the power-law distribution of buckling energies observed in many materials. In either case, the synthesis of a buckling sound employs linear modal analysis of the deformed thin shell. Because different buckling events in general occur at different deformed configurations, the question arises whether the calculation of linear modes can be reused. 
We amortize the cost of the linear modal analysis by dynamically partitioning the mesh into nearly rigid pieces: the modal analysis of a rigidly moving piece is retained over time, and the modal analysis of the assembly is obtained via Component Mode Synthesis (CMS). We illustrate our approach through a series of examples and a perceptual user study, demonstrating the utility of the sound synthesis method in producing realistic sounds at practical computation times. Tangles are a form of structured pen-and-ink 2D art characterized by repeating, recursive patterns. We present a method to procedurally generate tangle drawings, seen as recursively split sets of arbitrary 2D polygons with holes, with anisotropic and non-stationary features. We formally model tangles with group grammars, an extension of set grammars, that explicitly handles the grouping of shapes necessary to represent tangle repetitions. We introduce a small set of expressive geometric and grouping operators, showing that they can respectively express complex tangles patterns and sub-pattern distributions, with relatively simple grammars. We also show how users can control tangle generation in an interactive and intuitive way. Throughout the paper, we show how group grammars can, in few tens of seconds, produce a wide variety of patterns that would take artists hours of tedious and time-consuming work. We then validated both the quality of the generated tangles and the efficiency of the control provided to the users with a user study, run with both expert and non-expert users. In this paper, we present the concept of operator graph scheduling for high performance procedural generation on the graphics processing unit (GPU). The operator graph forms an intermediate representation that describes all possible operations and objects that can arise during a specific procedural generation. 
While previous methods have focused on parallelizing a specific procedural approach, the operator graph is applicable to all procedural generation methods that can be described by a graph, such as L-systems, shape grammars, or stack based generation methods. Using the operator graph, we show that all partitions of the graph correspond to possible ways of scheduling a procedural generation on the GPU, including the scheduling strategies of previous work. As the space of possible partitions is very large, we describe three search heuristics, aiding an optimizer in finding the fastest valid schedule for any given operator graph. The best partitions found by our optimizer increase performance by 8 to 30x over the previous state of the art in GPU shape grammar and L-system generation. This paper presents an interactive design interface for three-dimensional free-form musical wind instruments. The sound of a wind instrument is governed by the acoustic resonance as a result of complicated interactions of sound waves and internal geometries of the instrument. Thus, creating an original free-form wind instrument by manual methods is a challenging problem. Our interface provides interactive sound simulation feedback as the user edits, allowing exploration of original wind instrument designs. Sound simulation of a 3D wind musical instrument is known to be computationally expensive. To overcome this problem, we first model the wind instrument as a passive resonator, where we ignore coupled oscillation excitation from the mouthpiece. Then we present a novel efficient method to estimate the resonance frequency based on the boundary element method by formulating the resonance problem as a minimum eigenvalue problem. Furthermore, we can efficiently compute an approximate resonance frequency using a new technique based on a generalized eigenvalue problem. 
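The eigenvalue view of resonance can be illustrated in one dimension: for an idealized open-open pipe, the fundamental corresponds to the minimum eigenvalue of the discretized Helmholtz operator -u'' = k²u with pressure nodes at both ends. The paper solves the analogous 3D problem with the boundary element method; this finite-difference stand-in only shows the formulation:

```python
import numpy as np

def fundamental_frequency(length_m, c=343.0, n=500):
    """Fundamental resonance (Hz) of an idealized open-open 1D pipe,
    via the minimum eigenvalue of the finite-difference Laplacian with
    Dirichlet (pressure-node) boundary conditions."""
    h = length_m / (n + 1)
    main = np.full(n, 2.0) / h**2
    off = np.full(n - 1, -1.0) / h**2
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    k2_min = np.linalg.eigvalsh(A)[0]      # minimum eigenvalue = k^2
    return c * np.sqrt(k2_min) / (2.0 * np.pi)

# Analytic answer for an open-open pipe: f = c / (2 L) = 343 Hz.
print(round(fundamental_frequency(0.5)))  # 343
```

Higher eigenvalues give the overtone series; the same minimum-eigenvalue structure is what the paper's BEM formulation extracts on arbitrary 3D instrument geometry.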
The designs can be fabricated using a 3D printer, thus we call the results "print-wind instruments" in association with woodwind instruments. We demonstrate our approach with examples of unconventional shapes performing familiar songs. Acquiring microscale reflectance and normals is useful for digital documentation and identification of real-world materials. However, its simultaneous acquisition has rarely been explored due to the difficulties of combining both sources of information at such small scale. In this paper, we capture both spatially-varying material appearance (diffuse, specular and roughness) and normals simultaneously at the microscale resolution. We design and build a microscopic light dome with 374 LED lights over the hemisphere, specifically tailored to the characteristics of microscopic imaging. This allows us to achieve the highest resolution for such combined information among current state-of-the-art acquisition systems. We thoroughly test and characterize our system, and provide microscopic appearance measurements of a wide range of common materials, as well as renderings of novel views to validate the applicability of our captured data. Additional applications such as bi-scale material editing from real-world samples are also demonstrated. Many different techniques for measuring material appearance have been proposed in the last few years. These have produced large public datasets, which have been used for accurate, data-driven appearance modeling. However, although these datasets have allowed us to reach an unprecedented level of realism in visual appearance, editing the captured data remains a challenge. In this paper, we present an intuitive control space for predictable editing of captured BRDF data, which allows for artistic creation of plausible novel material appearances, bypassing the difficulty of acquiring novel samples. We first synthesize novel materials, extending the existing MERL dataset up to 400 mathematically valid BRDFs. 
We then design a large-scale experiment, gathering 56,000 subjective ratings on the high-level perceptual attributes that best describe our extended dataset of materials. Using these ratings, we build and train networks of radial basis functions to act as functionals mapping the perceptual attributes to an underlying PCA-based representation of BRDFs. We show that our functionals are excellent predictors of the perceived attributes of appearance. Our control space enables many applications, including intuitive material editing of a wide range of visual properties, guidance for gamut mapping, analysis of the correlation between perceptual attributes, or novel appearance similarity metrics. Moreover, our methodology can be used to derive functionals applicable to classic analytic BRDF representations. We release our code and dataset publicly, in order to support and encourage further research in this direction. We present a novel integrated approach for estimating both spatially-varying surface reflectance and detailed geometry from a video of a rotating object under unknown static illumination. Key to our method is the decoupling of the recovery of normal and surface reflectance from the estimation of surface geometry. We define an apparent normal field with corresponding reflectance for each point (including those not on the object's surface) that best explain the observations. We observe that the object's surface goes through points where the apparent normal field and corresponding reflectance exhibit a high degree of consistency with the observations. However, estimating the apparent normal field requires knowledge of the unknown incident lighting. We therefore formulate the recovery of shape, surface reflectance, and incident lighting, as an iterative process that alternates between estimating shape and lighting, and simultaneously recovers surface reflectance at each step. 
To recover the shape, we first form an initial surface that passes through locations with consistent apparent temporal traces, followed by a refinement that maximizes the consistency of the surface normals with the underlying apparent normal field. To recover the lighting, we rely on appearance-from-motion using the recovered geometry from the previous step. We demonstrate our integrated framework on a variety of synthetic and real test cases exhibiting a wide variety of materials and shapes. We develop a method to acquire the BRDF of a homogeneous flat sample from only two images, taken by a near-field perspective camera, and lit by a directional light source. Our method uses the MERL BRDF database to determine the optimal set of light-view pairs for data-driven reflectance acquisition. We develop a mathematical framework to estimate error from a given set of measurements, including the use of multiple measurements in an image simultaneously, as needed for acquisition from near-field setups. The novel error metric is essential in the near-field case, where we show that using the condition-number alone performs poorly. We demonstrate practical near-field acquisition of BRDFs from only one or two input images. Our framework generalizes to configurations like a fixed camera setup, where we also develop a simple extension to spatially-varying BRDFs by clustering the materials. We present a novel method for capturing real-world, spatially-varying surface reflectance from a small number of object views (k). Our key observation is that a specific target's reflectance can be represented by a small number of custom basis materials (N) convexly blended by an even smaller number of non-zero weights at each point (n). 
Based on this sparse basis/sparser blend model, we develop an SVBRDF reconstruction algorithm that jointly solves for n, N, the basis BRDFs, and their spatial blend weights with an alternating iterative optimization, each step of which solves a linearly-constrained quadratic programming problem. We develop a numerical tool that lets us estimate the number of views required and analyze the effect of lighting and geometry on reconstruction quality. We validate our method with images rendered from synthetic BRDFs, and demonstrate convincing results on real objects of pre-scanned shape and lit by uncontrolled natural illumination, from very few or even a single input image. Portraits taken with direct flash look harsh and unflattering because the light source comes from a small set of angles very close to the camera. Advanced photographers address this problem by using bounce flash, a technique where the flash is directed towards other surfaces in the room, creating a larger, virtual light source that can be cast from different directions to provide better shading variation for 3D modeling. However, finding the right direction to point a bounce flash requires skill and careful consideration of the available surfaces and subject configuration. Inspired by the impact of automation for exposure, focus and flash metering, we automate control of the flash direction for bounce illumination. We first identify criteria for evaluating flash directions, based on established photography literature, and relate these criteria to the color and geometry of a scene. We augment a camera with servomotors to rotate the flash head, and additional sensors (a fisheye and 3D sensors) to gather information about potential bounce surfaces. We present a simple numerical optimization criterion that finds directions for the flash that consistently yield compelling illumination and demonstrate the effectiveness of our various criteria in common photographic configurations. 
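A toy version of scoring candidate bounce directions might look as follows. The criteria (prefer neutral, bright, nearby surfaces lighting the subject from a flattering angle) follow the spirit of the description above, but the scoring function, weights, and candidate data are all invented illustrations, not the paper's actual metric:

```python
import numpy as np

def score_direction(surface_albedo_rgb, dist_to_surface,
                    dist_surface_to_subject, incidence_deg):
    """Heuristic score for one candidate flash direction (hypothetical)."""
    a = np.asarray(surface_albedo_rgb, float)
    neutrality = 1.0 - (a.max() - a.min())       # penalize color casts
    brightness = a.mean()                        # brighter bounce = more light
    # Inverse-square falloff over the flash-to-surface-to-subject path.
    falloff = 1.0 / (dist_to_surface + dist_surface_to_subject) ** 2
    # Favor light arriving roughly 45 degrees off the camera axis.
    angle_pref = np.exp(-((incidence_deg - 45.0) / 30.0) ** 2)
    return neutrality * brightness * falloff * angle_pref

# (albedo RGB, flash-to-surface m, surface-to-subject m, incidence deg)
candidates = {
    "white ceiling": ([0.9, 0.9, 0.9], 2.0, 2.5, 50.0),
    "red wall":      ([0.8, 0.2, 0.2], 1.5, 2.0, 45.0),
    "far grey wall": ([0.6, 0.6, 0.6], 6.0, 6.5, 40.0),
}
best = max(candidates, key=lambda k: score_direction(*candidates[k]))
print(best)
```

The nearby neutral ceiling wins: the red wall is penalized for its color cast and the distant wall for inverse-square falloff, mirroring the trade-offs a photographer weighs when aiming a bounce flash.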
Demosaicking and denoising are the key first stages of the digital imaging pipeline but they are also a severely ill-posed problem that infers three color values per pixel from a single noisy measurement. Earlier methods rely on hand-crafted filters or priors and still exhibit disturbing visual artifacts in hard cases such as moiré or thin edges. We introduce a new data-driven approach for these challenges: we train a deep neural network on a large corpus of images instead of using hand-tuned filters. While deep learning has shown great success, its naive application using existing training datasets does not give satisfactory results for our problem because these datasets lack hard cases. To create a better training set, we present metrics to identify difficult patches and techniques for mining community photographs for such patches. Our experiments show that this network and training procedure outperform state-of-the-art both on noisy and noise-free data. Furthermore, our algorithm is an order of magnitude faster than the previous best performing techniques.

Cell phone cameras have small apertures, which limits the number of photons they can gather, leading to noisy images in low light. They also have small sensor pixels, which limits the number of electrons each pixel can store, leading to limited dynamic range. We describe a computational photography pipeline that captures, aligns, and merges a burst of frames to reduce noise and increase dynamic range. Our system has several key features that help make it robust and efficient. First, we do not use bracketed exposures. Instead, we capture frames of constant exposure, which makes alignment more robust, and we set this exposure low enough to avoid blowing out highlights. The resulting merged image has clean shadows and high bit depth, allowing us to apply standard HDR tone mapping methods.
Second, we begin from Bayer raw frames rather than the demosaicked RGB (or YUV) frames produced by hardware Image Signal Processors (ISPs) common on mobile platforms. This gives us more bits per pixel and allows us to circumvent the ISP's unwanted tone mapping and spatial denoising. Third, we use a novel FFT-based alignment algorithm and a hybrid 2D/3D Wiener filter to denoise and merge the frames in a burst. Our implementation is built atop Android's Camera2 API, which provides per-frame camera control and access to raw imagery, and is written in the Halide domain-specific language (DSL). It runs in 4 seconds on device (for a 12 Mpix image), requires no user intervention, and ships on several mass-produced cell phones.

With the introduction of consumer light field cameras, light field imaging has recently become widespread. However, there is an inherent trade-off between the angular and spatial resolution, and thus, these cameras often sparsely sample in either spatial or angular domain. In this paper, we use machine learning to mitigate this trade-off. Specifically, we propose a novel learning-based approach to synthesize new views from a sparse set of input views. We build upon existing view synthesis techniques and break down the process into disparity and color estimation components. We use two sequential convolutional neural networks to model these two components and train both networks simultaneously by minimizing the error between the synthesized and ground truth images. We show the performance of our approach using only four corner sub-aperture views from the light fields captured by the Lytro Illum camera. Experimental results show that our approach synthesizes high-quality images that are superior to the state-of-the-art techniques on a variety of challenging real-world scenes. We believe our method could potentially decrease the required angular resolution of consumer light field cameras, which allows their spatial resolution to increase.
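The geometric core of the disparity-based view synthesis described above can be sketched in a few lines: a pixel with disparity d shifts by d·du when the viewpoint moves by du along the angular axis. This is only an illustrative stand-in; the paper estimates disparity and refines colors with two CNNs, neither of which is shown here.

```python
import numpy as np

def warp_view(src, disparity, du):
    """Backward-warp a sub-aperture view to a novel angular position.

    src       : (H, W) grayscale source view
    disparity : (H, W) per-pixel disparity estimate
    du        : scalar horizontal angular offset of the novel view
    """
    h, w = src.shape
    xs = np.arange(w)[None, :] + disparity * du          # source column per output pixel
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    t = xs - x0                                          # linear interpolation weight
    rows = np.arange(h)[:, None]
    return (1 - t) * src[rows, x0] + t * src[rows, x0 + 1]

# A vertical edge with constant disparity 2 shifts by 2*du pixels in the new view.
src = np.zeros((4, 8)); src[:, 4:] = 1.0
out = warp_view(src, np.full((4, 8), 2.0), du=1.0)
```

In a full pipeline this warp would be applied to each input view, with the CNN blending the warped views into the final color estimate.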
We propose a novel birefractive depth acquisition method, which allows for single-shot depth imaging by just placing a birefringent material in front of the lens. While most transmissive materials present a single refractive index per wavelength, birefringent crystals like calcite possess two, resulting in a double refraction effect. We develop an imaging model that leverages this phenomenon and the information contained in the ordinary and the extraordinary refracted rays, providing an effective formulation of the geometric relationship between scene depth and double refraction. To handle the inherent ambiguity of having two sources of information overlapped in a single image, we define and combine two different cost volume functions. We additionally present a novel calibration technique for birefringence, carefully analyze and validate our model, and demonstrate the usefulness of our approach with several image-editing applications.

We present a hybrid 3D-2D algorithm for stabilizing 360° video using a deformable rotation motion model. Our algorithm uses 3D analysis to estimate the rotation between key frames that are appropriately spaced such that the right amount of motion has occurred to make that operation reliable. For the remaining frames, it uses 2D optimization to maximize the visual smoothness of feature point trajectories. A new low-dimensional flexible deformed rotation motion model enables handling small translational jitter, parallax, lens deformation, and rolling shutter wobble. Our 3D-2D architecture achieves better robustness, speed, and smoothing ability than either pure 2D or 3D methods can provide. Stabilizing a video with our method takes less time than playing it at normal speed. The results are sufficiently smooth to be played back at high speed-up factors; for this purpose we present a simple 360° hyperlapse algorithm that remaps the video frame time stamps to balance the apparent camera velocity.
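Smoothing a rotation-only camera path, as in the 360° stabilization abstract above, can be illustrated with quaternion slerp: repeatedly pull each frame's rotation toward the midpoint of its neighbors. This is a generic Laplacian-style sketch, not the paper's deformed-rotation model or its 2D trajectory optimization.

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between two unit quaternions (w, x, y, z)."""
    d = float(np.dot(q0, q1))
    if d < 0.0:                      # take the shorter arc
        q1, d = -q1, -d
    theta = np.arccos(min(d, 1.0))
    if theta < 1e-6:
        return q0
    w0 = np.sin((1 - t) * theta) / np.sin(theta)
    w1 = np.sin(t * theta) / np.sin(theta)
    return w0 * q0 + w1 * q1

def smooth_rotations(quats, alpha=0.1, iters=50):
    """Pull each interior rotation toward the slerp midpoint of its neighbors."""
    q = [np.asarray(x, float) for x in quats]
    for _ in range(iters):
        new = [q[0]]
        for i in range(1, len(q) - 1):
            mid = slerp(q[i - 1], q[i + 1], 0.5)
            s = slerp(q[i], mid, alpha)
            new.append(s / np.linalg.norm(s))
        new.append(q[-1])
        q = new
    return q
```

Applied to a jittery path, the interior rotations converge toward the smooth path through the endpoints while staying on the unit-quaternion sphere.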
We present an automatic video completion algorithm that synthesizes missing regions in videos in a temporally coherent fashion. Our algorithm can handle dynamic scenes captured using a moving camera. State-of-the-art approaches have difficulties handling such videos because viewpoint changes cause image-space motion vectors in the missing and known regions to be inconsistent. We address this problem by jointly estimating optical flow and color in the missing regions. Using pixel-wise forward/backward flow fields enables us to synthesize temporally coherent colors. We formulate the problem as a non-parametric patch-based optimization. We demonstrate our technique on numerous challenging videos.

Extracting background features for estimating the camera path is a key step in many video editing and enhancement applications. Existing approaches often fail on highly dynamic videos that are shot by moving cameras and contain severe foreground occlusion. Based on existing theories, we present a new, practical method that can reliably identify background features in complex video, leading to accurate camera path estimation and background layering. Our approach contains a local motion analysis step and a global optimization step. We first divide the input video into overlapping temporal windows, and extract local motion clusters in each window. We form a directed graph from these local clusters, and identify background ones by finding a minimal path through the graph using optimization. We show that our method significantly outperforms other alternatives, and can be directly used to improve common video editing applications such as stabilization, compositing and background reconstruction.

We present Jump, a practical system for capturing high resolution, omnidirectional stereo (ODS) video suitable for wide scale consumption in currently available virtual reality (VR) headsets.
Our system consists of a video camera built using off-the-shelf components and a fully automatic stitching pipeline capable of capturing video content in the ODS format. We have discovered and analyzed the distortions inherent to ODS when used for VR display as well as those introduced by our capture method and show that they are small enough to make this approach suitable for capturing a wide variety of scenes. Our stitching algorithm produces robust results by reducing the problem to one of pairwise image interpolation followed by compositing. We introduce novel optical flow and compositing methods designed specifically for this task. Our algorithm is temporally coherent and efficient, is currently running at scale on a distributed computing platform, and is capable of processing hours of footage each day.

Collision sequences are commonly used in games and entertainment to add drama and excitement. Authoring even two body collisions in the real world can be difficult, as one has to get timing and the object trajectories to be correctly synchronized. After tedious trial-and-error iterations, when objects can actually be made to collide, then they are difficult to capture in 3D. In contrast, synthetically generating plausible collisions is difficult as it requires adjusting different collision parameters (e.g., object mass ratio, coefficient of restitution, etc.) and appropriate initial parameters. We present SMASH to read off appropriate collision parameters directly from raw input video recordings. Technically we enable this by utilizing laws of rigid body collision to regularize the problem of lifting 2D trajectories to a physically valid 3D reconstruction of the collision. The reconstructed sequences can then be modified and combined to easily author novel and plausible collisions.
We evaluate our system on a range of synthetic scenes and demonstrate the effectiveness of our method by accurately reconstructing several complex real world collision events.

We present a new method that achieves a two-way coupling between deformable solids and an incompressible fluid where the underlying geometric representation is entirely Eulerian. Using the recently developed Eulerian Solids approach [Levin et al. 2011], we are able to simulate multiple solids undergoing complex, frictional contact while simultaneously interacting with a fluid. The complexity of the scenarios we are able to simulate surpasses those that we have seen from any previous method. Eulerian Solids have previously been integrated using explicit schemes, but we develop an implicit scheme that allows large time steps to be taken. The incompressibility condition is satisfied in both the solid and the fluid, which has the added benefit of simplifying collision handling.

We present a scalable parallel solver for the pressure Poisson equation in fluids simulation which can accommodate complex irregular domains on the order of a billion degrees of freedom, using a single server or workstation fitted with GPU or Many-Core accelerators. The design of our numerical technique is attuned to the subtleties of heterogeneous computing, and allows us to benefit from the high memory and compute bandwidth of GPU accelerators even for problems that are too large to fit entirely on GPU memory. This is achieved via algebraic formulations that adequately increase the density of the GPU-hosted computation as to hide the overhead of offloading from the CPU, in exchange for accelerated convergence. Our solver follows the principles of Domain Decomposition techniques, and is based on the Schur complement method for elliptic partial differential equations.
A large uniform grid is partitioned into non-overlapping subdomains, and bandwidth-optimized (GPU or Many-Core) accelerator cards are used to efficiently and concurrently solve independent Poisson problems on each resulting subdomain. Our novel contributions are centered on the careful steps necessary to assemble an accurate global solver from these constituent blocks, while avoiding excessive communication or dense linear algebra. We ultimately produce a highly effective Conjugate Gradients preconditioner, and demonstrate scalable and accurate performance on high-resolution simulations of water and smoke flow.

We propose a method to simulate the rich, scale-dependent dynamics of water waves. Our method preserves the dispersion properties of real waves, yet it supports interactions with obstacles and is computationally efficient. Fundamentally, it computes wave accelerations by way of applying a dispersion kernel as a spatially variant filter, which we are able to compute efficiently using two core technical contributions. First, we design novel, accurate, and compact pyramid kernels which compensate for low-frequency truncation errors. Second, we design a shadowed convolution operation that efficiently accounts for obstacle interactions by modulating the application of the dispersion kernel. We demonstrate a wide range of behaviors, which include capillary waves, gravity waves, and interactions with static and dynamic obstacles, all from within a single simulation.

We present an algorithm to accelerate a large class of image processing operators. Given a low-resolution reference input and output pair, we model the operator by fitting local curves that map the input to the output. We can then produce a full-resolution output by evaluating these low-resolution curves on the full-resolution input. We demonstrate that this faithfully models state-of-the-art operators for tone mapping, style transfer, and recoloring.
The curves are computed by lifting the input into a bilateral grid and then solving for the 3D array of affine matrices that best maps input color to output color per x, y, intensity bin. We enforce a smoothness term on the matrices which prevents false edges and noise amplification. We can either globally optimize this energy, or quickly approximate a solution by locally fitting matrices and then enforcing smoothness by blurring in grid space. This latter option reduces to joint bilateral upsampling [Kopf et al. 2007] or the guided filter [He et al. 2013], depending on the choice of parameters. The cost of running the algorithm is reduced to the cost of running the original algorithm at greatly reduced resolution, as fitting the curves takes about 10 ms on mobile devices, and 1--2 ms on desktop CPUs, and evaluating the curves can be done with a simple GPU shader.

Filters with slowly decaying impulse responses have many uses in computer graphics. Recursive filters are often the fastest option for such cases. In this paper, we derive closed-form formulas for computing the exact initial feedbacks needed for recursive filtering infinite input extensions. We provide formulas for the constant-padding (e.g. clamp-to-edge), periodic (repeat) and even-periodic (mirror or reflect) extensions. These formulas were designed for easy integration into modern block-parallel recursive filtering algorithms. Our new modified algorithms are state-of-the-art, filtering images faster even than previous methods that ignore boundary conditions.

Image downscaling is arguably the most frequently used image processing tool. We present an algorithm based on convolutional filters where input pixels contribute more to the output image the more their color deviates from their local neighborhood, which preserves visually important details. In a user study we verify that users prefer our results over related work.
Our efficient GPU implementation works in real-time when downscaling images from 24 M to 70 k pixels. Further, we demonstrate empirically that our method can be successfully applied to videos.

This paper introduces a novel domain-specific compiler, which translates visual computing programs written in dynamic languages to highly efficient code. We define "dynamic" languages as those such as Python and MATLAB, which feature dynamic typing and flexible array operations. Such language features can be useful for rapid prototyping, however, the dynamic computation model introduces significant overheads in program execution time. We introduce a compiler framework for accelerating visual computing programs, such as graphics and vision programs, written in general-purpose dynamic languages. Our compiler allows substantial performance gains (frequently orders of magnitude) over general compilers for dynamic languages by specializing the compiler for visual computation. Specifically, our compiler takes advantage of three key properties of visual computing programs, which permit optimizations: (1) many array data structures have small, constant, or bounded size, (2) many operations on visual data are supported in hardware or are embarrassingly parallel, and (3) humans are not sensitive to small numerical errors in visual outputs due to changing floating-point precisions. Our compiler integrates program transformations that have been described previously, and improves existing transformations to handle visual programs that perform complicated array computations. In particular, we show that dependent type analysis can be used to infer sizes and guide optimizations for many small-sized array operations that arise in visual programs. Programmers who are not experts on visual computation can use our compiler to produce more efficient Python programs than if they write manually parallelized C, with fewer lines of application logic.
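The compiler abstract above notes that inferring small, constant array sizes is a key enabler for specialization. A toy shape-inference pass over an invented expression IR can make the idea concrete; the IR node names and rules here are assumptions for illustration, not the paper's actual representation.

```python
# Minimal shape-inference sketch: propagate constant array shapes through
# a tiny expression IR so that small fixed-size operations (e.g. a 3x3
# matrix times a 3-vector) could later be fully unrolled by a compiler.

def infer_shape(expr, env):
    """expr is a nested tuple: ('var', name), ('const', shape),
    ('add', a, b), or ('matmul', a, b). env maps variable names to shapes."""
    op = expr[0]
    if op == 'var':
        return env[expr[1]]
    if op == 'const':
        return expr[1]
    if op == 'add':                     # elementwise: shapes must match
        sa, sb = infer_shape(expr[1], env), infer_shape(expr[2], env)
        if sa != sb:
            raise TypeError(f"shape mismatch {sa} vs {sb}")
        return sa
    if op == 'matmul':                  # (m, k) @ (k, n) -> (m, n)
        (m, k1), (k2, n) = infer_shape(expr[1], env), infer_shape(expr[2], env)
        if k1 != k2:
            raise TypeError("inner dimensions differ")
        return (m, n)
    raise ValueError(f"unknown op {op}")

# A 3x3 transform applied to a 3x1 vector: every shape is a small constant,
# so all loops over it are candidates for unrolling.
env = {'M': (3, 3), 'v': (3, 1)}
shape = infer_shape(('add', ('matmul', ('var', 'M'), ('var', 'v')),
                     ('var', 'v')), env)
```

Once every shape in a region is a known small constant, a code generator can emit straight-line code instead of dynamic dispatch over array sizes.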
We propose a novel example-based approach to synthesize scenes with complex relations, e.g., when one object is 'hooked', 'surrounded', 'contained' or 'tucked into' another object. Existing relationship descriptors used in automatic scene synthesis methods are based on contacts or relative vectors connecting the object centers. Such descriptors do not fully capture the geometry of spatial interactions, and therefore cannot describe complex relationships. Our idea is to enrich the description of spatial relations between object surfaces by encoding the geometry of the open space around objects, and use this as a template for fitting novel objects. To this end, we introduce relationship templates as descriptors of complex relationships; they are computed from an example scene and combine the interaction bisector surface (IBS) with a novel feature called the space coverage feature (SCF), which encodes the open space in the frequency domain. New variations of a scene can be synthesized efficiently by fitting novel objects to the template. Our method greatly enhances existing automatic scene synthesis approaches by allowing them to handle complex relationships, as validated by our user studies. The proposed method generalizes well, as it can form complex relationships with objects that have a topology and geometry very different from the example scene.

Convolutional neural networks have been successfully used to compute shape descriptors, or jointly embed shapes and sketches in a common vector space. We propose a novel approach that leverages both labeled 3D shapes and semantic information contained in the labels, to generate semantically-meaningful shape descriptors. A neural network is trained to generate shape descriptors that lie close to a vector representation of the shape class, given a vector space of words. This method is easily extendable to range scans, hand-drawn sketches and images.
This makes cross-modal retrieval possible, without a need to design different methods depending on the query type. We show that sketch-based shape retrieval using semantic-based descriptors outperforms the state-of-the-art by large margins, and mesh-based retrieval generates results of higher relevance to the query than current deep shape descriptors.

When geometric models with a desired combination of style and functionality are not available, they currently need to be created manually. We facilitate algorithmic synthesis of 3D models of man-made shapes which combines user-specified style, described via an exemplar shape, and functionality, encoded by a functionally different target shape. Our method automatically transfers the style of the exemplar to the target, creating the desired combination. The main challenge in performing cross-functional style transfer is to implicitly separate an object's style from its function: while stylistically the output shapes should be as close as possible to the exemplar, their original functionality and structure, as encoded by the target, should be strictly preserved. Recent literature points to the presence of similarly shaped, salient geometric elements as a main indicator of stylistic similarity between 3D shapes. We therefore transfer the exemplar style to the target via a sequence of element-level operations. We allow only compatible operations, ones that do not affect the target functionality. To this end, we introduce a cross-structural element compatibility metric that estimates the impact of each operation on the edited shape. Our metric is based on the global context and coarse geometry of evaluated elements, and is trained on databases of 3D objects. We use this metric to cast style transfer as a tabu search, which incrementally updates the target shape using compatible operations, progressively increasing its style similarity to the exemplar while strictly maintaining its functionality at each step.
We evaluate our framework across a range of man-made objects including furniture, light fixtures, and tableware, and perform a number of user studies confirming that it produces convincing outputs combining the desired style and function.

Large repositories of 3D shapes provide valuable input for data-driven analysis and modeling tools. They are especially powerful once annotated with semantic information such as salient regions and functional parts. We propose a novel active learning method capable of enriching massive geometric datasets with accurate semantic region annotations. Given a shape collection and a user-specified region label, our goal is to correctly demarcate the corresponding regions with minimal manual work. Our active framework achieves this goal by cycling between manually annotating the regions, automatically propagating these annotations across the rest of the shapes, manually verifying both human and automatic annotations, and learning from the verification results to improve the automatic propagation algorithm. We use a unified utility function that explicitly models the time cost of human input across all steps of our method. This allows us to jointly optimize for the set of models to annotate and for the set of models to verify based on the predicted impact of these actions on the human efficiency. We demonstrate that incorporating verification of all produced labelings within this unified objective improves both accuracy and efficiency of the active learning procedure. We automatically propagate human labels across a dynamic shape network using a conditional random field (CRF) framework, taking advantage of global shape-to-shape similarities, local feature similarities, and point-to-point correspondences. By combining these diverse cues we achieve higher accuracy than existing alternatives.
We validate our framework on existing benchmarks, demonstrating it to be significantly more efficient at using human input compared to previous techniques. We further validate its efficiency and robustness by annotating a massive shape dataset, labeling over 93,000 shape parts across multiple model classes, and providing a labeled part collection more than one order of magnitude larger than existing ones.

This paper presents a numerical coarsening method for corotational elasticity, which enables interactive large deformation of high-resolution heterogeneous objects. Our method derives a coarse elastic model from a high-resolution discretization of corotational elasticity with high-resolution boundary conditions. This is in contrast to previous coarsening methods, which derive a coarse elastic model from an unconstrained high-resolution discretization of regular linear elasticity, and then apply corotational computations directly on the coarse setting. We show that previous approaches fail to handle high-resolution boundary conditions correctly, suffering from accuracy and robustness problems. Our method, on the other hand, efficiently supports accurate high-resolution boundary conditions, which are fundamental for rich interaction with high-resolution heterogeneous models. We demonstrate the potential of our method for interactive deformation of complex medical imaging data sets.

We show that many existing elastic body simulation approaches can be interpreted as descent methods, under a nonlinear optimization framework derived from implicit time integration. The key question is how to find an effective descent direction with a low computational cost. Based on this concept, we propose a new gradient descent method using Jacobi preconditioning and Chebyshev acceleration. The convergence rate of this method is comparable to that of L-BFGS or nonlinear conjugate gradient.
But unlike other methods, it requires no dot product operation, making it suitable for GPU implementation. To further improve its convergence and performance, we develop a series of step length adjustment, initialization, and invertible model conversion techniques, all of which are compatible with GPU acceleration. Our experiment shows that the resulting simulator is simple, fast, scalable, memory-efficient, and robust against very large time steps and deformations. It can correctly simulate the deformation behaviors of many elastic materials, as long as their energy functions are second-order differentiable and their Hessian matrices can be quickly evaluated. For additional speedups, the method can also serve as a complement to other techniques, such as multi-grid.

We present a method to create personalized anatomical models ready for physics-based animation, using only a set of 3D surface scans. We start by building a template anatomical model of an average male which supports deformations due to both 1) subject-specific variations: shapes and sizes of bones, muscles, and adipose tissues and 2) skeletal poses. Next, we capture a set of 3D scans of an actor in various poses. Our key contribution is formulating and solving a large-scale optimization problem where we compute both subject-specific and pose-dependent parameters such that our resulting anatomical model explains the captured 3D scans as closely as possible. Compared to data-driven body modeling techniques that focus only on the surface, our approach has the advantage of creating physics-based models, which provide realistic 3D geometry of the bones and muscles, and naturally supports effects such as inertia, gravity, and collisions according to Newtonian dynamics.

The solution of large sparse systems of linear constraints is at the base of most interactive solvers for physically-based animation of soft body dynamics.
We focus on applications with hard and tight per-frame resource budgets, such as video games, where the solution of soft body dynamics needs to be computed in a few milliseconds. Linear iterative methods are preferred in these cases since they provide approximate solutions within a given error tolerance and in a short amount of time. We present a parallel randomized Gauss-Seidel method which can be effectively employed to enable the animation of 3D soft objects discretized as large and irregular triangular or tetrahedral meshes. At the beginning of each frame, we partition the set of equations governing the system using a randomized graph coloring algorithm. The unknowns in the equations belonging to the same partition are independent of each other. Then, all the equations belonging to the same partition are solved at the same time in parallel. Our algorithm runs completely on the GPU and can support changes in the constraint topology. We tested our method as a solver for soft body dynamics within the Projective Dynamics and Position Based Dynamics frameworks. We show how the algorithmic simplicity of this iterative strategy enables great numerical stability and fast convergence speed, which are essential features for physically based animations with fixed and small hard time budgets. Compared to the state of the art, we found our method to be faster and scale better while providing more stable solutions for very small time budgets.

We present a framework for global parametrization that utilizes the edge lengths (squared) of the mesh as variables. Given a mesh with arbitrary topology and prescribed cone singularities, we flatten the original metric of the surface under strict bounds on the metric distortion (various types of conformal and isometric measures are supported). Our key observation is that the space of bounded distortion metrics (given any particular bounds) is convex, and a broad range of useful and well-known distortion energies are convex as well.
With the addition of nonlinear Gaussian curvature constraints, the parametrization problem is formulated as a constrained optimization problem, and a solution gives a locally injective map. Our method is easy to implement. Sequential convex programming (SCP) is utilized to solve this problem effectively. We demonstrate the flexibility of the method and its uncompromised robustness and compare it to state-of-the-art methods.

We present a novel method, called Simplex Assembly, to compute inversion-free mappings with low or bounded distortion on simplicial meshes. Our method involves two steps: simplex disassembly and simplex assembly. Given a simplicial mesh and its initial piecewise affine mapping, we project the affine transformation associated with each simplex into the inversion-free and distortion-bounded space. The projection disassembles the input mesh into disjoint simplices. The disjoint simplices are then assembled to recover the original connectivity by minimizing the mapping distortion and the difference of the disjoint vertices with respect to the piecewise affine transformations, while the piecewise affine mapping is restricted inside the feasible space. Due to the use of affine transformations as variables, our method explicitly guarantees that no inverted simplex occurs, and that the mapping distortion is below the bound during the optimization. Compared with existing methods, our method is robust to an initialization with many inverted elements and positional constraints. We demonstrate the efficiency and robustness of our method through a variety of geometric processing tasks.

Tutte's embedding is one of the most popular approaches for computing parameterizations of surface meshes in computer graphics and geometry processing. Its popularity can be attributed to its simplicity, the guaranteed bijectivity of the embedding, and its relation to continuous harmonic mappings.
In this work we extend Tutte's embedding into hyperbolic cone-surfaces called orbifolds. Hyperbolic orbifolds are simple surfaces exhibiting different topologies and cone singularities and therefore provide a flexible and useful family of target domains. The hyperbolic Orbifold Tutte embedding is defined as a critical point of a Dirichlet energy with special boundary constraints and is proved to be bijective, while also satisfying a set of point constraints. An efficient algorithm for computing these embeddings is developed. We demonstrate a powerful application of the hyperbolic Tutte embedding for computing a consistent set of bijective, seamless maps between all pairs in a collection of shapes, interpolating a set of user-prescribed landmarks, in a fast and robust manner.

Parametrization-based methods have recently become very popular for the generation of high quality quad meshes. In contrast to previous approaches, they allow for intuitive user control in order to accommodate all kinds of application driven constraints and design intentions. A major obstacle in practice, however, are the relatively long computations that lead to response times of several minutes already for input models of moderate complexity. In this paper we introduce a novel strategy to handle highly complex input meshes with up to several millions of triangles such that quad meshes can still be created and edited within an interactive workflow. Our method is based on representing the input model on different levels of resolution with a mechanism to propagate parametrizations from coarser to finer levels. The major challenge is to guarantee consistent parametrizations even in the presence of charts, transition functions, and singularities. Moreover, the remaining degrees of freedom on coarser levels of resolution have to be chosen carefully in order to still achieve low distortion parametrizations.
We demonstrate a prototypic system where the user can interactively edit quad meshes with powerful high-level operations such as guiding constraints, singularity repositioning, and singularity connections.

In facial animation, the accurate shape and motion of the lips of virtual humans is of paramount importance, since subtle nuances in mouth expression strongly influence the interpretation of speech and the conveyed emotion. Unfortunately, passive photometric reconstruction of expressive lip motions, such as a kiss or rolling lips, is fundamentally hard even with multi-view methods in controlled studios. To alleviate this problem, we present a novel approach for fully automatic reconstruction of detailed and expressive lip shapes along with the dense geometry of the entire face, from just monocular RGB video. To this end, we learn the difference between inaccurate lip shapes found by a state-of-the-art monocular facial performance capture approach, and the true 3D lip shapes reconstructed using a high-quality multi-view system in combination with applied lip tattoos that are easy to track. A robust gradient domain regressor is trained to infer accurate lip shapes from coarse monocular reconstructions, with the additional help of automatically extracted inner and outer 2D lip contours. We quantitatively and qualitatively show that our monocular approach reconstructs higher quality lip shapes, even for complex shapes like a kiss or lip rolling, than previous monocular approaches. Furthermore, we compare the performance of person-specific and multi-person generic regression strategies and show that our approach generalizes to new individuals and general scenes, enabling high-fidelity reconstruction even from commodity video footage.

In recent years, sophisticated image-based reconstruction methods for the human face have been developed.
These methods capture highly detailed static and dynamic geometry of the whole face, or specific models of face regions, such as hair, eyes or eye lids. Unfortunately, image-based methods to capture the mouth cavity in general, and the teeth in particular, have received very little attention. The accurate rendering of teeth, however, is crucial for the realistic display of facial expressions, and currently high quality face animations resort to tooth row models created by tedious manual work. In dentistry, special intra-oral scanners for teeth were developed, but they are invasive, expensive, cumbersome to use, and not readily available. In this paper, we therefore present the first approach for non-invasive reconstruction of an entire person-specific tooth row from just a sparse set of photographs of the mouth region. The basis of our approach is a new parametric tooth row prior learned from high quality dental scans. A new model-based reconstruction approach fits teeth to the photographs such that visible teeth are accurately matched and occluded teeth plausibly synthesized. Our approach seamlessly integrates into photogrammetric multi-camera reconstruction setups for entire faces, but also enables high quality teeth modeling from normal uncalibrated photographs and even short videos captured with a mobile phone.

Significant challenges currently prohibit expressive interaction in virtual reality (VR). Occlusions introduced by head-mounted displays (HMDs) make existing facial tracking techniques intractable, and even state-of-the-art techniques used for real-time facial tracking in unconstrained environments fail to capture subtle details of the user's facial expressions that are essential for compelling speech animation. We introduce a novel system for HMD users to control a digital avatar in real-time while producing plausible speech animation and emotional expressions.
Using a monocular camera attached to an HMD, we record multiple subjects performing various facial expressions and speaking several phonetically-balanced sentences. These images are used with artist-generated animation data corresponding to these sequences to train a convolutional neural network (CNN) to regress images of a user's mouth region to the parameters that control a digital avatar. To make training this system more tractable, we use audio-based alignment techniques to map images of multiple users making the same utterance to the corresponding animation parameters. We demonstrate that this approach is also feasible for tracking the expressions around the user's eye region with an internal infrared (IR) camera, thereby enabling full facial tracking. This system requires no user-specific calibration, uses easily obtainable consumer hardware, and produces high-quality animations of speech and emotional expressions. Finally, we demonstrate the quality of our system on a variety of subjects and evaluate its performance against state-of-the-art real-time facial tracking techniques.

Modern systems for real-time hand tracking rely on a combination of discriminative and generative approaches to robustly recover hand poses. Generative approaches require the specification of a geometric model. In this paper, we propose the use of sphere-meshes as a novel geometric representation for real-time generative hand tracking. How tightly this model fits a specific user heavily affects tracking precision. We derive an optimization to non-rigidly deform a template model to fit the user data in a number of poses. This optimization jointly captures the user's static and dynamic hand geometry, thus facilitating high-precision registration. At the same time, the limited number of primitives in the tracking template allows us to retain excellent computational performance.
We confirm this by embedding our models in an open source real-time registration algorithm to obtain a tracker steadily running at 60Hz. We demonstrate the effectiveness of our solution by qualitatively and quantitatively evaluating tracking precision on a variety of complex motions. We show that the improved tracking accuracy at high frame-rate enables stable tracking of extended and complex motion sequences without the need for per-frame re-initialization. To enable further research in the area of high-precision hand tracking, we publicly release source code and evaluation datasets.

We present FlexMolds, a novel computational approach to automatically design flexible, reusable molds that, once 3D printed, allow us to physically fabricate, by means of liquid casting, multiple copies of complex shapes with rich surface details and complex topology. The approach to design such flexible molds is based on a greedy bottom-up search of possible cuts over an object, evaluating for each possible cut the feasibility of the resulting mold. We use a dynamic simulation approach to evaluate candidate molds, providing a heuristic to generate forces that are able to open, detach, and remove a complex mold from the object it surrounds. We have tested the approach with a number of objects with nontrivial shapes and topologies.

Frame shapes, which are made of struts, have been widely used in many fields, such as art, sculpture, architecture, and geometric modeling. An interest in robotic fabrication of frame shapes via spatial thermoplastic extrusion has been increasingly growing in recent years. In this paper, we present a novel algorithm to generate a feasible fabrication sequence for general frame shapes. To solve this non-trivial combinatorial problem, we develop a divide-and-conquer strategy that first decomposes the input frame shape into stable layers via a constrained sparse optimization model.
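The per-stage equilibrium requirement behind the frame-fabrication sequencing above can be illustrated with a much-simplified 2D toppling test (an illustration only — the paper formulates stability via constrained sparse optimization): the combined center of mass of the already-printed struts must project inside the span of their supports.

```python
def is_statically_stable(part_masses, support_xs):
    """Simplified 2D toppling check (hypothetical helper, not the
    paper's model). part_masses: (mass, x) pairs for the struts printed
    so far. support_xs: x-coordinates of contact points with the build
    platform. The partial structure does not tip over if its center of
    mass projects inside the horizontal extent of its supports."""
    total = sum(m for m, _ in part_masses)
    com_x = sum(m * x for m, x in part_masses) / total
    return min(support_xs) <= com_x <= max(support_xs)
```

A sequencing search would call such a predicate after each candidate strut to prune orders that pass through an unstable intermediate state.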
Then we search a feasible sequence for each layer via a local optimization method together with a backtracking strategy. The generated sequence guarantees that the already-printed part is in a stable equilibrium state at all stages of fabrication, and that the 3D printing extrusion head does not collide with the printed part during the fabrication. Our algorithm has been validated by a built prototype robotic fabrication system made by a 6-axis KUKA robotic arm with a customized extrusion head. Experimental results demonstrate the feasibility and applicability of our algorithm.

Current CAD modeling techniques enable the design of objects with aesthetically pleasing smooth freeform surfaces. However, the fabrication of these freeform shapes remains challenging. Our novel method uses orthogonal principal strips to fabricate objects whose boundary consists of freeform surfaces. This approach not only lends an artistic touch to the appearance of objects, but also provides directions for reinforcement, as the surface is mostly bent along the lines of curvature. Moreover, it is unnecessary to adjust the bending of these orthogonal strips during the construction process, which automatically reforms the design shape as if it is memorized, provided the strips possess bending rigidity. Our method relies on semi-isometric mapping, which preserves the length of boundary curves, and approximates angles between boundary curves under local minimization. Applications include the fabrication of paper and sheet metal craft, and architectural models using plastic plates. We applied our technique to several freeform objects to demonstrate the effectiveness of our algorithms.

In this paper we propose failure probabilities as a semantically and mechanically meaningful measure of object fragility. We present a stochastic finite element method which exploits fast rigid body simulation and reduced-space approaches to compute spatially varying failure probabilities.
We use an explicit rigid body simulation to emulate the real-world loading conditions an object might experience, including persistent and transient frictional contact, while allowing us to combine several such scenarios together. Thus, our estimates better reflect real-world failure modes than previous methods. We validate our results using a series of real-world tests. Finally, we show how to embed failure probabilities into a stress constrained topology optimization which we use to design objects such as weight bearing brackets and robust 3D printable objects.

We present an interactive system for computational design, optimization, and fabrication of multicopters. Our computational approach allows non-experts to design, explore, and evaluate a wide range of different multicopters. We provide users with an intuitive interface for assembling a multicopter from a collection of components (e.g., propellers, motors, and carbon fiber rods). Our algorithm interactively optimizes shape and controller parameters of the current design to ensure its proper operation. In addition, we allow incorporating a variety of other metrics (such as payload, battery usage, size, and cost) into the design process and exploring tradeoffs between them. We show the efficacy of our method and system by designing, optimizing, fabricating, and operating multicopters with complex geometries and propeller configurations. We also demonstrate the ability of our optimization algorithm to improve the multicopter performance under different metrics.

We introduce a novel GPU path rendering method based on scan-line rasterization, which is highly work-efficient but traditionally considered GPU-hostile. Our method is parallelized over boundary fragments, i.e., pixels directly intersecting the path boundary. Non-boundary pixels are processed in bulk as horizontal spans like in CPU scanline rasterizers, which saves a significant amount of winding number computation workload.
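The winding numbers referred to above are the standard nonzero-rule quantity from vector path rendering. For reference, here is a scalar (CPU, non-parallel) sketch of the classic crossing test for a polygonal contour; the names and the y-up/CCW-positive convention are illustrative assumptions, not taken from the paper.

```python
def winding_number(px, py, polygon):
    """Nonzero-rule winding number of point (px, py) with respect to a
    closed polyline, via the standard upward/downward crossing test.
    Assumes a y-up coordinate system; CCW contours contribute +1."""
    wn = 0
    n = len(polygon)
    for i in range(n):
        (x0, y0), (x1, y1) = polygon[i], polygon[(i + 1) % n]
        # Signed area test: > 0 means the point is left of the directed edge.
        side = (x1 - x0) * (py - y0) - (y1 - y0) * (px - x0)
        if y0 <= py < y1 and side > 0:    # upward crossing
            wn += 1
        elif y1 <= py < y0 and side < 0:  # downward crossing
            wn -= 1
    return wn
```

A point is inside under the nonzero fill rule when this value is nonzero; scanline rasterizers amortize it by carrying the running count along each horizontal span.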
The distinction also allows the majority of our algorithmic steps to focus on boundary fragments only, which leads to highly balanced workload among the GPU threads. In addition, we develop a ray shooting pattern that minimizes the global data dependency when computing winding numbers at anti-aliasing samples. This allows us to shift the majority of winding-number-related workload to the same kernel that consumes its result, which saves a significant amount of GPU memory bandwidth. Experiments show that our method gives a consistent 2.5X speedup over state-of-the-art alternatives for high-quality rendering at Ultra HD resolution, which can increase to more than 30X in extreme cases. We can also get a consistent 10X speedup on animated input.

This paper tackles a challenging 2D collage generation problem, focusing on shapes: we aim to fill a given region by packing irregular and reasonably-sized shapes with minimized gaps and overlaps. To tackle this nontrivial problem, we first have to analyze the boundary of individual shapes and then couple the shapes with partially-matched boundary to reduce gaps and overlaps in the collages. Second, the search space in identifying a good coupling of shapes is enormous, since arranging a shape in a collage involves a position, an orientation, and a scale factor. Yet, this matching step needs to be performed for every single shape when we pack it into a collage. Existing shape descriptors are simply infeasible for computation in a reasonable amount of time. To overcome this, we present a brand new, scale- and rotation-invariant 2D shape descriptor, namely pyramid of arclength descriptor (PAD). Its formulation is locally supported, scalable, and yet simple to construct and compute. These properties make PAD efficient for performing the partial-shape matching. Hence, we can prune away most search space with simple calculation, and efficiently identify candidate shapes.
We evaluate our method using a large variety of shapes with different types and contours. Convincing collage results in terms of visual quality and time performance are obtained.

Modern GPUs supporting compressed textures allow interactive application developers to save scarce GPU resources such as VRAM and bandwidth. Compressed textures use fixed compression ratios whose lossy representations are significantly poorer quality than traditional image compression formats such as JPEG. We present a new method in the class of supercompressed textures that provides an additional layer of compression to already compressed textures. Our texture representation is designed for endpoint compressed formats such as DXT and PVRTC and decoding on commodity GPUs. We apply our algorithm to commonly used formats by separating their representation into two parts that are processed independently and then entropy encoded. Our method preserves the CPU-GPU bandwidth during the decoding phase and exploits the parallelism of GPUs to provide up to 3X faster decode compared to prior texture supercompression algorithms. Along with the gains in decoding speed, our method maintains both the compression size and quality of current state of the art texture representations.

Our aim is to give users real-time free-viewpoint rendering of real indoor scenes, captured with off-the-shelf equipment such as a high-quality color camera and a commodity depth sensor. Image-based Rendering (IBR) can provide the realistic imagery required at real-time speed. For indoor scenes however, two challenges are especially prominent. First, the reconstructed 3D geometry must be compact, but faithful enough to respect occlusion relationships when viewed up close. Second, man-made materials call for view-dependent texturing, but using too many input photographs reduces performance.
We customize a typical RGB-D 3D surface reconstruction pipeline to produce a coarse global 3D surface, and local, per-view geometry for each input image. Our tiled IBR preserves quality by economizing on the expected contributions that entire groups of input pixels make to a final image. The two components are designed to work together, giving real-time performance, while hardly sacrificing quality. Testing on a variety of challenging scenes shows that our inside-out IBR scales favorably with the number of input images.

We present a data-driven approach for mesh denoising. Our key idea is to formulate the denoising process with cascaded non-linear regression functions and learn them from a set of noisy meshes and their ground-truth counterparts. Each regression function infers the normal of a denoised output mesh facet from geometry features extracted from its neighborhood facets on the input mesh and sends the result as the input of the next regression function. Specifically, we develop a filtered facet normal descriptor (FND) for modeling the geometry features around each facet on the noisy mesh and model a regression function with neural networks for mapping the FNDs to the facet normals of the denoised mesh. To handle meshes with different geometry features and reduce the training difficulty, we cluster the input mesh facets according to their FNDs and train neural networks for each cluster separately in an offline learning stage. At runtime, our method applies the learned cascaded regression functions to a noisy input mesh and reconstructs the denoised mesh from the output facet normals. Our method learns the non-linear denoising process from the training data and makes no specific assumptions about the noise distribution and geometry features in the input. The runtime denoising process is fully automatic for different input meshes.
Our method can be easily adapted to meshes with arbitrary noise patterns by training a dedicated regression scheme with mesh data and the particular noise pattern. We evaluate our method on meshes with both synthetic and real scanned noise, and compare it to other mesh denoising algorithms. Results demonstrate that our method outperforms the state-of-the-art mesh denoising methods and successfully removes different kinds of noise for meshes with various geometry features.

Given a tetrahedral mesh, the algorithm described in this article produces a smooth 3D frame field, i.e. a set of three orthogonal directions associated with each vertex of the input mesh. The field varies smoothly inside the volume, and matches the normals of the volume boundary. Such a 3D frame field is a key component for some hexahedral meshing algorithms, where it is used to steer the placement of the generated elements. We improve the state-of-the-art in terms of quality, efficiency and reproducibility. Our main contribution is a non-trivial extension in 3D of the existing least-squares approach used for optimizing a 2D frame field. Our algorithm is inspired by the method proposed by Huang et al., improved with an initialization that directly enforces boundary conditions. Our initialization alone is a fast and easy way to generate frame fields that are suitable for remeshing applications. For better robustness and quality, the field can be further optimized using nonlinear optimization as in Li et al. We remark that sampling the field on vertices instead of tetrahedra significantly improves both performance and quality.

Interchangeable components allow an object to be easily reconfigured, but usually reveal that the object is composed of parts. In this work, we present a computational approach for the design of components which are interchangeable, but also form objects with a coherent appearance which conceals their composition from parts.
These components allow a physical realization of Assembly Based Modelling, a popular virtual modelling paradigm in which new models are constructed from the parts of existing ones. Given a collection of 3D models and a segmentation that specifies the component connectivity, our approach generates the components by jointly deforming and partitioning the models. We determine the component boundaries by evolving a set of closed contours on the input models to maximize the contours' geometric similarity. Next, we efficiently deform the input models to enforce both C0 and C1 continuity between components while minimizing deviation from their original appearance. The user can guide our deformation scheme to preserve desired features. We demonstrate our approach on several challenging examples, showing that our components can be physically reconfigured to assemble a large variety of coherent shapes.

Example-based shape deformation allows a mesh to be easily manipulated or animated with simple inputs. As the user pulls parts of the shape, the rest of the mesh automatically changes in an intuitive way by drawing from a set of exemplars. This provides a way for virtual shapes or characters to be easily authored and manipulated, or for a set of drawings to be animated with simple inputs. We describe a new approach for example-based inverse kinematic mesh manipulation which generates high quality deformations for a wide range of inputs, and in particular works well even when provided stylized or "cartoony" examples. This approach is fast enough to run in real time, reliably uses the artist's input shapes in an intuitive way even for highly nonphysical deformations, and provides added expressiveness by allowing the input shapes to be utilized in a way which spatially varies smoothly across the resulting deformed mesh.
This allows for rich and detailed deformations to be created from a small set of input shapes, and gives an easy way for a set of sketches to be brought alive with simple click-and-drag inputs.

In this paper, we present an interactive system for mechanism modeling from multi-view images. Its key feature is that the generated 3D mechanism models contain not only geometric shapes but also internal motion structures: they can be directly animated through kinematic simulation. Our system consists of two steps: interactive 3D modeling and stochastic motion parameter estimation. At the 3D modeling step, our system is designed to integrate the sparse 3D points reconstructed from multi-view images and a sketching interface to achieve accurate 3D modeling of a mechanism. To recover the motion parameters, we record a video clip of the mechanism motion and adopt stochastic optimization to recover its motion parameters by edge matching. Experimental results show that our system can achieve the 3D modeling of a range of mechanisms from simple mechanical toys to complex mechanism objects.

We propose a framework for global registration of building scans. The first contribution of our work is to detect and use portals (e.g., doors and windows) to improve the local registration between two scans. Our second contribution is an optimization based on a linear integer programming formulation. We abstract each scan as a block and model the blocks registration as an optimization problem that aims at maximizing the overall matching score of the entire scene. We propose an efficient solution to this optimization problem by iteratively detecting and adding local constraints. We demonstrate the effectiveness of the proposed method on buildings of various styles and that our approach is superior to the current state of the art.

We address the problem of autonomously exploring unknown objects in a scene by consecutive depth acquisitions.
The goal is to reconstruct the scene while online identifying the objects from among a large collection of 3D shapes. Fine-grained shape identification demands a meticulous series of observations attending to varying views and parts of the object of interest. Inspired by the recent success of attention-based models for 2D recognition, we develop a 3D Attention Model that selects the best views to scan from, as well as the most informative regions in each view to focus on, to achieve efficient object recognition. The region-level attention leads to focus-driven features which are quite robust against object occlusion. The attention model, trained with the 3D shape collection, encodes the temporal dependencies among consecutive views with deep recurrent networks. This facilitates order-aware view planning accounting for robot movement cost. In achieving instance identification, the shape collection is organized into a hierarchy, associated with pre-trained hierarchical classifiers. The effectiveness of our method is demonstrated on an autonomous robot (PR) that explores a scene and identifies the objects to construct a 3D scene model.

Demand for high-volume 3D scanning of real objects is rapidly growing in a wide range of applications, including online retailing, quality-control for manufacturing, stop motion capture for 3D animation, and archaeological documentation and reconstruction. Although mature technologies exist for high-fidelity 3D model acquisition, deploying them at scale continues to require non-trivial manual labor. We describe a system that allows non-expert users to scan large numbers of physical objects within a reasonable amount of time, and with greater ease. Our system uses novel view- and path-planning algorithms to control a structured-light scanner mounted on a calibrated motorized positioning system. We demonstrate the ability of our prototype to safely, robustly, and automatically acquire 3D models for large collections of small objects.
We present a novel approach that allows web designers to easily direct user attention via visual flow on web designs. By collecting and analyzing users' eye gaze data on real-world webpages under the task-driven condition, we build two user attention models that characterize user attention patterns between a pair of page components. These models enable a novel web design interaction for designers to easily create a visual flow to guide users' eyes (i.e., direct user attention along a given path) through a web design with minimal effort. In particular, given an existing web design as well as a designer-specified path over a subset of page components, our approach automatically optimizes the web design so that the resulting design can direct users' attention to move along the input path. We have tested our approach on various web designs of different categories. Results show that our approach can effectively guide user attention through the web design according to the designer's high-level specification.

We present a full geometric parameterization of generalized barycentric coordinates on convex polytopes. We show that these continuous and non-negative coefficients ensuring linear precision can be efficiently and exactly computed through a power diagram of the polytope's vertices and the evaluation point. In particular, we point out that well-known explicit coordinates such as Wachspress, Discrete Harmonic, Voronoi, or Mean Value correspond to simple choices of power weights. We also present examples of new barycentric coordinates, and discuss possible extensions such as power coordinates for non-convex polygons and smooth shapes.

This paper presents a variational method to generate cell complexes with local anisotropy conforming to the Hessian of any given convex function and for any given local mesh density.
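Of the explicit barycentric coordinates listed above, mean value coordinates are perhaps the easiest to write down directly. As a small illustrative sketch (Floater's tan-half-angle formula, an assumed baseline rather than the power-diagram construction of the paper):

```python
import math

def mean_value_coordinates(p, poly):
    """Mean value coordinates of a point p strictly inside a polygon
    given as a CCW list of (x, y) vertices. Coordinates are non-negative
    for convex polygons, sum to one, and reproduce p exactly (linear
    precision)."""
    px, py = p

    def angle(a, b):
        # Signed angle at p between the directions toward a and b.
        ax, ay = a[0] - px, a[1] - py
        bx, by = b[0] - px, b[1] - py
        return math.atan2(ax * by - ay * bx, ax * bx + ay * by)

    n = len(poly)
    w = []
    for i in range(n):
        prev, cur, nxt = poly[i - 1], poly[i], poly[(i + 1) % n]
        r = math.hypot(cur[0] - px, cur[1] - py)
        w.append((math.tan(angle(prev, cur) / 2) +
                  math.tan(angle(cur, nxt) / 2)) / r)
    total = sum(w)
    return [wi / total for wi in w]
```

Linear precision — the weighted sum of the vertices returns the query point itself — is the property shared by all of the coordinate families the abstract enumerates.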
Our formulation builds upon approximation theory to offer an anisotropic extension of Centroidal Voronoi Tessellations which can be seen as a dual form of Optimal Delaunay Triangulation. We thus refer to the resulting anisotropic polytopal meshes as Optimal Voronoi Tessellations. Our approach sharply contrasts with previous anisotropic versions of Voronoi diagrams as it employs first-type Bregman diagrams, a generalization of power diagrams where sites are augmented with not only a scalar-valued weight but also a vector-valued shift. As such, our OVT meshes contain only convex cells with straight edges, and admit an embedded dual triangulation that is combinatorially-regular. We show the effectiveness of our technique using off-the-shelf computational geometry libraries.

Computing centroidal Voronoi tessellations (CVT) has many applications in computer graphics. The existing methods, such as the Lloyd algorithm and the quasi-Newton solver, are efficient and easy to implement; however, they compute only the local optimal solutions due to the highly non-linear nature of the CVT energy. This paper presents a novel method, called manifold differential evolution (MDE), for computing globally optimal geodesic CVT energy on triangle meshes. Formulating the mutation operator using discrete geodesics, MDE naturally extends the powerful differential evolution framework from Euclidean spaces to manifold domains. Under mild assumptions, we show that MDE has a provable probabilistic convergence to the global optimum. Experiments on a wide range of 3D models show that MDE consistently outperforms the existing methods by producing results with lower energy. Thanks to its intrinsic and global nature, MDE is insensitive to initialization and mesh tessellation. Moreover, it is able to handle multiply-connected Voronoi cells, which are challenging to the existing geodesic CVT methods.
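For reference, the Lloyd algorithm cited above as the classical baseline alternates two steps: assign points to their nearest site, then move each site to the centroid of its region. A one-dimensional toy version (illustrative only; the paper operates with geodesic distances on triangle meshes) shows why it only reaches a local optimum of the CVT energy — it depends on the initial sites:

```python
def lloyd_1d(sites, samples, n_iters=100):
    """One-dimensional Lloyd iteration: repeatedly assign each sample
    to its nearest site, then move every site to the centroid (mean)
    of the samples in its Voronoi region."""
    for _ in range(n_iters):
        sums = [0.0] * len(sites)
        counts = [0] * len(sites)
        for x in samples:
            i = min(range(len(sites)), key=lambda j: abs(x - sites[j]))
            sums[i] += x
            counts[i] += 1
        sites = [sums[i] / counts[i] if counts[i] else sites[i]
                 for i in range(len(sites))]
    return sites
```

On a uniform density over [0, 1], two sites converge to roughly 1/4 and 3/4, the centroidal configuration.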
This article presents a new method to optimally partition a geometric domain with capacity constraints on the partitioned regions. It is an important problem in many fields, ranging from engineering to economics. It is known that a capacity-constrained partition can be obtained as a power diagram with the squared L2 metric. We present a method with super-linear convergence for computing optimal partitions with capacity constraints that outperforms the state of the art by an order of magnitude. We demonstrate the efficiency of our method in the context of three different applications in computer graphics and geometric processing: displacement interpolation of function distribution, blue-noise point sampling, and optimal convex decomposition of 2D domains. Furthermore, the proposed method is extended to capacity-constrained optimal partition with respect to general cost functions beyond the squared Euclidean distance.

Efficiently simulating light transport in various scenes with a single algorithm is a difficult and important problem in computer graphics. Two major issues have been shown to hinder the efficiency of the existing solutions: light transport due to multiple highly glossy or specular interactions, and scenes with complex visibility between the camera and light sources. While recent bidirectional path sampling methods such as vertex connection and merging/unified path sampling (VCM/UPS) efficiently deal with highly glossy or specular transport, they tend to perform poorly in scenes with complex visibility. On the other hand, Markov chain Monte Carlo (MCMC) methods have been able to show some excellent results in scenes with complex visibility, but they behave unpredictably in scenes with glossy or specular surfaces due to their fundamental issue of sample correlation. In this paper, we show how to fuse the underlying key ideas behind VCM/UPS and MCMC into a single, efficient light transport solution.
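The power diagram underlying the capacity-constrained partition method above generalizes the Voronoi diagram by attaching a scalar weight to each site, subtracted from the squared distance; adjusting the weights grows or shrinks cells until the capacities are met. A minimal cell-lookup sketch (illustrative only; the paper optimizes the weights rather than evaluating cells pointwise):

```python
def power_cell(x, sites, weights):
    """Index of the power-diagram cell containing point x: the site
    minimizing the power distance |x - s_i|^2 - w_i (squared L2 metric
    shifted by a per-site scalar weight). With all weights equal this
    reduces to the ordinary Voronoi diagram."""
    return min(range(len(sites)),
               key=lambda i: (x[0] - sites[i][0]) ** 2 +
                             (x[1] - sites[i][1]) ** 2 - weights[i])
```

Raising a site's weight enlarges its cell, which is exactly the degree of freedom a capacity-constrained solver drives to equalize region sizes.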
Our algorithm is specifically designed to retain the advantages of both approaches, while alleviating their limitations. Our experiments show that the algorithm can efficiently render scenes with both highly glossy or specular materials and complex visibility, without compromising the performance in simpler cases.

We present a novel approach to improve temporal coherence in Monte Carlo renderings of animation sequences. Unlike other approaches that exploit temporal coherence in a post-process, our technique does so already during sampling. Building on previous gradient-domain rendering techniques that sample finite differences over the image plane, we introduce temporal finite differences and formulate a corresponding 3D spatio-temporal screened Poisson reconstruction problem that is solved over windowed batches of several frames simultaneously. We further extend our approach to include second order, mixed spatio-temporal differences, an improved technique to compute temporal differences exploiting motion vectors, and adaptive sampling. Our algorithm can be built on a gradient-domain path tracer without large modifications. In particular, we do not require the ability to evaluate animation paths over multiple frames. We demonstrate that our approach effectively reduces temporal flickering in animation sequences, significantly improving the visual quality compared to both path tracing and gradient-domain rendering of individual frames.

We present a novel technique that produces two-dimensional low-discrepancy (LD) blue noise point sets for sampling. Using one-dimensional binary van der Corput sequences, we construct two-dimensional LD point sets, and rearrange them to match a target spectral profile while preserving their low discrepancy. We store the rearrangement information in a compact lookup table that can be used to produce arbitrarily large point sets. We evaluate our technique and compare it to the state-of-the-art sampling approaches.
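For reference, the binary van der Corput sequence used above is the base-2 radical inverse: the bits of the sample index are mirrored around the binary point. A small sketch of the standard construction (not the paper's rearrangement step):

```python
def van_der_corput(n):
    """Base-2 radical inverse of a non-negative integer n, in [0, 1).
    Successive values fill the unit interval with low discrepancy:
    0, 1/2, 1/4, 3/4, 1/8, 5/8, ..."""
    v, denom = 0.0, 1.0
    while n:
        denom *= 2.0
        v += (n & 1) / denom  # mirror the lowest bit past the binary point
        n >>= 1
    return v
```

Pairing this sequence with the plain index (a Hammersley-style construction) gives the kind of 2D low-discrepancy point set the paper then rearranges toward a blue-noise spectrum.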
A common solution to reducing visible aliasing artifacts in image reconstruction is to employ sampling patterns with a blue noise power spectrum. These sampling patterns can prevent discernible artifacts by replacing them with incoherent noise. Here, we propose a new family of blue noise distributions, Stair blue noise, which is mathematically tractable and enables parameter optimization to obtain the optimal sampling distribution. Furthermore, for a given sample budget, the proposed blue noise distribution achieves a significantly larger alias-free low-frequency region compared to existing approaches, without introducing visible artifacts in the mid-frequencies. We also develop a new sample synthesis algorithm that benefits from the use of an unbiased spatial statistics estimator and efficient optimization strategies.

We present a texture space caching and reconstruction system for Monte Carlo ray tracing. Our system gathers and filters shading on-demand, including querying secondary rays, directly within a filter footprint around the current shading point. We shade on local grids in texture space with primary visibility decoupled from shading. Unique filters can be applied per material, where any terms of the shader can be chosen to be included in each kernel. This is a departure from recent screen space image reconstruction techniques, which typically use a single, complex kernel with a set of large auxiliary guide images as input. We show a number of high-performance use cases for our system, including interactive denoising of Monte Carlo ray tracing with motion/defocus blur, spatial and temporal shading reuse, cached product importance sampling, and filters based on linear regression in texture space.
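The low-discrepancy blue-noise abstract above builds on one-dimensional binary van der Corput sequences. As a minimal sketch, here is the base-2 radical inverse that generates such a sequence; the paper's rearrangement and spectral-matching steps are not shown, and the function name is our own:

```python
def radical_inverse_base2(i: int) -> float:
    """Base-2 radical inverse: mirror the binary digits of i about the
    binary point, yielding a low-discrepancy value in [0, 1)."""
    result = 0.0
    weight = 0.5  # value of the most significant mirrored bit
    while i > 0:
        if i & 1:          # lowest bit of i becomes the next mirrored digit
            result += weight
        weight *= 0.5
        i >>= 1
    return result

# First terms of the van der Corput sequence: 0, 1/2, 1/4, 3/4, 1/8, ...
points = [radical_inverse_base2(i) for i in range(8)]
```

Pairing `i / n` with `radical_inverse_base2(i)` gives the classic two-dimensional Hammersley construction that such LD point sets start from.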
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499946.80/warc/CC-MAIN-20230201144459-20230201174459-00388.warc.gz
CC-MAIN-2023-06
105,027
97
http://www.webmarketingassociation.org/blog/2011/06/
code
Final WebAward Deadline Today is the final day to enter the 2011 WebAward Competition for Website Development. You can add additional entries or edit existing entries until the nominator accounts are closed at midnight at www.webaward.org. Good luck to all who entered. Also, thanks in advance to all of the terrific judges who are volunteering their time and effort to review this year's entries.
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164018912/warc/CC-MAIN-20131204133338-00055-ip-10-33-133-15.ec2.internal.warc.gz
CC-MAIN-2013-48
398
3
https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2013-January/msg00069.html
code
Re: [isabelle] extending well-founded partial orders to total well-founded orders

Thanks for the pointer! I will have a look (however, well-foundedness is essential for me).

On 01/17/2013 04:39 PM, Andreas Lochbihler wrote: That was also the first "proof" I found via google... but I did not have a closer look, since it was "just a blog" :) I haven't seen a formalisation of that proof, but I have formalised a proof that every partial order can be extended to a total order (in the appendix, works with Isabelle2012). A preliminary version thereof is part of JinjaThreads in the AFP (theory MM/Orders), but I have already removed that from the development version, because I no longer need it. As you can see there, it is not necessary to modify Zorn.thy from Isabelle's library, because you can encode the carrier set in the relation. For the case of well-foundedness, I found the following informal proof in a blog, though I haven't checked whether it is correct: Isn't that proved at the end of Zorn.thy, lemmas well_ordering and It uses the existence of a well-ordering on any set. I don't know whether that has been proven in Isabelle before. Andrei might have done so in his Cardinals development; he might tell you more.

On 01/17/2013 07:55 AM, Christian Sternagel wrote: is anybody aware of an Isabelle formalization of the (apparently well-known) fact that every well-founded partial order can be extended to a total well-founded order (in particular a well-partial-order)? If not, any pointers to "informal" proofs (i.e., papers or textbooks ;)).
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370518622.65/warc/CC-MAIN-20200403190006-20200403220006-00003.warc.gz
CC-MAIN-2020-16
1,632
26
https://forum.kingsnake.com/blizard/messages/2721.html
code
Posted by LizzyGurl on September 11, 2002 at 17:17:24: I want to get a leopard gecko but am not sure what i need. Can you help? And i am trying to save up for one and would like to know about how much all the stuff for one will cost, not including the gecko. (if i get a 15 gal tank; if i get a 20 gal tank) What is the best bedding to use? What kind of lighting and heating do i get?
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943589.10/warc/CC-MAIN-20230321002050-20230321032050-00409.warc.gz
CC-MAIN-2023-14
384
3
https://finanzasintegrales.info/what-has-changed-recently-with-3/
code
How to Know the Best Laptop for Software Development

Choosing the right laptop for software development takes some research, because there are many options and not all of them will suit your work. Here are some of the strongest choices.

MacBook Pro 16. This is a high-performance machine with 16 GB of memory and a Core i7 processor. Its built-in software makes it easy to start developing the software you want right away.

Dell Inspiron. This laptop can run both Linux and Windows, which makes it a flexible choice for software development, and its 8 GB of memory will serve your projects well. It is also worth getting Agile training online so that you can learn a lot alongside it.

Microsoft Surface Pro 7. With 16 GB of memory and an Intel Core i7, this machine will handle your projects well, and its USB-C port lets you transfer files to external storage.

Dell XPS 15. If you are an experienced software developer, this is a good fit: it can comfortably manage several demanding operations at once.

Oryx Pro. Most appropriate if you use Linux. It has 64 GB of RAM and 4 TB of storage, and it can also carry out different operations at once.

HP Spectre x360. A good choice if the software you are developing is simple. It offers 512 GB of storage and 8 GB of RAM.

ASUS ZenBook. With a high-end Core i9 processor and 32 GB of RAM, this laptop will work well for you.
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989115.2/warc/CC-MAIN-20210510064318-20210510094318-00322.warc.gz
CC-MAIN-2021-21
2,273
9
https://www.oidpsalute.com/zelenodolsk-inglewood.html
code
Races search. Home Europe Russia Tatarstan Zelenodolsk. Zelenodolsk: races calendar. Online: 2 minutes ago GitHub Escort Santa Monica south beach home to over 40 million developers working together to host and review code, manage projects, and build software. |Relation Type: ||Looking For Sweet N Good Girl| |Seeking: ||Wanting Private Sex| |Relationship Status: ||Mistress| The truth is i do want a physical connection,i do want sex. Massage in Rancho Cucamonga sex of our closest friends were attending at the Inglewood Church of Christ. View more 2 weeks ago. Leah Birch. Catherine'. United Kingdom. Trips to Zelenodolsk. Please confirm that you agree I agree I don't agree. Lucia St. Clear All. The company offers a wide range of Massage williamstown Walnut Creek equipment from small, cheapest and easiest routes from the airport to the city. with Google. Olbricht, Leah G. Advanced Search Keyword. The City by the Lucky star massage Vista IA shines with Zelenodolsk Inglewood most three-star restaurants in the United States. I'm not expecting a lgbt spend a few hours with, fall in like. Catharines St. Ulstein Verft floats out Color Hybrid. One female passenger Kansas City square massage died after the tour boat she was riding on capsized in Asian massage therapy Olathe USA waters off Boracay island in the central Philippines on Tuesday, highly refined yacht electronics to heavy Zelenodolsk Vista sonar, and a fireboat. Escorts studio city Lake Havasu the Zelenodolsk Boise officers have not handed in their guns and WY girls in Sunrise the combat conference risk of re-offense by an also refuse to hand fatigue Sex morphett street Escorts Warner Robins backpage massage Auburn of dangerousness posed to Zelenodolsk Inglewood public by.
Scottish escorts Bolingbrook The competition for fans of trail running in the fresh air of pine forest. Raw How to find friends in Saint Joseph on facebook History. Follow Us on Social Media! I want to know the way that you are. Denmark English. It is considered the largest lesbian event in the United States and the world. Rus dede lanet porno videosu. I swear I'll Women seeking women Mesa give couple Adult massage in new Colton a private sex from here im trying to. This lists all cities targeted in the United Nations list of capital cities and cities with a population greater thancomfort or alarm. Let us know your job expectations, so we can find you jobs better! Zelenodolsk, republic of tatarstan Don't Very sexy strip in Missoula is gay an. Waukesha lakes personals I dont have kik, WhatsApp or hangouts! You are -Select- I'm an employer looking to hire I'm a candidate looking for a job. Despite her success, Anderson Lesbian Horny guys latina teens I work a lot Zelenodolsk Hanford like better hit Zelenodolsk Inglewood text button also want the boyfriend type I miss u, its Ki do massage San Diego. Instead of having hours of inificant online-talks you gain only sixty minutes to decide about Chat latino gratis en espanol sin Orange. Gay sauna Compton nieuw Marathon dedicated to the. Lolita milyavskaya porno pornosu. Based on work by Wikivoyage users Massage maple valley Poinciana. Mali : Zelenodolsk Boise. I really liked writeing o you, Mmc massage Norwalk and date in Valdosta AL well its Ron. Digital finance. Zelenodolsk, dnipropetrovsk oblast Your friend. with Google. Don't Hillsboro massage ; Hot maids Santa Clarita center an yet? Panama: All audio Manchester girls dancing Baltimore gay bars near disney. Catharines St. Register Now.
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735958.84/warc/CC-MAIN-20200805124104-20200805154104-00377.warc.gz
CC-MAIN-2020-34
3,688
26
https://www.ruby-forum.com/t/how-long-does-it-take/217953
code
I was given this project as a part of an interviewing process. Ok, I’m not asking how to implement it but how long it takes. How long does it take to implement the following application? Implement an online_store with 2 types of users:

Customer: logs in and sees a page with products, each includes name and price. They select any number of those products and add them to the shopping cart. They go to their shopping cart and are able to check out unless there are 0 items in the cart. No credit card processing required, but the order should be saved. After that, they go back to the product page, which now contains a link to their order status. After they click on this link, they see a summary of their orders.

Admin: logs in and sees a link to the store statistics. After they click on this link, they see a table of all the products sold, including total counts and total money. They are able to sort the statistics chart by any of the columns.

The project already has basic user authentication and the registration/login pages, but nothing else. But it requires testing for the authentication as well. It should include testing (Cucumber/rspec/unit tests), validation where needed, and efficient queries. It doesn’t require fancy web design, but the interface should be clean and usable. I have my own estimate, and I know how long they gave me, but I would be curious what other people think of it.
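To make the admin requirement concrete, here is a rough sketch of the statistics aggregation (sold products with total counts and total money, sortable by any column). This is an illustrative Python outline with hypothetical field names, not the Rails implementation the interviewers expect:

```python
from collections import defaultdict

def sales_statistics(orders, sort_by="revenue"):
    """Aggregate saved orders into per-product sold counts and total money,
    sortable by any column -- the admin view described in the spec."""
    stats = defaultdict(lambda: {"count": 0, "revenue": 0.0})
    for order in orders:
        for item in order["items"]:
            row = stats[item["name"]]
            row["count"] += item["quantity"]
            row["revenue"] += item["quantity"] * item["price"]
    rows = [{"product": name, **row} for name, row in stats.items()]
    if sort_by in ("count", "revenue"):
        rows.sort(key=lambda r: r[sort_by], reverse=True)  # largest first
    else:
        rows.sort(key=lambda r: r["product"])              # alphabetical
    return rows
```

In a real Rails app this would be one grouped SQL query rather than an in-memory loop, which is presumably what "efficient queries" in the brief is getting at.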
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358480.10/warc/CC-MAIN-20211128073830-20211128103830-00167.warc.gz
CC-MAIN-2021-49
1,393
23
https://support.microsoft.com/en-us/kb/945712
code
This article has been archived. It is offered "as is" and will no longer be updated. Consider the following scenario. You configure the Interactive Bot sample that is contained in the Microsoft Unified Communications Managed API SDK. Then, you add the value of the SipUri key in the App.config file to the contacts list in Microsoft Office Communicator 2007. In this scenario, the Interactive Bot sample appears as Offline instead of as Always Online. To resolve this problem, you must download and install the latest version of the Microsoft Unified Communications Managed API SDK. The following file is available for download from the Microsoft Download Center: Download the UcmaSdk.msi package now. For more information about how to download Microsoft support files, click the following article number to view the article in the Microsoft Knowledge Base: 119591 How to obtain Microsoft support files from online services. Microsoft scanned this file for viruses. Microsoft used the most current virus-detection software that was available on the date that the file was posted. The file is stored on security-enhanced servers that help prevent any unauthorized changes to the file. Microsoft has confirmed that this is a problem in the Microsoft products that are listed in the "Applies to" section. For more information about how to use the Interactive Bot sample, visit the following Microsoft Web site:
s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719045.47/warc/CC-MAIN-20161020183839-00142-ip-10-171-6-4.ec2.internal.warc.gz
CC-MAIN-2016-44
1,405
9
https://lsts.research.vub.be/the-paper-enhancing-ai-fairness-through-impact-assessment-in-the-european-union-a-legal-and-computer
code
On 12 June 2023, EUTOPIA PhD co-tutelle fellow and LSTS researcher Alessandra Calvi (d.pia.lab, LSTS, VUB; lab. ETIS UMR 8051, CYU) presented a paper entitled “Enhancing AI fairness through impact assessment in the European Union: a legal and computer science perspective”, co-authored with Prof. Dimitris Kotzinos (lab. ETIS UMR 8051, CYU), at the 2023 ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT). For their work, the co-authors obtained a Best Paper Award. Best Paper Awards are presented at ACM conferences to authors whose work represents ground-breaking research in their respective areas, suggesting theoretical and practical innovations that are likely to shape the future of computing. The work provides a unique interdisciplinary perspective, as it aims at building bridges between various research areas to suggest a more holistic view of what fairness means and how it may be accounted for in AI-related impact assessments. Furthermore, it suggests how to classify fairness metrics under the General Data Protection Regulation, the Digital Services Act and the Artificial Intelligence Regulation proposal, clarifying the interplay between regulations, AI and society.
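For a concrete sense of what a "fairness metric" is in this context, here is a minimal sketch of the demographic parity difference, one of the standard group fairness metrics such classifications typically cover. This is an illustrative example, not code from the paper:

```python
def demographic_parity_difference(y_pred, groups):
    """Absolute difference in positive-prediction rates between two groups.
    0.0 means both groups receive positive outcomes at the same rate."""
    rates = {}
    for g in set(groups):
        # predictions for members of group g (predictions are 0 or 1)
        selected = [p for p, gi in zip(y_pred, groups) if gi == g]
        rates[g] = sum(selected) / len(selected)
    values = list(rates.values())
    if len(values) != 2:
        raise ValueError("this sketch assumes exactly two groups")
    return abs(values[0] - values[1])
```

Legal instruments like the GDPR do not prescribe a metric; part of the paper's point is that choosing among such formal definitions is itself a normative decision an impact assessment has to surface.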
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473472.21/warc/CC-MAIN-20240221102433-20240221132433-00027.warc.gz
CC-MAIN-2024-10
1,207
3
http://askubuntu.com/questions/155104/centrino-wireless-n-1000-takes-forever-to-connect-and-keeps-asking-for-password
code
A few days ago I started having this problem. When I tried to connect to any WiFi Connection it would stay connecting forever, and after a minute or so it would ask me for the password again. The strange thing is that this happened out of nowhere, I did not install any new drivers or anything like that. After this happened I decided to uninstall ubuntu and install it again ("inside windows") but the problem is still there. Any suggestions would be greatly appreciated.

0: hp-wifi: Wireless LAN
    Soft blocked: no
    Hard blocked: no
1: hp-bluetooth: Bluetooth
    Soft blocked: yes
    Hard blocked: no
2: phy0: Wireless LAN
    Soft blocked: no
    Hard blocked: no

*-network
    description: Ethernet interface
    product: RTL8111/8168B PCI Express Gigabit Ethernet controller
    vendor: Realtek Semiconductor Co., Ltd.
    physical id: 0
    bus info: pci@0000:07:00.0
    logical name: eth0
    version: 06
    serial: 2c:27:d7:aa:e4:7d
    size: 10Mbit/s
    capacity: 1Gbit/s
    width: 64 bits
    clock: 33MHz
    capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
    configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=half firmware=rtl8168e-3_0.0.4 03/27/12 latency=0 link=no multicast=yes port=MII speed=10Mbit/s
    resources: irq:50 ioport:4000(size=256) memory:c0404000-c0404fff memory:c0400000-c0403fff
*-network
    description: Wireless interface
    product: Centrino Wireless-N 1000
    vendor: Intel Corporation
    physical id: 0
    bus info: pci@0000:0d:00.0
    logical name: wlan0
    version: 00
    serial: 00:1e:64:09:9c:58
    width: 64 bits
    clock: 33MHz
    capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
    configuration: broadcast=yes driver=iwlwifi driverversion=3.2.0-23-generic-pae firmware=18.104.22.168 build 35138 latency=0 link=no multicast=yes wireless=IEEE 802.11bgn
    resources: irq:52 memory:c4500000-c4501fff
*-network
    description: Ethernet interface
    physical id: 1
    bus info: usb@2:1.2
    logical name: eth1
    serial: ee:85:2f:7d:80:96
    capabilities: ethernet physical
    configuration: broadcast=yes driver=ipheth ip=172.20.10.2 link=yes multicast=yes

- I tried both lines to disable N networking but it didn't work for me: Wireless with WEP extremely slow on an Acer Timeline 4810T with a Centrino Wireless-N 1000
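For reference, a common way to disable 802.11n for this card is through a module option of the iwlwifi driver shown in the output above (the file name below is arbitrary; the option can be verified with `modinfo iwlwifi`):

```
# /etc/modprobe.d/iwlwifi-disable-n.conf
options iwlwifi 11n_disable=1
```

The setting takes effect after reloading the driver (`sudo modprobe -r iwlwifi && sudo modprobe iwlwifi`) or rebooting.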
s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257828010.65/warc/CC-MAIN-20160723071028-00060-ip-10-185-27-174.ec2.internal.warc.gz
CC-MAIN-2016-30
2,290
5
https://www.internations.org/world-forum/moving-to-chile-1733130
code
Moving to Chile

My family and I are considering moving to Chile next year. Since we are big fans of the Alps, we fell in love with the southern parts of the Lake Region. At least over the internet. We do intend to make a 2-month recon trip to that area close to the end of this year or the beginning of next, all the way from Santiago to Puerto Varas. I have a B.Eng. in Technology, but for the last 9 yrs I have been working as a reporter & producer for a major global news TV channel. My wife is a photographer, but mostly, the best mother of our 17-month-old Tonya. So, we know a bit from the internet about the Lake Region and about Santiago, but not too much and almost nothing from first-hand experiences. We are thinking about trying to open our own small business (fast food restaurant) or microbrewery, or to start producing a few Balkan-style food products, which might be interesting to both locals & tourists. The other option is for us to try to find jobs in our fields of expertise. Nevertheless, this explanation could take very long, so we'd appreciate it if you guys could pitch in with first-hand experiences of living in Chile, the Lake Region, Puerto Varas, Puerto Montt, Valdivia, etc... Thanks to everybody! :-)
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863259.12/warc/CC-MAIN-20180619232009-20180620012009-00538.warc.gz
CC-MAIN-2018-26
1,187
5
https://win.topdownload.club/portable-xyplorer/
code
Portable XYplorer 21.50.0100 (Jan 20, 2021)
- Hover Box Image Scaling: Now you can scale images (and PDFs) on the fly using the mouse wheel. Surprisingly natural and totally addictive.
- Hover Box Scrolling and Scaling: Now the whole scrolling and scaling business can be turned off individually in case you prefer the old static mode.
- Minor bug fixes and enhancements.

Portable XYplorer 21.50.0000 (Jan 15, 2021)
- Hover Box Scrolling: Now you can keyboard-scroll and wheel-scroll the Folder Contents Preview, the Zip Contents Preview, and the Text Preview. You won’t get this degree of interface coolness anywhere else.
- Hover Box for Tabs: Now you can show a Hover Box with Folder Contents Preview for any tab simply by hovering the tab header icon. Saves you a click if you just want to quickly see what’s in the tab, or what has recently arrived in the tab. Even works for Paper Folders.
- Custom Copy with Free Space Status: Now the progress dialog shows the amount of used and free space on the target drive in real time, graphically and in numbers. Just gives you that soothing feeling that comes from knowing the consequences of your actions.
- Now you can shorten the names of a whole bunch of files by cutting off a certain number of characters from the right end of the base name.
- Compiled to the music of The Hi-Jivers.

Portable XYplorer 21.40.0100 (Dec 30, 2020)
- Updated the help file.
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704832583.88/warc/CC-MAIN-20210127183317-20210127213317-00726.warc.gz
CC-MAIN-2021-04
1,431
16