query_id (string, 32 chars) | query (string, 6-3.9k chars) | positive_passages (list, 1-21 items) | negative_passages (list, 10-100 items) | subset (string, 7 classes)
---|---|---|---|---|
27dac0c258f29d4df61994f2c885af9c
|
Does an owner of a bond etf get an income even if he sells before the day of distribution?
|
[
{
"docid": "360b618f715186825da5a27f9163b026",
"text": "Your ETF will return the interest as dividends. If you hold the ETF on the day before the Ex-Dividend date, you will get the dividend. If you sell before that, you will not. Note that at least one other answer to this question is wrong. You do NOT need to hold on the Record date. There is usually 2 days (or so) between the ex-date and the record date, which corresponds to the number of days it takes for your trade to settle. See the rules as published by the SEC: http://www.sec.gov/answers/dividen.htm",
"title": ""
},
{
"docid": "48c24049376a347959f8f744d9e66517",
"text": "Bond ETFs are traded like normal stock. It just so happens to be that the underlying fund (for which you own shares) is invested in bonds. Such funds will typically own many bonds and have them laddered so that they are constantly maturing. Such funds may also trade bonds on the OTC market. Note that with bond ETFs you're able to lose money as well as gain depending on the situation with the bond market. The issuer of the bond does not need to default in order for this to happen. The value of a bond (and thus the value of the bond fund which holds the bonds) is, much like a stock, determined based on factors like supply/demand, interest rates, credit ratings, news, etc.",
"title": ""
},
{
"docid": "b106aa78f608ac6f263c770c8b0d13f0",
"text": "There are two 'dates' relevant to your question: Ex-Dividend and Record. To find out these dates for a specific security visit Dividend.Com. You have to purchase the security prior to the Ex-Dividend date, hold it at least until the Record Date. After the Record Date you can sell the security and still receive the dividend for that quarter. ---- edit - - - - I was wrong. If you sell the security after the Ex-div date but before the date of record you still get the dividend. http://www.investopedia.com/articles/02/110802.asp",
"title": ""
}
] |
[
{
"docid": "ce25b1830452e713b8ff2b84a9d71f11",
"text": "\"Mutual funds generally make distributions once a year in December with the exact date (and the estimated amount) usually being made public in late October or November. Generally, the estimated amounts can get updated as time goes on, but the date does not change. Some funds (money market, bond funds, GNMA funds etc) distribute dividends on the last business day of each month, and the amounts are rarely made available beforehand. Capital gains are usually distributed once a year as per the general statement above. Some funds (e.g. S&P 500 index funds) distribute dividends towards the end of each quarter or on the last business day of the quarter, and capital gains once a year as per the general statement above. Some funds make semi-annual distributions but not necessarily at six-month intervals. Vanguard's Health Care Fund has distributed dividends and capital gains in March and December for as long as I have held it. VDIGX claims to make semi-annual distributions but made distributions three times in 2014 (March, June, December) and has made/will make two distributions this year already (March is done, June is pending -- the fund has gone ex-dividend with re-investment today and payment on 22nd). You can, as Chris Rea suggests, call the fund company directly, but in my experience, they are reluctant to divulge the date of the distribution (\"\"The fund manager has not made the date public as yet\"\") let alone an estimated amount. Even getting a \"\"Yes, the fund intends to make a distribution later this month\"\" was difficult to get from my \"\"Personal Representative\"\" in early March, and he had to put me on hold to talk to someone at the fund before he was willing to say so.\"",
"title": ""
},
{
"docid": "08c3f5e83dd7e845ab352290781bcd70",
"text": "Dividends are not paid immediately upon reception from the companies owned by an ETF. In the case of SPY, they have been paid inconsistently but now presumably quarterly.",
"title": ""
},
{
"docid": "d3758f89694c049210e7beac9efa2c3a",
"text": "The trend in ETFs is total return: where the ETF automatically reinvests dividends. This philosophy is undoubtedly influenced by that trend. The rich and retired receive nearly all income from interest, dividends, and capital gains; therefore, one who receives income exclusively from dividends and capital gains must fund by withdrawing dividends and/or liquidating holdings. For a total return ETF, the situation is even more limiting: income can only be funded by liquidation. The expected profit is lost for the dividend as well as liquidating since the dividend can merely be converted back into securities new or pre-existing. In this regard, dividends and investments are equal. One who withdraws dividends and liquidates holdings should be careful not to liquidate faster than the rate of growth.",
"title": ""
},
{
"docid": "c3f5aa8893ae0fea90232779fcb22b47",
"text": "Yes, if you want income and are willing to commit to hold a bond to maturity, you can hold the bond, get the scheduled payments, and get your principal returned at the end. US Savings Bonds are non-marketable (you cannot trade them, but can redeem early) bonds designed for this purpose. The value of a marketable bond will vary over its lifetime as interest rates change and the bond matures. If you buy a 30 year US Treasury bond at par value (100) on September 1, 2011, it yielded 3.51%. If rates fall, the value of your bond will increase over 100. If rates rise, the value will decrease below 100. How much the value changes depends on the type of bond and the demand for it. But if your goal is to buy and hold, you don't need to worry about it.",
"title": ""
},
{
"docid": "468f1945e30dd4d58e90a92d1a6d3953",
"text": "\"The way the post is worded, coca cola wouldn't count towards either, although it's not entirely clear. If the dividends are considered under capital gains (which isn't technically an appropriate term) he's earning only 500Million a year from his stake in coca cola. If he sold his shares, he'd receive capital gains of ~15Billion, which would probably outpace his operations business. The best graph would probably be something like \"\"net worth of operations vs net worth of equity in other companies\"\"\"",
"title": ""
},
{
"docid": "8e37a0bedf04922bb9fa43fd2c0e00b4",
"text": "The tax is only payable on the gain you make i.e the difference between the price you paid and the price you sold at. In your cse no tax is payable if you sell at the same price you bought at",
"title": ""
},
{
"docid": "4b673df4129fb2dab004b655c4a601aa",
"text": "No. As a rule, the dividends you see in the distribution table are what you'll receive before paying any taxes. Tax rates differ between qualified and unqualified/ordinary dividends, so the distribution can't include taxes because tax rates may differ between investors. In my case I hold it in an Israeli account but the tax treaty between our countries still specifies 25% withheld tax This is another example of why tax rates differ between investors. If I hold SPY too, my tax rate will be very different because I don't hold it in an account like yours, so the listed dividend couldn't include taxes.",
"title": ""
},
{
"docid": "3f2195b1e5cbd163326130ce19f688aa",
"text": "\"Not a bond holder, but when we get dividends we usually just buy up a benchmark index tracking ETF unless/until we're ready to rebalance our portfolio. Most of the trades in the day are earmarked with the reason \"\"spending cash\"\". I'd assume it's similar for bond holders and coupons.\"",
"title": ""
},
{
"docid": "95c2adec4356b3c197307f57a31ce4a5",
"text": "Brokerage firms must settle funds promptly, but there's no explicit definition for this in U.S. federal law. See for example, this article on settling trades in three days. Wikipedia also has a good write-up on T+3. It is common practice, however. It takes approximately three days for the funds to be available to me, in my Canadian brokerage account. That said, the software itself prevents me from using funds which are not available, and I'm rather surprised yours does not. You want to be careful not to be labelled a pattern day trader, if that is not your intention. Others can better fill you in on the consequences of this. I believe it will not apply to you unless you are using a margin account. All but certainly, the terms of service that you agreed to with this brokerage will specify the conditions under which they can lock you out of your account, and when they can charge interest. If they are selling your stock at times you have not authorised (via explicit instruction or via a stop-loss order), you should file a complaint with the S.E.C. and with sufficient documentation. You will need to ensure your cancel-stop-loss order actually went through, though, and the stock was sold anyway. It could simply be that it takes a full business day to cancel such an order.",
"title": ""
},
{
"docid": "2dc4fec57148f221da98f849fa2699b5",
"text": "\"....causes loses [sic] to others. Someone sells you a stock. The seller receives cash. You receive a stock certificate. This doesn't imply a loss by either party especially if the seller sold the stock for more than his purchase price. A day trading robot can make money off of the price changes of a stock only if there are buyers and sellers of the stock at certain prices. There are always two parties in any stock transaction: a buyer and a seller. The day trading robot can make money off of an investment for 20 years and you could still make money if the investment goes up over the 20 years. The day trading robot doesn't \"\"rob\"\" you of any profit.\"",
"title": ""
},
{
"docid": "a96d94c22d193385c82351f53d90af2a",
"text": "\"Your return from a bond fund corresponds to the return on the underlying bonds (minus fees) during your holding period. So you can buy AND sell at any time. Some funds charge a penalty of 2% or whatever if you sell your fund shares within 30 or 60 days of buying it. There are two basic ways to profit from a bond fund. 1) you get dividends from the interest paid on the bonds. 2) you have a capital gain (or loss) on the bonds themselves. 1) is likely to happen. MOST (not all) bonds pay interest on time, and on a regular basis. This component of returns is ALMOST guaranteed. 2) There are no guarantees on what the \"\"market\"\" will pay for bonds at any given time, so this component of bonds is NOT AT ALL guaranteed. Your \"\"total return is the sum of 1) and 2) (minus fees). Since 2) is uncertain, your \"\"total return\"\" is uncertain.\"",
"title": ""
},
{
"docid": "c7cb9fb148b3e388eb95cfe98ac96a8d",
"text": "Your understanding is incorrect. The date of record is when you have to own the stock by. The ex-dividend date is calculated so that transaction before that date settles in time to get you listed as owner by the date of record. If you buy the stock before the ex-dividend date, you get the dividend. If you buy it on or after the ex-dividend date, the seller gets the dividend.",
"title": ""
},
{
"docid": "24edd62c7ed2bda08884eda0e9dcf42b",
"text": "\"In the US, and in most other countries, dividends are considered income when paid, and capital gains/losses are considered income/loss when realized. This is called, in accounting, \"\"recognition\"\". We recognize income when cash reaches our pocket, for tax purposes. So for dividends - it is when they're paid, and for gains - when you actually sell. Assuming the price of that fund never changes, you have this math do to when you sell: Of course, the capital loss/gain may change by the time you actually sell and realize it, but assuming the only price change is due to the dividends payout - it's a wash.\"",
"title": ""
},
{
"docid": "5a9de080444de75c710b8e60527623c7",
"text": "\"I'm trying to understand how an ETF manager optimized it's own revenue. Here's an example that I'm trying to figure out. ETF firm has an agreement with GS for blocks of IBM. They have agreed on daily VWAP + 1% for execution price. Further, there is a commission schedule for 5 mils with GS. Come month end, ETF firm has to do a monthly rebalance. As such must buy 100,000 shares at IBM which goes for about $100 The commission for the trade is 100,000 * 5 mils = $500 in commission for that trade. I assume all of this is covered in the expense ratio. Such that if VWAP for the day was 100, then each share got executed to the ETF at 101 (VWAP+ %1) + .0005 (5 mils per share) = for a resultant 101.0005 cost basis The ETF then turns around and takes out (let's say) 1% as the expense ratio ($1.01005 per share) I think everything so far is pretty straight forward. Let me know if I missed something to this point. Now, this is what I'm trying to get my head around. ETF firm has a revenue sharing agreement as well as other \"\"relations\"\" with GS. One of which is 50% back on commissions as soft dollars. On top of that GS has a program where if you do a set amount of \"\"VWAP +\"\" trades you are eligible for their corporate well-being programs and other \"\"sponsorship\"\" of ETF's interests including helping to pay for marketing, rent, computers, etc. Does that happen? Do these disclosures exist somewhere?\"",
"title": ""
},
{
"docid": "efb66dcd4b165d602a86a88e6d70d4de",
"text": "You only have to hold the shares at the opening of the ex-dividend date to get the dividends. So you can actually sell the shares on ex-dividend date and still get the dividends. Ex-dividend date occurs before the record date and payment date, so you will get the dividend even if you sold before the record date.",
"title": ""
}
] |
fiqa
|
4e7b18df8936133d34b122b165662f33
|
How much power does a CEO have over a public company?
|
[
{
"docid": "fe6c62e0a4a3b86b3c7b77beb28cbd57",
"text": "The shareholders elect the board of directors who in turn appoint a CEO. The CEO is responsible for the overall running of the company. To answer your specific questions: Yes, Steve Jobs could make decisions that are harmful to the well-being of the company. However, it's the responsibility of the board of directors to keep his decisions and behavior in check. They will remove him from his position if they feel he could be a danger to the company.",
"title": ""
},
{
"docid": "8dd380987c8875e3144da0a56ae22f67",
"text": "If Steve Jobs [Tim Cook] were to decide to try to kill Apple, does he have the power to do so? Yes. But he would be held accountable. In addition to the other answers, the CEO is a fiduciary of the corporation. That means his/her actions must be in good faith and look out for the well-being of the company. Otherwise, he could be sued and held liable for civil damages and even criminally prosecuted for malfeasance.",
"title": ""
},
{
"docid": "2746b78eb02f9a09265196c4bd9e288a",
"text": "This is a very good question and is at the core of corporate governance. The CEO is a very powerful figure indeed. But always remember that he heads the firm's management only. He is appointed by the board of directors and is accountable to them. The board on the other hand is accountable to the firm's shareholders and creditors. The CEO is required to disclose his ownership of the firm as well. Ideally, you (as a shareholder) would want the board of directors to be as independent of the management as it is possible. U.S. regulations require, among other things, the board of directors to disclose any material relationship they may have with the firm's employees, ex-employees, or their families. Such disclosures can be found in annual filings of a company. If the board of directors acts independently of the management then it acts to protect the shareholder's interests over the firm management's interest and take seemingly hard decisions (like dismissing a CEO) when they become necessary to protect the franchise and shareholder wealth.",
"title": ""
},
{
"docid": "3f66d5baa80fec1f570bf779849b435e",
"text": "Also keep note - some companies have a combined CEO/Chairman of the board role. While he/she would not be allowed to negotiate contracts or stock plans, some corporate governance analysts advocate for the separation of the roles to remove any opportunity for the CEO to unduly influence the board. This could be the case for dysfunctional boards. However, the alternate camps will say that the combined role has no negative effect on shareholder returns. SEC regulations require companies to disclose negotiations between the board and CEO (as well as other named executives) for contracts, employee stock plans, and related information. Sometimes reading the proxy statement to find out, for example, how many times the board meets a year, how many other boards a director serves on, and if the CEO sits on any other board (usually discouraged to serve on more than 2) will provide some insight into a well-run (or not well-run) board.",
"title": ""
}
] |
[
{
"docid": "77f89971a7a2ffed46917caca5dd0e33",
"text": "\"a lot of companies will \"\"class\"\" their shares and the founders will hold on to the A class shares so that they can distribute more than 50% but still retain the majority of control over company decisions. A lot of this stuff is set out in the underwriting.\"",
"title": ""
},
{
"docid": "0c25bc39b09256017b3426e5f5fcb448",
"text": "CEOs have multiple fiduciary duties, which fall into three broad categories: care, loyalty, and disclosure. You are probably referring to the duty of loyalty: to act, in good faith, in the best interests of shareholders, putting shareholders' interests above the CEOs' own personal interests.",
"title": ""
},
{
"docid": "a3b13c092ffbc8a95cfafc6ba4275c30",
"text": "Yes and no. Courts do understand the idea of tyranny of the majority. Specifically, actions that hurt the company for personal gain is still theft against the minority shareholders. It's common misconception that this fiduciary duty means that a CEO's job is to raise the price of their stock. The truth is, stock price is a number that has an extremely tenuous relation to actual company health. So, it's entirely possible for a shareholder lawsuit to happen. It's just typically cheaper and less hassle to sell the stocks for a loss and get out while they can.",
"title": ""
},
{
"docid": "88ab9f9eb83e88b5b691d94aa1f7100e",
"text": "Many CEOs I have heard of earn a lot more than 200k. In fact a lot earn more than 1M and then get bonuses as well. Many wealthy people increase there wealth by investing in property, the stock market, businesses and other assets that will produce them good capital growth. Oh yeh, and luck usually has very little to do with their success.",
"title": ""
},
{
"docid": "b85a2f8355082ec269db017ff3da7393",
"text": "Yes. I can by all means start my own company and name myself CEO. If Bill Gates wanted to hire me, I'll take the offer and still be CEO of my own company. Now, whether or not my company makes money and survives is another question. This is the basis of self-employed individuals who contract out their services.",
"title": ""
},
{
"docid": "b6467e804b2819ebdf69bc967a7c1f66",
"text": "At any given time there are buy orders and there are sell orders. Typically there is a little bit of space between the lowest sell order and the highest buy order, this is known as the bid/ask spread. As an example say person A will sell for $10.10 but person B will only buy at $10.00. If you have a billion shares outstanding just the space between the bid and ask prices represents $100,000,000 of market cap. Now imagine that the CEO is in the news related to some embezzlement investigation. A number of buyers cancel their orders. Now the highest buy order is $7. There isn't money involved, that's just the highest offer to buy at the time; but that's a drop from $10 to $7. That's a change in market cap of $3,000,000,000. Some seller thinks the stock will continue to fall, and some buyer thinks the stock has reached a fair enterprise value at $7 billion ($7 per share). Whether or not the seller lost money depends on where the seller bought the stock. Maybe they bought when it was an IPO for $1. Even at $7 they made $6 per share. Value is changing, not money. Though it would be fun, there's no money bonfire at the NYSE.",
"title": ""
},
{
"docid": "5c5b6590026b326732665a2758d4c3ef",
"text": "OTOH if you look at automobile purchases I don't know if anyone could tell you who the CEO of say, GM or Toyota or BMW is. Those purchases tend to be more emotional than anything else and not directly related to corporate or CEO behavior.",
"title": ""
},
{
"docid": "b0d37a12b0ea81470660693086bfb85c",
"text": "If you don't have any voting rights then you don't have much say in the direction of the company. Of course, if the majority of voting rights are held by 1 or 2 people/institutions then you probably don't have much say regardless. That said, 0.1% isn't a whole lot of a voice anyway.",
"title": ""
},
{
"docid": "4f2886f849780145584d7943a9172176",
"text": "http://www.catalyst.org/publication/271/women-ceos-of-the-fortune-1000 There are only 40 women CEO's out of the top 1000 companies. Looks like Virginia Rometty is viewed as a star at IBM. And you have Ursula Burns of Xerox. (Those are considered tech right?) Ursula Burns is kind of completely amazing actually. But there aren't a lot of high profile female CEO's because there aren't a lot of female CEO's period. You can look over the list for the tech ones. Also high profile is usually linked with something sexy and interesting to the public. I don't know many male CEOs. Apple, Amazon. That's about it. I don't even know who's in charge of google.",
"title": ""
},
{
"docid": "928598067c978d7ba6b404631e154c70",
"text": "The person holding the majority of shares can influence the decisions of the company. Even though the shareholder holds majority of the shares,the Board of Directors appointed by the shareholders in the Annual General Meeting will run the company. As said in the characteristics of the company,the owners and the administrators of the company are different. The shareholder holding majority of the shares can influence the business decisions like appointing the auditor,director etc. and any other business decisions(not taken in the ordinary business) that are taken in the Annual General Meeting.",
"title": ""
},
{
"docid": "0c18165ab9300dbfec22589dae0279d2",
"text": "Bullshit, I'm guessing you don't know many CEOs and what they provide for a company, do you? Also, your idea about private management is meaningless. The shareholders manage the company. End of story. They are also the owners.",
"title": ""
},
{
"docid": "a1c98ccc768243eed86cf029e1f1b71b",
"text": "Warren Buffet also isn't the CEO of a major company - or at least one that matters in this context. He is the CEO of Berkshire Hathaway. That is a holding company that owns a handful of other companies. It doesn't have customers, it doesn't sell a product. It owns companies that do those things, some of which directly rely on technology and need their CEO to have a strong understanding of technology. The things is though, that each of those companies? They have their own CEO - not Warren Buffett.",
"title": ""
},
{
"docid": "03de8137410bc7bd6ff8c85e0da1af97",
"text": "\"Trump called it \"\"controls\"\" rather than owns. He is firmly remaining as the CEO and is the largest shareholder so that's a moot point. That is still $85 billion in shares. If Trump wipes off only 10% in stock price with his constant threats of taxes and breaking up a monopoly, that would cost Bezos $8.5 billion. If Trump does break up Amazon then Bezos may lose much more. Trump explained to Fox News, \"\"This is owned as a toy by Jeff Bezos who controls Amazon. Amazon is getting away with murder tax-wise. He's using the Washington Post for power so that the politicians in Washington don't tax Amazon like they should be taxed,\"\" Trump said. Trump added that he read somewhere that Bezos was worried Trump would go after him for anti-trust violations.\"",
"title": ""
},
{
"docid": "560638c8ad7f70d280c4a628437bee49",
"text": "\"I hate to point this out, but have you heard of this guy Trump, or Warren Buffet (although his son seems to be very competent and grounded, to some degree). The US is also plagued with this problem where family companies remain so through leadership, they also tend to fail at greater rates than our publicly ran companies. I suppose Samsung is public company, but why having stock on the open KRX doesn’t lead to better leadership is beyond me to understand? EDIT:My bad for bringing Trump into this, it was meant as an example of wealth distribution which translates into capacity for business options, and he's well known. However you guys need to do some more research before throwing shade, Howard Buffet has taken over Berkshire Hathaway in a non-executive role, while also holding board positions on a multitude of companies in which BH own significant portions including coca-cola. I wasn't pointing out Warren is incompetent in any way, just he passed the reins off to family too in many ways. \"\"In December 2011, Warren Buffett told CBS News that he would like his son Howard to succeed him as Berkshire Hathaway's non-executive chairman.\"\" Apologies for lack of clarity in my statement.\"",
"title": ""
},
{
"docid": "eb7012fb5d54d8691f293657b1f463d5",
"text": "> Board members that have fiduciary responsibilities to investors (this includes the potential for personal liability). How often do lawsuits surrounding this even happen? How can shareholders prove in court that the CEO or whomever they are suing wasn't acting in the best interest of the company and actually have a meaningful case? There are countless badly run public companies out there and they don't get sued.",
"title": ""
}
] |
fiqa
|
f0aea1ed08b2d40ef22ba475084cbec8
|
Can I buy only 4 shares of a company?
|
[
{
"docid": "8d22309c11ce6d096acf4c779c7ab65b",
"text": "Yes you can. it's called Odd Lot",
"title": ""
},
{
"docid": "86c8b880726a6d9d088b0f3a56861f70",
"text": "\"Simple answer: Yes A better question to ask might be \"\"Should I invest all my savings to buy 4 shares of a single stock.\"\" My answer to that would be \"\"probably not\"\". If this is your first venture into the world of owning publicly traded companies, then you're better off starting with some sort of mutual fund or ETF. This will start your portfolio with some amount of diversification so you don't have all your eggs in one basket. If you really want to get into the world of picking individual stocks, a good rule of thumb to follow is to invest $1 in some sort of indexed fund for every $1 you invest in an individual stock. This gives you some diversification while still enabling you to scratch that itch of owning a part of Apple or whatever other company you think is going in the right direction.\"",
"title": ""
},
{
"docid": "8c0a103b711ecf32e3c82c69669e9b6f",
"text": "I'm not sure it is the best idea, but you can buy only 4 stocks generally. As you alluded to, you should take notice of the fees. Also note that many stocks trade at significantly lower prices than Apple's per shares, so you might want to factor that into your decision. You could probably get a better feel for transactions if you bought say 50 shares of a $30 stock; then it might be easier to see what it's like to sell some, etc. Note that specific trading sites might have various limits in place that would pose as barriers to this sort of behavior though.",
"title": ""
},
{
"docid": "184b63bf1790b8e69ca079b62aebdbb5",
"text": "Open an account with a US discount online broker, or with a European broker with access to the US market. I think ETRADE allow non-resident accounts, for instance, amongst others. The brokerage will be about $10, and there is no annual fee. (So you're ~1% down out of the gate, but that's not so much.) Brokers may have a minimum transaction value but very few exchanges care about the number of shares anymore, and there is no per-share fee. As lecrank notes, putting all your savings into a single company is not prudent, but having a flutter with fun money on Apple is harmless. Paul is correct that dividend cheques may be a slight problem for non-residents. Apple don't pay dividends so there's no problem in this specific case. More generally your broker will give you a cash account into which the dividends can go. You may have to deal with US tax which is more of an annoyance than a cost.",
"title": ""
},
{
"docid": "82fd28a1365ba647adc6c8d74dc38fe2",
"text": "The least expensive way to buy such small amounts is through ING's Sharebuilder service. You can perform a real-time trade for $9, or you can add a one-time trade to their investment schedule for $4 (transaction will be processed on the next upcoming Tuesday morning). They also allow you to purchase fractional shares.",
"title": ""
},
{
"docid": "92f0b60388d535a8b24ec5ee5eac7417",
"text": "\"Take a look at FolioFN - they let you buy small numbers of shares and fractional shares too. There is an annual fee on the order of US$100/year. You can trade with no fees at two \"\"windows\"\" per day, or at any time for a $15 fee. You are better off leaving the stock in broker's name, especially if you live overseas. Otherwise you will receive your dividends in the form of cheques that might be expensive to try to cash. There is also usually a fee charged by the broker to obtain share certificates instead of shares in your account.\"",
"title": ""
},
{
"docid": "715832a0ce5dd6bfc23d850927768807",
"text": "One of my university professors suggested doing this systematically to get access to shareholder meetings where there is typically a nice dinner involved. As long as the stock price + commission is less than the price of a nice restaurant it's actually not a bad idea.",
"title": ""
},
{
"docid": "feed602014f1bbfa0c3741fda68b2e55",
"text": "I have done this last year. Just open an account with an online brocker and buy a couple of Apple shares (6 I think, for 190$ each or something like that :) ). If this is just to test how stock exchange works, I think this is a good idea. I am also in Europe (France), and you'r right the charge to buy on NasDaq are quite expensive but still reasonnable. Hope this helps.",
"title": ""
}
] |
[
{
"docid": "06238bcde4f209948bd74386f6b222c0",
"text": "\"I've bought ISO stock over they years -- in NYSE traded companies. Every time I've done so, they've done what's called \"\"sell-to-cover\"\". And the gubmint treats the difference between FMV and purchase price as if it's part of your salary. And for me, they've sold some stock extra to pay estimated taxes. So, if I got this right... 20,000 shares at $3 costs you 60,000 to buy them. In my sell-to-cover at 5 scenario: did I get that right? Keeping only 4,000 shares out of 20,000 doesn't feel right. Maybe because I've always sold at a much ratio between strike price and FMV. Note I made some assumptions: first is that the company will sell some of the stock to pay the taxes for you. Second is your marginal tax rate. Before you do anything check these. Is there some reason to exercise immediately? I'd wait, personally.\"",
"title": ""
},
{
"docid": "4a7cb335aa2cfc013f8504d25232875e",
"text": "\"It is not clear when you mean \"\"company's directors\"\" are they also majority owners. There are several reasons for Buy; Similarly there are enough reasons for sell; Quite often the exact reasons for Buy or Sell are not known and hence blindly following that strategy is not useful. It can be one of the inputs to make a decision.\"",
"title": ""
},
{
"docid": "8fc999cb123d1fab5d8a6d8bcfd798b5",
"text": "I believe you can easily make tresholds for what constitute a partial owner. If I work for GE and buy a share, I'm not exactly a partial owner. Anything under 10% for companies making, I don't know, less than 50 Mil in revenue, you're an employee. Larger companies, 5%. That's just an idea, could be refined but, yeah...",
"title": ""
},
{
"docid": "c751b95b4ee8057c8e7f576a1724ba4c",
"text": "Your maximum risk is 100%. If you buy the stock 15% off and your company goes bankrupt tomorrow, you've lost everything. It also sounds like you have foreign exchange risk. One can debate how much risk this is in terms of expected outcomes, but that was not your question. However, if you purchase the company stock and buy put options at the same time, you can lock in a sale price ahead of time and absolutely limit your risk. Depending on the amount of stock we're talking about, you can buy currency futures as well to hedge the exchange risk. You don't necessarily have to buy the break-even strikes, you can buy the ones that guarantee a positive return. These are probably fairly cheap. Note that a lot of companies have policies that prohibit beneficiaries from shorting the company stocks, in which case you might not be able to hedge yourself with put options.",
"title": ""
},
{
"docid": "035d6bea1dd42ac71c51671df2da59f4",
"text": "\"Read the book, \"\"Slicing Pie: Fund Your Company Without Funds\"\". You can be given 5% over four years and in four years, they hire someone and give him twice as much as you, for working a month and not sacrificing his salary at all. Over the four years, the idiot who offered you the deal will waste investors money on obvious, stupid things because he doesn't know anything about how to build what he's asking you to build, causing the need for more investment and the dilution of your equity. I'm speaking from personal experience. Don't even do this. Start your own company if you're working for free, and tell the idiot who offered you 5% you'll offer him 2% for four years of him working for you for free.\"",
"title": ""
},
{
"docid": "3c367ad374da420b8a8c5cb6d2191b80",
"text": "Your strategy of longing company(a) and shorting company(b) is flawed as the prices of company(a) and company(b) can both increase and though you are right , you will lose money due to the shorting strategy. You should not engage in pair trading , which is normally used for arbitrage purposes You should just buy company(a) since you believed its a better company compared to company(b) , its as simple as that",
"title": ""
},
{
"docid": "2a2880cc32f51a709d7cc91acef8eb9e",
"text": "\"Let's handle this as a \"\"proof of concept\"\" (POC); OP wants to buy 1 share of anything just to prove that they can do it before doing the months of painstaking analysis that is required before buying shares as an investment. I will also assume that the risks and costs of ownership and taxes would be included in OP's future analyses. To trade a stock you need a financed broker account and a way to place orders. Open a dealing account, NOT an options or CFD etc. account, with a broker. I chose a broker who I was confident that I could trust, others will tell you to look for brokers based on cost or other metrics. In the end you need to be happy that you can get what you want out of your broker, that is likely to include some modicum of trust since you will be keeping money with them. When you create this account they will ask for your bank account details (plus a few other details to prevent fraud, insider trading, money laundering etc.) and may also ask for a minimum deposit. Either deposit enough to cover the price of your share plus taxes and the broker's commission, plus a little extra to be on the safe side as prices move for every trade, including yours, or the minimum if it is higher. Once you have an account the broker will provide an interface through which to buy the share. This will usually either be a web interface, a phone number, or a fax number. They will also provide you with details of how their orders are structured. The simplest type of order is a \"\"market order\"\". This tells the broker that you want to buy your shares at the market price rather than specifying only to buy at a given price. After you have sent that order the broker will buy the share from the market, deduct the price plus tax and her commission from your account and credit your account with your share.\"",
"title": ""
},
{
"docid": "4e469e94c4147cd6d8400187f1aef89c",
"text": "\"In a sense, yes. There's a view in Yahoo Finance that looks like this For this particular stock, a market order for 3000 shares (not even $4000, this is a reasonably small figure) will move the stock past $1.34, more than a 3% move. Say, on the Ask side there are 100,000 shares, all with $10 ask. It would take a lot of orders to purchase all these shares, so for a while, the price may stay right at $10, or a bit lower if there are those willing to sell lower. But, say that side showed $10 1000, $10.25 500, $10.50 1000. Now, the volume is so low that if I decided I wanted shares at any price, my order, a market order will actually drive the market price right up to $10.50 if I buy 2500 shares \"\"market\"\". You see, however, even though I'm a small trader, I drove the price up. But now that the price is $10.50 when I go to sell all 2500 at $10.50, there are no bids to pay that much, so the price the next trade will occur at isn't known yet. There may be bids at $10, with asking (me) at $10.50. No trades will happen until a seller takes the $10 bid or other buyers and sellers come in.\"",
"title": ""
},
{
"docid": "5f99c60c56919e92f08c683b1e2d5532",
"text": "A rough estimate of the money you'd need to take a position in a single stock would be: In the case of your Walmart example, the current share price is 76.39, so assuming your commission is $7, and you'd like to buy, say, 3 shares, then it would cost approximately (76.39 * 3) + 7 = $236.17. Remember that the quoted price usually refers to 100-share lots, and your broker may charge you a higher commission or other fees to purchase an odd lot (less than 100 shares, usually). I say that the equation above gives an approximate minimum because However, I second the comments of others that if you're looking to invest a small amount in the stock market, a low cost mutual fund or ETF, specifically an index fund, is a safer and potentially cheaper option than purchasing individual stocks.",
"title": ""
},
{
"docid": "532bbd8ec65a74efd07b91dd9f8be6ca",
"text": "This is allowed somewhat infrequently. You can often purchase stocks through DRIPs which might have little or no commission. For example Duke Energy (DUK) runs their plan internally, so you are buying from them directly. There is no setup fee, or reinvestment fee. There is a fee to sell. Other companies might have someone else manage the DRIP but might subsidize some transaction costs giving you low cost to invest. Often DRIPs charge relatively large amounts to sell and they are not very nimble if trading is what you are after. You can also go to work for a company, and often they allow you to buy stock from them at a discount (around 15% discount is common). You can use a discount broker as well. TradeKing, which is not the lowest cost broker, allows buys and sells at 4.95 per trade. If trading 100 shares that is similar in cost to the DUK DRIP.",
"title": ""
},
{
"docid": "6eeae50d64c7628d4b012453cccd6cc4",
"text": "Is it correct that there is no limit on the length of the time that the company can keep the money raised from IPO of its stocks, unlike for the debt of the company where there is a limit? Yes that is correct, there is no limit. But a company can buy back its shares any time it wants. Anyone else can also buy shares on the market whenever they want.",
"title": ""
},
{
"docid": "5db2500544c713428b4b849702c8e351",
"text": "In order to see whether you can buy or sell some given quantity of a stock at the current bid price, you need a counterparty (a buyer) who is willing to buy the number of stocks you are wishing to offload. To see whether such a counterparty exists, you can look at the stock's order book, or level two feed. The order book shows all the people who have placed buy or sell orders, the price they are willing to pay, and the quantity they demand at that price. Here is the order book from earlier this morning for the British pharmaceutical company, GlaxoSmithKline PLC. Let's start by looking at the left-hand blue part of the book, beneath the yellow strip. This is called the Buy side. The book is sorted with the highest price at the top, because this is the best price that a seller can presently obtain. If several buyers bid at the same price, then the oldest entry on the book takes precedence. You can see we have five buyers each willing to pay 1543.0 p (that's 1543 British pence, or £15.43) per share. Therefore the current bid price for this instrument is 1543.0. The first buyer wants 175 shares, the next, 300, and so on. The total volume that is demanded at 1543.0p is 2435 shares. This information is summarized on the yellow strip: 5 buyers, total volume of 2435, at 1543.0. These are all buyers who want to buy right now and the exchange will make the trade happen immediately if you put in a sell order for 1543.0 p or less. If you want to sell 2435 shares or fewer, you are good to go. The important thing to note is that once you sell these bidders a total of 2435 shares, then their orders are fulfilled and they will be removed from the order book. At this point, the next bidder is promoted up the book; but his price is 1542.5, 0.5 p lower than before. Absent any further changes to the order book, the bid price will decrease to 1542.5 p. This makes sense because you are selling a lot of shares so you'd expect the market price to be depressed. This information will be disseminated to the level one feed and the level one graph of the stock price will be updated. Thus if you have more than 2435 shares to sell, you cannot expect to execute your order at the bid price in one go. Of course, the more shares you are trying to get rid of, the further down the buy side you will have to go. In reality for a highly liquid stock as this, the order book receives many amendments per second and it is unlikely that your trade would make much difference. On the right hand side of the display you can see the recent trades: these are the times the trades were done (or notified to the exchange), the price of the trade, the volume and the trade type (AT means automatic trade). GlaxoSmithKline is a highly liquid stock with many willing buyers and sellers. But some stocks are less liquid. In order to enable traders to find a counterparty at short notice, exchanges often require less liquid stocks to have market makers. A market maker places buy and sell orders simultaneously, with a spread between the two prices so that they can profit from each transaction. For instance Diurnal Group PLC has had no trades today and no quotes. It has a more complicated order book, enabling both ordinary buyers and sellers to list if they wish, but market makers are separated out at the top. Here you can see that three market makers are providing liquidity on this stock, Peel Hunt (PEEL), Numis (NUMS) and Winterflood (WINS). They have a very unpalatable spread of over 5% between their bid and offer prices. 
Further in each case the sum total that they are willing to trade is 3000 shares. If you have more than three thousand Dirunal Group shares to sell, you would have to wait for the market makers to come back with a new quote after you'd sold the first 3000.",
"title": ""
},
{
"docid": "f068810e0366ba6a9244f41e7f374873",
"text": "If a company doesn't take out loans to buy back the shares, is it still a bad move? I don't necessarily see the problem with companies retiring shares. If the shares of say Apple have a P/E ratio of 10 and the price to book value max of 1. Wouldn't it be a smart move by the company and the share holders assuming the projected net revenue will hold for at least ten years if not increase. I guess I don't know the true practice of buying back shares but at its core (could be more corrupt), I just don't see it as inherently bad.",
"title": ""
},
{
"docid": "76320099a37d8f9f9b4281d18080ef8b",
"text": "Simple: Do a stock split. Each 1 Ordinary share now = 100 Ordinary shares (or 100,000 or whatever you choose). Then sell 20 (or 20,000) of them to your third party. (Stock splits are fairly routine occurrence. Apple for example has done several, most recently in 2014 when 1 share = 7 shares). Alternatively you could go the route of creating a new share class with different rights, preferences etc. But this is more complicated.",
"title": ""
},
{
"docid": "81321e54f9387eaf0cf434ad54384e54",
"text": "Not going to happen for Facebook since there will be too much demand and the order will go to the top clients of the underwriting investment banks. In general though, if you wanted a piece of a smaller IPO you'd just have to get in touch with a broker whose investment banking department is on the underwriting syndicate (say Merrill Lynch or Smith Barney) since IPOs often have a percentage that they allocate to retail clients (i.e. you) as well as institutional (i.e. hedge funds, pension funds, mutual funds etc.). The more in demand the IPO the harder it is to get a piece.",
"title": ""
}
] |
fiqa
|
340637dee3d36f5af7baedba2d85d747
|
What should I do about proxy statements?
|
[
{
"docid": "4d023fb18dfd4ed07201165c868ccdc2",
"text": "\"You own a fractional share of the company, maybe you should care enough to at least read the proxy statements which explain the pro and con position for each of the issues you are voting on. That doesn't seem like too much to ask. On the other hand, if you are saying that the people who get paid to be knowledgeable about that stuff should just go make the decisions without troubling you with the details, then choose the option to go with their recommendations, which are always clearly indicated on the voting form. However, if you do this, it might make sense to at least do some investigation of who you are voting onto that board. I guess, as mpenrow said, you could just abstain, but I'm not sure how that is any different than just trashing the form. As for the idea that proxy votes are tainted somehow, the one missing piece of that conspiracy is what those people have to gain. Are you implying that your broker who has an interest in you making money off your investments and liking them would fraudulently cast proxy votes for you in a way that would harm the company and your return? Why exactly would they do this? I find your stance on the whole thing a bit confusing though. You seem to have some strong opinions on corporate Governance, but at the same time aren't willing to invest any effort in the one place you have any control over the situation. I'm just sayin.... Update Per the following information from the SEC Website, it looks like the meaning of a proxy vote can vary depending on the mechanics of the specific issue you are voting on. My emphasis added. What do \"\"for,\"\" \"\"against,\"\" \"\"abstain\"\"and \"\"withhold\"\" mean on the proxy card or voter instruction form? Depending on what you are voting on, the proxy card or voting instruction form gives you a choice of voting \"\"for,\"\" \"\"against,\"\" or \"\"abstain,\"\" or \"\"for\"\" or \"\"withhold.\"\" Here is an explanation of the differences: Election of directors: Generally, company bylaws or other corporate documents establish how directors are elected. There are two main types of ways to elect directors: plurality vote and majority vote. A \"\"plurality vote\"\" means that the winning candidate only needs to get more votes than a competing candidate. If a director runs unopposed, he or she only needs one vote to be elected, so an \"\"against\"\" vote is meaningless. Because of this, shareholders have the option to express dissatisfaction with a candidate by indicating that they wish to \"\"withhold\"\" authority to vote their shares in favor of the candidate. A substantial number of \"\"withhold\"\" votes will not prevent a candidate from getting elected, but it can sometimes influence future decisions by the board of directors concerning director nominees. A \"\"majority vote\"\" means that directors are elected only if they receive a majority of the shares voting or present at the meeting. In this case, you have the choice of voting \"\"for\"\" each nominee, \"\"against\"\" each nominee, or you can \"\"abstain\"\" from voting your shares. An \"\"abstain\"\" vote may or may not affect a director's election. Each company must disclose how \"\"abstain\"\" or \"\"withhold\"\" votes affect an election in its proxy statement. 
This information is often found toward the beginning of the proxy statement under a heading such as \"\"Votes Required to Adopt a Proposal\"\" or \"\"How Your Votes Are Counted.\"\" Proposals other than an election of directors: Matters other than voting on the election of directors, like voting on shareholder proposals, are typically approved by a vote of a majority of the shares voting or present at the meeting. In this situation, you are usually given the choice to vote your shares \"\"for\"\" or \"\"against\"\" a proposal, or to \"\"abstain\"\" from voting on it. Again, the effect of an \"\"abstain\"\" vote may depend on the specific voting rule that applies. The company's proxy statement should again disclose the effect of an abstain vote.\"",
"title": ""
},
{
"docid": "e51a130fe1c7a6a69cca14afabb9d37e",
"text": "Whether or not you want to abstain or throw away the proxy, one reason it's important to at least read the circular is to find out if any of the proposals deal with increasing the company's common stock. When this happens, it can dilute your shares and have an effect on your ownership percentage in the company and shareholder voting control.",
"title": ""
},
{
"docid": "4c12c7ea3fc4a5873fd78f6dd42a2638",
"text": "On most proxy statements (all I have ever received) you have the ability to abstain from voting. Just go down the list and check Abstain then return the form. You will effectively be forfeiting your right to vote. EDIT: According to this, after January 1, 2010 abstaining and trashing the voting materials are the same thing. Prior to January 1, 2010 your broker could vote however they wanted on your behalf if you chose not to vote yourself. The one caveat is this seems to only apply to the NYSE (unless I am reading it wrong). So not sure about stocks listed on the NASDAQ.",
"title": ""
}
] |
[
{
"docid": "122983b67c2e9915ddbad19b29715ab0",
"text": "Oh man. That's awful. Thank you for sharing this. Hey, I'm writing a larger piece on some of the challenges we face today, and both data overload and data legitimacy are a part of it. It would be great to have someone inside the industry to talk to about this. When I get there do you mind if I reach out to you so I can learn more?",
"title": ""
},
{
"docid": "bf7662a065b8944e12c197ad5175fda5",
"text": "\"A few practical thoughts: A practical thing that helps me immensely not to loose important paperwork (such as bank statements, bills, payroll statement, all those statements you need for filing tax return, ...) is: In addition to the folder (Aktenordner) where the statements ultimately need to go I use a Hängeregistratur. There are also standing instead of hanging varieties of the same idea (may be less expensive if you buy them new - I got most of mine used): you have easy-to-add-to folders where you can just throw in e.g. the bank statement when it arrives. This way I give the statement a preliminary scan for anything that is obviously grossly wrong and throw it into the respective folder (Hängetasche). Every once in a while I take care of all my book-keeping, punch the statements, file them in the Aktenordner and enter them into the software. I used to hate and never do the filing when I tried to use Aktenordner only. I recently learned that it is well known that Aktenordner and Schnellhefter are very time consuming if you have paperwork arriving one sheet at a time. I've tried different accounting software (being somewhat on the nerdy side, I use gnucash), including some phone apps. Personally, I didn't like the phone apps I tried - IMHO it takes too much time to enter things, so I tend to forget it. I'm much better at asking for a sales receipt (Kassenzettel) everywhere and sticking them into a calendar at home (I also note cash payments for which I don't have a receipt as far as I recall them - the forgotten ones = difference ends up in category \"\"hobby\"\" as they are mostly the beer or coke after sports). I was also to impatient for the cloud/online solutions I tried (I use one for business, as there the archiving is guaranteed to be according to the legal requirements - but it really takes far more time than entering the records in gnucash).\"",
"title": ""
},
{
"docid": "9798257382abe1279226130c288f7543",
"text": "You could make an entry for the disputed charge as if you were going to lose the dispute, and a second entry that reverses the charge as if you were going to win the dispute. You could then reconcile the account by including the first charge in the reconciliation and excluding the reversal until the issue has been resolved.",
"title": ""
},
{
"docid": "9410aac2831c33bba5318245fae862a3",
"text": "\"As a person who has had several part time assistants in the past I will offer you a simple piece of advise that should apply regardless of what country the assistant is located. If you have an assistant, personal or business, virtual or otherwise, and you don't trust that person with this type of information, get a different assistant. An assistant is someone who is supposed to make your life easier by off loading work. Modifying your records before sending them every month sounds like you are creating more work for yourself not less. Either take the leap of faith to trust your assistant or go somewhere else. An assistant that you feel you have to edit crucial information from is less than useful. That being said, there is no fundamental reason to believe that an operation in the Philippines or anywhere else is any more or less trustworthy than an operation in your native country. However, what is at issue is the legal framework around your relationship and in particular your recourse if something goes wrong. If you and your virtual assistant are both located in the US you would have an easier time collecting damages should something go wrong. I suggest you evaluate your level of comfort for risk vs. cost. If you feel that the risk is too high to use an overseas service versus the savings, then find someone in the states to do this work. Depending on your needs and comfort you might want to seek out a CPA or other licensed/bonded professional. Yes the cost might be higher however you might find that it is worth it for your own piece of mind. As a side note you might even consider finding a local part-time assistant. This can often be more useful than a virtual assistant and may not cost as much as you think. If you can live without someone being bonded. (or are willing to pay for the bonding fee) yourself, depending on your market and needs you may be able to find an existing highly qualified EA or other person that wants some after hours work. If you are in a college town, finance, accounting or legal majors make great assistants. They will usually work a couple hours a week for \"\"beer money\"\", they have flexible schedules and are glad to have something pertinent to their degree to put on their resume when they graduate. Just be prepared to replace them every few years as they move on to real jobs.\"",
"title": ""
},
{
"docid": "6c76b97fce53688c272eebaeee2f0c8d",
"text": "What you are describing here is the opposite of a problem: You're trying to contact a debt-collector to pay them money, but THEY'RE ignoring YOU and won't return your calls! LOL! All joking aside, having 'incidental' charges show up as negative marks on your credit history is an annoyance- thankfully you're not the first to deal with such problems, and there are processes in place to remedy the situation. Contact the credit bureau(s) on which the debt is listed, and file a petition to have it removed from your history. If everything that you say here is true, then it should be relatively easy. Edit: See here for Equifax's dispute resolution process- it sounds like you've already completed the first two steps.",
"title": ""
},
{
"docid": "4a5d9fd18704adeef6278900266fbf8d",
"text": "\"The comments are getting too much, but to verify that you are not insane, you are being bullied. It sounds like this is a sub-prime loan, of which you are wisely trying to get out of. It also sounds like they are doing everything in their power to prevent you from doing so. For them you are a very profitable customer. This might take some legwork for you, but depending on how bad they are violating the law they might be willing to forgive the loan. What I am trying to say, it might be very worth your while! Your first step will be looking for any free resources at your disposal: Just be cautious as many \"\"credit representation\"\" type business are only offering loan consolidation. That is not what you need. Fight those bastards!\"",
"title": ""
},
{
"docid": "7554414804749280c33547386889a22d",
"text": "What are the consequences if I ignore the emails? If you ignore the emails they will try harder to collect the money from you until they give up. Unlike what some other people here say, defaulting on a loan is NOT a crime and is NOT the same as stealing. There is a large number of reasons that can make someone unable to pay off a loan. Lenders are aware of the risk associated with default; they will try to collect the debt but at the end of the day if you don't have money/assets there is not much they can do. As far as immigration goes, there is nothing on a DS-160 form that asks you about bankruptcies or unpaid obligations. I doubt the consular officer will know of this situation, but it is possible. It is not grounds for visa ineligibility however, so you will be fine if everything else is fine. The only scenario in which unpaid student loans can come up relevant in immigration to the US is if and when you apply for US Citizenship. One of the requirements for Citizenship is having good moral character. Having a large amount of unpaid debt constitutes evidence of a poor moral character. But it is very unlikely you'd be denied Citizenship on grounds of that alone. I got a social security number when I took up on campus jobs at the school and I do have a credit score. Can they get a hold of this and report to the credit bureaus even though I don't live in America? Yes, they probably already have. How would this affect me if I visit America often? Does this mean I would not ever be able to live in America? No. See above. You will have a hard time borrowing again. Will they know when I come to America and arrest me at the border or can they take away my passport? No. Unpaid debt is no grounds for inadmissibility, so even if the CBP agent knows of it he will not do anything. And again, unpaid debt is not a crime so you will not be arrested.",
"title": ""
},
{
"docid": "3d5daf9cc17e40cfa669930d0cc5de79",
"text": "Request verification in writing of the debt. They are required to provide this by law. Keep this for your records. Send them a notice by certified mail stating that this is not your debt and not to contact you again. Indicate that you will take legal action if they continue to try and collect. Keep a log of if/when they continue to call or harass you. Contact counsel about your rights under the fair debt collection laws, but if they keep harassing you after being provided proof of your identity, they are liable. You could win a judgement in court if you have proof of bad behavior. If your identity is stolen, you are not legally responsible for the charges. However it is a mess to clean up, so pull your credit reports and review your accounts to be sure.",
"title": ""
},
{
"docid": "7edb70a52db9badb4485567e66444b25",
"text": "I can ONLY WISH this would happen to me. Get every scrap of information that you can. DOCUMENT DOCUMENT DOCUMENT..and then get a nice sleazy lawyer to sue the collector AND your employer if they leaked anything... Plain and simple, it's illegal and there are very nice protections in place for such.",
"title": ""
},
{
"docid": "7b93e0783a91335c0418e313471690db",
"text": "\"Mostly ditto to @grade'eh'bacon, but let me add a couple of comments: Before I did anything, I'd find out more about what's going on. Anytime someone tells me that there's a problem with \"\"security codes or something\"\", I get cautious. Think about what the possibilities are here. Your relative is being scammed. In that case, helping him to transfer his money to the scammer is not the kind of help you really want to give. Despite your firm belief in your relative's integrity, he may have been seduced by the dark side. If he's doing something illegal, I'd be very careful about getting involved. My friends and relatives don't ask me to commit crimes for them, especially not in a way that leaves me holding the bag if things go wrong. Assuming that what is going on here is all legal and ethical, still there is the possibility that you could be making yourself liable for taxes, fees, whatever. At the very least I'd want to know what those are up front. As @Grade'eh'bacon, if he really has a problem with a lost password or expired account, by all means help him fix that problem. But become someone else's financial intermediary has many possible pitfalls.\"",
"title": ""
},
{
"docid": "f3c707c379924f7a5f0f0ce1687b79a4",
"text": "You may have a few options if the company continues to ignore your communication. Even if none of these works out, the debt should still probably be paid out by the estate of your friend.",
"title": ""
},
{
"docid": "0683565621aff565ab849e4edfad786a",
"text": "\"You have to read some appeals court cases see scholar.google.com , as well as SEC enforcement actions on sec.gov to get an understanding of how the SEC operates. http://www.sec.gov/spotlight/insidertrading/cases.shtml There are court created guidelines for how insider trading would be proven There is no clear line, but it is the \"\"emergency asset injunctions\"\" (freezing your assets if you nailed a suspiciously lucrative trade) you really want to avoid, and this is often times enforced/reported by the brokers themselves since the SEC does not have the resources to monitor every account's trading activities. There are some thin lines, such as having your lawyer file a lawsuit, and as soon as it is filed it is technically public so you short the recipient's stock. Or having someone in a court room updating you on case developments as soon as possible so you can make trades (although this may just be actually public, depending on the court). But the rules create the opportunities Also consider that the United States is the most strict country in this regard, there are tons of capital markets and the ideals or views of \"\"illegal insider trading\"\" compared to \"\"having reached a level of society where you are privileged to obtain this information\"\" vary across the board contains charts of countries where an existing insider trading prohibition is actually enforced: http://repository.law.umich.edu/cgi/viewcontent.cgi?article=1053&context=articles https://faculty.fuqua.duke.edu/~charvey/Teaching/BA453_2005/BD_The_world.pdf Finally, consider some markets that don't include equities, as trading on an information advantage is only applicable to things the SEC regulates, and there are plenty of things that agency doesn't regulate. So trying to reverse engineer the SEC may not be the most optimal use of energy\"",
"title": ""
},
{
"docid": "1cc4f7ba9a0c307acb4c55a928045ef2",
"text": "Inform the company that you didn't receive the payment. Only they can trace the payment via their bank.",
"title": ""
},
{
"docid": "e3eaacc24784c090d2d5bdb857dcd3a9",
"text": "\"This doesn't answer your question, but as an aside, it's important to understand that your second and third bullet points are completely incorrect; while it used to be true that Swiss bank accounts often came with \"\"guarantees\"\" of neutrality and privacy, in recent years even the Swiss banks have been caving to political pressure from many sides (especially US/Obama), with regards to the most extreme cases of criminals. That is to say, if you're a terrorist or a child molester or in possession of Nazi warcrime assets, Swiss banks won't provide the protection you're interested in. You might say \"\"But I'm not a terrorist or a pervert or profiteering of war crimes!\"\" but if you're trying so hard to hide your personal assets, it's worth wondering how much longer until Swiss banks make further concessions to start providing information on PEOPLE_DOING_WHAT_YOU_ARE_DOING. Not to discourage you, this is just food for thought. The \"\"bulletproof\"\" protection these accounts used to provide has been compromised. I work with online advertising companies, and a number of people I know in the industry get sued on a regular basis for copyright or trademark infringement or spamming; most of these people still trust Swiss bank accounts, because it's still the best protection available for their assets, and because Swiss banks haven't given up details on someone for spamming... yet.\"",
"title": ""
},
{
"docid": "2b6a35f1951cf41e56a1603955d3ac58",
"text": "As I have worked for H&R Block I know for a fact that they record all your activity with them for future reference. If it is their opinion that you are obligated to use their service if you use some other service then this, most likely, will affect your future dealings with them. So, ask yourself this question: is reducing their income from you this year worth never being able to deal with them again in future years? The answer to that will give you the answer to your question.",
"title": ""
}
] |
fiqa
|
fe238a068bb8b70d5f24b1a78367f63c
|
How to file income tax returns for profits from ESPP stock?
|
[
{
"docid": "9374b0f4b0983345d60caa77ae25a50c",
"text": "Consult a professional CA. For shares sold outside the Indian Stock Exchanges, these will be treated as normal Long Term Capital Gains if held more than one year. The rate would be 10% without Indexation and 20% with Indexation. If the stocks are held for less than 1 years, it will be short term gains and taxed according you to tax bracket.",
"title": ""
},
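As a rough illustration of the 10%-without-indexation versus 20%-with-indexation choice described in the passage above, here is a minimal Python sketch. All figures, including the indexation factor, are invented placeholders (the real factor comes from the official cost-inflation index), and this is not tax advice.

```python
# Hypothetical numbers purely to illustrate the two long-term rate options above.
sale_price = 1_000_000
purchase_price = 600_000
indexation_factor = 1.25   # placeholder; use the official cost-inflation index ratio

tax_without_indexation = 0.10 * (sale_price - purchase_price)
tax_with_indexation = 0.20 * max(0, sale_price - purchase_price * indexation_factor)

# An investor would normally pick whichever option produces the lower tax.
print(tax_without_indexation, tax_with_indexation,
      min(tax_without_indexation, tax_with_indexation))
```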
{
"docid": "9c11adb5071b17afcac09a15263f2afe",
"text": "I did this for the last tax year so hopefully I can help you. You should get a 1099-B (around the same time you're getting your W-2(s)) from the trustee (whichever company facilitates the ESPP) that has all the information you need to file. You'll fill out a Schedule D and (probably) a Form 8949 to describe the capital gains and/or losses from your sale(s). It's no different than if you had bought and sold stock with any brokerage.",
"title": ""
}
] |
[
{
"docid": "f6402f4647bbd723317bbe4ea5e5179f",
"text": "How would I go about doing this? Are there any tax laws I should be worried about? Just report it as a regular sale of asset on your form 8949 (or form 4797 if used for trade/business/rental). It will flow to your Schedule D for capital gains tax. Use form 1116 to calculate the foreign tax credit for the taxes on the gains you'd pay in India (if any).",
"title": ""
},
{
"docid": "43bdaa58a8294a11126f9f82315e045b",
"text": "It really depends. If it is offered as compensation (ie in leiu of, or in addition to salary or cash bonus) then it would be reportable income, and if sold later for a profit then that would be taxable as gains. If this share is purchased as an investment at current value then it would be treated like other securities most likely gains realized at sale. Any discount could be considered income but there are some goofy rules surrounding this enacted to prevent tax evasion and some to spur growth. That is the answer in a nut shell. It is far more complicated in reality as there are somewhere around 2000 pages of regulations deal with different exceptions and scenerios.",
"title": ""
},
{
"docid": "177452e08f5bcd1a5ccb6fada4720bcd",
"text": "\"(Insert the usual disclaimer that I'm not any sort of tax professional; I'm just a random guy on the Internet who occasionally looks through IRS instructions for fun. Then again, what you're doing here is asking random people on the Internet for help, so here goes.) The gigantic book of \"\"How to File Your Income Taxes\"\" from the IRS is called Publication 17. That's generally where I start to figure out where to report what. The section on Royalties has this to say: Royalties from copyrights, patents, and oil, gas, and mineral properties are taxable as ordinary income. In most cases, you report royalties in Part I of Schedule E (Form 1040). However, if you hold an operating oil, gas, or mineral interest or are in business as a self-employed writer, inventor, artist, etc., report your income and expenses on Schedule C or Schedule C-EZ (Form 1040). It sounds like you are receiving royalties from a copyright, and not as a self-employed writer. That means that you would report the income on Schedule E, Part I. I've not used Schedule E before, but looking at the instructions for it, you enter this as \"\"Royalty Property\"\". For royalty property, enter code “6” on line 1b and leave lines 1a and 2 blank for that property. So, in Line 1b, part A, enter code 6. (It looks like you'll only use section A here as you only have one royalty property.) Then in column A, Line 4, enter the royalties you have received. The instructions confirm that this should be the amount that you received listed on the 1099-MISC. Report on line 4 royalties from oil, gas, or mineral properties (not including operating interests); copyrights; and patents. Use a separate column (A, B, or C) for each royalty property. If you received $10 or more in royalties during 2016, the payer should send you a Form 1099-MISC or similar statement by January 31, 2017, showing the amount you received. Report this amount on line 4. I don't think that there's any relevant Expenses deductions you could take on the subsequent lines (though like I said, I've not used this form before), but if you had some specific expenses involved in producing this income it might be worth looking into further. On Line 21 you'd subtract the 0 expenses (or subtract any expenses you do manage to list) and put the total. It looks like there are more totals to accumulate on lines 23 and 24, which presumably would be equally easy as you only have the one property. Put the total again on line 26, which says to enter it on the main Form 1040 on line 17 and it thus gets included in your income.\"",
"title": ""
},
{
"docid": "c2e776fb7b74820146fb41350cfb275e",
"text": "Adding to webdevduck's answer: Before you calculate your profits, you can pay money tax-free into a pension fund for the company director (that is you). Then if you pay yourself dividends, if you made lots of profit you don't have to pay it all as dividends. You can take some where the taxes are low, and then pay more money in later years. What you must NOT do is just take the money. The company may be yours, but the money isn't. It has to be paid as salary or dividend. (You can give the company director a loan, but that loan has to be repaid. Especially if a limited company goes bankrupt, the creditors would insist that loans from the company are repaid). After a bit more checking, here's the optimal approach, perfectly legal, expected and ethical: You pay yourself a salary of £676 per month. That's the point where you get all the advantages of national insurance without having to pay; above that you would have to pay 13.8% employers NI contributions and 12% employee's NI contributions, so for £100 salary the company has to pay £113.80 and you receive £88.00. Below £676 you pay nothing. You deduct the salary from your revenue, then you deduct all the deductible business costs (be wise in what you try to deduct), then you pay whatever you want into a pension fund. Well, up to I think £25,000 per year. The rest is profit. The company pays 19% corporation tax on profits. Then you pay yourself dividends. Any dividends until your income is £11,500 per year are tax free. Then the next £5,000 per year are tax free. Then any dividends until income + dividends = £45,000 per year is taxed at 7.5%. It's illegal to pay so much in dividends that the company can't pay its bills. Above £45,000 you decide if you want your money now and pay more tax, or wait and get it tax free. Every pound of dividend above £45,000 a year you pay 32.5% tax, but there is nobody forcing you to take the money. You can wait until business is bad, or you want a loooong holiday, or you retire. So at that time you will stay below £45,000 per year and pay only 7.5% tax.",
"title": ""
},
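The band arithmetic described in the answer above can be sketched in a few lines of Python. This is a simplified illustration using only the 2017/18-era figures quoted in that answer (the function name, the example profit, and the dividend amount are invented), not a complete or current tax calculator and not advice.

```python
def uk_dividend_tax(salary: float, dividends: float) -> float:
    """Dividend tax under the bands quoted above (illustrative only)."""
    PERSONAL_ALLOWANCE, DIVIDEND_ALLOWANCE, BASIC_RATE_TOP = 11_500.0, 5_000.0, 45_000.0
    remaining, income, tax = dividends, salary, 0.0

    # 0% on dividends that fit inside the unused personal allowance
    band = min(remaining, max(0.0, PERSONAL_ALLOWANCE - income))
    remaining -= band; income += band
    # 0% on the next slice inside the dividend allowance
    band = min(remaining, DIVIDEND_ALLOWANCE)
    remaining -= band; income += band
    # 7.5% until total income reaches the 45k ceiling
    band = min(remaining, max(0.0, BASIC_RATE_TOP - income))
    remaining -= band; income += band
    tax += band * 0.075
    # 32.5% on anything taken above 45k (the answer suggests deferring this instead)
    tax += remaining * 0.325
    return tax

salary = 676 * 12                      # stays under the NI threshold described above
company_profit = 60_000 - salary       # salary is deductible before corporation tax
corporation_tax = 0.19 * company_profit
print(round(corporation_tax, 2), round(uk_dividend_tax(salary, 30_000), 2))
```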
{
"docid": "15a1feb3fc0c0c041bde517e1f7565d0",
"text": "\"I dug up an old article on Motley Fool and one approach they mention is to get the stock certificates and then sell them to a friend: If the company was liquidated, you should receive a 1099-DIV form at year's end showing a liquidating distribution. Treat this as if you sold the stock for the amount of the distribution. The date of \"\"sale\"\" is the date that the distribution took place. Using your original cost basis in the shares, you can now compute your loss. If the company hasn't actually been liquidated, you'll need to make sure it's totally worthless before you claim a loss. If you have worthless stock that's not worth the hassle of selling through your broker, you can sell it to a friend (or cousin, aunt, or uncle) for pennies. (However, you can't sell the stock to a spouse, siblings, parents, grandparents, or lineal descendants.) Here's one way to do it: Send the certificate to your stock-transfer agent. Explain that the shares have been sold, and ask to cancel the old shares and issue a new certificate to the new owner. Some brokerages will offer you a quicker alternative, by buying all of your shares of the stock for a penny. They do it to help out their customers; in addition, over time, some of the shares may actually become worth more than the penny the brokers paid for them. By selling the shares, you have a closed transaction with the stock and can declare a tax loss. Meanwhile, your friend, relative, or broker, for a pittance, has just bought a placemat or birdcage liner.\"",
"title": ""
},
{
"docid": "bbf48adc1557e2e46c2031c34e371115",
"text": "SXL is a Master Limited Partnership so all of the income is pass-through. Your equity purchase entitles you to a fraction of the 66% of the company that is not owned by Energy Transfer Partners. You should have been receiving the K-1s from SXL from the time that you bought the shares. Without knowing your specific situation, you will likely have to amend your returns for at most 6 years (if the omitted amount of gross income exceeds 25% of your gross income originally stated as littleadv has graciously pointed out in the comments) and include Schedule E to report the additional income (you'll also be able to deduct any depreciation, losses etc. that are passed through the entity on that form, so that will offset some of the gains). As littleadv has recommended, speak with a tax professional (CPA/EA or attorney) before you take any further steps, as everyone's situation is a bit different. This Forbes article has a nice overview of the MLP. There's a click-through to get to it, but it's not paywalled.",
"title": ""
},
{
"docid": "a673fcb56b419b6a87c7643e71729396",
"text": "You need to report the income from any work as income, regardless of if you invest it, spend it, or put it in your mattress (ignoring tax advantaged accounts like 401ks). You then also need to report any realized gains or losses from non-tax advantaged accounts, as well as any dividends received. Gains and losses are realized when you actually sell, and is the difference between the price you bought for, and the price you sold for. Gains are taxed at the capital gains rate, either short-term or long-term depending on how long you owned the stock. The tax system is complex, and these are just the general rules. There are lots of complications and special situations, some things are different depending on how much you make, etc. The IRS has all of the forms and rules online. You might also consider having a professional do you taxes the first time, just to ensure that they are done correctly. You can then use that as an example in future years.",
"title": ""
},
{
"docid": "b0fe4f46c95a1af4c1c188eddc55166d",
"text": "For tax purposes you will need to file as an employee (T4 slips and tax withheld automatically), but also as an entrepreneur. I had the same situation myself last year. Employee and self-employed is a publication from Revenue Canada that will help you. You need to fill out the statement of business activity form and keep detailed records of all your deductible expenses. Make photocopies and keep them 7 years. May I suggest you take an accountant to file your income tax form. More expensive but makes you less susceptible to receive Revenue Canada inspectors for a check-in. If you can read french, you can use this simple spreadsheet for your expenses. Your accountant will be happy.",
"title": ""
},
{
"docid": "b5dca99a685e3a33d3939c04c8107c93",
"text": "From the instructions: If you do not need to make any adjustments to the basis or type of gain or loss (short-term or long-term) reported to you on Form 1099-B (or substitute statement) or to your gain or loss for any transactions for which basis has been reported to the IRS (normally reported on Form 8949 with box A checked), you do not have to include those transactions on Form 8949. Instead, you can report summary information for those transactions directly on Schedule D. For more information, see Exception 1, later. However, in case of ESPP and RSU, it is likely that you actually do need to make adjustments. Since 2014, brokers are no longer required to track basis for these, so you better check that the calculations are correct. If the numbers are right and you just summarized instead of reporting each on a separate line, its probably not an issue. As long as the gains reported are correct, no-one will waste their time on you. If you missed several thousand dollars because of incorrect calculations, some might think you were intentionally trying to hide something by aggregating and may come after you.",
"title": ""
},
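The basis correction this answer is warning about can be shown with a tiny sketch. The numbers below are invented for illustration only: the broker's 1099-B may show just the discounted purchase price, while the ESPP/RSU compensation component was already taxed as wages, so the broker-implied gain is too high and needs an adjustment.

```python
# Invented example of the ESPP/RSU basis correction discussed above.
proceeds = 12_000
broker_reported_basis = 8_500      # discounted purchase price only
w2_compensation = 1_500            # discount/vesting income already taxed on the W-2

correct_basis = broker_reported_basis + w2_compensation
gain_per_1099b = proceeds - broker_reported_basis   # 3,500 as the broker reports it
correct_gain = proceeds - correct_basis             # 2,500 after the adjustment

print(gain_per_1099b, correct_gain, correct_gain - gain_per_1099b)
# 3500 2500 -1000 -> the -1,000 difference is reported as a basis adjustment on Form 8949
```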
{
"docid": "5113b7444d0fc0998ef14da59956b5ec",
"text": "I agree with the other comments that you should not buy/hold your company stock even if given at a discount. If equity is provided as part of the compensation package (Options/Restrictive Stock Units RSU)then this rule does not apply. As a matter of diversification, you should not have majority equity stake of other companies in the same sector (e.g. technology) as your employer. Asset allocation and diversification if done in the right way, takes care of the returns. Buying and selling on the same day is generally not allowed for ESPP. Taxation headaches. This is from personal experience (Cisco Systems). I had options issued in Sept 2008 at 18$ which vested regularly. I exited at various points - 19$,20$,21$,23$ My friend held on to all of it hoping for 30$ is stuck. Options expire if you leave your employment. ESPP shares though remain.",
"title": ""
},
{
"docid": "abd138c01e6d5a971c99c8f92350dfec",
"text": "\"That's a tricky question and you should consult a tax professional that specializes on taxation of non-resident aliens and foreign expats. You should also consider the provisions of the tax treaty, if your country has one with the US. I would suggest you not to seek a \"\"free advice\"\" on internet forums, as the costs of making a mistake may be hefty. Generally, sales of stocks is not considered trade or business effectively connected to the US if that's your only activity. However, being this ESPP stock may make it connected to providing personal services, which makes it effectively connected. I'm assuming that since you're filing 1040NR, taxes were withheld by the broker, which means the broker considered this effectively connected income.\"",
"title": ""
},
{
"docid": "e50d808161d68b403fd9c24ec47bb822",
"text": "The HMRC website says: Stock dividends are treated as income by virtue of CTA10/S1049, and taxable as savings income under Chapter 5 of Part 4 of ITTOIA05 (sections 409 to 414). ITTOIA05 is the Income Tax (Trading and Other Income) Act 2005, and says: 409 Charge to tax on stock dividend income (1) Income tax is charged on stock dividend income. (2) In this Chapter “stock dividend income” means the income that is treated as arising under section 410. 411 Income charged (1) Tax is charged under this Chapter on the amount of stock dividend income treated for income tax purposes as arising in the tax year. (2) That amount is the cash equivalent of the share capital on the issue of which the stock dividend income arises (see section 412), grossed up by reference to the dividend ordinary rate for the tax year.",
"title": ""
},
{
"docid": "b1e31c0a10ca632844786eb12a4497e3",
"text": "The company will have to pay 20% tax on its profits. Doesn't matter how these profits are earned. Profits = Income minus all money you spend to get the income. However, you can't just take the profits out of the company. The company can pay you a salary, on which income tax, national insurance, and employer's national insurance have to be paid at the usual rate. The company can pay you a dividend, on which tax has to be paid. And the company can pay money into the director's pension fund, which is tax free. Since the amount of company revenue can be of interest, I'd be curious myself what the revenue of such a company would be. And if the company makes losses, I'm sure HMRC won't allow you to get any tax advantages from such losses.",
"title": ""
},
{
"docid": "f2bee9d464e259fa7b7b4558c1080986",
"text": "I'm assuming this was a cashless exercise because you had income show up on your w-2. When I had a similar situation, I did the following: If you made $50,000 in salary and $10,000 in stock options then your W-2 now says $60,000. You'll record that on your taxes just like it was regular income. You'll also get a form that talks about your stock sale. But remember, you bought and sold the stock within seconds. Your forms will probably look like this: Bought stock: $10,000 Sold stock: $10,000 + $50 commission Total profit (loss): ($50) From the Turbotax/IRS view point, you lost $50 on the sale of the stock because you paid the commission, but the buy and sell prices were identical or nearly identical.",
"title": ""
},
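A minimal Python sketch of the same-day ("cashless") exercise arithmetic in the answer above, using the same numbers. The variable names are my own; this is just an illustration of how the spread lands on the W-2 while the sale itself produces a small loss equal to the commission, not tax advice.

```python
# Same-day ("cashless") exercise arithmetic mirroring the numbers above.
salary = 50_000
bargain_element = 10_000     # option spread, added to W-2 wages as ordinary income
commission = 50

w2_wages = salary + bargain_element          # 60,000 reported on the W-2
cost_basis = bargain_element                 # basis = amount already taxed as wages
net_proceeds = bargain_element - commission  # shares sold seconds after exercise
capital_gain = net_proceeds - cost_basis     # -50: a small short-term loss

print(w2_wages, capital_gain)   # 60000 -50
```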
{
"docid": "826957611902dd98805eec54b63208a0",
"text": "\"From April 2017 the plan is that there is now also going to be a \"\"Lifetime ISA\"\" (in addition to the Help to Buy ISA). Assuming those plans do not change, they government will give 25% after each year until you are 50, and the maximum you can put in per year will be £4000. Catches: You can only take the money out for certain \"\"life events\"\", currently: Buying a house below £450000 anywhere in the country (not just London). Passing 60 years of age. If you take it out before or for another reason, you lose the government bonus plus 5%, ie. it currently looks like you will be left with 95% of the total of the money you paid in. You cannot use the bonus payments from this one together with bonus payments from a Help to Buy ISA to buy a home. However you can transfer an existing Help to Buy ISA into this one come 2017. While you are not asking about pensions, it is worth mentioning for other readers that while 25% interest per year sounds great, if you use it for pension purposes, consider that this is after tax, so if you pay mostly 20% tax on your income the difference is not that big (and if your employer matches your contributions up to a point, then it may not be worth it). If you pay a significant amount of tax at 40% or higher, then it may not make sense for pension purposes. Tax bands and the \"\"rates\"\" on this ISA may change, of course. On the other hand, if you intend to use the money for a house/flat purchase in 2 or more years' time, then it would seem like a good option. For you specifically: This \"\"only\"\" covers £4000 per year, ie. not the full amount you talked about, but it is likely a good idea for you to spread things out anyway. That way, if one thing turns out to be not as good as other alternatives it has less impact - it is less likely that all your schemes will turn out to be bad luck. Within the M25 the £450000 limit may restrict you to a small house or flat in 5-10 years time. Again, prices may stall as they seem barely sustainable now. But it is hard to predict (measures like this may help push them upwards :) ). On the plus side, you could then still use the money for pension although I have a hard time seeing governments not adjusting this sort of account between now and your 60th birthday. Like pension funds, there is an element of luck/gambling involved and I think a good strategy is to spread things if you can.\"",
"title": ""
}
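The bonus and penalty arithmetic quoted in the answer above can be sketched briefly in Python. The cap, the 25% bonus, and the "left with roughly 95% of what you paid in" penalty are the figures as stated in that answer for the 2017 scheme; the function name, the five-year horizon, and the zero-growth default are my own assumptions, so treat this purely as an illustration.

```python
ANNUAL_CAP = 4_000
BONUS_RATE = 0.25            # government adds 25% of each year's contribution

def lisa_value(years: int, growth: float = 0.0) -> float:
    """Balance after `years` of maximum contributions plus the bonus.
    `growth` is an assumed flat annual return on the invested balance."""
    balance = 0.0
    for _ in range(years):
        balance *= 1 + growth
        balance += ANNUAL_CAP * (1 + BONUS_RATE)
    return balance

contributions = 5 * ANNUAL_CAP          # 20,000 of your own money over 5 years
qualifying = lisa_value(5)              # 25,000 for a first home or after age 60
non_qualifying = 0.95 * contributions   # roughly 19,000 under the penalty described
print(qualifying, non_qualifying)
```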
] |
fiqa
|
709d903dbca3d33442e3b9d7168caff8
|
How do I begin investment saving, rather than just saving in a bank account?
|
[
{
"docid": "f983c383262bb5e484be57c6f264612e",
"text": "In general, the higher the return (such as interest), the higher the risk. If there were a high-return no-risk investment, enough people would buy it to drive the price up and make it a low-return no-risk investment. Interest rates are low now, but so is inflation. They generally go up and down together. So, as a low risk (almost no-risk) investment, the savings account is not at all useless. There are relatively safe investments that will get a better return, but they will have a little more risk. One common way to spread the risk is to diversify. For example, put some of your money in a savings account, some in a bond mutual fund, and some in a stock index fund. A stock index fund such as SPY has the benefit of very low overhead, in addition to spreading the risk among 500 large companies. Mutual funds with a purchase or sale fee, or with a higher management fee do NOT perform any better, on average, and should generally be avoided. If you put a little money in different places regularly, you'll be fairly safe and are likely get a better return. (If you trade back and forth frequently, trying to outguess the market, you're likely to be worse off than the savings account.)",
"title": ""
},
{
"docid": "51eb7c2fbc7b14b84666469006ba81f2",
"text": "CDs may be one good option if you have a sense of when you may need the money(-ish), especially with more generous early withdrawal penalties. You can also take a look at investing in a mix of stock and bond funds, which will lower you volatility compared to stocks, but increase your returns over bonds.",
"title": ""
}
] |
[
{
"docid": "6d34a84c720f48c3f5affaf68c7209c3",
"text": "\"This is only a partial answer to your question #1. If you have a conservative approach to savings (and, actually, even if you don't), you should not invest all of your money in any single industry or product. If you want to invest some money in oil, okay, but don't overdo it. If your larger goal is to invest the money in a manner that is less risky but still more lucrative than a savings account, you should read up on personal finance and investing to get a sense of what options are available. A commonly-recommended option is to invest in low-cost index funds that mirror the performance of the stock market as a whole. The question of \"\"how should I invest\"\" is very broad, but you can find lots of starting points in other questions on this site, by googling, or by visiting your local library.\"",
"title": ""
},
{
"docid": "a6a908e79622930b75bd84c3ed3768c8",
"text": "Peer to peer lending such as Kiva, Lending Club, Funding Circle(small business), SoFi(student loans), Prosper, and various other services provide you with access to the 'basic form' of investing you described in your question. Other funds: You may find the documentary '97% Owned' fascinating as it provides an overview of the monetary system of England, with parallels to US, showing only 3% of money supply is used in exchange of goods and services, 97% is engaged in some form of speculation. If speculative activities are of concern, you may need to denounce many forms of currency. Lastly, be careful of taking the term addiction too lightly and deeming something unethical too quickly. You may be surprised to learn there are many people like yourself working at 'unethical' companies changing them within.",
"title": ""
},
{
"docid": "81dc5a3ab1f76785932744c1f2a511a9",
"text": "\"I get the sense that this is a \"\"the world is unfair; there's no way I can succeed\"\" question, so let's back up a few steps. Income is the starting point to all of this. That could be a job (or jobs), or running your own business. From there, you can do four things with your income: Obviously Spend and Give do not provide a monetary return - they give a return in other ways, such as quality of life, helping others, etc. Save gives you reserves for future expenses, but it does not provide growth. So that just leaves Invest. You seem to be focused on stock market investments, which you are right, take a very long time to grow, although you can get returns of up to 12% depending on how much volatility you're willing to absorb. But there are other ways to invest. You can invest in yourself by getting a degree or other training to improve your income. You can invest by starting a business, which can dramatically increase your income (in fact, this is the most common path to \"\"millionaire\"\" in the US, and probably in other free markets). You can invest by growing your own existing business. You can invest in someone else's business. You can invest in real estate, that can provide both value appreciation and rental income. So yes, \"\"investment\"\" is a key aspect of wealth building, but it is not limited to just stock market investment. You can also look at reducing expenses in order to have more money to invest. Also keep in mind that investment with higher returns come with higher risk (both in terms of volatility and risk of complete loss), and that borrowing money to invest is almost always unwise, since the interest paid directly reduces the return without reducing the risk.\"",
"title": ""
},
{
"docid": "1d6f220dd1677d35b3bed386d664808f",
"text": "Investing in mutual funds, ETF, etc. won't build a large pool of money. Be an active investor if your nature aligns. For e.g. Invest in buying out a commercial space (on bank finance) like a office space and then rent it out. That would give you better return than a savings account. In few years time, you may be able to pay back your financing and then the total return is your net return. Look for options like this for a multiple growth in your worth.",
"title": ""
},
{
"docid": "0b2ee1ec448b87a1e6e61d1e910eee61",
"text": "\"You can start investing with any amount. You can use the ShareBuilder account to purchase \"\"partial\"\" stocks through their automatic investment plan. Usually brokers don't sell parts of stock, and ShareBuilder is the only one allowing it IMHO using its own tricks. What they do basically is buy a stock and then divide it internally among several investors who bought it, while each of the investors doesn't really own it directly. That's perfect for investing small amounts and making first steps in investing.\"",
"title": ""
},
{
"docid": "992d568e9fb89ec12d5ec9d42554e089",
"text": "What is your investing goal? And what do you mean by investing? Do you necessarily mean investing in the stock market or are you just looking to grow your money? Also, will you be able to add to that amount on a regular basis going forward? If you are just looking for a way to get $100 into the stock market, your best option may be DRIP investing. (DRIP stands for Dividend Re-Investment Plan.) The idea is that you buy shares in a company (typically directly from the company) and then the money from the dividends are automatically used to buy additional fractional shares. Most DRIP plans also allow you to invest additional on a monthly basis (even fractional shares). The advantages of this approach for you is that many DRIP plans have small upfront requirements. I just looked up Coca-cola's and they have a $500 minimum, but they will reduce the requirement to $50 if you continue investing $50/month. The fees for DRIP plans also generally fairly small which is going to be important to you as if you take a traditional broker approach too large a percentage of your money will be going to commissions. Other stock DRIP plans may have lower monthly requirements, but don't make your decision on which stock to buy based on who has the lowest minimum: you only want a stock that is going to grow in value. They primary disadvantages of this approach is that you will be investing in a only a single stock (I don't believe that can get started with a mutual fund or ETF with $100), you will be fairly committed to that stock, and you will be taking a long term investing approach. The Motley Fool investing website also has some information on DRIP plans : http://www.fool.com/DRIPPort/HowToInvestDRIPs.htm . It's a fairly old article, but I imagine that many of the links still work and the principles still apply If you are looking for a more medium term or balanced investment, I would advise just opening an online savings account. If you can grow that to $500 or $1,000 you will have more options available to you. Even though savings accounts don't pay significant interest right now, they can still help you grow your money by helping you segregate your money and make regular deposits into savings.",
"title": ""
},
{
"docid": "c5c03b867cb386870c4bb203bde79b9e",
"text": "\"One way to start with stocks is by playing the fake stock market. Investigate what trading fees would be with a broker, then \"\"invest\"\" a certain amount of money - note it on paper or in a spreadsheet. Follow your stocks, make decisions on selling and buying, and see where you would be after a year or so. That way you can get an idea, even if not exactly precise, on what your returns would be if you really invested the money.\"",
"title": ""
},
{
"docid": "9e6a9e8163630b92f5d1d506c5e99bda",
"text": "\"Congratulations on a solid start. Here are my thoughts, based on your situation: Asset Classes I would recommend against a long-term savings account as an investment vehicle. While very safe, the yields will almost always be well below inflation. Since you have a long time horizon (most likely at least 30 years to retirement), you have enough time to take on more risk, as long as it's not more than you can live with. If you are looking for safer alternatives to stocks for part of your investments, you can also consider investment-grade bonds/bond funds, or even a stable value fund. Later, when you are much closer to retirement, you may also want to consider an annuity. Depending on the interest rate on your loan, you may also be able to get a better return from paying down your loan than from putting more in a savings account. I would recommend that you only keep in a savings account what you expect to need in the next few years (cushion for regular expenses, emergency fund, etc.). On Stocks Stocks are riskier but have the best chance to outperform versus inflation over the long term. I tend to favor funds over individual stocks, mostly for a few practical reasons. First, one of the goals of investing is to diversify your risk, which produces a more efficient risk/reward ratio than a group of stocks that are highly correlated. Diversification is easier to achieve via an index fund, but it is possible for a well-educated investor to stay diversified via individual stocks. Also, since most investors don't actually want to take physical possession of their shares, funds will manage the shares for you, as well as offering additional services, such as the automatic reinvestments of dividends and tax management. Asset Allocation It's very important that you are comfortable with the amount of risk you take on. Investment salespeople will prefer to sell you stocks, as they make more commission on stocks than bonds or other investments, but unless you're able to stay in the market for the long term, it's unlikely you'll be able to get the market return over the long term. Make sure to take one or more risk tolerance assessments to understand how often you're willing to accept significant losses, as well as what the optimal asset allocation is for you given the level of risk you can live with. Generally speaking, for someone with a long investment horizon and a medium risk tolerance, even the most conservative allocations will have at least 60% in stocks (total of US and international) with the rest in bonds/other, and up to 80% or even 100% for a more aggressive investor. Owning more bonds will result in a lower expected return, but will also dramatically reduce your portfolio's risk and volatility. Pension With so many companies deciding that they don't feel like keeping the promises they made to yesterday's workers or simply can't afford to, the pension is nice but like Social Security, I wouldn't bank on all of this money being there for you in the future. This is where a fee-only financial planner can really be helpful - they can run a bunch of scenarios in planning software that will show you different retirement scenarios based on a variety of assumptions (ie what if you only get 60% of the promised pension, etc). 
This is probably not as much of an issue if you are an equity partner, or if the company fully funds the pension in a segregated account, or if the pension is defined-contribution, but most corporate pensions are just a general promise to pay you later in the future with no real money actually set aside for that purpose, so I'd discount this in my planning somewhat. Fund/Stock Selection Generally speaking, most investment literature agrees that you're most likely to get the best risk-adjusted returns over the long term by owning the entire market rather than betting on individual winners and losers, since no one can predict the future (including professional money managers). As such, I'd recommend owning a low-cost index fund over holding specific sectors or specific companies only. Remember that even if one sector is more profitable than another, the stock prices already tend to reflect this. Concentration in IT Consultancy I am concerned that one third of your investable assets are currently in one company (the IT consultancy). It's very possible that you are right that it will continue to do well, that is not my concern. My concern is the risk you're carrying that things will not go well. Again, you are taking on risks not just over the next few years, but over the next 30 or so years until you retire, and even if it seems unlikely that this company will experience a downturn in the next few years, it's very possible that could change over a longer period of time. Please just be aware that there is a risk. One way to mitigate that risk would be to work with an advisor or a fund to structure and investment plan where you invest in a variety of sector funds, except for technology. That way, your overall portfolio, including the single company, will be closer to the market as a whole rather than over-weighted in IT/Tech. However, if this IT Consultancy happens to be the company that you work for, I would strongly recommend divesting yourself of those shares as soon as reasonably possible. In my opinion, the risk of having your salary, pension, and much of your investments tied up in the fortunes of one company would simply be a much larger risk than I'd be comfortable with. Last, make sure to keep learning so that you are making decisions that you're comfortable with. With the amount of savings you have, most investment firms will consider you a \"\"high net worth\"\" client, so make sure you are making decisions that are in your best financial interests, not theirs. Again, this is where a fee-only financial advisor may be helpful (you can find a local advisor at napfa.org). Best of luck with your decisions!\"",
"title": ""
},
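The "run a bunch of scenarios" exercise mentioned in the answer above (for example, what if only 60% of the promised pension is actually paid) can be approximated with a toy calculation. Every figure below is an invented placeholder, and the 4% withdrawal rate is just a commonly cited rule of thumb used here as a stand-in, not a recommendation.

```python
def retirement_income(pension_promised: float, pension_fraction: float,
                      portfolio: float, withdrawal_rate: float = 0.04) -> float:
    """Annual income = haircut pension + a fixed-rate draw on the portfolio."""
    return pension_promised * pension_fraction + portfolio * withdrawal_rate

for fraction in (1.0, 0.6, 0.0):    # full pension, the 60% case above, none at all
    print(fraction, retirement_income(30_000, fraction, 500_000))
```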
{
"docid": "2ccdf1e5dd46c8433b4bc98d3814f4ea",
"text": "We don't have a good answer for how to start investing in poland. We do have good answers for the more general case, which should also work in Poland. E.g. Best way to start investing, for a young person just starting their career? This answer provides a checklist of things to do. Let's see how you're doing: Match on work pension plan. You don't mention this. May not apply in Poland, but ask around in case it does. Given your income, you should be doing this if it's available. Emergency savings. You have plenty. Either six months of spending or six months of income. Make sure that you maintain this. Don't let us talk you into putting all your money in better long term investments. High interest debt. You don't have any. Keep up the good work. Avoid PMI on mortgage. As I understand it, you don't have a mortgage. If you did, you should probably pay it off. Not sure if PMI is an issue in Poland. Roth IRA. Not sure if this is an issue in Poland. A personal retirement account in the US. Additional 401k. A reminder to max out whatever your work pension plan allows. The name here is specific to the United States. You should be doing this in whatever form is available. After that, I disagree with the options. I also disagree with the order a bit, but the basic idea is sound: one time opportunities; emergency savings; eliminate debt; maximize retirement savings. Check with a tax accountant so as not to make easily avoidable tax mistakes. You can use some of the additional money for things like real estate or a business. Try to keep under 20% for each. But if you don't want to worry about that kind of stuff, it's not that important. There's a certain amount of effort to maintain either of those options. If you don't want to put in the effort to do that, it makes sense not to do this. If you have additional money split the bulk of it between stock and bond index funds. You want to maintain a mix between about 70/30 and 75/25 stocks to bonds. The index funds should be based on broad indexes. They probably should be European wide for the most part, although for stocks you might put 10% or so in a Polish fund and another 15% in a true international fund. Think over your retirement plans. Where do you want to live? In your current apartment? In a different apartment in the same city? In one of the places where you inherited property? Somewhere else entirely? Also, do you like to vacation in that same place? Consider buying a place in the appropriate location now (or keeping the one you have if it's one of the inherited properties). You can always rent it out until then. Many realtors are willing to handle the details for you. If the place that you want to retire also works for vacations, consider short term rentals of a place that you buy. Then you can reserve your vacation times while having rentals pay for maintenance the rest of the year. As to the stuff that you have now: Look that over and see if you want any of it. You also might check if there are any other family members that might be interested. E.g. cousins, aunts, uncles, etc. If not, you can probably sell it to a professional company that handles estate sales. Make sure that they clear out any junk along with the valuable stuff. Consider keeping furniture for now. Sometimes it can help sell a property. You might check if you want to drive either of them. If not, the same applies, check family first. Otherwise, someone will buy them, perhaps on consignment (they sell for a commission rather than buying and reselling). 
There's no hurry to sell these. Think over whether you might want them. Consider if they hold any sentimental value to you or someone else. If not, sell them. If there's any difficulty finding a buyer, consider renting them out. You can also rent them out if you want time to make a decision. Don't leave them empty too long. There's maintenance that may need to be done, e.g. heat to keep water from freezing in the pipes. That's easy, just invest that. I wouldn't get in too much of a hurry to donate to charity. You can always do that later. And try to donate anonymously if you can. Donating often leads to spam, where they try to get you to donate more.",
"title": ""
},
{
"docid": "35d0603711e7c4e1070df7eb7293ba24",
"text": "\"First off, I highly recommend the book Get a Financial Life. The basics of personal finance and money management are pretty straightforward, and this book does a great job with it. It is very light reading, and it really geared for the young person starting their career. It isn't the most current book (pre real-estate boom), but the recommendations in the book are still sound. (update 8/28/2012: New edition of the book came out.) Now, with that out of the way, there's really two kinds of \"\"investing\"\" to think about: For most individuals, it is best to take care of #1 first. Most people shouldn't even think about #2 until they have fully funded their retirement accounts, established an emergency fund, and gotten their debt under control. There are lots of financial incentives for retirement investing, both from your employer, and the government. All the more reason to take care of #1 before #2! Your employer probably offers some kind of 401k (or equivalent, like a 403b) with a company-provided match. This is a potential 100% return on your investment after the vesting period. No investment you make on your own will ever match that. Additionally, there are tax advantages to contributing to the 401k. (The money you contribute doesn't count as taxable income.) The best way to start investing is to learn about your employer's retirement plan, and contribute enough to fully utilize the employer matching. Beyond this, there are also Individual Retirement Accounts (IRAs) you can open to contribute money to on your own. You should open one of these and start contributing, but only after you have fully utilized the employer matching with the 401k. The IRA won't give you that 100% ROI that the 401k will. Keep in mind that retirement investments are pretty much \"\"walled off\"\" from your day-to-day financial life. Money that goes into a retirement account generally can't be touched until retirement age, unless you want to pay lots of taxes and penalties. You generally don't want to put the money for your house down payment into a retirement account. One other thing to note: Your 401K and your IRA is an account that you put money into. Just because the money is sitting in the account doesn't necessarily mean it is invested. You put the money into this account, and then you use this money for investments. How you invest the retirement money is a topic unto itself. Here is a good starting point. If you want to ask questions about retirement portfolios, it is probably worth posting a new question.\"",
"title": ""
},
{
"docid": "5ee7208f09c10566f9a7a1ef874d6c38",
"text": "\"Index funds can be a very good way to get into the stock market. It's a lot easier, and cheaper, to buy a few shares of an index fund than it is to buy a few shares in hundreds of different companies. An index fund will also generally charge lower fees than an \"\"actively managed\"\" mutual fund, where the manager tries to pick which stocks to invest for you. While the actively managed fund might give you better returns (by investing in good companies instead of every company in the index) that doesn't always work out, and the fees can eat away at that advantage. (Stocks, on average, are expected to yield an annual return of 4%, after inflation. Consider that when you see an expense ratio of 1%. Index funds should charge you more like 0.1%-0.3% or so, possibly more if it's an exotic index.) The question is what sort of index you're going to invest in. The Standard and Poor's 500 (S&P 500) is a major index, and if you see someone talking about the performance of a mutual fund or investment strategy, there's a good chance they'll compare it to the return of the S&P 500. Moreover, there are a variety of index funds and exchange-traded funds that offer very good expense ratios (e.g. Vanguard's ETF charges ~0.06%, very cheap!). You can also find some funds which try to get you exposure to the entire world stock market, e.g. Vanguard Total World Stock ETF, NYSE:VT). An index fund is probably the ideal way to start a portfolio - easy, and you get a lot of diversification. Later, when you have more money available, you can consider adding individual stocks or investing in specific sectors or regions. (Someone else suggested Brazil/Russia/Indo-China, or BRICs - having some money invested in that region isn't necessarily a bad idea, but putting all or most of your money in that region would be. If BRICs are more of your portfolio then they are of the world economy, your portfolio isn't balanced. Also, while these countries are experiencing a lot of economic growth, that doesn't always mean that the companies that you own stock in are the ones which will benefit; small businesses and new ventures may make up a significant part of that growth.) Bond funds are useful when you want to diversify your portfolio so that it's not all stocks. There's a bunch of portfolio theory built around asset allocation strategies. The idea is that you should try to maintain a target mix of assets, whatever the market's doing. The basic simplified guideline about investing for retirement says that your portfolio should have (your age)% in bonds (e.g. a 30-year-old should have 30% in bonds, a 50-year-old 50%.) This helps maintain a balance between the volatility of your portfolio (the stock market's ups and downs) and the rate of return: you want to earn money when you can, but when it's almost time to spend it, you don't want a sudden stock market crash to wipe it all out. Bonds help preserve that value (but don't have as nice of a return). The other idea behind asset allocation is that if the market changes - e.g. your stocks go up a lot while your bonds stagnate - you rebalance and buy more bonds. If the stock market subsequently crashes, you move some of your bond money back into stocks. This basically means that you buy low and sell high, just by maintaining your asset allocation. This is generally more reliable than trying to \"\"time the market\"\" and move into an asset class before it goes up (and move out before it goes down). Market-timing is just speculation. 
You get better returns if you guess right, but you get worse returns if you guess wrong. Commodity funds are useful as another way to diversify your portfolio, and can serve as a little bit of protection in case of crisis or inflation. You can buy gold, silver, platinum and palladium ETFs on the stock exchanges. Having a small amount of money in these funds isn't a bad idea, but commodities can be subject to violent price swings! Moreover, a bar of gold doesn't really earn any money (and owning a share of a precious-metals ETF will incur administrative, storage, and insurance costs to boot). A well-run business does earn money. Assuming you're saving for the long haul (retirement or something several decades off), my suggestion for you would be to start by investing most of your money* in index funds to match the total world stock market (with something like the aforementioned NYSE:VT, for instance), a small portion in bonds, and a smaller portion in commodity funds. (For all the negative stuff I've said about market-timing, it's pretty clear that the bond market is very expensive right now, and so are the commodities!) Then, as you do additional research and determine what sort of investments are right for you, add new investment money in the places that you think are appropriate - stock funds, bond funds, commodity funds, individual stocks, sector-specific funds, actively managed mutual funds, et cetera - and try to maintain a reasonable asset allocation. Have fun. *(Most of your investment money. You should have a separate fund for emergencies, and don't invest money in stocks if you know you're going to need it within the next few years).\"",
"title": ""
},
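Two ideas from the answer above, the "(your age)% in bonds" starting guideline and rebalancing back to a target mix after the market drifts, can be sketched together in a few lines of Python. The fund labels, dollar amounts, and function names are made up for illustration; actual allocations should reflect personal risk tolerance.

```python
def target_mix(age: int) -> dict:
    """'(your age)% in bonds' starting guideline described above."""
    bonds = min(max(age, 0), 100) / 100.0
    return {"stocks": 1.0 - bonds, "bonds": bonds}

def rebalance(holdings: dict, targets: dict) -> dict:
    """Buy (+) / sell (-) amounts needed to restore the target weights."""
    total = sum(holdings.values())
    return {asset: round(targets[asset] * total - value, 2)
            for asset, value in holdings.items()}

holdings = {"stocks": 80_000, "bonds": 20_000}   # portfolio has drifted to 80/20
print(rebalance(holdings, target_mix(30)))       # 30-year-old target: 70/30
# {'stocks': -10000.0, 'bonds': 10000.0} -> sell some stocks, buy bonds
```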
{
"docid": "af1e7f772ced48852837068b40ff5770",
"text": "Investments earn income relative to the principal amounts invested. If you do not have much to invest, then the only way to 'get rich' by investing is to take gambles. And those gambles are more likely to fail than succeed. The simplest way for someone without a high amount of 'capital' [funds available to invest] to build wealth, is to work more, and invest in yourself. Go to school, but only for proven career paths. Take self-study courses. Learn and expand your career opportunities. Only once you are stable financially, have minimal debt [or, understand and respect the debt you plan to pay down slowly, which some people choose to do with school and house debt], and are able to begin contributing regularly to investment plans, can you put your financial focus on investing. Until then, any investment gains would pale in comparison to gains from building your career.",
"title": ""
},
{
"docid": "830e7d4656847c4e1156d0bded526e60",
"text": "The classic answer is simple. Aim to build up a a financial cushion that is the equivalent of 3 times your monthly salary. This should be readily accessible and in cash, to cover any unforeseen expenses that you may incur (car needs repairing, washing machine breaks down etc). Once you have this in place its then time to think about longer term investments. Monthly 'drip feeding' into a mutual stock based investment fund is a good place to start. Pick a simple Index based or fund with a global investment bias and put in a set amount that you can regularly commit to each month. You can get way more complicated but for sheer simplicity and longer term returns, this is a simple way to build up some financial security and longer term investments.",
"title": ""
},
{
"docid": "fd9a78556b28bf71945b4aadc1528b6d",
"text": "Certainly reading the recommended answers about initial investing is a great place to start and I highly recommend reading though that page for sure, but I also believe your situation is a little different than the one described as that person has already started their long-term career while you are still a couple years away. Now, tax-advantaged accounts like IRAs are amazingly good places to start building up retirement funds, but they also lock up the money and have a number of rules about withdrawals. You have fifteen thousand which is a great starting pot of money but college is likely to be done soon and there will likely be a number of expenses with the transition to full-time employment. Moving expenses, first month's rent, nursing exams, job search costs, maybe a car... all of this can be quite a lot especially if you are in a larger city. It sounds like your parents are very helpful though which is great. Make sure you have enough money for that transition and emergency expenses first and if there is a significant pool beyond that then start looking at investments. If you determine now is the best time to start than then above question is has great advice, but even if not it is still well worth taking some time to understand investing through that question, my favorite introduction book on the subject and maybe even a college course. So when you land that first solid nursing job and get situated you can start taking full advantage of the 401K and IRAs.",
"title": ""
},
{
"docid": "78376c71017ce85c9868e6f3729b6df2",
"text": "Your edit indicates that you may not yet be ready to get heavily involved in investing. I say this because it seems you are not very familiar with foundational finance/investing concepts. The returns that you are seeing as 'yearly' are just the reported earnings every 12 months, which all public companies must publish. Those 'returns' are not the same as the earnings of individual investors (which will be on the basis of dividends paid by the company [which are often annual, sometimes semi-annual, and sometimes quarterly], and by selling shares purchased previously. Note that over 3 months time, investing in interest-earning investments [like bank deposits] will earn you something like 0.5%. Investing in the stock market will earn you something like 2% (but with generally higher risk than investing in something earning interest). If you expect to earn significant amounts of money in only 3 months, you will not be able to without taking on extreme levels of risk [risk as high as going to a casino]. Safe investing takes time - years. In the short term, the best thing you can do to earn money is by earning more [through a better job, or a second part-time job], or spending less [budget, pay down high interest debt, and spend less than you earn]. I highly recommend you look through this site for more budgeting questions on how to get control of your finances. If you feel that doesn't apply to you, I encourage you to do a lot more research on investing before you send your money somewhere - you could be taking on more risk than you realize, if you are not properly informed.",
"title": ""
}
] |
fiqa
|
8c0c5f34ea4bc3f3de9f1be179232ab5
|
Why do people buy stocks at higher price in merger?
|
[
{
"docid": "bd1a333f0d4845d3bfc8aa0017e0da31",
"text": "\"Without any highly credible anticipation of a company being a target of a pending takeover, its common stock will normally trade at what can be considered non-control or \"\"passive market\"\" prices, i.e. prices that passive securities investors pay or receive for each share of stock. When there is talk or suggestion of a publicly traded company's being an acquisition target, it begins to trade at \"\"control market\"\" prices, i.e. prices that an investor or group of them is expected to pay in order to control the company. In most cases control requires a would-be control shareholder to own half a company's total votes (not necessarily stock) plus one additional vote and to pay a greater price than passive market prices to non-control investors (and sometimes to other control investors). The difference between these two market prices is termed a \"\"control premium.\"\" The appropriateness and value of this premium has been upheld in case law, with some conflicting opinions, in Delaware Chancery Court (see the reference below; LinkedIn Corp. is incorporated in the state), most other US states' courts and those of many countries with active stock markets. The amount of premium is largely determined by investment bankers who, in addition to applying other valuation approaches, review most recently available similar transactions for premiums paid and advise (formally in an \"\"opinion letter\"\") their clients what range of prices to pay or accept. In addition to increasing the likelihood of being outbid by a third-party, failure to pay an adequate premium is often grounds for class action lawsuits that may take years to resolve with great uncertainty for most parties involved. For a recent example and more details see this media opinion and overview about Dell Inc. being taken private in 2013, the lawsuits that transaction prompted and the court's ruling in 2016 in favor of passive shareholder plaintiffs. Though it has more to do with determining fair valuation than specifically premiums, the case illustrates instruments and means used by some courts to protect non-control, passive shareholders. ========== REFERENCE As a reference, in a 2005 note written by a major US-based international corporate law firm, it noted with respect to Delaware courts, which adjudicate most major shareholder conflicts as the state has a disproportionate share of large companies in its domicile, that control premiums may not necessarily be paid to minority shareholders if the acquirer gains control of a company that continues to have minority shareholders, i.e. not a full acquisition: Delaware case law is clear that the value of a dissenting [target company's] stockholder’s shares is not to be reduced to impose a minority discount reflecting the lack of the stockholders’ control over the corporation. Indeed, this appears to be the rationale for valuing the target corporation as a whole and allocating a proportionate share of that value to the shares of [a] dissenting stockholder [exercising his appraisal rights in seeking to challenge the value the target company's board of directors placed on his shares]. At the same time, Delaware courts have suggested, without explanation, that the value of the corporation as a whole, and as a going concern, should not include a control premium of the type that might be realized in a sale of the corporation.\"",
"title": ""
},
{
"docid": "dbab73634832d7aac0ac778943625e59",
"text": "There are kind of two answers here: the practical reason an acquirer has to pay more for shares than their current trading price and the economic justification for the increase in price. Why must the acquirer must pay a premium as a practical matter? Everyone has a different valuation of a company. The current trading price is the lowest price that any holder of the stock is willing to sell a little bit of stock for and the highest that anyone is willing to buy a little bit for. However, Microsoft needs to buy a controlling share. To do this on the open market they would need to buy all the shares from people who's personal valuation is low, and then a bunch from people whose valuation is higher and so on. The act of buying that much stock would push the price up by buying all the shares from people who are really willing to sell. Moreover, as they buy more and more, the remaining people increase their personal valuation so the price would really shoot up. Acquirers avoid this situation by offering to buy a ton of stock at a substantially higher, single price. Why is Linkedin suddenly worth more than it was yesterday? Microsoft is expecting to be able to use its own infrastructure and tools to make more money with Linkedin than Linkedin would have before. In other words, they believe that the Linkedin division of Microsoft after the merger will be worth more than Linkedin alone was before the merger. This synergistic idea is the theoretical foundation for mergers in general and the main reason people use to argue for a higher price. You could also argue that by expressing an interest in Linkedin, Microsoft may be telling us something it knows about Linkedin's value that maybe we didn't realize before because we aren't as smart and informed as the people on Microsoft's board. But since it's Microsoft that's doing the buying in this case, I'm going to go out on a limb and say this is not the main effect. Given Microsoft's history, the idea that they buy expensive things because they have money to burn is more compelling than the idea that they have an insight into a company's value that we don't.",
"title": ""
},
{
"docid": "bbd64c5d149c47a4026c6066062e4842",
"text": "Microsoft wants to buy a majority in the stock. To accomplish that, they have to offer a good price, so the current share owners are willing to sell. Just because the CEO of LinkedIN agreed to the deal doesn't really mean much, only that he is willing to sell his shares at that price. If he does not own 50%, he basically cannot complete the deal; other willing sellers are needed. If Microsoft could buy 50+% of the shares for the current market price, they would have just done that, without any negotiations. That is called a hostile take-over.",
"title": ""
}
] |
[
{
"docid": "45702570cf4f8c92340508182947fbb4",
"text": "Options pricing is based on the gap between strike and the current market, and volatility. That's why the VIX, a commonly accepted volatility index, is actually just a weighted blend of S&P 500 future options prices. A general rise in the price of options indicates people don't know whether it will go up or down next, and are therefore less willing to take that risk. But your question is why everything underwater in the puts chain went higher, and that's simple: now that Apple's down, the probability of falling a few more points is higher. Especially since Apple has gone through some recent rough times, and stocks in general are seen as risky these days.",
"title": ""
},
{
"docid": "9692b2413db6b0adda9a83275dc5c41d",
"text": "\"First, keep in mind that there are generally 2 ways to buy a corporation's shares: You can buy a share directly from the corporation. This does not happen often; it usually happens at the Initial Public Offering [the first time the company becomes \"\"public\"\" where anyone with access to the stock exchange can become a part-owner], plus maybe a few more times during the corporations existence. In this case, the corporation is offering new ownership in exchange for a price set the corporation (or a broker hired by the corporation). The price used for a public offering is the highest amount that the company believes it can get - this is a very complicated field, and involves many different methods of evaluating what the company should be worth. If the company sets the price too low, then they have missed out on possible value which would be earned by the previous, private shareholders (they would have gotten the same share % of a corporation which would now have more cash to spend, because of increased money paid by new shareholders). If the company sets the price too high, then the share subscription might only be partially filled, so there might not be enough cash to do what the company wanted. You can buy a share from another shareholder. This is more common - when you see the company's share price on the stock exchange, it is this type of transaction - buying out other current shareholders. The price here is simply set based on what current owners are willing to sell at. The \"\"Bid Price\"\" listed by an exchange is the current highest bid that a purchaser is offering for a single share. The \"\"Ask Price\"\" is the current lowest offer that a seller is offering to sell a single share they currently own. When the bid price = the ask price, a share transaction happens, and the most recent stock price changes.\"",
"title": ""
},
{
"docid": "7885461d2f4f9593513df4f245d4d883",
"text": "\"I understand you make money by buying low and selling high. You can also make money by buying high and selling higher, short selling high and buying back low, short selling low and buying back even lower. An important technique followed by many technical traders and investors is to alway trade with the trend - so if the shares are trending up you go long (buy to open and sell to close); if the shares are trending down you go short (sell to open and buy to close). \"\"But even if the stock price goes up, why are we guaranteed that there is some demand for it?\"\" There is never any guarantees in investing or trading. The only guarantee in life is death, but that's a different subject. There is always some demand for a share or else the share price would be zero or it would never sell, i.e zero liquidity. There are many reasons why there could be demand for a rising share price - fundamental analysis could indicated that the shares are valued much higher than the current price; technical analysis could indicate that the trend will continue; greed could get the better of peoples' emotion where they think all my freinds are making money from this stock so I should buy it too (just to name a few). \"\"After all, it's more expensive now.\"\" What determines if a stock is expensive? As Joe mentioned, was Apple expensive at $100? People who bought it at $50 might think so, but people who bought at $600+ would think $100 is very cheap. On the other hand a penny stock may be expensive at $0.20. \"\"It would make sense if we can sell the stock back into the company for our share of the earnings, but why would other investors want it when the price has gone up?\"\" You don't sell your stocks back to the company for a share of the earnings (unless the company has a share-buy-back arrangement in place), you get a share of the earnings by getting the dividends the company distributes to shareholders. Other investor would want to buy the stock when the price has gone up because they think it will go up further and they can make some money out of it. Some of the reasons for this are explained above.\"",
"title": ""
},
{
"docid": "3935b99b72731729fc7d8b53d7836adb",
"text": "Remember that shares represent votes at the shareholders' meeting. If share price drops too far below the value of that percentage of the company, the company gets bought out and taken over. This tends to set a minimum share price derived from the company's current value. The share price may rise above that baseline if people expect it to be worth more in the future, or drop s bit below if people expect awful news. That's why investment is called speculation. If the price asked is too high to be justified by current guesses, nobody buys. That sets the upper limit at any given time. Since some of this is guesswork, the market is not completely rational. Prices can drop after good news if they'd been inflated by the expectation of better news, for example. In general, businesses which don't crash tend to grow. Hence the market as a whole generally trends upward if viewed on a long timescale. But there's a lot of noise on that curve; short term or single stocks are much harder to predict.",
"title": ""
},
{
"docid": "0bc934b26cd5698a318b31ef671dd898",
"text": "\"Simple answer is because the stocks don't split. Most stocks would have a similar high price per share if they didn't split occasionally. Why don't they split? A better way to ask this is probably, why DO most stocks split? The standard answer is that it gives the appearance that stocks are \"\"cheap\"\" again and encourages investors to buy them. Some people, Warren Buffett (of Berkshire Hathaway) don't want any part of these shenanigans and refuse to split their stocks. Buffett also has commented that he thinks splitting a stock also adds unnecessary volatility.\"",
"title": ""
},
{
"docid": "084b6a7c6c93bb138202603fa9676eff",
"text": "You are misunderstanding what makes the price of a stock go up and down. Every time you sell a share of a stock, there is someone else that buys the stock. So it is not accurate to say that stock prices go down when large amounts of the stock are sold, and up when large amounts of the stock are bought. Every day, the amount of shares of a stock that are bought and sold are equal to each other, because in order to sell a share of stock, someone has to buy it. Let me try to explain what actually happens to the price of a stock when you want to sell it. Let's say that a particular stock is listed on the ticker at $100 a share currently. All this means is that the last transaction that took place was for $100; someone sold their share to a buyer for $100. Now let's say that you have a share of the stock you'd like to sell. You are hoping to get $100 for your share. There are 2 other people that also have a share that they want to sell. However, there is only 1 person that wants to buy a share of stock, and he only wants to pay $99 for a share. If none of you wants to sell lower than $100, then no shares get sold. But if one of you agrees to sell at $99, then the sale takes place. The ticker value of the stock is now $99 instead of $100. Now let's say that there are 3 new people that have decided they want to buy a share of the stock. They'd like to buy at $99, but you and the other person left with a share want to sell at $100. Either one of the sellers will come down to $99 or one of the buyers will go up to $100. This process will continue until everyone that wants to sell a share has sold, and everyone who wants to buy a share has bought. In general, though, when there are more people that want to sell than buy, the price goes down, and when there are more people that want to buy than sell, the price goes up. To answer your question, if your selling of the stock had caused the price to go down, it means that you would have gotten less money for your stock than if it had not gone down. Likewise, if your buying the stock had caused it to go up, it just means that it would have cost you more to buy the stock. It is just as likely that you would lose money doing this, rather than gain money.",
"title": ""
},
{
"docid": "555be60b1c7c421fc2d3104626e6fa19",
"text": "\"Most likely because they don't know what they're talking about. They all have a belief without evidence that information set X is internalised into the price but information set Y is not. If there is some stock characteristic, call it y, that belongs to set Y, then that moves the gauge towards a \"\"buy\"\" recommendation. However, the issue is that no evidence has been used to determine the constituents of X and Y, or even whether Y exists in any non-trivial sense.\"",
"title": ""
},
{
"docid": "1a7515e182c34fc75e2d7913c6f5511b",
"text": "The people who cause this sort of sell-off immediately are mostly speculators, short-term day-traders and the like. They realize that, because of the lowered potential for earnings in the future, the companies in question won't be worth as much in the future. They will sell shares at the elevated price, including sometimes shares that they borrow for the explicit purpose of selling (short selling), until the share price is more reasonable. Now, the other question is why the companies in question won't sell for as much in the future: Even if every other company in the world looks less attractive all at once (global economic catastrophe etc) people have other options. They could just put the money in the bank, or in corporate bonds, or in mortgage bonds, or Treasury bonds, or some other low-risk instrument, or something crazy like gold. If the expected return on a stock doesn't justify the price, you're unlikely to find someone paying that price. So you don't actually need to have a huge sell-off to lower the price. You just need a sell-off that's big enough that you run out of people willing to pay elevated prices.",
"title": ""
},
{
"docid": "508cfcdc1b55a353fca9742ac43463c3",
"text": "I'm confused. Are you asking why or telling us that you're bullish? Yes the stock will go up for a merger at a premium, but buying in now only gives you ~0.5% gain if it closes at $21.50. They won't trade over 21.50 unless a competing bid comes in or the bid is increased.",
"title": ""
},
{
"docid": "e40085d2a0da4b760a5a1930c4a79386",
"text": "If the price has gone up from what it was when the person bought, he may sell to collect his profit and spend the money. If someone intends to keep his money in the market, the trick is that you don't know when the price of a given stock will peak. If you could tell the future, sure, you'd buy when the stock was at its lowest point, just before it started up, and then sell at the highest point, just before it started down. But no one knows for sure what those points are. If a stockholder really KNOWS that demand is increasing and the price WILL go up, sure, it would be foolish to sell. But you can never KNOW that. (Or if you have some way that you do know that, please call me and share your knowledge.)",
"title": ""
},
{
"docid": "879c0735767dce73815b86de9e6871b6",
"text": "\"This is a classic correlation does not imply causation situation. There are (at least) three issues at play in this question: If you are swing- or day-trading then the first and second issues can definitely affect your trading. A higher-price, higher-volume stock will have smaller (percentage) volatility fluctuations within a very small period of time. However, in general, and especially when holding any position for any period of time during which unknowns can become known (such as Netflix's customer-loss announcement) it is a mistake to feel \"\"safe\"\" based on price alone. When considering longer-term investments (even weeks or months), and if you were to compare penny stocks with blue chip stocks, you still might find more \"\"stability\"\" in the higher value stocks. This is a correlation alone — in other words, a stable, reliable stock probably has a (relatively) high price but a high price does not mean it's reliable. As Joe said, the stock of any company that is exposed to significant risks can drop (or rise) by large amounts suddenly, and it is common for blue-chip stocks to move significantly in a period of months as changes in the market or the company itself manifest themselves. The last thing to remember when you are looking at raw dollar amounts is to remember to look at shares outstanding. Netflix has a price of $79 to Ford's $12; yet Ford has a larger market cap because there are nearly 4 billion shares compared to Netflix's 52m.\"",
"title": ""
},
{
"docid": "e3b96e44e018d120c089366b8fc93b7b",
"text": "People buy stocks with the intention of making money. They either expect the price to continue to rise or that they will get dividends and the price will not drop (enough) to wipe out their dividend earnings.",
"title": ""
},
{
"docid": "e51cf7afa6aabe63143fd7875be00205",
"text": "The main thing you're missing is that while you bear all the costs of manipulating the market, you have no special ability to capture the profits yourself. You make money by buying low and selling high. But if you want to push the price up, you have to keep buying even though the price is getting high. So you are buying high. This gives everyone, including you, the opportunity to sell high and make money. But you will have no special ability to capture that -- others will see the price going up and will start selling within a tiny fraction of a second. You will have to keep buying all the shares they keep selling at the artificially inflated price. So as you keep trying to buy more and more to push the price up enough to make money, everyone else is selling their shares to you. You have to buy more and more shares at an inflated price as everyone else is selling while you are still buying. When you switch to selling, the price will drop instantly, since there's nobody to buy from you at the inflated price. The opportunity you created has already been taken -- by the very people you were trading with. Billions have been lost by people who thought this strategy would work.",
"title": ""
},
{
"docid": "c3a98c4cdebde920a4f48f427c33fca1",
"text": "Because people bought their shares under the premise that they would make more money and if the company completely lied about that they will be subject to several civil and criminal violations. If people didn't believe the company was going to make more money, they would have valued their shares lower during the IPO by not forming much of a market at all.",
"title": ""
},
{
"docid": "13d5c8d1757f4113f3d00149c7023f95",
"text": "Companies do both quite often. They have opposite effects on the share price, but not on the total value to the shareholders. Doing both causes value to shareholders to rise (ie, any un-bought back shares now own a larger percentage of the company and are worth more) and drops the per-share price (so it is easier to buy a share of the stock). To some that's irrelevant, but some might want a share of an otherwise-expensive stock without paying $700 for it. As a specific example of this, Apple (APPL) split its stock in 2014 and also continued a significant buyback program: Apple announces $17B repurchase program, Oct 2014 Apple stock splits 7-to-1 in June 2014. This led to their stock in total being worth more, but costing substantially less per share.",
"title": ""
}
] |
fiqa
|
a09163f487829f92257ec570e719dd18
|
Will I get a tax form for sale of direct purchased stock (US)?
|
[
{
"docid": "d581f5da4cbbd3a23e4b057cf1e03f0d",
"text": "\"I think I found the answer, at least in my specific case. From the heading \"\"Questar/Dominion Resources Merger\"\" in this linked website: Q: When will I receive tax forms showing the stock and dividend payments? A: You can expect a Form 1099-B in early February 2017 showing the amount associated with payment of your shares. You also will receive a Form 1099-DIV by Jan. 31, 2017, with your 2016 dividends earned.\"",
"title": ""
}
] |
[
{
"docid": "cecb611496cca6b62da8005849636d21",
"text": "You need to track every buy and sell to track your gains, or more likely, losses. Yes, you report each and every transactions. Pages of schedule D.",
"title": ""
},
{
"docid": "1e56ffd6f02629cb5c5f4666c42ba8a9",
"text": "When you get into reading Revenue Rulings and Treasury Regulations - I'd suggest hiring a professional to do that for you. Especially since you also need to assure that the new stock does indeed qualify as QSBS. However, from the revenue ruling you quoted it doesn't sound like there's any other requirement other than reporting the subsequent purchase as a loss on your schedule D. I wouldn't know, however, if there are subsequent/superseding revenue rulings on the matter since 1998. Professional tax adviser (EA/CPA licensed in your State) would have the means and the ability to research this and give you a proper advice.",
"title": ""
},
{
"docid": "2344c287634cb6e22a4b35f37aee3997",
"text": "Sale of a stock creates a capital gain. It can be offset with losses, up to $3000 more than the gains. It can be deferred when held within a retirement account. When you gift appreciated stock, the basis follows. So when I gifted my daughter's trust shares, there was still tax due upon sale. The kiddy tax helped reduce but not eliminate it. And there was no quotes around ownership. The money is gone, her account is for college. No 1031 exchange exists for stock.",
"title": ""
},
{
"docid": "6210d2897e4211bf4057a4113912c180",
"text": "The question seems to be from the point of view actual sales and not its impact on one's taxation. In case you just want to sell, why brokers will respond differently each times. Either there may be issues with ownership and/or the company whose shares it is? In case you feel that the issues lies with brok",
"title": ""
},
{
"docid": "b107f0cef24f9d51830447421b8b2582",
"text": "This answer fills in some of the details you are unsure about, since I'm further along than you. I bought the ESPP shares in 2012. I didn't sell immediately, but in 2015, so I qualify for the long-term capital gains rate. Here's how it was reported: The 15% discount was reported on a W2 as it was also mentioned twice in the info box (not all of my W2's come with one of these) but also This showed the sale trade, with my cost basis as the discounted price of $5000. And for interests sake, I also got the following in 2012: WARNING! This means that just going ahead and entering the numbers means you will be taxed twice! once as income and once as capital gains. I only noticed this was happening because I no longer worked for the company, so this W2 only had this one item on it. This is another example of the US tax system baffling me with its blend of obsessive compulsive need for documentation coupled with inexplicably missing information that's critical to sensible accounting. The 1099 documents must (says the IRS since 2015) show the basis value as the award price (your discounted price). So reading the form 8949: Note: If you checked Box D above but the basis reported to the IRS was incorrect, enter in column (e) the basis as reported to the IRS, and enter an adjustment in column (g) to correct the basis. We discover the number is incorrect and must adjust. The actual value you need to adjust it by may be reported on your 1099, but also may not (I have examples of both). I calculated the required adjustment by looking at the W2, as detailed above. I gleaned this information from the following documents provided by my stock management company (you should the tax resources section of your provider):",
"title": ""
},
{
"docid": "e51fdeb51cecb92c7a69bc78db232a18",
"text": "No, it will show on the LLC tax return (form 1065), in the capital accounts (schedules K-1, L and M-2), attributed to your partner.",
"title": ""
},
{
"docid": "494c34a83089d0334a2aadf6ea57f290",
"text": "You can keep the cash in your account as long as you want, but you have to pay a tax on what's called capital gains. To quote from Wikipedia: A capital gain is a profit that results from investments into a capital asset, such as stocks, bonds or real estate, which exceeds the purchase price. It is the difference between a higher selling price and a lower purchase price, resulting in a financial gain for the investor.[1] Conversely, a capital loss arises if the proceeds from the sale of a capital asset are less than the purchase price. Thus, buying/selling stock counts as investment income which would be a capital gain/loss. When you are filing taxes, you have to report net capital gain/loss. So you don't pay taxes on an individual stock sale or purchase - you pay tax on the sum of all your transactions. Note: You do not pay any tax if you have a net capital loss. Taxes are only on capital gains. The amount you are taxed depends on your tax bracket and your holding period. A short term capital gain is gain on an investment held for less than one year. These gains are taxed at your ordinary income tax rate. A long term capital gain is gain on an investment held for more than one year. These gains are taxed at a special rate: If your income tax rate is 10 or 15%, then long term gains are taxed at 0% i.e. no tax, otherwise the tax rate is 15%. So you're not taxed on specific stock sales - you're taxed on your total gain. There is no tax for a capital loss, and investors sometimes take profits from good investments and take losses from bad investments to lower their total capital gain so they won't be taxed as much. The tax rate is expected to change in 2013, but the current ratios could be extended. Until then, however, the rate is as is. Of course, this all applies if you live in the United States. Other countries have different measures. Hope it helps! Wikipedia has a great chart to refer to: http://en.wikipedia.org/wiki/Capital_gains_tax_in_the_United_States.",
"title": ""
},
{
"docid": "53720fddbf0df8c29e3e5b29b5020ce1",
"text": "Is selling Vested RSU is the same as selling a regular stock? Yes. Your basis (to calculate the gain) is what you've been taxed on when the RSUs vested. Check your payslips/W2 for that period, and the employer should probably have sent you detailed information about that. I'm not a US citizen, my account is in ETrade and my stocks are of a US company, what pre arrangements I need to take to avoid tax issues? You will pay capital gains taxes on the sale in Israel. Depending on where you were when you earned the stocks and what taxes you paid then - it may open additional issues with the Israeli tax authority. Check with an Israeli tax adviser/accountant.",
"title": ""
},
{
"docid": "f7058c5586ad44d8fd12dd70c1f65ccc",
"text": "Now a days, your stocks can be seen virtually through a brokerage account. Back in the days, a stock certificate was the only way to authenticate stock ownership. You can still request them though from the corporation you have shares in or your brokerage. It will have your name, corporation name and number of shares you have. You have to buy shares of a stock either through a brokerage or the corporation itself. Most stock brokerages are legit and are FDIC or SIPC insured. But your risks are your own loses. The $10 you are referring to is the trade commission fee the brokerage charges. When you place an order to buy or sell a stock the brokerage will charge you $10. So for example if you bought 1 share of a $20 stock. The total transaction cost will be $30. Depending on the state you live in, you can basically starting trading stocks at either 18 or 21. You can donate/gift your shares to virtually anyone. When you sell a stock and experience a profit, you will be charged a capital gains tax. If you buy a stock and sell it for a gain within 1 year, you will taxed up to 35% or your tax bracket but if you hold it for more than a year, you will taxed only 15% or your tax bracket.",
"title": ""
},
{
"docid": "27fcc343ed9d01eac9eb28343ef02044",
"text": "\"The IRS W-8BEN form (PDF link), titled \"\"Certificate of Foreign Status of Beneficial Owner for United States Tax Withholding\"\", certifies that you are not an American for tax purposes, so they won't withhold tax on your U.S. income. You're also to use W-8BEN to identify your country of residence and corresponding tax identification number for tax treaty purposes. For instance, if you live in the U.K., which has a tax treaty with the U.S., your W-8BEN would indicate to the U.S. that you are not an American, and that your U.S. income is to be taxed by the U.K. instead of tax withheld in the U.S. I've filled in that form a couple of times when opening stock trading accounts here in Canada. It was requested by the broker because in all likelihood I'd end up purchasing U.S.-listed stocks that would pay dividends. The W-8BEN is needed in order to reduce the U.S. withholding taxes on those dividends. So I would say that the ad revenue provider is requesting you file one so they don't need to withhold full U.S. taxes on your ad revenue. Detailed instructions on the W-8BEN form are also available from the IRS: Instruction W-8BEN (PDF link). On the subject of ad revenue, Google also has some information about W8-BEN: Why can't I submit a W8-BEN form as an individual?\"",
"title": ""
},
{
"docid": "4d7824e28da94256bd9c4b118c6fa7ff",
"text": "Shares used to be paper documents, but these days they are more commonly held electronically instead, although this partly depends on what country you're in. But it doesn't make any significant practical difference. Regardless of their physical form, a share simply signifies that you own a certain proportion of a company, and are thus entitled to receive any dividends that may be paid to the shareholders. To sell your shares, you need a broker -- there are scores of online ones who will sell them for a modest fee. Your tax forms are entirely dependent on the jurisdiction(s) that tax you, and since you've not told us where you are, no one can answer that.",
"title": ""
},
{
"docid": "915ee91396f3b08a0d4af728c8f3d5da",
"text": "\"According to the IRS, you must have written confirmation from your broker \"\"or other agent\"\" whenever you sell shares using a method other than FIFO: Specific share identification. If you adequately identify the shares you sold, you can use the adjusted basis of those particular shares to figure your gain or loss. You will adequately identify your mutual fund shares, even if you bought the shares in different lots at various prices and times, if you: Specify to your broker or other agent the particular shares to be sold or transferred at the time of the sale or transfer, and Receive confirmation in writing from your broker or other agent within a reasonable time of your specification of the particular shares sold or transferred. If you don't have a stockbroker, I'm not sure how you even got the shares. If you have an actual stock certificate, then you are selling very specific shares and the purchase date corresponds to the purchase date of those shares represented on the certificate.\"",
"title": ""
},
{
"docid": "e0d17415eded90e62972b593b0bcd960",
"text": "\"Your employer should send you a statement with this information. If they didn't, you should still be able to find it through E*Trade. Navigate to: Trading & Portfolios>Portfolios. Select the stock plan account. Under \"\"Restricted Stock\"\", you should see a list of your grants. If you click on the grant in question, you should see a breakdown of how many shares were vested and released by date. It will also tell you the cost basis per share and the amount of taxes withheld. You calculate your cost basis by multiplying the number of released shares by the cost basis per share. You can ignore the ordinary income tax and taxes withheld since they will already have been included on your W2 earnings and withholdings. Really all you need to do is report the capital gain or loss from the cost basis (which if you sold right away will be rather small).\"",
"title": ""
},
{
"docid": "b56a85f57234547f3c59a0a0c730b0b9",
"text": "Yes. As long as the stock is in a taxable account (i.e. not a tax deferred retirement account) you'll pay gain on the profit regardless of subsequent purchases. If the sale is a loss, however, you'll risk delaying the claim for the loss if you repurchase identical shares within 30 days of that sale. This is called a wash sale.",
"title": ""
},
{
"docid": "c11fe5f13315fd4fb806ae0e7b291386",
"text": "\"As far as I know, the answer to this is generally \"\"no.\"\" The closest thing would be to identify the stock transfer company representing the company that you want to hold and buy through them. (I have held this way, but I don't know if it's available on all stocks.) This eliminates the broker, but there's still a \"\"middle man\"\" in the transfer company. Note this section from the Stock transfer agent Wikipedia article: A public company usually only designates one company to transfer its stock. Stock transfer agents also run annual meetings as inspector of elections, proxy voting, and special meetings of shareholders. They are considered the official keeper of the corporate shareholder records. The decision to have a single transfer company is a practical one, ensuring that there is one entity responsible for recording this data - Hence even if you could buy stock \"\"directly\"\" from the company that you want to own, it would likely still get routed through the transfer company for recording.\"",
"title": ""
}
] |
fiqa
|
6817f453459ead673e3eed58e8ccf718
|
How to calculate the rate of return on selling a stock?
|
[
{
"docid": "a1fb7f933ec57fb9d7f255a5ba2a5170",
"text": "You probably want the Internal Rate of Return (IRR), see http://en.wikipedia.org/wiki/Internal_rate_of_return which is the compound interest rate that would produce your return. You can compute it in a spreadsheet with XIRR(), I made an example: https://spreadsheets.google.com/ccc?key=0AvuTW2HtDQfYdEsxVlM0RFdrRk1QS1hoNURxZkVFN3c&hl=en You can also use a financial calculator, or there are probably lots of web-based calculators such as the ones people have mentioned.",
"title": ""
},
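As a rough companion to the XIRR() suggestion above, here is a minimal Python sketch that solves for the internal rate of return of dated cash flows by bisection. The dates and amounts are hypothetical, the function names are my own, and it assumes the NPV changes sign somewhere between the two starting rates (true for ordinary invest-then-withdraw cash flows).

```python
from datetime import date

def xnpv(rate, cashflows):
    """Net present value of dated cash flows at an annual rate (Actual/365)."""
    t0 = cashflows[0][0]
    return sum(cf / (1.0 + rate) ** ((d - t0).days / 365.0) for d, cf in cashflows)

def xirr(cashflows, lo=-0.99, hi=10.0, tol=1e-7):
    """Solve xnpv(rate) = 0 by bisection, like a spreadsheet XIRR()."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if xnpv(lo, cashflows) * xnpv(mid, cashflows) <= 0:
            hi = mid          # the sign change lies in [lo, mid]
        else:
            lo = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2.0

if __name__ == "__main__":
    flows = [                       # hypothetical example; negative = money in
        (date(2023, 1, 1), -1000),  # buy $1,000 of stock
        (date(2023, 7, 1), -500),   # add $500 mid-year
        (date(2024, 1, 1), 1700),   # value realized when sold
    ]
    print(f"annualized IRR ~ {xirr(flows):.2%}")
```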
{
"docid": "469dd93d4f1c4545dd7884fbca865007",
"text": "Simple math. Take the sale proceeds (after trade expenses) and divide by cost. Subtract 1, and this is your return. For example, buy at 80, sell at 100, 100/80 = 1.25, your return is 25%. To annualize this return, multiply by 365 over the days you were in that stock. If the above stock were held for 3 months, you would have an annualized return of 100%. There's an alternative way to annualize, in the same example above take the days invested and dive into 365, here you get 4. I suggested that 25% x 4 = 100%. Others will ask why I don't say 1.25^4 = 2.44 so the return is 144%/yr. (in other words, compound the return, 1.25x1.25x...) A single day trade, noon to noon the next day returning just 1%, would multiply to 365% over a year, ignoring the fact there are about 250 trading days. But 1.01^365 is 37.78 or a 3678% return. For long periods, the compounding makes sense of course, the 8%/yr I hope to see should double my money in 9 years, not 12, but taking the short term trades and compounding creates odd results of little value.",
"title": ""
},
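Here is a small sketch of the arithmetic in the example above (buy at 80, sell at 100, hold roughly three months), showing both the simple multiply-up annualization and the compounded variant the answer contrasts it with. The 91-day holding period is an assumption standing in for "3 months".

```python
def simple_return(cost, proceeds):
    """Proceeds divided by cost, minus 1."""
    return proceeds / cost - 1.0

def annualized(holding_return, days_held, compound=False):
    """Scale a holding-period return to a year, with or without compounding."""
    periods_per_year = 365.0 / days_held
    if compound:
        return (1.0 + holding_return) ** periods_per_year - 1.0
    return holding_return * periods_per_year

r = simple_return(80, 100)                                       # 0.25 -> 25%
print(f"holding-period return: {r:.0%}")
print(f"annualized (simple):   {annualized(r, 91):.0%}")         # ~100%
print(f"annualized (compound): {annualized(r, 91, True):.0%}")   # ~145%; 144% with exactly 4 periods
```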
{
"docid": "ce932128386e9ac1e3bdbe0c347a0ad7",
"text": "If annualized rate of return is what you are looking for, using a tool would make it a lot easier. In the post I've also explained how to use the spreadsheet. Hope this helps.",
"title": ""
}
] |
[
{
"docid": "275df9312e040d3309fae20aff051c75",
"text": "Technically you should take the quarterly dividend yield as a fraction, add one, take the cube root, and subtract one (and then multiple by the stock price, if you want a dollar amount per share rather than a rate). This is to account for the fact that you could have re-invested the monthly dividends and earned dividends on that reinvestment. However, the difference between this and just dividing by three is going to be negligible over the range of dividend rates that are realistically paid out by ordinary stocks.",
"title": ""
},
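A quick sketch of the cube-root adjustment described above versus simply dividing the quarterly yield by three. The 1.2% quarterly yield and $50 share price are made-up numbers.

```python
def monthly_from_quarterly(quarterly_yield):
    """Monthly rate that compounds up to the given quarterly dividend yield."""
    return (1.0 + quarterly_yield) ** (1.0 / 3.0) - 1.0

q = 0.012          # hypothetical 1.2% quarterly yield
price = 50.0
exact = monthly_from_quarterly(q)
naive = q / 3.0
print(f"exact monthly rate: {exact:.5%}  (${price * exact:.4f} per share)")
print(f"divide-by-three:    {naive:.5%}  (${price * naive:.4f} per share)")
# For a yield like this the two differ by under 0.002 percentage points.
```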
{
"docid": "44f9f76999c87d2d86618c849ab259aa",
"text": "Well, one can easily have rates below -100%. Suppose I start with $100, and end up with $9 after a year. What was my rate of return? It could be -91%, -181%, -218%, or -241%, or something else, depending on the compounding method. We always have that the final amount equals the initial amount times a growth factor G, and we can express this using a rate r and a day count fraction T. In this case, we have T = 1, and B(T) = B(0) * 0.09, so: So, depending on how we compound, we have a rate of return of -91%, -181%, -218%, or -241%. This nicely illustrates that:",
"title": ""
},
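The four figures above (-91%, -181%, -218%, -241%) are consistent with simple, quarterly, monthly and continuous compounding of $100 turning into $9 over one year; the sketch below recomputes them under that reading.

```python
import math

start, end, T = 100.0, 9.0, 1.0
growth = end / start                              # B(T)/B(0) = 0.09

simple = (growth - 1.0) / T                       # B(T) = B(0)(1 + rT)
quarterly = 4 * (growth ** (1 / (4 * T)) - 1)     # B(T) = B(0)(1 + r/4)^(4T)
monthly = 12 * (growth ** (1 / (12 * T)) - 1)     # B(T) = B(0)(1 + r/12)^(12T)
continuous = math.log(growth) / T                 # B(T) = B(0)e^(rT)

for name, r in [("simple", simple), ("quarterly", quarterly),
                ("monthly", monthly), ("continuous", continuous)]:
    print(f"{name:>10}: {r:.0%}")
# Same investment, four different "rates": -91%, -181%, -218%, -241%.
```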
{
"docid": "7af4f32798568d7e60f0dbc247e02a37",
"text": "The price-earnings ratio is calculated as the market value per share divided by the earnings per share over the past 12 months. In your example, you state that the company earned $0.35 over the past quarter. That is insufficient to calculate the price-earnings ratio, and probably why the PE is just given as 20. So, if you have transcribed the formula correctly, the calculation given the numbers in your example would be: 0.35 * 4 * 20 = $28.00 As to CVRR, I'm not sure your PE is correct. According to Yahoo, the PE for CVRR is 3.92 at the time of writing, not 10.54. Using the formula above, this would lead to: 2.3 * 4 * 3.92 = $36.06 That stock has a 52-week high of $35.98, so $36.06 is not laughably unrealistic. I'm more than a little dubious of the validity of that formula, however, and urge you not to base your investing decisions on it.",
"title": ""
},
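A tiny sketch of the formula used in the answer above: annualize one quarter's EPS and multiply by the P/E. The function name is mine; the numbers are the ones quoted there.

```python
def price_from_quarterly_eps(quarterly_eps, pe_ratio):
    """Rough price implied by annualizing one quarter's EPS and applying a P/E."""
    return quarterly_eps * 4 * pe_ratio

print(f"{price_from_quarterly_eps(0.35, 20):.2f}")    # 28.00, as in the example
print(f"{price_from_quarterly_eps(2.30, 3.92):.2f}")  # 36.06, the CVRR figure quoted
```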
{
"docid": "c5158b4448a8dd6770b62826b77c8ee1",
"text": "In order to calculate the ratio you are looking for, just divide total debt by the market capitalization of the stock. Both values can be found on the link you provided. The market capitalization is the market value of equity.",
"title": ""
},
{
"docid": "6102ca35a6adf578632c2b0f37dadc2f",
"text": "\"Below I will try to explain two most common Binomial Option Pricing Models (BOPM) used. First of all, BOPM splits time to expiry into N equal sub-periods and assumes that in each period the underlying security price may rise or fall by a known proportion, so the value of an option in any sub-period is a function of its possible values in the following sub period. Therefore the current value of an option is found by working backwards from expiry date through sub-periods to current time. There is not enough information in the question from your textbook so we may assume that what you are asked to do is to find a value of a call option using just a Single Period BOPM. Here are two ways of doing this: First of all let's summarize your information: Current Share Price (Vs) = $70 Strike or exercise price (X) = $60 Risk-free rate (r) = 5.5% or 0.055 Time to maturity (t) = 12 months Downward movement in share price for the period (d) = $65 / $70 = 0.928571429 Upward movement in share price for the period (u) = 1/d = 1/0.928571429 = 1.076923077 \"\"u\"\" can be translated to $ multiplying by Vs => 1.076923077 * $70 = $75.38 which is the maximum probable share price in 12 months time. If you need more clarification here - the minimum and maximum future share prices are calculated from stocks past volatility which is a measure of risk. But because your textbook question does not seem to be asking this - you probably don't have to bother too much about it yet. Intrinsic Value: Just in case someone reading this is unclear - the Value of an option on maturity is the difference between the exercise (strike) price and the value of a share at the time of the option maturity. This is also called an intrinsic value. Note that American Option can be exercised prior to it's maturity in this case the intrinsic value it simply the diference between strike price and the underlying share price at the time of an exercise. But the Value of an option at period 0 (also called option price) is a price you would normally pay in order to buy it. So, say, with a strike of $60 and Share Price of $70 the intrinsic value is $10, whereas if Share Price was $50 the intrinsic value would be $0. The option price or the value of a call option in both cases would be fixed. So we also need to find intrinsic option values when price falls to the lowest probable and rises to the maximum probable (Vcd and Vcu respectively) (Vcd) = $65-$60 = $5 (remember if Strike was $70 then Vcd would be $0 because nobody would exercise an option that is out of the money) (Vcu) = $75.38-$60 = $15.38 1. Setting up a hedge ratio: h = Vs*(u-d)/(Vcu-Vcd) h = 70*(1.076923077-0.928571429)/(15.38-5) = 1 That means we have to write (sell) 1 option for each share purchased in order to hedge the risks. You can make a simple calculation to check this, but I'm not going to go into too much detail here as the equestion is not about hedging. Because this position is risk-free in equilibrium it should pay a risk-free rate (5.5%). Then, the formula to price an option (Vc) using the hedging approach is: (Vs-hVc)(e^(rt))=(Vsu-hVcu) Where (Vc) is the value of the call option, (h) is the hedge ratio, (Vs) - Current Share Price, (Vsu) - highest probable share price, (r) - risk-free rate, (t) - time in years, (Vcu) - value of a call option on maturity at the highest probable share price. 
Therefore solving for (Vc): (70-1*Vc)(e^(0.055*(12/12))) = (75.38-1*15.38) => (70-Vc)*1.056540615 = 60 => 70-Vc = 60/1.056540615 => Vc = 70 - (60/1.056540615) Which is similar to the formula given in your textbook, so I must assume that using 1+r would be simply a very close approximation of the formula above. Then it is easy to find that Vc = 13.2108911402 ~ $13.21 2. Risk-neutral valuation: Another way to calculate (Vc) is using a risk-neutral approach. We first introduce a variable (p) which is a risk-neutral probability of an increase in share price. p = (e^(r*t)-d)/(u-d) so in your case: p = (1.056540615-0.928571429)/(1.076923077-0.928571429) = 0.862607107 Therefore using (p) the (Vc) would be equal: Vc = [pVcu+(1-p)Vcd]/(e^(rt)) => Vc = [(0.862607107*15.38)+(0.137392893*5)]/1.056540615 => Vc = 13.2071229185 ~ $13.21 As you can see it is very close to the hedging approach. I hope this answers your questions. Also bear in mind that there is much more to the option pricing than this. The most important topics to cover are: Multi-period BOPM Accounting for Dividends Black-Scholes-Merton Option Pricing Model\"",
"title": ""
},
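The single-period example above maps directly onto a few lines of Python. This is a sketch of just the one-period case with the inputs from the passage (Vs = 70, X = 60, r = 5.5%, t = 1 year, low price $65); the function name and structure are my own, and it assumes the up and down payoffs differ so the hedge ratio is defined.

```python
import math

def one_period_binomial_call(s, k, r, t, low_price):
    """Price a call on a single-period binomial tree where d = low_price / s and u = 1 / d."""
    d = low_price / s
    u = 1.0 / d
    growth = math.exp(r * t)
    vcu = max(u * s - k, 0.0)   # call payoff in the up state
    vcd = max(d * s - k, 0.0)   # call payoff in the down state

    # Risk-neutral valuation: p = (e^(rt) - d) / (u - d)
    p = (growth - d) / (u - d)
    rn_price = (p * vcu + (1.0 - p) * vcd) / growth

    # Hedging argument: long 1 share, short h calls, portfolio earns the risk-free rate.
    h = s * (u - d) / (vcu - vcd)
    hedge_price = (s - (s * u - h * vcu) / growth) / h
    return rn_price, hedge_price

rn, hedge = one_period_binomial_call(s=70.0, k=60.0, r=0.055, t=1.0, low_price=65.0)
print(f"risk-neutral: {rn:.2f}   hedging: {hedge:.2f}")   # both ~13.21
```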
{
"docid": "a12da22d330b7e220f7cd8e070ac02ec",
"text": "\"You can calculate the \"\"return on investment\"\" using libreoffice, for example. Look at the xirr function. You would have 2 columns, one a list of dates (ie the dates of the deposits or dividends or whatever that you want to track, the last entry would be today's date and the value of the investment today. The xirr function calculates the internal rate of return for you. If you add money to the account, and the current value includes the original investment and the added funds, it will be difficult to calculate the ROI. If you add money by purchasing additional shares (or redepositing dividends by buying additional shares), and you only want to track the ROI of the initial investment (ignoring future investments), you would have to calculate the current value of all of the added shares (that you don't want to include in the ROI) and subtract that value from the current total value of the account. But, if you include the dates and values of these additional share purchases in the spreadsheet, xirr will calculate the overall IRR for you.\"",
"title": ""
},
{
"docid": "a553405f8eccfb06d6fae1018d4ab54a",
"text": "\"For a retail investor who isn't a Physics or Math major, the \"\"Beta\"\" of the stock is probably the best way to quantify risk. Examples: A Beta of 1 means that a stock moves in line with the market. Over 1 means that you would expect the stock to move up or down faster than the market as a whole. Under 1 means that you would expect the stock to move slower than the market as a whole.\"",
"title": ""
},
{
"docid": "5ee820eda84b17c1564e86100cc24e34",
"text": "Securities change in prices. You can buy ten 10'000 share of a stock for $1 each one day on release and sell it for $40 each if you're lucky in the future for a gross profit of 40*10000 = 400'0000",
"title": ""
},
{
"docid": "bf0540111a2051185227f72005547c32",
"text": "\"Generally if you are using FIFO (first in, first out) accounting, you will need to match the transactions based on the number of shares. In your example, at the beginning of day 6, you had two lots of shares, 100 @ 50 and 10 @ 52. On that day you sold 50 shares, and using FIFO, you sold 50 shares of the first lot. This leaves you with 50 @ 50 and 10 @ 52, and a taxable capital gain on the 50 shares you sold. Note that commissions incurred buying the shares increase your basis, and commissions incurred selling the shares decrease your proceeds. So if you spent $10 per trade, your basis on the 100 @ 50 lot was $5010, and the proceeds on your 50 @ 60 sale were $2990. In this example you sold half of the lot, so your basis for the sale was half of $5010 or $2505, so your capital gain is $2990 - 2505 = $485. The sales you describe are also \"\"wash sales\"\", in that you sold stock and bought back an equivalent stock within 30 days. Generally this is only relevant if one of the sales was at a loss but you will need to account for this in your code. You can look up the definition of wash sale, it starts to get complex. If you are writing code to handle this in any generic situation you will also have to handle stock splits, spin-offs, mergers, etc. which change the number of shares you own and their cost basis. I have implemented this myself and I have written about 25-30 custom routines, one for each kind of transaction that I've encountered. The structure of these deals is limited only by the imagination of investment bankers so I think it is impossible to write a single generic algorithm that handles them all, instead I have a framework that I update each quarter as new transactions occur.\"",
"title": ""
},
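The FIFO lot-matching described above (100 shares at 50 plus commission, then 10 at 52, selling 50 at 60 with $10 per trade) can be sketched as below. It is a minimal illustration only: wash sales, splits and the other corporate actions the answer mentions are not handled, and the helper names are made up.

```python
from collections import deque

def buy(lots, shares, price, commission):
    """Append a lot; the commission is added to the lot's cost basis."""
    lots.append({"shares": shares, "basis": shares * price + commission})

def sell_fifo(lots, shares, price, commission):
    """Consume lots first-in-first-out; return the realized capital gain."""
    proceeds = shares * price - commission
    basis_used = 0.0
    remaining = shares
    while remaining > 0:
        lot = lots[0]
        take = min(remaining, lot["shares"])
        consumed = lot["basis"] * take / lot["shares"]   # pro-rate the lot's basis
        basis_used += consumed
        lot["basis"] -= consumed
        lot["shares"] -= take
        if lot["shares"] == 0:
            lots.popleft()
        remaining -= take
    return proceeds - basis_used

lots = deque()
buy(lots, 100, 50.0, 10.0)   # basis 5010
buy(lots, 10, 52.0, 10.0)    # basis 530
gain = sell_fifo(lots, 50, 60.0, 10.0)
print(f"realized gain: {gain:.2f}")   # 485.00, matching the example
```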
{
"docid": "fdbe399b50e8f270715d8522907f7202",
"text": "What you're missing is the continuous compounding computation doesn't work that way. If you compound over n periods of time and a rate of return of r, the formula is e^(r*n), as you have to multiply the returns together with a mulitplicative base of 1. Otherwise consider what 0 does to your formula. If I get a zero return, I have a zero result which doesn't make sense. However, in my formula I'd still get the 1 which is what I'm starting and thus the no effect is the intended result. Continuous compounding would give e^(-.20*12) = e^(-2.4) = .0907 which is a -91% return so for each $100 invested, the person ends up with $9.07 left at the end. It may help to picture that the function e^(-x) does asymptotically approach zero as x tends to infinity, but that is as bad as it can get, so one doesn't cross into the negative unless one wants to do returns in a Complex number system with imaginary numbers in here somehow. For those wanting the usual compounding, here would be that computation which is more brutal actually: For your case it would be (1-.20)^12=(0.8)^12=0.068719476736 which is to say that someone ends up with 6.87% in the end. For each $100 had in the beginning they would end with $6.87 in the end. Consider someone starting with $100 and take 20% off time and time again you'd see this as it would go down to $80 after the first month and then down to $64 the second month as the amount gets lower the amount taken off gets lower too. This can be continued for all 12 terms. Note that the second case isn't another $20 loss but only $16 though it is the same percentage overall. Some retail stores may do discounts on discounts so this can happen in reality. Take 50% off of something already marked down 50% and it isn't free, it is down 75% in total. Just to give a real world example where while you think a half and a half is a whole, taking half and then half of a half is only three fourths, sorry to say. You could do this with an apple or a pizza if you want a food example to consider. Alternatively, consider the classic up and down case where an investment goes up 10% and down 10%. On the surface, these should cancel and negate each other, right? No, in fact the total return is down 1% as the computation would be (1.1)(.9)=.99 which is slightly less than 1. Continuous compounding may be a bit exotic from a Mathematical concept but the idea of handling geometric means and how compounding returns comes together is something that is rather practical for people to consider.",
"title": ""
},
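A short sketch contrasting the two computations in the answer above, continuous compounding e^(r*n) versus per-period compounding (1 + r)^n, for a -20% monthly return over 12 months.

```python
import math

r, n, start = -0.20, 12, 100.0

continuous = start * math.exp(r * n)      # 100 * e^(-2.4) ~ $9.07 left
discrete   = start * (1.0 + r) ** n       # 100 * 0.8**12  ~ $6.87 left

print(f"continuous compounding leaves ${continuous:.2f}")
print(f"monthly compounding leaves    ${discrete:.2f}")
# The 'up 10% then down 10%' case from the same answer: 1.1 * 0.9 = 0.99, a 1% overall loss.
print(f"up 10% then down 10%: {1.1 * 0.9:.2f}x")
```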
{
"docid": "e7b44d6fb01103d972318fdd1aa04c52",
"text": "\"You'll generally get a number close to market cap of a mature company if you divide profits (or more accurately its free cash flow to equity) by the cost of equity which is usually something like ~7%. The value is meant to represent the amount of cash you'd need to generate investment income off it matching the company you're looking at. Imagine it as asking \"\"How much money do I need to put into the bank so that my interest income would match the profits of the company I'm looking at\"\". Except replace the bank with the market and other forms of investments that generate higher returns of course and that value would be lower.\"",
"title": ""
},
{
"docid": "4f532d58c93660b445922f2c46034831",
"text": "Thanks for showing me that. I can see it now. I have always used my formula, and even a senior at another company confirmed the way I calculated the returns. Luckily, I do not work with that manager, and he has his own model, and so do I. But he was pretty cool about it when I asked about his calculations.",
"title": ""
},
{
"docid": "a346ee2542db4507de800e5de36fc933",
"text": "\"So, there is no truly \"\"correct\"\" way to calculate return. Professionals will often calculate many different rates of return depending on what they wish to understand about their portfolio. However, the two most common ways of calculating multi-period return though are time-weighted return and money-weighted return. I'll leave the details to this good Investopeadia article, but the big picture is time-weighted returns help you understand how the stock performed during the period in question independent of how you invested it it. Whereas money-weighted return helps you understand how you performed investing in the stock in question. From your question, it appears both methods would be useful in combination to help you evaluate your portfolio. Both methods should be fairly easy to calculate yourself in a spread sheet, but if you are interested there are plenty of examples of both in google docs on the web.\"",
"title": ""
},
{
"docid": "e708f9f70f348131c33139a46aa03b34",
"text": "One thing to keep in mind when calculating P/E on an index is that the E (earnings) can be very close to zero. For example, if you had a stock trading at $100 and the earnings per share was $.01, this would result in a P/E of 10,000, which would dominate the P/E you calculate for the index. Of course negative earnings also skew results. One way to get around this would be to calculate the average price of the index and the earnings per share of the index separately, and then divide the average price of the index by the average earnings per share of the index. Different sources calculate these numbers in different ways. Some throw out negative P/Es (or earnings per share) and some don't. Some calculate the price and earnings per share separate and some don't, etc... You'll need to understand how they are calculating the number in order to compare it to PEs of individual companies.",
"title": ""
},
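A toy calculation can make the point about near-zero earnings concrete. The three stocks and their numbers below are made up for illustration; the passage does not prescribe any particular method, so treat this as a sketch of the two aggregation choices it mentions.

```python
# Hypothetical index members: (price per share, earnings per share)
stocks = [(100.0, 0.01), (50.0, 5.0), (80.0, 4.0)]

# Averaging per-stock P/E ratios lets the near-zero earner dominate
per_stock_pe = [p / e for p, e in stocks]            # [10000.0, 10.0, 20.0]
avg_of_pes = sum(per_stock_pe) / len(per_stock_pe)   # ~3343

# Dividing aggregate price by aggregate earnings behaves far better
aggregate_pe = sum(p for p, _ in stocks) / sum(e for _, e in stocks)  # ~25.5

print(round(avg_of_pes, 1), round(aggregate_pe, 1))
```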
{
"docid": "e9343d55d9c40a3883063ddba8f15f63",
"text": "\"at $8.50: total profit = $120.00 *basis of stock, not paid in cash, so not included in \"\"total paid\"\" at $8.50: total profit = $75.00\"",
"title": ""
}
] |
fiqa
|
54b970adb33ca98d26bd1f291c962172
|
Can one use dollar cost averaging to make money with something highly volatile?
|
[
{
"docid": "8d3c46645af4eaa9727fc0784df921fd",
"text": "As you mentioned in the title, what you're asking about comes down to volatility. DCA when purchasing stock is one way of dealing with volatility, but it's only profitable if the financial instrument can be sold higher than your sunk costs. Issues to be concerned with: Let's suppose you're buying a stock listed on the NYSE called FOO (this is a completely fake example). Over the last six days, the average value of this stock was exactly $1.00Note 1. Over six trading days you put $100 per day into this stockNote 2: At market close on January 11th, you have 616 shares of FOO. You paid $596.29 for it, so your average cost (before fees) is: $596.29 / 616 = $0.97 per share Let's look at this including your trading fees: ($596.29 + $30) / 616 = $1.01 per share. When the market opens on January 12th, the quote on FOO could be anything. Patents, customer wins, wars, politics, lawsuits, press coverage, etc... could cause the value of FOO to fluctuate. So, let's just roll with the assumption that past performance is consistent: Selling FOO at $0.80 nets: (616 * $0.80 - $5) - ($596.29 + $30) = $123.49 Loss Selling FOO at $1.20 nets: (616 * $1.20 - $5) - ($596.29 + $30) = $107.90 Profit Every day that you keep trading FOO, those numbers get bigger (assuming FOO is a constant value). Also remember, even if FOO never changes its average value and volatility, your recoverable profits shrink with each transaction because you pay $5 in fees for every one. Speaking from experience, it is very easy to paper trade. It is a lot harder when you're looking at the ticker all day when FOO has been $0.80 - $0.90 for the past four days (and you're $300 under water on a $1000 portfolio). Now your mind starts playing nasty games with you. If you decide to try this, let me give you some free advice: Unless you have some research (such as support / resistance information) or data on why FOO is a good buy at this price, let's be honest: you're gambling with DCA, not trading. END NOTES:",
"title": ""
},
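Plugging the passage's own FOO numbers into a short script (a sketch only; the share count, cost basis and the $5-per-trade fee are taken straight from the example above) reproduces the sell-at-$1.20 result exactly, while the stated sell-at-$0.80 formula actually evaluates to a loss of about $138.49 rather than $123.49, which looks like a small arithmetic slip in the original.

```python
shares = 616
cost_basis = 596.29   # total paid for the shares themselves
buy_fees = 30.00      # six $5 commissions
sell_fee = 5.00

avg_cost = cost_basis / shares                         # ~0.97 per share
avg_cost_with_fees = (cost_basis + buy_fees) / shares  # ~1.017 per share (the passage rounds this to $1.01)

def net_result(sell_price):
    """Proceeds from selling everything, minus all money put in (fees included)."""
    return shares * sell_price - sell_fee - (cost_basis + buy_fees)

print(round(net_result(0.80), 2))  # ~-138.49: a loss
print(round(net_result(1.20), 2))  # ~+107.91: a profit
```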
{
"docid": "0c90a362a5ab130931027c1124414375",
"text": "if you know when and by how much something will fluctuate, you can always make money. Buy it when it's cheaper and sell it when it's more expensive. If you just know that it fluctuated a lot recently, then you don't know what it will do next. Most securities that go to zero or go much higher bounce all over the place for a while first. But you don't know when they'll move decisively lower or higher. So how could you figure out if you'll make money - you can't know. DCA will on average make you better off, unless the extra commissions are too high relative to your purchase sizes. But it will in retrospect make you worse off in many particular cases. This is true of many investment disciplines, such as rebalancing. They are all based on averages. If the volatility is random then on average you can buy more shares when the price is lower using DCA. But when the lowest price turns out to have been on a certain day, you'd have been better off with a single lump sum put in on that day. No way to know in advance. Degree of volatility shouldn't matter; any fluctuation is enough for DCA or rebalancing to get you ahead, though it's true they get you ahead farther if the fluctuations are larger, since there's then more difference between DCA and a lump purchase. I think the real reason to do DCA and rebalancing is risk control. They reduce the risk of putting a whole lump sum in on exactly the wrong day. And they can help keep a portfolio growing even if the market is stagnant.",
"title": ""
},
{
"docid": "6ee95a4848cb93fbdb1f31a78e483bf2",
"text": "That doesn't sound like dollar cost averaging. That sounds like a form of day trading. Dollar cost averaging is how most people add money to their 401K, or how they add money to some IRA accounts. You are proposing a form of day trading.",
"title": ""
},
{
"docid": "90a0d7e413f92d7ff344b6cf2db64f1f",
"text": "Dollar cost averaging is beneficial if you don't have the money to make large investments but are able to add to your holding over time. If you can buy the same monetary amount at regular intervals over time, your average cost per share will be lower than the stock's average value over that time. This won't necessarily get you the best price, but it will get you, on the whole, a good price and will enable you to increase your holdings over time. If you're doing frequent trading on a highly volatile stock, you don't want to use this method. A better strategy is to buy the dips: Know the range, and place limit orders toward the bottom of the range. Then place limit orders to sell toward the high end of the range. If you do it right, you might be able to build up enough money to buy and sell increasing numbers of shares over time. But like any frequent trader, you'll have to deal with transaction fees; you'll need to be sure the fees don't eat all your profit.",
"title": ""
}
] |
[
{
"docid": "6a54e644b5544df0d9b26eb811dd81af",
"text": "You can't tell for sure. If there was such a technique then everyone would use it and the price would instantly change to reflect the future price value. However, trade volume does say something. If you have a lemonade stand and offer a large glass of ice cold lemonade for 1c on a hot summer day I'm pretty sure you'll have high trading volume. If you offer it for $5000 the trading volume is going to be around zero. Since the supply of lemonade is presumably limited at some point dropping the price further isn't going to increase the number of transactions. Trade volumes reflect to some degree the difference of valuations between buyers and sellers and the supply and demand. It's another piece of information that you can try looking at and interpreting. If you can be more successful at this than the majority of others on the market (not very likely) you may get a small edge. I'm willing to bet that high frequency trading algorithms factor volume into their trading decisions among multiple other factors.",
"title": ""
},
{
"docid": "eca7b08aae740dccd9c59d0ec0679496",
"text": "Canadian Couch Potato has an article which is somewhat related. Ask the Spud: Can You Time the Markets? The argument roughly boils down to the following: That said, I didn't follow the advice. I inherited a sum of money, more than I had dealt with before, and I did not feel I was emotionally capable of immediately dumping it into my portfolio (Canadian stocks, US stocks, world stocks, Canadian bonds, all passive indexed mutual funds), and so I decided to add the money into my portfolio over the course of a year, twice a month. The money that I had not yet invested, I put into a money market account. That worked for me because I was purchasing mutual funds with no transaction costs. If you are buying ETFs, this strategy makes less sense. In hindsight, this was not financially prudent; I'd have been financially better off to buy all the mutual funds right at the beginning. But I was satisfied with the tradeoff, knowing that I did not have hindsight and I would have been emotionally hurt had the stock market crashed. There must be research that would prove, based on past performance, the statistically optimal time frame for dollar-cost averaging. However, I strongly suppose that the time frame is rather small, and so I would advise that you either invest the money immediately, or dollar-cost average your investment over the course of the year. This answer is not an ideal answer to your question because it is lacking such a citation.",
"title": ""
},
{
"docid": "818f4cb44f509dfe75279353ce92a310",
"text": "In general, lump sum investing will tend to outperform dollar cost averaging because markets tend to increase in value, so investing more money earlier will generally be a better strategy. The advantage of dollar cost averaging is that it protects you in times when markets are overvalued, or prior to market corrections. As an extreme example, if you done a lump-sum investment in late 2008 and then suffered through the subsequent market crash, it may have taken you 2-3 years to get back to even. If you began a dollar cost averaging investment plan in late 2008, it may have only taken you a 6 months to get back to even. Dollar cost averaging can also help to reduce the urge to time the market, which for most investors is definitely a good thing.",
"title": ""
},
{
"docid": "64e8a098d6ef3e03a3f2c464e91a5ec2",
"text": "As you point out, the moving average is just MA(k)t = (Pt-1 + … + Pt-k )/k and is applied in technical analysis (TA) to smooth out volatile (noise) price action. If it has any logic to it, you might want to think in terms of return series (Pt - Pt-1 / Pt-1) and you could hypothesize that prices are in fact predictable and will oscillate below and above a running moving average. Below is a link to a study on MA trading rules, published in the Journal of Finance, with the conclusion of predictive power and abnormal returns from such strategies. As with any decision made upon historical arguments, one should be aware of structural changes and or data mining. Simple technical trading rules and the stochastic properties of stock returns Brock, W., J. Lakonishok and B. Le Baron, 1992, Simple technical trading rules and the stochastic properties of stock returns, Journal of Finance, 47, 1731-64. MA rules betterthan chance in US stock market, 1897-1986 I don't know whether you are new to TA or not, but a great commercial site, with plenty of computer-generated signals is FinViz.",
"title": ""
},
{
"docid": "4b6da6db0482f0c3ee1f3176632c122c",
"text": "I frequently do this on NADEX, selling out-of-the-money binary calls. NADEX is highly illiquid, and the bid/ask is almost always from the market maker. Out-of-the-money binary calls lose value quickly (NADEX daily options exist for only ~21 hours). If I place an above-ask order, it either gets filled quickly (within a few minutes) due to a spike in the underlying, or not at all. I compensate by changing my price hourly. As Joe notes, one of Black-Scholes inputs is volatility, but price determines (implied) volatility, so this is circular. In other words, you can treat the bid/ask prices as bid/ask volatilities. This isn't as far-fetched as it seems: http://www.cmegroup.com/trading/fx/volatility-quoting-fx-options.html",
"title": ""
},
{
"docid": "53bb45d891a7bec4bad44ba09a8080bb",
"text": "\"I'm just trying to visualize the costs of trading. Say I set up an account to trade something (forex, stock, even bitcoin) and I was going to let a random generator determine when I should buy or sell it. If I do this, I would assume I have an equal probability to make a profit or a loss. Your question is what a mathematician would call an \"\"ill-posed problem.\"\" It makes it a challenge to answer. The short answer is \"\"no.\"\" We will have to consider three broad cases for types of assets and two time intervals. Let us start with a very short time interval. The bid-ask spread covers the anticipated cost to the market maker of holding an asset bought in the market equal to the opportunity costs over the half-life of the holding period. A consequence of this is that you are nearly guaranteed to lose money if your time interval between trades is less than the half-life of the actual portfolio of the market maker. To use a dice analogy, imagine having to pay a fee per roll before you can gamble. You can win, but it will be biased toward losing. Now let us go to the extreme opposite time period, which is that you will buy now and sell one minute before you die. For stocks, you would have received the dividends plus any stocks you sold from mergers. Conversely, you would have had to pay the dividends on your short sales and received a gain on every short stock that went bankrupt. Because you have to pay interest on short sales and dividends passed, you will lose money on a net basis to the market maker. Maybe you are seeing a pattern here. The phrase \"\"market maker\"\" will come up a lot. Now let us look at currencies. In the long run, if the current fiat money policy regime holds, you will lose a lot of money. Deflation is not a big deal under a commodity money regime, but it is a problem under fiat money, so central banks avoid it. So your long currency holdings will depreciate. Your short would appreciate, except you have to pay interest on them at a rate greater than the rate of inflation to the market maker. Finally, for commodities, no one will allow perpetual holding of short positions in commodities because people want them delivered. Because insider knowledge is presumed under the commodities trading laws, a random investor would be at a giant disadvantage similar to what a chess player who played randomly would face against a grand master chess player. There is a very strong information asymmetry in commodity contracts. There are people who actually do know how much cotton there is in the world, how much is planted in the ground, and what the demand will be and that knowledge is not shared with the world at large. You would be fleeced. Can I also assume that probabilistically speaking, a trader cannot do worst than random? Say, if I had to guess the roll of a dice, my chance of being correct can't be less than 16.667%. A physicist, a con man, a magician and a statistician would tell you that dice rolls and coin tosses are not random. While we teach \"\"fair\"\" coins and \"\"fair\"\" dice in introductory college classes to simplify many complex ideas, they also do not exist. If you want to see a funny version of the dice roll game, watch the 1962 Japanese movie Zatoichi. It is an action movie, but it begins with a dice game. Consider adopting a Bayesian perspective on probability as it would be a healthier perspective based on how you are thinking about this problem. A \"\"frequency\"\" approach always assumes the null model is true, which is what you are doing. 
Had you tried this will real money, your model would have been falsified, but you still wouldn't know the true model. Yes, you can do much worse than 1/6th of the time. Even if you are trying to be \"\"fair,\"\" you have not accounted for the variance. Extending that logic, then for an inexperienced trader, is it right to say then that it's equally difficult to purposely make a loss then it is to purposely make a profit? Because if I can purposely make a loss, I would purposely just do the opposite of what I'm doing to make a profit. So in the dice example, if I can somehow lower my chances of winning below 16.6667%, it means I would simply need to bet on the other 5 numbers to give myself a better than 83% chance of winning. If the game were \"\"fair,\"\" but for things like forex the rules of the game are purposefully changed by the market maker to maximize long-run profitability. Under US law, forex is not regulated by anything other than common law. As a result, the market maker can state any price, including prices far from the market, with the intent to make a system used by actors losing systems, such as to trigger margin calls. The prices quoted by forex dealers in the US move loosely with the global rates, but vary enough that only the dealer should make money systematically. A fixed strategy would promote loss. You are assuming that only you know the odds and they would let you profit from your 83.33 percentage chance of winning. So then, is the costs of trading from a purely probabilistic point of view simply the transaction costs? No matter what, my chances cannot be worse than random and if my trading system has an edge that is greater than the percentage of the transaction that is transaction cost, then I am probabilistically likely to make a profit? No, the cost of trading is the opportunity cost of the money. The transaction costs are explicit costs, but you have ignored the implicit costs of foregone interest and foregone happiness using the money for other things. You will want to be careful here in understanding probability because the distribution of returns for all of these assets lack a first moment and so there cannot be a \"\"mean return.\"\" A modal return would be an intellectually more consistent perspective, implying you should use an \"\"all-or-nothing\"\" cost function to evaluate your methodology.\"",
"title": ""
},
{
"docid": "1979822d496d842ae8f290bf237f6c14",
"text": "Depends on your time scale, but generally, I don't think it would work. What you'd really be betting on in this case is mean-reversal, which does not hold true in the equity universe (atleast not in the long run). If you look at the historical prices of the S&P, you'll notice it increases in terms of absolute dollar value. On the short term, however, if you feel the market has significantly undervalued or overvalued a security, then mean-reversal might be a reasonable bet to make. In that scenario, however, it seems to me that you are really looking for a volatility trade, in which case you might want to consider a straddle position using options. Here, the bet you'd be making is that the price at expiration will be inside a certain band (or outside the band, depending on your position).",
"title": ""
},
{
"docid": "ce6d317e89ec1170e735acd3e5886923",
"text": "\"Personally, I think you are approaching this from the wrong angle. You're somewhat correct in assuming that what you're reading is usually some kind of marketing material. Systematic Investment Plan (SIP) is not a universal piece of jargon in the financial world. Dollar cost averaging is a pretty universal piece of jargon in the financial world and is a common topic taught in finance classes in the US. On average, verified by many studies, individuals will generate better investment returns when they proactively avoid timing the market or attempting to pick specific winners. Say you decide to invest in a mutual fund, dollar cost averaging means you invest the same dollar amount in consistent intervals rather than buying a number of shares or buying sporadically when you feel the market is low. As an example I'll compare investing $50 per week on Wednesdays, versus 1 share per week on Wednesdays, or the full $850 on the first Wednesday. I'll use the Vanguard Large cap fund as an example (VLCAX). I realize this is not really an apples to apples comparison as the invested amounts are different, I just wanted to show how your rate of return can change depending on how your money goes in to the market even if the difference is subtle. By investing a common dollar amount rather than a common share amount you ultimately maintain a lower average share price while the share price climbs. It also keeps your investment easy to budget. Vanguard published an excellent paper discussing dollar cost averaging versus lump sum investing which concluded that you should invest as soon as you have funds, rather than parsing out a lump sum in to smaller periodic investments, which is illustrated in the third column above; and obviously worked out well as the market has been increasing. Ultimately, all of these companies are vying to customers so they all have marketing teams trying to figure out how to make their services sound interesting and unique. If they all called dollar cost averaging, \"\"dollar cost averaging\"\" none of them would appear to be unique. So they devise neat acronyms but it's all pretty much the same idea. Trickle your money in to your investments as the money becomes available to you.\"",
"title": ""
},
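The comparison the passage describes (a fixed dollar amount per week versus a fixed number of shares per week) can be sketched with made-up weekly prices. The prices below are hypothetical and are not VLCAX quotes; the table the original referred to is not reconstructed here.

```python
# Hypothetical weekly prices, for illustration only
prices = [25.00, 24.00, 26.00, 23.50, 25.50]

# Strategy A: invest a fixed $50 every week (dollar cost averaging)
dollars_per_week = 50.0
shares_a = sum(dollars_per_week / p for p in prices)
cost_a = dollars_per_week * len(prices)

# Strategy B: buy exactly one share every week, whatever it costs
shares_b = len(prices)
cost_b = sum(prices)

# Fixed-dollar buying ends up with a slightly lower average cost per share
print(round(cost_a / shares_a, 3))  # ~24.765
print(round(cost_b / shares_b, 3))  # ~24.800
```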
{
"docid": "cb4539d14a460c05bbedaebb6a7be667",
"text": "Trying to engage in arbitrage with the metal in nickels (which was actually worth more than a nickel already, last I checked) is cute but illegal, and would be more effective at an industrial scale anyway (I don't think you could make it cost-effective at an individual level). There are more effective inflation hedges than nickels and booze. Some of them even earn you interest. You could at least consider a more traditional commodities play - it's certainly a popular strategy these days. A lot of people shoot for gold, as it's a traditional hedge in a crisis, but there are concerns that particular market is overheated, so you might consider alternatives to that. Normal equities (i.e. the stock market) usually work out okay in an inflationary environment, and can earn you a return as they're doing so.... and it's not like commodities aren't volatile and subject to the whims of the world economy too. TIPs (inflation-indexed Treasury bonds) are another option with less risk, but also a weaker return (and still have interest rate risks involved, since those aren't directly tied to inflation either).",
"title": ""
},
{
"docid": "e54108f7f6de29e4d2c3706195a8385d",
"text": "Dollar Cost Averaging isn't usually the best idea for lump sum investment unless your risk tolerance is very low or your time horizons are low (in which case is the stock market the right place for your money). Usually you will do better by investing immediately. There are lots of articles around on the web about why DCA doesn't work over the long term. http://en.wikipedia.org/wiki/Dollar_cost_averaging http://www.efmoody.com/planning/dollarcost.html",
"title": ""
},
{
"docid": "634bdaeebed3f415b4930b0e86a0187e",
"text": "\"A big part of the answer depends on how \"\"beaten down\"\" the stock is, how long it will take to recover from the drop, and your taste for risk. If you honestly believe the drop is a temporary aberration then averaging down can be a good strategy to lower your dollar-cost average in the stock. But this is a huge risk if you're wrong, because now you're going to magnify your losses by piling on more stock that isn't going anywhere to the shares you already own at a higher cost. As @Mindwin pointed out correctly, the problem for most investors following an \"\"average down\"\" strategy is that it makes them much less likely to cut their losses when the stock doesn't recover. They basically become \"\"married\"\" to the stock because they've actualized their belief the stock will bounce back when maybe it never will or worse, drops even more.\"",
"title": ""
},
{
"docid": "473172c8942be1448d8003049b914273",
"text": "short answer: no, not to my knowledge long answer: why do you want to do that? crypto are very volatile and, in my opinion, if you are looking for a speculative exercise, you are better off seeking to understand basic technical analysis and trading stocks based on that",
"title": ""
},
{
"docid": "734dc1eac022f461a30d9161d3e9296a",
"text": "If you want to make money while European equities markets are crashing and the Euro itself is devaluing: None of these strategies are to be taken lightly. All involve risk. There are probably numerous ways that you can lose even though it seems like you should win. Transaction fees could eat your profits, especially if you have only a small amount of capital to invest with. The worst part is that they all involve timing. If you think the crash is coming next week, you could, say, buy a bunch of puts. But if the crash doesn't come for another 6 months, all of your puts are going to expire worthless and you've lost all of your capital. Even worse, if you sell short an index ETF this week in advance of next week's impending crash, and some rescue package arrives over the weekend, equity prices could spike at the beginning of the week and you'd be screwed.",
"title": ""
},
{
"docid": "c118a4ace670bb12c117ca5b19e52340",
"text": "Here is a deliberately simple example of Dollar Cost averaging: Day 1: Buy 100 shares at $10. Total value = $1,000. Average cost per share = $10.00/share (easy). Day 2: Buy 100 more shares at $9. Total value = $1,900. Average cost per share = $9.50/share (1,900/200). Notice how your average cost per share went from $10.00 to $9.50. Now instead of hoping the stock rises above $10.00 a share to make a profit, you only need it to go to $9.50 a share (assuming no commissions or transaction fees). It's easy to see how this could work to your advantage. The only catch is that you need buy more of a stock that is dropping (people might think you're crazy). This could easily backfire if the stock continues to drop.",
"title": ""
},
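The arithmetic in that example is easy to replicate. A minimal sketch, ignoring commissions just as the passage does until its final caveat:

```python
# Buying a fixed number of shares on two days, as in the example above
purchases = [(100, 10.00), (100, 9.00)]  # (shares, price) per day

total_shares = sum(s for s, _ in purchases)      # 200
total_paid = sum(s * p for s, p in purchases)    # 1,900.00
avg_cost = total_paid / total_shares             # 9.50 per share

# Break-even price drops from $10.00 to $9.50
print(total_shares, total_paid, avg_cost)
```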
{
"docid": "11c296332c95a014ec99e1c7587390cc",
"text": "I've worked at a bank, and even the best prop traders have low Sharpe ratios and large swings. I would advise that the average person without access to flow information does not a chance, and will end up losing eventually.",
"title": ""
}
] |
fiqa
|
52dc50bc2e4c5e9947b6901897322088
|
Pension or Property: Should I invest in more properties, or in a pension?
|
[
{
"docid": "2a4e0e930b1f26fb5c23824259d67121",
"text": "Diversification is one aspect to this question, and Dr Fred touches on its relationship to risk. Another aspect is leverage: So it again comes down to your appetite for risk. A further factor is that if you are successfully renting out your property, someone else is effectively buying that asset for you, or at least paying the interest on the mortgage. Just bear in mind that if you get into a situation where you have 10 properties and the rent on them all falls at the same time as the property market crashes (sound familiar?) then you can be left on the hook for a lot of interest payments and your assets may not cover your liabilities.",
"title": ""
},
{
"docid": "c517ef7ba52c41d23492de2239036a19",
"text": "Investing in property hoping that it will gain value is usually foolish; real estate increases about 3% a year in the long run. Investing in property to rent is labor-intensive; you have to deal with tenants, and also have to take care of repairs. It's essentially getting a second job. I don't know what the word pension implies in Europe; in America, it's an employer-funded retirement plan separate from personally funded retirement. I'd invest in personally funded retirement well before buying real estate to rent, and diversify my money in that retirement plan widely if I was within 10-20 years of retirement.",
"title": ""
},
{
"docid": "865a5ea962ecbf23aa7d29e646c44738",
"text": "\"I think the real answer to your question here is diversification. You have some fear of having your money in the market, and rightfully so, having all your money in one stock, or even one type of mutual fund is risky as all get out, and you could lose a lot of your money in such a stock-market based undiversified investment. However, the same logic works in your rental property. If you lose your tennant, and are unable to find a new one right away, or if you have some very rare problem that insurance doesn't cover, your property could become very much not a \"\"break even\"\" investment very quickly. In reality, there isn't any single investment you can make that has no risk. Your assets need to be balanced between many different market-investments, that includes bonds, US stocks, European stocks, cash, etc. Also investing in mutual funds instead of individual stocks greatly reduces your risk. Another thing to consider is the benefits of paying down debt. While investments have a risk of not performing, if you pay off a loan with interest payments, you definitely will save the money you would have paid in interest. To be specific, I'd recommend the following plan -\"",
"title": ""
}
] |
[
{
"docid": "47cea5f4c2bd6ef611d52e55975e7338",
"text": "I have done something similar to this myself. What you are suggesting is a sound theory and it works. The issues are (which is why it's the reason not everyone does it) : The initial cost is great, many people in their 20s or 30s cannot afford their own home, let alone buy second properties. The time to build up a portfolio is very long term and is best for a pension investment. it's often not best for diversification - you've heard not putting all your eggs in one basket? With property deposits, you need to put a lot of eggs in to make it work and this can leave you vulnerable. there can be lots of work involved. Renovating is a huge pain and cost and you've already mentioned tennants not paying! unlike a bank account or bonds/shares etc. You cannot get to your savings/investments quickly if you need to (or find an opportunity) But after considering these and deciding the plunge is worth it, I would say go for it, be a good landlord, with good quality property and you'll have a great nest egg. If you try just one and see how it goes, with population increase, in a safe (respectable) location, the value of the investment should continue to rise (which it doesn't in a bank) and you can expect a 5%+ rental return (very hard to find in cash account!) Hope it goes well!",
"title": ""
},
{
"docid": "2cd11f8d10fca96e0b515190f11ccc66",
"text": "I wouldn't go into a stock market related investment if you plan on buying a house in 4-5 years, you really need to tie money up in stocks for 10 years plus to be confident of a good return. Of course, you might do well in stocks over 4-5 years but historically it's unlikely. I'd look for a safe place to save some money for the deposit, the more deposit you can get the better as this will lower your loan to valuation (LTV) and therefore you may find you get a better interest rate for your mortgage. Regards the pension, are you paying the maximum you can into the company scheme? If not then top that up as much as you can, company schemes tend to be good as they have low charges, but check the documentation about that and make sure that is the case. Failing that stakeholder pension schemes can also have very low charges, have a look at what's available.",
"title": ""
},
{
"docid": "786ec22cacdc9b9bb03b1d5b85bd57a0",
"text": "Invest in kids, not pension - they never inflate. Without kids your retirement will be miserable anyway. And with them you'll be good. Personally, I do not believe that that our current savings will be worth it in 30 years in these times.",
"title": ""
},
{
"docid": "81f9e0cdef3a0e82ca2d085a310182fb",
"text": "The below assessment is for primary residences as opposed to income properties. The truth is that with the exception of a housing bubble, the value of a house might outpace inflation by one or two percent. According to the US Census, the price of a new home per square foot only went up 4.42% between 1963 and 2008, where as inflation was 4.4%. Since home sizes increased, the price of a new home overall outpaced inflation by 1% at 5.4% (source). According to Case-Shiller, inflation adjusted prices increased a measly .4% from 1890-2004 (see graph here). On the other hand your down payment money and the interest towards owning that home might be in a mutual fund earning you north of eight percent. If you don't put down enough of a down payment to avoid PMI, you'll be literally throwing away money to get yourself in a home that could also be making money. Upgrades to your home that increase its value - unless you have crazy do-it-yourself skills and get good deals on the materials - usually don't return 100% on an investment. The best tend to be around 80%. On top of the fact that your money is going towards an asset that isn't giving you much of a return, a house has costs that a rental simply doesn't have (or rather, it does have them, but they are wrapped into your rent) - closing costs as a buyer, realtor fees and closing costs as a seller, maintenance costs, and constantly escalating property taxes are examples of things that renters deal with only in an indirect sense. NYT columnist David Leonhart says all this more eloquently than I ever could in: There's an interactive calculator at the NYT that helps you apply Leonhart's criteria to your own area. None of this is to say that home ownership is a bad decision for all people at all times. I'm looking to buy myself, but I'm not buying as an investment. For example, I would never think that it was OK to stop funding my retirement because my house will eventually fund it for me. Instead I'm buying because home ownership brings other values than money that a rental apartment would never give me and a rental home would cost more than the same home purchase (given 10 years).",
"title": ""
},
{
"docid": "4647b65189f441f7930a360106a9f1bf",
"text": "I would go with the 2nd option (put down as little as possible) with a small caveat: avoid the mortgage insurance if you can and put down 20%. Holding your rental property(ies)'s mortgage has some benefits: You can write off the mortgage interest. In Canada you cannot write off the mortgage interest from your primary residence. You can write off stuff renovations and new appliances. You can use this to your advantage if you have both a primary residence and a rental property. Get my drift? P.S. I do not think it's a good time right now to buy a property and rent it out simply because the housing prices are over-priced. The rate of return of your investment is too low. P.S.2. I get the feeling from your question that you would like to purchase several properties in the long-term future. I would like to say that the key to good and low risk investing is diversification. Don't put all of your money into one basket. This includes real estate. Like any other investment, real estate goes down too. In the last 50 or so years real estate has only apprepriated around 2.5% per year. While, real estate is a good long term investment, don't make it 80% of your investment portfolio.",
"title": ""
},
{
"docid": "8a01424e83595065e20e56380b974ff5",
"text": "\"I don't know much about New Zealand, but here are just some general thoughts on things to consider. The big difference between buying a house and investing in stocks or the like is that it is fairly easy to invest in a diversified array of stocks (via a mutual fund), but if you buy a house, you are investing in a single piece of property, so everything depends on what happens with that specific property. This in itself is a reason many people don't invest in real estate. Shares of a given company or mutual fund are fungible: if you buy into a mutual fund, you know you're getting the same thing everyone else in the fund is getting. But every piece of real estate is unique, so figuring out how much a property is worth is less of an exact science. Also, buying real estate means you have to maintain it and manage it (or pay someone else to do so). It's a lot more work to accurately assess the income potential of a property, and then maintain and manage the property over years, than it is to just buy some stocks and hold them. Another difficulty is, if and when you do decide to sell the property, doing so again involves work. With stocks you can pretty much sell them whenever you want (although you may take a loss). With a house you have to find someone willing to buy it, which can take time. So a big factor to consider is the amount of effort you're prepared to put into your investment. You mention that your parents could manage the property for you, but presumably you will still have to pay for maintenance and do some managing work yourself (at least discussing things with them and making decisions). Also, if you own the property for a long time your parents will eventually become too old to take care of it, at which point you'll have to rethink the management aspect. So that's sort of the psychological side of things. As for the financial, you don't mention selling the house at any point. If you never sell it, the only gain you get from it is the rent it brings in. So the main factor to consider when deciding whether to buy it as a rental is how much you can rent it for. This is going to be largely determined by where it is located. So from the perspective of making an investment the big question --- which you don't address in the info you provided --- is: how much can you rent this house for, and how much will you be able to rent it for in the future? There is no way to know this for sure, and the only way to get even a rough sense of it is to talk with someone who knows the local real estate market well (e.g., a broker, appraiser, or landlord). If the property is in an \"\"up-and-coming\"\" area (i.e., more people are going to move there in the future), rents could skyrocket; if it's in a backwater, rents could remain stagnant indefinitely. Basically, if you're going to buy a piece of real estate as a long-term investment, you need to know a lot about that property in order to make any kind of comparison with another investment vehicle like a mutual fund. If you already live in the area you may know some things already (like how much you might be able to rent it for). Even so, though, you should try to get some advice from trustworthy people who know the local real estate situation.\"",
"title": ""
},
{
"docid": "ac29167b6acc16b82e5569c9733522b7",
"text": "\"Defined benefit pensions are generally seen as valuable, and hard to replace by investing on your own. So my default assumption would be to keep that pension, unless you think there's a significant risk the pension fund will become insolvent, in which case the earlier you can get out the better. Obviously, you need to look at the numbers. What is a realistic return you could get by investing that 115K? To compare like with like, what \"\"real\"\" investment returns (after subtracting inflation) are needed for it to provide you with $10800 income/year after age 60? Also, consider that the defined benefit insulates you from multiple kinds of risk: Remember that most of your assets are outside the pension and subject to all these risks already. Do you want to add to that risk by taking this money out of your pension? One intermediate strategy to look at - again for the purposes of comparison - is to take the money now, invest it for 10 years without withdrawing anything, then buy an annuity at age 60. If you're single, Canadian annuity rates for age 60 appear to be between 4-5% without index linking - it may not even be possible to get an index-linked annuity. Even without the index-linking you'd need to grow the $115K to about $240K in 10 years, implying taking enough risks to get a return of 7.6% per year, and you wouldn't have index-linking so your income would gradually drop in real terms.\"",
"title": ""
},
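The back-of-the-envelope figures in that answer can be reproduced like this. This is a sketch only; the 4.5% annuity rate is an assumption sitting in the middle of the 4-5% range the answer quotes.

```python
lump_sum = 115_000       # commuted value taken today
target_income = 10_800   # pension income being replaced, per year
annuity_rate = 0.045     # assumed, mid-point of the quoted 4-5% (no indexing)
years = 10               # time until age 60 in the example

# Capital needed at 60 to buy that income from an annuity
capital_needed = target_income / annuity_rate   # ~240,000

# Annual return required to grow the lump sum to that capital in 10 years
required_return = (capital_needed / lump_sum) ** (1 / years) - 1   # ~7.6%

print(round(capital_needed), round(required_return * 100, 1))
```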
{
"docid": "12b393f48f29a67fb2145c2685cdab24",
"text": "\"Some of the other answers recommended peer-to-peer lending and property markets. I would not invest in either of these. Firstly, peer-to-peer lending is not a traditional investment and we may not have enough historical data for the risk-to-return ratio. Secondly, property investments have a great risk unless you diversify, which requires a huge portfolio. Crowd-funding for one property is not a traditional investment, and may have drawbacks. For example, what if you disagree with other crowd-funders about the required repairs for the property? If you invest in the property market, I recommend a well-diversified fund that owns many properties. Beware of high debt leverage used to enhance returns (and, at the same time, risk) and high fees when selecting a fund. However, traditionally it has been a better choice to invest in stocks than to invest in property market. Beware of anyone who says that the property market is \"\"too good to not get into\"\" without specifying which part of the world is meant. Note also that many companies invest in properties, so if you invest only in a well-diversified stock index fund, you may already have property investments in your portfolio! However, in your case I would keep the money in risk-free assets, i.e. bank savings or a genuine low-cost money market fund (i.e. one that doesn't invest in corporate debt or in variable-rate loans which have short duration but long maturity). The reason is that you're going to be unemployed soon, and thus, you may need the money soon. If you have an investment horizon of, say, 10 years, then I would throw stocks into the mix, and if you're saving for retirement, then I would go all in to stocks. In the part of the world where I live in, money market funds generally have better return than bank savings, and better diversification too. However, your 2.8% interest sounds rather high (the money market fund I have in the past invested in currently yields at 0.02%, but then again I live in the eurozone), so be sure to get estimates for the yields of different risk-free assets. So, my advice for investing is simple: risk-free assets for short time horizon, a mixture of stocks and risk-free assets for medium time horizon, and only stocks for long time horizon. In any case, you need a small emergency fund, too, which you should consider a thing separate from your investments. My emergency fund is 20 000 EUR. Your 50 000 AUD is bit more than 30 000 EUR, so you don't really have that much money to invest, only a bit more than a reasonably sized emergency fund. But then again, I live in rental property, so my expenses are probably higher than yours. If you can foresee a very long time horizon for part of your investment, you could perhaps invest 50% of your money to stocks (preference being a geographically diversified index fund or a number of index funds), but I wouldn't invest more because of the need for an emergency fund.\"",
"title": ""
},
{
"docid": "9cb8d2713786a67c691618f992ccd148",
"text": "The assumption that house value appreciates 5% per year is unrealistic. Over the very long term, real house prices has stayed approximately constant. A house that is 10 years old today is 11 years old a year after, so this phenomenon of real house prices staying constant applies only to the market as a whole and not to an individual house, unless the individual house is maintained well. One house is an extremely poorly diversified investment. What if the house you buy turns out to have a mold problem? You can lose your investment almost overnight. In contrast to this, it is extremely unlikely that the same could happen on a well-diversified stock portfolio (although it can happen on an individual stock). Thus, if non-leveraged stock portfolio has a nominal return of 8% over the long term, I would demand higher return, say 10%, from a non-leveraged investment to an individual house because of the greater risks. If you have the ability to diversify your real estate investments, a portfolio of diversified real estate investments is safer than a diversified stock portfolio, so I would demand a nominal return of 6% over the long term from such a diversified portfolio. To decide if it's better to buy a house or to live in rental property, you need to gather all of the costs of both options (including the opportunity cost of the capital which you could otherwise invest elsewhere). The real return of buying a house instead of renting it comes from the fact that you do not need to pay rent, not from the fact that house prices tend to appreciate (which they won't do more than inflation over a very long term). For my case, I live in Finland in a special case of near-rental property where you pay 15% of the building cost when moving in (and get the 15% payment back when moving out) and then pay a monthly rent that is lower than the market rent. The property is subsidized by government-provided loans. I have calculated that for my case, living in this property makes more sense than purchasing a market-priced house, but your situation may be different.",
"title": ""
},
{
"docid": "8dd79db65f2185bdc8fe64923d0173c3",
"text": "\"Is the mortgage debt too high? The rental property is in a hot RE market, so could be easily sold with significant equity. However, they would prefer to keep it. Given the current income, there is no stress. However in absence of any other liquid [cash/near cash] assets, having everything locked into Mortgage is quite high. Even if real estate builds assets, these are highly illiquid investments. Have debt on such investments is risky; if there are no other investments. Essentially everything looks fine now, but if there is an crisis, unwinding mortgage debt is time consuming and if it forces distress sale, it would wipe out any gains. Can they afford another mortgage, and in what amount? (e.g. they are considering $50K for a small cabin, which could be rented out). I guess they can. But should they? Or diversify into other assets like stocks etc. Other than setting cash aside, what would be some good uses of funds to make sure the money would appreciate and outpace inflation and add a nice bonus to retirement? Mutual Funds / Stocks / bullions / 401K or other such retirement plans. They are currently in mid-30's. If there is ONE key strategy or decision they could make today that would help them retire \"\"early\"\" (say, mid-50's), what should it be? This opinion based ... it depends on \"\"what their lifestyle is\"\" and what would they want their \"\"lifestyle\"\" to be when they retire. They should look at saving enough corpus that would give an year on year yield equivalent to the retirement expenses.\"",
"title": ""
},
{
"docid": "7ba9327c8f024c08fa6c256cf3ec6196",
"text": "Which is generally the better option (financially)? Invest. If you can return 7-8% (less than the historical return of the S&P 500) on your money over the course of 25 years this will outperform purchasing personal property. If you WANT to own a house for other reason apart from the financial benefits then buy a house. Will you earn 7-8% on your money, there is a pretty good chance this is no because investors are prone to act emotionally.",
"title": ""
},
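To put the quoted 7-8% in perspective, here is a two-line sketch (purely illustrative: nominal returns, no taxes, fees or ongoing contributions) of how a lump sum compounds over the 25-year horizon mentioned above.

```python
# Growth of a lump sum at the passage's 7-8% over 25 years
for rate in (0.07, 0.08):
    growth = (1 + rate) ** 25
    print(f"{rate:.0%}: money multiplies about {growth:.1f}x over 25 years")
# ~5.4x at 7% and ~6.8x at 8%
```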
{
"docid": "7cfd122bd9fab80baa3b6d76c8f2a0c1",
"text": "Lucky you - here where I live that does not work, you put money on the table year 1. Anyhow... You HAVE to account for inflation. THat is where the gain comes from. Not investment increase (value of item), but the rent goes higher, while your mortgage does not (you dont own more moeny in 3 years if you keep paying, but likely you take more rent). Over 5 or 10 years the difference may be significant. Also you pay back the mortgage - that is not free cash flow, but it is a growth in your capital base. Still, 1 flat does not make a lot ;) You need 10+, so go on earning more down payments.",
"title": ""
},
{
"docid": "ef3df544d40cefb5109c5334ffe89341",
"text": "Can you wait until you retire before needing the money? Will you buy your first house sometime in the future? If yes, then favour an RRSP. Remember that you are rewarded by paying less tax for having the foresight and commitment to defer income taxes until your retirement, when you are presumably earning less income. Are your household expenses higher than 28% of your gross income? 35% of your net income? Does making your mortgage payments stress you? Are interest rates lower than their historical norm and an increase would cause you difficulty? If yes, then favour your mortgage. Do you need this money before your retire? Does your TFSA earn more interest than your mortgage costs your? If yes, then favour a TFSA. Does an alternative investment earn more than your TFSA? Can you handle an uptick in your mortgage interest rate? If yes, then favour the alternative investment and not your RRSP, mortgage or TFSA.",
"title": ""
},
{
"docid": "4ac2c64ce70259bde39978411a151518",
"text": "\"with 150K € to invest to \"\"become a landlord\"\" you have several options: Pay for 100% of one property, and you then will make a significant percentage of the monthly rent as profit each month. That profit can be used to invest in other things, or to save to buy additional properties. At the end of the 21 years in your example, you can sell the flat for return of principal minus selling expenses, or even better make a profit because the property went up in value. Pay 20% down on 5 flats, and then make a much a smaller profit per flat each month due to the mortgage payment for each one. At the end of the 21 years sell the flats. Assuming that a significant portion of the mortgage is paid off each flat will sell for more than the mortgage balance. Thus you will have 5 nice large profits when you sell. something in between 1 and 5 flats. Each has different risks and expenses. With 5 rental properties you are more likely to use a management company, which will add to your monthly cost.\"",
"title": ""
},
{
"docid": "f23416c0ae6956ee2796889c0c53fa72",
"text": "Fahad, in finance we make a distinction between investments that tend to grow in value and assets that hold value. Investments that grow in value are generally related to investing in well-thought out businesses. Investments can be done in retirement accounts through stocks and bonds but also owning part of a business directly. Good investments make more and more money off the money you put in. Common examples of assets include gold and other non-productive property like real-estate you don't rent or cars. You can even have some assets in your retirement account as many would argue government bonds behave like assets. All of these things tend to (more or less) go up in value as the cost of everything goes up in value, but don't tend to make you any excess money in the long run. There is certainly a place for both investments and assets. Especially as a young person it is good to lean toward investments as you likely have a lot of time for the money to grow as you get older. As RonJohn suggests, in the United States this is fairly easy as retirement accounts are common there is a long history of stable financial law even in crises. Pakistan's institutions are fairly stable and improving but still assets and investments of all types can be riskier. So, I recommend taking your father's advice... partially. Having some assets are good in riskier situations, but good investments are generally the way to grow comfortably wealthy. A good mix of the two is the way to grow wealthy slowly while protecting yourself from risk. You, your father and your neighbors know you local situation better than I, who has only visited a number of Pakistan's neighboring countries, so I can't really give more detailed advice but hopefully this gets you started.",
"title": ""
}
] |
fiqa
|
5f7630389475531c68bd5ba47777fb37
|
What are your experiences with 'self directed' 401ks?
|
[
{
"docid": "07c75adfe6ef2da84f3e05878e67e85f",
"text": "My employer matches 6% of my salary, dollar for dollar. So you have a great benefit. The self-directed side has no fees but $10 trades. No option trading. Yours basically allows you to invest your own funds, but not the match. It's a restriction, agreed, but a good plan.",
"title": ""
},
{
"docid": "7656ef45cba6e4625dec01393a52132b",
"text": "My employer matches 1 to 1 up to 6% of pay. They also toss in 3, 4 or 5 percent of your annual salary depending on your age and years of service. The self-directed brokerage account option costs $20 per quarter. That account only allows buying and selling of stock, no short sales and no options. The commissions are $12.99 per trade, plus $0.01 per share over 1000 shares. I feel that's a little high for what I'm getting. I'm considering 401k loans to invest more profitably outside of the 401k, specifically using options. Contrary to what others have said, I feel that limited options trading (the sale cash secured puts and spreads) can be much safer than buying and selling of stock. I have inquired about options trading in this account, since the trustee's system shows options right on the menus, but they are all disabled. I was told that the employer decided against enabling options trading due to the perceived risks.",
"title": ""
},
{
"docid": "b36177c86a000963a421bfef2ab82829",
"text": "I use the self-directed option for the 457b plan at my job, which basically allows me to invest in any mutual fund or ETF. We get Schwab as a broker, so the commissions are reasonable. Personally, I think it's great, because some of the funds offered by the core plan are limited. Generally, the trustees of your plan are going to limit your investment options, as participants generally make poor investment choices (even within the limited options available in a 401k) and may sue the employer after losing their savings. If I was a decision-maker in this area, there is no way I would ever sign off to allowing employees to mess around with options.",
"title": ""
},
{
"docid": "c7efc2dd021ddf9a2a03b9622a11cf2a",
"text": "I have managed two IRA accounts; one I inherited from my wife's 401K and my own's 457B. I managed actively my wife's 401 at Tradestation which doesn't restrict on Options except level 5 as naked puts and calls. I moved half of my 457B funds to TDAmeritrade, the only broker authorized by my employer, to open a Self Directed account. However, my 457 plan disallows me from using a Cash-secured Puts, only Covered Calls. For those who does not know investing, I resent the contention that participants to these IRAs should not be messing around with their IRA funds. For years, I left my 401k/457B funds with my current fund custodian, Great West Financial. I checked it's current values once or twice a year. These last years, the market dived in the last 2 quarters of 2015 and another dive early January and February of 2016. I lost a total of $40K leaving my portfolio with my current custodian choosing all 30 products they offer, 90% of them are ETFs and the rest are bonds. If you don't know investing, better leave it with the pros - right? But no one can predict the future of the market. Even the pros are at the mercy of the market. So, I you know how to invest and choose your stocks, I don't think your plan administrator has to limit you on how you manage your funds. For example, if you are not allowed to place a Cash-Secured Puts and you just Buy the stocks or EFT at market or even limit order, you buy the securities at their market value. If you sell a Cash-secured puts against the stocks/ETF you are interested in buying, you will receive a credit in fraction of a dollar in a specific time frame. In average, your cost to owning a stock/ETF is lesser if you buy it at market or even a limit order. Most of the participants of the IRA funds rely too much on their portfolio manager because they don't know how to manage. If you try to educate yourself at a minimum, you will have a good understanding of how your IRA funds are tied up to the market. If you know how to trade in bear market compared to bull market, then you are good at managing your investments. When I started contributing to my employer's deferred comp account (457B) as a public employee, I have no idea of how my portfolio works. Year after year as I looked at my investment, I was happy because it continued to grow. Without scrutinizing how much it grew yearly, and my regular payroll contribution, I am happy even it only grew 2% per year. And at this age that I am ready to retire at 60, I started taking investment classes and attended pre-retirement seminars. Then I knew that it was not totally a good decision to leave your retirement funds in the hands of the portfolio manager since they don't really care if it tanked out on some years as long at overall it grew to a meager 1%-4% because they managers are pretty conservative on picking the equities they invest. You can generalize that maybe 90% of IRA investors don't know about investing and have poor decision making actions which securities/ETF to buy and hold. For those who would like to remain as one, that is fine. But for those who spent time and money to study and know how to invest, I don't think the plan manager can limit the participants ability to manage their own portfolio especially if the funds have no matching from the employer like mine. All I can say to all who have IRA or any retirement accounts, educate yourself early because if you leave it all to your portfolio managers, you lost a lot. 
Don't believe much in what those commercial fund managers also show in their presentation just to move your funds for them to manage. Be proactive. If you start learning how to invest now when you are young, JUST DO IT!",
"title": ""
}
] |
[
{
"docid": "90766eb89e7b14ba266fbcc81ccffeb6",
"text": "\"There are a lot of false claims around the internet about this concept - the fact of the matter is you are giving yourself the ability to have money in a tax favored environment with consistent, steady growth as well as the ability to access it whenever you want. Compare this to a 401k plan for example....money is completely at risk, you can't touch it, and you're penalized if you don't follow the government's rules. As far as commissions to the agent - an agent will cut his commission in half by selling you an \"\"infinite banking\"\" style policy as opposed to a traditional whole life policy. @duffbeer703 clearly doesn't understand life insurance in the slightest when he says that the first three years of your premium payements will go to the agents pocket. And as usual offers no alternative except \"\"pick some high yielding dividen stocks and MLPs\"\" - Someone needs to wake up from the Dave Ramsey coma and realize that there is no such thing as a 12% mutual fund....do your research on the stock market (crestmont research). don't just listen to dave ramseys disciples who still thinking getting 12-15% year in and year out is possible. It's frustrating to listen to people who are so uneducated on the subject - remember the internet has turned everyone into \"\"experts\"\" if you want real advice talk to a legitimate expert that understands life insurance and how it actually works.\"",
"title": ""
},
{
"docid": "9cd06ae32ff149087f213ba7ce9abff8",
"text": "\"If it was me, I would drop out. You can achieve a better kind of plan when there is no match. For example Fidelity has no fee accounts for IRAs and Roths with thousands of investment choices. You can also setup automatic drafts, so it simulates what happens with your 401K. Not an employee of Fidelity, just a happy customer. Some companies pass the 401K fees onto their employees, and all have limited investment choices. The only caveat is income. There are limits to the deductibility of IRAs and Roth contributions if you make \"\"too much\"\" money. For Roth's the income is quite high so most people can still make those contributions. About 90% of households earn less than $184K, when Roths start phasing out. Now about this 401K company, it looks like the labor department has jurisdiction over these kinds of plans and I would research on how to make a complaint. It would help if you and other employees have proof of the shenanigans. You might also consult a labor attourney, this might make a great class.\"",
"title": ""
},
{
"docid": "9d67e11a7c3b69dc6f4b90c0aaaa9054",
"text": "I don't know what you mean by 'major'. Do you mean the fund company is a Fidelity or Vanguard, or that the fund is broad, as in an s&P fund? The problem starts with a question of what your goals are. If you already know the recommended mix for your age/risk, as you stated, you should consider minimizing the expenses, and staying DIY. I am further along, and with 12 year's income saved, a 1% hit would be 12% of a year's pay, I'd be working 1-1/2 months to pay the planner? In effect, you are betting that a planner will beat whatever metric you consider valid by at least that 1% fee, else you can just do it yourself and be that far ahead of the game. I've accepted the fact that I won't beat the average (as measured by the S&P) over time, but I'll beat the average investor. By staying in low cost funds (my 401(k) S&P fund charges .05% annual expense) I'll be ahead of the investors paying planner fees, and mutual fund fees on top of that. You don't need to be a CFP to manage your money, but it would help you understand the absurdity of the system.",
"title": ""
},
{
"docid": "56290eb39d292df78b8af33f4e308903",
"text": "Mostly you nailed it. It's a good question, and the points you raise are excellent and comprise good analysis. Probably the biggest drawback is if you don't agree with the asset allocation strategy. It may be too much/too little into stocks/bonds/international/cash. I am kind of in this boat. My 401K offers very little choices in funds, but offers Vanguard target funds. These tend to be a bit too conservative for my taste, so I actually put money in the 2060 target fund. If I live that long, I will be 94 in 2060. So if the target funds are a bit too aggressive for you, move down in years. If they are a bit too conservative, move up.",
"title": ""
},
{
"docid": "9a56cf08aa0055bd8866c3a1cc7284ba",
"text": "Our company does a lot of research on the self-directed IRA industry. We also provide financial advice in this area. In short, we have seen a lot in this industry. You mentioned custodian fees. This can be a sore spot for many investors. However, not all custodians are expensive, you should do your research before choosing the best one. Here is a list of custodians to help with your research Here are some of the more common pros and cons that we see. Pros: 1) You can invest in virtually anything that is considered an investment. This is great if your expertise is in an area that cannot be easily invested in with traditional securities, such as horses, private company stock, tax liens and more. 2) Control- you have greater control over your investments. If you invest in GE, it is likely that you will not have much say in the running of their business. However, if you invest in a rental property, you will have a lot of control over how the investment should operate. 3) Invest in what you know. Peter lynch was fond of saying this phrase. Not everyone wants to invest in the stock market. Many people won't touch it because they are not familiar with it. Self-directed IRAs allow you to invest in assets like real estate that you know well. Cons: 1) many alternative investments are illiquid. This can present a problem if you need to access your capital for withdrawals. 2) Prohibited transactions- This is a new area for many investors who are unfamiliar with how self-directed IRAs work 3) Higher fees- in many cases, the fees associated with self-directed IRA custodians and administrators can be higher. 4) questionable investment sponsors tend to target self-directed IRA owners for fraudulent investments. The SEC put out a good PDF about the risks of fraud with self-directed IRAs. Self Directed IRAs are not the right solution for everyone, but they can help certain investors focus on the areas they know well.",
"title": ""
},
{
"docid": "61983126d87c9525df8f5091a81f81dd",
"text": "Even ignoring the match (which makes it like a non-deductible IRA), the 401k plans that I know all have a range of choices of investment. Can you find one that is part of the portfolio that you want? For example, do you want to own some S&P500 index fund? That must be an option. If so, do the 401k and make your other investments react to it-reduce the proportion of S&P500 because of it(remember that the values in the 401k are pretax, so only count 60%-70% in asset allocation). The tax deferral is huge over time. For starters, you get to invest the 30-40% you would have paid as taxes now. Yes, you will pay that in taxes on withdrawal, but any return you generate is (60%-70%) yours to keep. The same happens for your returns.",
"title": ""
},
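The tax-deferral argument in the passage above can be made concrete with a small, hedged sketch. All figures below (a flat 30% income tax, 15% capital gains tax, 7% growth, $10,000 of gross salary saved per year for 30 years) are illustrative assumptions of mine, not anything from the original answer:

```python
# Pre-tax 401k vs. a taxable account funded with the after-tax remainder of the
# same gross amount. Simplification: taxable gains are taxed once, at sale.
gross = 10_000
income_tax, cap_gains_tax, r, years = 0.30, 0.15, 0.07, 30

k401 = 0.0
for _ in range(years):
    k401 = (k401 + gross) * (1 + r)          # the full gross amount compounds
k401_after_tax = k401 * (1 - income_tax)     # ordinary tax paid on withdrawal

taxable = basis = 0.0
for _ in range(years):
    contribution = gross * (1 - income_tax)  # only the after-tax amount is invested
    basis += contribution
    taxable = (taxable + contribution) * (1 + r)
taxable_after_tax = taxable - (taxable - basis) * cap_gains_tax

print(f"401k after withdrawal tax: ${k401_after_tax:,.0f}")
print(f"Taxable account after capital gains tax: ${taxable_after_tax:,.0f}")
```

Under these assumptions the deferred account comes out ahead, because the portion that would have gone to tax keeps compounding until withdrawal.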
{
"docid": "720391beff1d2c84391ee0a7328a2c1f",
"text": "\"Oddly enough, I started to research the \"\"Bank on Yourself\"\" strategy today as well (even before I'd ran across this question!). I'd heard an ad on the radio for it the other day, and it caught my attention because they claimed that the strategy isn't prone to market fluctuations like the stock market. It seemed in their radio ad that their target market was people who had lost serious money in their 401k's. So I set about doing some research of my own. It seems to me that the website bankonyourself.com gives a very superficial overview of the strategy without truly ever getting to the meat of it. I begin having a few misgivings at the point that I realized I'd read through a decent chunk of their website and yet I still didn't have a clear idea of the mechanism behind it all. I become leery any time I have to commit myself to something before I can be given a full understanding of how it works. It's shady and reeks of someone trying to back you into a corner so they can bludgeon you with their sales pitch until you cry \"\"Mercy!\"\" and agree to their terms just to stop the pain (which I suspect is what happens when they send an agent out to talk to you). There were other red flags that stood out to me, but I don't feel like getting into them. Anyway, through the use of google I was able to find a thread on another forum that was a veritable wealth of knowledge with regard to the mechanism of \"\"Bank on Yourself\"\" how it works. Here is the link: Bank on Yourself/Infinite Banking... There are quite a few users in the thread who have excellent insights into how all of it works. After reading through a large portion of the thread, I came away realizing that this strategy isn't for me. However, it does appear to be a potential choice for certain people depending upon their situation.\"",
"title": ""
},
{
"docid": "3a4d4a1b2146a202c55a6995119675bd",
"text": "\"Technically, this doesn't seem like a scam, but I don't think the system is beneficial. They use a lot of half-truths to convince you that their product is right for you. Some of the arguments presented and my thoughts. Don't buy term and invest the rest because you can't predict how much you'll earn from the \"\"rest\"\" Also Don't invest in a 401k because you can't predict how much you'll earn They are correct that you won't know exactly how much you'll have due to stock market, but that doesn't mean the stock market is a bad place to put your money. Investing in a 401k is risky because of the harsh 401k withdrawal rules Yes, 401ks have withdrawal rules (can't typically start before 59.5, must start by 70.5) but those rules don't hamper my investing style in any way. Most Term Life Insurance policies don't pay out They are correct again, but their conclusions are wrong. Yes, most people don't die while you have a term insurance policy which is why Term life insurance is relatively cheap. But they aren't arguing you don't need insurance, just that you need their insurance which is \"\"better\"\" You need the Guaranteed growth they offer The chart used to illustrate their guaranteed growth includes non-guaranteed dividends. They invest $10,000 per year for 36 years and end up with $1,000,000. That's a 5% return! I use 10% for my estimate of stock market performance, but let's say it's only 8%. The same $10,000 per year results in over $2 Million dollars. Using 10.5% (average return of the S&P 500 over it's lifetime) the result is a staggering $3.7 MILLION. So if I'm looking at $3.7M vs. $1M, It costs me $2.7 Million dollars to give me the same coverage as my term life policy. That's one expensive Term Life Insurance policy. My personal favorite: Blindly following the advice of Wall Street and financial “gurus” such as Dave Ramsey and Suze Orman got you where you are. Are you happy with the state of your finances? Do you still believe their fairytale, “Buy Term (insurance) and Invest the Difference”? Yes, I sure do believe that fairytale and I'm prospering quite well thank you. :) While I don't think this is a scam, it's outrageously expensive and not a good financial choice.\"",
"title": ""
},
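To sanity-check the arithmetic in the passage above, a future-value-of-annuity calculation (assuming contributions at the start of each year, which the quoted figures appear to imply) reproduces the roughly $1M at 5%, $2M at 8% and $3.7M at 10.5% comparisons:

```python
def fv_annuity_due(payment, rate, years):
    """Future value of equal payments made at the start of each year."""
    return payment * ((1 + rate) ** years - 1) / rate * (1 + rate)

for rate in (0.05, 0.08, 0.105):
    fv = fv_annuity_due(10_000, rate, 36)
    print(f"$10,000/yr for 36 years at {rate:.1%}: ${fv:,.0f}")
```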
{
"docid": "0bc6398c3f04ee3f1e4c7f8cf593ea0e",
"text": "There is nothing wrong with self directed IRA's the problem is that most of the assets they specialize in are better done in other ways. Real estate is already extremely tax advantaged in the US. Buying inside a Traditional IRA would turn longterm capital gains (currently 15%) into ordinary income taxed at your tax rate when you withdraw this may be a plus or minus, but it is more likely than not that your ordinary income tax rate is higher. You also can't do the live in each house for 2 years before selling plan to eliminate capital gains taxes (250k individual 500k married couple). The final problem is that you are going to have problems getting a mortgage (it won't be a conforming loan) and will likely have to pay cash for any real estate purchased inside your IRA. Foreign real estate is similar to above except you have additional tax complexities. The key to the ownership in a business is that there are limits on who can control the business (you and maybe your family can't control the business). If you are experienced doing angel investing this might be a viable option (assuming you have a really big IRA you want to gamble with). If you want to speculate on precious metals you will probably be better offer using ETF's in a more traditional brokerage account (lower transactions costs more liquidity).",
"title": ""
},
{
"docid": "126b716be4a8d598b0d3a01391be7562",
"text": "You read it right. Todd's warning is well taken. I don't know the numbers involved, but have a brilliant suggestion that may help. A Solo 401(k) is simple to qualify for. Any bit of declared side income will do. Once the account is set up, a transfer from IRAs is simple. The Solo 401(k) can offer a loan provision as any other 401(k), and you can borrow up to 50% (max of $50K) for any reason with a 5 year payback. The standard rate is Prime+1%, the fee is minimal usually $50-$100. All the warnings of IRA 'loans' apply, but the risk of job loss (the largest objection to 401 loans) isn't there. The fact that you have 6 months to set this up is part of what prompts this suggestion. Note: Any strategies like this aren't for everyone. There are folk who need to access quick cash, and this solves the issue in two ways, both low rate and simple access. Phil already stated he is confident to return the money, the only thing that prompted my answer is there's real risk the 60 days a bit too short for any business deal.",
"title": ""
},
{
"docid": "08272c221245feb74c609aa96ec5c5e3",
"text": "I've never seen anything in any IRS publication that placed limits on the balance of a 401K, only on what you can contribute (and defer from taxes) each year. The way the IRS 'gets theirs' as it were is on the taxes you have to pay (for a traditional IRA anyway) which would not be insubstantial when you start to figure out the required minimum distribution if the balance was 14Mill.. You're required to take out enough to in theory run the thing out of money by your life expectancy.. The IRS has tables for this stuff to give you the exact numbers, but for the sake of a simple example, their number for someone age 70 (single or with a spouse who is not more than 10 years younger) is 27.4.. If we round that to 28 to make the math nice, then you would be forced to withdraw and pay taxes on around $500,000 per year. (So there would be a hefty amount of taxes to be paid out for sure). So a lot of that $500K a year going to pay taxes on your distributions, but then, considering you only contributed 660,000 pre-tax dollars in the first place, what a wonderful problem to have to deal with. Oh don't throw me in THAT briar patch mr fox!",
"title": ""
},
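As a rough check of the figure quoted above (27.4 is the IRS Uniform Lifetime Table divisor for a 70-year-old; the $14 million balance is the passage's hypothetical):

```python
balance = 14_000_000   # hypothetical 401k balance from the passage
divisor = 27.4         # IRS Uniform Lifetime Table factor at age 70

rmd = balance / divisor
print(f"First-year required minimum distribution: ${rmd:,.0f}")  # roughly $511,000
```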
{
"docid": "1ca3177affb79852e18be3648597ccfe",
"text": "Considering it's all to risky for me, outside of a blind 401k, just having money to try it with is a bigger risk than I'm willing to take. I see this complaint a lot and my response is about the same every time, if you know of something better, please share, so next time we can make it more realistic.",
"title": ""
},
{
"docid": "c8dee8604abe737c83388711bbc7a2cc",
"text": "Don’t do anything that causes taxes or penalties, beyond that it’s entirely personal choice and other posters have already done a great job enumerating then. I recently switched jobs in June and rolled over a 401k from my old company to new company and the third party managing the account at the new company was much more professional and walked me through all the required steps and paperwork.",
"title": ""
},
{
"docid": "d5a728a9343d324f805da3dee3ef082c",
"text": "Your example shows a 4% dividend. If we assume the stock continues to yield 4%, the math drops to something simple. Rule of 72 says your shares will double in 18 years. So in 18 years, 1000 shares will be 2000, at whatever price it's trading. Shares X (1.04)^N years = shares after N years. This is as good an oversimplification as any.",
"title": ""
},
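A quick sketch of the same oversimplification: reinvesting a 4% yield so the share count compounds, which reproduces the Rule-of-72 doubling in roughly 18 years:

```python
shares = 1000.0
dividend_yield = 0.04

for year in range(18):
    shares *= 1 + dividend_yield   # reinvest the 4% dividend into more shares

print(f"Shares after 18 years: {shares:,.0f}")   # ~2,026, i.e. roughly doubled
```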
{
"docid": "d2ec2555cc2ad70761dec8b1d77383c1",
"text": "\"There are always little tricks you can play with your credit card. For example, the due date of your statement balance is not really set in stone as your bank would like you to believe. Banks have a TOS where they can make you liable to pay interest from the statement generation date (which is a good 25 days before your due date) on your balance, if you don't pay off your balance by your due date. However, you can choose to not pay your balance by your due date upto 30 days and they will not report your late payment to credit agencies. If they ask you to pay interest, you can negotiate yourself out of it as well (although not sure if it will work every-time if you make it a habit!) Be careful though: not all banks report your credit utilization based on your statement balance! DCU for example, reports your credit utilization based on your end-of-the-month balance. This can affect your short term credit score (history?) and mess around with your chances of pulling off these tricks with the bank CSRs. These \"\"little tricks\"\" can effectively net you more than 60 days of interest free loans, but I am not sure if anyone will condone this as a habit, especially on this website :-)\"",
"title": ""
}
] |
fiqa
|
957cdba9ff769970a62f74dd9289be2c
|
Does technical analysis work on small stock exchanges?
|
[
{
"docid": "8d62775b79b4fdda675c738fcc1a3ab2",
"text": "\"Assuming that you accept the premise that technical analysis is legitimate and useful, it makes sense that it might not work for a small market, or at the very least that it wouldn't be the same for a small market as it is for a large market. The reason for this is that a large stock market like the U.S. stock market is as close to a perfect market as you will find: Compare this to a small market in a small country. Market information is harder to get, because there are not as many media outlets covering the news. There aren't as many participants. And possibly it might be more expensive to participate in, and there might be more regulatory intervention than with the large market. All of these things can affect the prices. The closer you get to a perfect market, the closer you get to a point where the prices of the stocks reflect the \"\"true value\"\" of the companies, without external forces affecting prices.\"",
"title": ""
}
] |
[
{
"docid": "174500b2d286ea36587834083f1490ed",
"text": "Different exchanges sometimes offer different order types, and of course have different trading fees. But once a trade is finished, it should not matter where it was executed.",
"title": ""
},
{
"docid": "30aaa612684f58901097058380ef7de2",
"text": "I'm not disputing whether IB is good to start. I'm disputing that anything going through them is 'low latency.' 50-100ms is a lifetime against high frequency traders. Also, if you're co-locating w/ them and using a direct feed and still getting that latency you're getting ripped off. It should take 100ms *for a message to travel between Chicago and NY*, let alone between your computer and the exchange one at the same colo.",
"title": ""
},
{
"docid": "02e7e6416c346bea938301c41d6f9366",
"text": "Fundamental Analysis can be used to help you determine what to buy, but they won't give you an entry signal for when to buy. Technical Analysis can be used to help you determine when to buy, and can give you entry signals for when to buy. There are many Technical Indicator which can be used as an entry signal, from as simple as the price crossing above a moving average line and then selling when the price crosses back below the moving average line, to as complicated as using a combination of indicators to all line up for an entry signal to be valid. You need to find the entry signals that would suit your investing or trading and incorporate them as part of your trading plan. If you want to learn more about entry signals you are better off learning more about Technical Analysis.",
"title": ""
},
{
"docid": "c26abce4a4b994467b349f12d67579d0",
"text": "\"Below is just a little information on this topic from my small unique book \"\"The small stock trader\"\": The most significant non-company-specific factor affecting stock price is the market sentiment, while the most significant company-specific factor is the earning power of the company. Perhaps it would be safe to say that technical analysis is more related to psychology/emotions, while fundamental analysis is more related to reason – that is why it is said that fundamental analysis tells you what to trade and technical analysis tells you when to trade. Thus, many stock traders use technical analysis as a timing tool for their entry and exit points. Technical analysis is more suitable for short-term trading and works best with large caps, for stock prices of large caps are more correlated with the general market, while small caps are more affected by company-specific news and speculation…: Perhaps small stock traders should not waste a lot of time on fundamental analysis; avoid overanalyzing the financial position, market position, and management of the focus companies. It is difficult to make wise trading decisions based only on fundamental analysis (company-specific news accounts for only about 25 percent of stock price fluctuations). There are only a few important figures and ratios to look at, such as: perhaps also: Furthermore, single ratios and figures do not tell much, so it is wise to use a few ratios and figures in combination. You should look at their trends and also compare them with the company’s main competitors and the industry average. Preferably, you want to see trend improvements in these above-mentioned figures and ratios, or at least some stability when the times are tough. Despite all the exotic names found in technical analysis, simply put, it is the study of supply and demand for the stock, in order to predict and follow the trend. Many stock traders claim stock price just represents the current supply and demand for that stock and moves to the greater side of the forces of supply and demand. If you focus on a few simple small caps, perhaps you should just use the basic principles of technical analysis, such as: I have no doubt that there are different ways to make money in the stock market. Some may succeed purely on the basis of technical analysis, some purely due to fundamental analysis, and others from a combination of these two like most of the great stock traders have done (Jesse Livermore, Bernard Baruch, Gerald Loeb, Nicolas Darvas, William O’Neil, and Steven Cohen). It is just a matter of finding out what best fits your personality. I hope the above little information from my small unique book was a little helpful! Mika (author of \"\"The small stock trader\"\")\"",
"title": ""
},
{
"docid": "2a299334dcf6600c0e5f2e0f087fa951",
"text": "You'd need millions of dollars to trade the number of shares it would take to profit from these penny variations. What you bring up here is the way high frequency firms front-run trades and profit on these pennies. Say you have a trade commission of $5. Every time you buy you pay $5, every time you sell you pay $5. So you need a gain in excess of $10, a 10% gain on $100. Now if you wanted to trade on a penny movement from $100 to $100.01, you need to have bought 1,000 shares totaling $100,000 for the $0.01 price movement to cover your commission costs. If you had $1,000,000 to put at risk, that $0.01 price movement would net you $90 after commission, $10,000,000 would have made you $990. You need much larger gains at the retail level because commissions will equate to a significant percentage of the money you're investing. Very large trading entities have much different arrangements and costs with the exchanges. They might not pay a fee on each transaction but something that more closely resembles a subscription fee, and costs something that more closely resembles a house. Now to your point, catching these price movements and profiting. The way high frequency trading firms purportedly make money relates to having a very low latency network connection to a particular exchange. Their very low latency/very fast network connection lets them see orders and transact orders before other parties. Say some stock has an ask at $101 x 1,000 shares. The next depth is $101.10. You see a market buy order come in for 1,000 shares and place a buy order for 1,000 shares at $101 which hits the exchange first, then immediately place a sell order at $101.09, changing the ask from $101.00 to $101.09 and selling in to the market order for a gain of $0.09 per share.",
"title": ""
},
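A tiny sketch of the commission-breakeven arithmetic used in the passage above, with its $5-per-side commission and one-cent move:

```python
commission_per_side = 5.00      # $5 to buy plus $5 to sell
price_move = 0.01               # one-cent move in the share price

def net_profit(shares):
    return shares * price_move - 2 * commission_per_side

for shares in (1_000, 10_000, 100_000):
    print(f"{shares:>7,} shares: net ${net_profit(shares):,.2f}")
```

With the passage's $100 stock, those share counts correspond to roughly $100K, $1M and $10M at risk, giving the $0, $90 and $990 outcomes it describes.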
{
"docid": "d6785de13ddb0dbb31dddee8e6ca16c9",
"text": "Reuters has a service you can subscribe to that will give you lots of Financial information that is not readily available in common feeds. One of the things you can find is the listing/delist dates of stocks. There are tools to build custom reports. That would be a report you could write. You can probably get the data for free through their rss feeds and on their website, but the custom reports is a paid feature. FWIW re-listing(listings that have been delisted but return to a status that they can be listed again) is pretty rare. And I can not think of too many(any actually) penny stocks that have grown to be listed on a major exchange.",
"title": ""
},
{
"docid": "35ecc70f06b1d857067088599dea1266",
"text": "\"Your questions In the world of technical analysis, is candlestick charting an effective trading tool in timing the markets? It depends on how you define effective. But as a standalone and systematic strategy, it tends not to be profitable. See for example Market Timing with Candlestick Technical Analysis: Using robust statistical techniques, we find that candlestick trading rules are not profitable when applied to DJIA component stocks over 1/1/1992 – 31/12/2002 period. Neither bullish or bearish candlestick single lines or patterns provide market timing signals that are any better than what would be expected by chance. Basing ones trading decisions solely on these techniques does not seem sensible but we cannot rule out the possibility that they compliment some other market timing techniques. There are many other papers that come to the same conclusion. If used correctly, how accurate can they be in picking turning points in the market? Technical analysts generally fall into two camps: (i) those that argue that TA can't be fully automated and that interpretation is part of the game; (ii) those that use TA as part of a systematic investment model (automatically executed by a machine) but generally use a combination of indicators to build a working model. Both groups would argue (for different reasons) that the conclusions of the paper I quoted above should be disregarded and that TA can be applied profitably with the proper framework. Psychological biases It is very easy to get impressed by technical analysis because we all suffer from \"\"confirmation bias\"\" whereby we tend to acknowledge things that confirm our beliefs more than those that contradict them. When looking at a chart, it is very easy to see all the occurences when a certain pattern worked and \"\"miss\"\" the occurences when it did not work (and not missing those is much harder than it sounds). Conclusions\"",
"title": ""
},
{
"docid": "98c511623d1fdfd6d509115f9d468932",
"text": "Technical Analysis in general is something to be cognizant of, I don't use a majority of studies and consider them a waste of time. I also use quantitative analysis more so than technical analysis, and prefer the insight it gives into the market. The markets are more about predicting other people's behavior, psychology. So if you are trading an equity that you know retail traders love, retail traders use technical analysis and you can use their fabled channel reversals and support levels against them, as examples. Technical analysis is an extremely broad subject. So I suggest getting familiar, but if your historical pricing charts are covered in various studies, I would say you are doing it wrong. A more objective criticism of technical analysis is that many of the studies were created in the 1980s or earlier. Edges in the market do not typically last more than a few weeks. On the other side of that realization, some technical analysis works if everyone also thinks it will work, if everyone's charts say buy when the stock reaches the $90 price level and everyone does, the then stock will go higher. But the market makers and the actions of the futures markets and the actions of options traders, can undermine the collective decisions of retail traders using technical analysis.",
"title": ""
},
{
"docid": "a635582cdd46c5dd1c4bcac24c074290",
"text": "The study of technical analysis is generally used (sometimes successfully) to time the markets. There are many aspects to technical analysis, but the simplest form is to look for uptrends and downtrends in the charts. Generally higher highs and higher lows is considered an uptrend. And lower lows and lower highs is considered a downtrend. A trend follower would go with the trend, for example see a dip to the trend-line and buy on the rebound. Whilst a bottom fisher would wait until a break in the downtrend line and buy after confirmation of a higher high (as this could be the start of a new uptrend). There are many more strategies dealing with the study of technical analysis, and if you are interested you would need to find and learn about ones that suit your investment styles and your appetite for risk.",
"title": ""
},
{
"docid": "28a55eec01c1f3f06b65170e0b5a45d0",
"text": "I would go even farther than Victor's answer. There is little evidence that candlestick patterns and technical analysis in general have any predictive power. Even if they did in the past, of which there is some evidence, in modern times they are so easy to do on computers that if they worked algorithmic traders would have scanned almost all traded stocks and bought/sold the stock before you even had a chance to look at the graph. While the best technical traders who are very good at quickly using pattern recognition across many indicators as Victor mentioned might be able to add some advantage. The odds that a pattern so simple to code such as Bullish Engulfing would have predictive power is tiny.",
"title": ""
},
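As an aside on the point that such patterns are trivial to code, here is a minimal, assumption-laden definition of a bullish engulfing bar (real scanners usually add trend and size filters):

```python
def bullish_engulfing(prev_open, prev_close, open_, close):
    """Previous bar is down, current bar is up and its body engulfs the previous body."""
    return (prev_close < prev_open      # prior candle closed down
            and close > open_           # current candle closed up
            and open_ <= prev_close     # opens at or below the prior close
            and close >= prev_open)     # closes at or above the prior open

# Hypothetical OHLC values for two consecutive days
print(bullish_engulfing(prev_open=102.0, prev_close=100.0, open_=99.5, close=103.0))  # True
```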
{
"docid": "c7205cbaecf85917426224c0955e77ce",
"text": "\"For any large company, there's a lot of activity, and if you sell at \"\"market\"\" your buy or sell will execute in seconds within a penny or two of the real-time \"\"market\"\" price. I often sell at \"\"limit\"\" a few cents above market, and those sell within 20 minutes usually. For much smaller companies, obviously you are beholden to a buyer also wanting that stock, but those are not on major exchanges. You never see whose buy order you're selling into, that all happens behind the curtain so to speak.\"",
"title": ""
},
{
"docid": "b37e26089960d75e1ba62ecb40a88e49",
"text": "It is called the Monday Effect or the Weekend Effect. There are a number of similar theories including the October Effect and January Effect. It's all pretty much bunk. If there were any truth to traders would be all over it and the resulting market forces would wipe it out. Personally, I think all technical analysis has very little value other than to fuel conversations at dinner parties about investments. You might also consider reading about Market efficiency to see further discussion about why technical approaches like this might, but probably don't work.",
"title": ""
},
{
"docid": "dd3510a458e8018f039c340394beb77c",
"text": "Also important to keep in mind is the difference in liquidity. The stock could be very liquid in 1 exchange but not in another. When times get bad, liquidity could dry up 1 one exchange, which results in a trading discount.",
"title": ""
},
{
"docid": "eea837f2962ad63b6cc13e0c938fd84a",
"text": "Support and resistance only works as a self-fulfilling prophecy. If everyone trading that stock agrees there's a resistance at so-and-so level, and it is on such-and-such scale, then they will trade accordingly and there will really be a support or resistance. So while you can identify them at any time scale (although as a rule the time scale on which you observed them should be similar to the time scale on which you intend to use them), it's no matter unless that's what all the other traders are thinking as well. Especially if there are multiple possible S/P levels for different time scales, there will be no consensus, and the whole system will break down as one cohort ruins the other group's S/P by not playing along and vice versa. But often fundamentals are expected to dominate in the long run, so if you are thinking of trades longer than a year, support and resistance will likely become meaningless regardless. It's not like that many people can hold the same idea for that long anyhow.",
"title": ""
},
{
"docid": "70efccedd8237d35629ae1115d0767f9",
"text": "Your question is a bit odd in that you are mixing long-term fundamental analysis signals which are generally meant to work on longer time frames with medium term trading where these fundamental signals are mostly irrelevant. Generally you would buy-and-hold on a fundamental signal and ride the short-term fluctuations if you believe you have done good analysis. If you would like to trade on the 2-6 month time scale you would need a signal that works on that sort of time scale. Some people believe that technical analysis can give you those kind of signals, but there are many, many, many different technical signals and how you would trade using them is highly dependent on which one you believe works. Some people do mix fundamental and technical signals, but that can be very complicated. Learning a good amount about technical analysis could get you started. I will note, though, that studies of non-professionals continuously show that the more frequently people trade the more on they underperform on average in the long term when compared with people that buy-and-hold. An aside on technical analysis: michael's comment is generally correct though not well explained. Say Bob found a technical signal that works and he believes that a stock that costs $10 dollars should be $11. He buys it and makes money two months later when the rest of the market figures out the right price is $11 and he sells at that price. This works a bunch of times and he now publishes how the signal works on Stack Exchange to show everyone how awesome he is. Next time, Bob's signal finds a different stock at $10 that should be $11, but Anna just wrote a computer program that checks that signal Bob published faster than he ever could. The computer program buys as much as it can in milliseconds until the price is $11. Bob goes to buy, but now it is too late the price is already $11 and he can't make any money. Eventually, people learn to anticipate/adjust for this signal and even Anna's algorithms don't even work anymore and the hunt for new signals starts again.",
"title": ""
}
] |
fiqa
|
844eac8fa3ef2b42df9c9aa0587210e0
|
Estimate a future option price given greeks and a 1$ move in underlying
|
[
{
"docid": "3c0b7d3095559509ec23312044a5ee9b",
"text": "It's not that straightforward, even though your gamma will change your delta on the fly, you likely won't see the full $.48 after such a small move. If the vega drops due to lack of volatility while the stock is moving up, those few percentage points up might help your delta (2% gain $50 to $51 in your example) but will be partially negated by volatility going down. I mean, don't be surprised to see it at closer to $1.33 or something. The market is out to make money, not to make you money.",
"title": ""
}
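For the question being answered here, a hedged sketch of the usual delta-gamma estimate, with a vega term for the volatility effect the answer warns about, is shown below. Every greek and price value is made up for illustration; real quotes will also reflect theta and bid/ask spread:

```python
# Taylor-series style estimate of the new option price after a move in the underlying.
price = 1.00    # current option price (hypothetical)
delta = 0.48
gamma = 0.05
vega  = 0.10    # price change per 1-point move in implied volatility
dS    = 1.00    # $1 move in the underlying
d_vol = -0.5    # suppose implied volatility drops half a point on the rally

estimate = price + delta * dS + 0.5 * gamma * dS ** 2 + vega * d_vol
print(f"Estimated option price after the move: ${estimate:.2f}")
```

The gamma term nudges the delta estimate up, while a falling implied volatility (the vega term) can claw part of that gain back, which is the answer's point.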
] |
[
{
"docid": "ac4977a4961a36d663225f022a72b039",
"text": "Not to be a jerk, since I'm learning about options myself, but I think you have a few things wrong about your tesla put postion. First, assuming it was itm or atm within a week of strike it would maybe be worth $12-15, I glanced at the 11/10 put strikes ~325 trades for $12.65 https://www.barchart.com/stocks/quotes/TSLA/options?expiration=2017-11-10 The closer it gets to expiry theta decay reduces put value significantly, the 325's that expire tomorrow are only worth $2.60. Expecting TSLA to drop from 325 to 50 a share by Jan has a .006% chance of occuring according to the current delta. It's a lottery ticket at best. _____ Lastly the $50 price is the expected share price at the date of expiry. The price you pay is 57c for the options. So in order to get to your 494k number TSLA would need to decline $50 dollars to a price of 275 a share. You wouldn't want to buy 50 dollar puts, just the 275 puts, provided it declines very quickly. EDIT: I was looking at the recent expiration to get an idea of atm or itm prices, since it's not like you would hold to expiration. Also the Jan has a 0.6% chance of hitting 50, not .006%. That said what I've noticed when things start to slide is that puts have a weird way of pricing themselves. For example when something gradually goes up all of the calls go up down the chain through time frames, but puts do not in the same fashion. Further out lower priced puts won't move nearly as much unless the company is basically headed for bankruptcy.",
"title": ""
},
{
"docid": "c091e3281e221f90416b841dccd337be",
"text": "Ok maybe I should have went into further detail but I'm not interested in a single point estimate to compare the different options. I want to look at the comparable NPVs for the two different options for a range of exit points (sell property / exit lease and sell equity shares). I want to graph the present values of each (y-axis being the PVs and x-axis being the exit date) and look at the 'cross-over' point where one option becomes better than the other (i'm taking into account all of the up front costs of the real estate purchase which will be a bit different in the first years). i'm also looking to do the same for multiple real estate and equity scenarios, in all likelihood generate a distribution of cross-over points. this is all theoretical, i'm not really going to take the results to heart. merely an exercise and i'm tangling with the discount rates at the moment.",
"title": ""
},
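A rough sketch of the cross-over comparison described above. The cash-flow models and the 4% discount rate are entirely hypothetical placeholders, not the poster's actual numbers; the only point is the mechanism of scanning exit years for where one NPV overtakes the other:

```python
def npv(cashflows, rate):
    """Net present value of a list of cash flows, one per year, starting at t = 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def property_option(exit_year):
    flows = [-115_000]                                        # purchase plus up-front transaction costs
    flows += [4_000] * (exit_year - 1)                        # net annual benefit (rent saved)
    flows += [4_000 + 0.97 * 100_000 * 1.05 ** exit_year]     # sale net of 3% selling costs
    return flows

def equity_option(exit_year):
    return [-100_000] + [0] * (exit_year - 1) + [100_000 * 1.06 ** exit_year]

rate = 0.04
for year in range(1, 16):
    diff = npv(property_option(year), rate) - npv(equity_option(year), rate)
    flag = "  <-- property now ahead" if diff > 0 else ""
    print(f"exit year {year:2d}: NPV(property) - NPV(equity) = {diff:10,.0f}{flag}")
```

With these placeholder numbers the sign of the difference flips around year 8, which is the kind of cross-over point the poster wants to chart across many scenarios.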
{
"docid": "37e4a50d30b9b0df113973cbb7a4e610",
"text": "The other answer covers the mechanics of how to buy/sell a future contract. You seem however to be under the impression that you can buy the contract at 1,581.90 today and sell at 1,588.85 on expiry date if the index does not move. This is true but there are two important caveats: In other words, it is not the case that your chance of making money by buying that contract is more than 50%...",
"title": ""
},
{
"docid": "5887589fd2f004e5ffadf2a922b01929",
"text": "Im creating a 5-year projection on Profit and loss, cash flow and balance sheet and i\\m suppose to use the LIBOR (5 year forward curve) as interest rate on debt. This is the information i am given and it in USD. Thanks for the link. I guess its the USD LIBOR today, in one year, in two years, three years, four years and five years",
"title": ""
},
{
"docid": "966f5f6dfd5cf39b9a1c72a734924277",
"text": "\"Currently, when \"\"implied volatility\"\" is spoken, the Black-Scholes-Merton model is implied. This model has been shown to be deficient, thus the Variance Gamma Model should be used. However, as nearly no one uses VG, it can be assumed that BS is still being implied. The BS formula has multiple variables. Some are external to the underlying in question. The rest are internal. When all but one variable is known or assumed, the last variable can be calculated, so if one has the price of the underlying and all else except the volatility, the volatility can be calculated thus implied. If one selects an implied volatility, and all variables except the underlying price is known, the underlying price can be calculated. For the present, one uses the current price of the underlying to calculate the implied volatility. For future option prices, one assumes an implied volatility at a later date to calculate a possible price. For prices not at the money, the BS model is extremely imprecise. The VG model can better determine a potential future price.\"",
"title": ""
},
{
"docid": "e215380be65e1d229d6662ffc05ffa45",
"text": "A bullish (or 'long') call spread is actually two separate option trades. The A/B notation is, respectively, the strike price of each trade. The first 'leg' of the strategy, corresponding to B, is the sale of a call option at a strike price of B (in this case $165). The proceeds from this sale, after transaction costs, are generally used to offset the cost of the second 'leg'. The second 'leg' of the strategy, corresponding to A, is the purchase of a call option at a strike price of A (in this case $145). Now, the important part: the payoff. You can visualize it as so. This is where it gets a teeny bit math-y. Below, P is the profit of the strategy, K1 is the strike price of the long call, K2 is the strike price of the short call, T1 is the premium paid for the long call option at the time of purchase, T2 is the premium received for the short call at the time of sale, and S is the current price of the stock. For simplicity's sake, we will assume that your position quantity is a single option contract and transaction costs are zero (which they are not). P = (T2 - max(0, S - K2)) + (max(0, S - K1) - T1) Concretely, let's plug in the strikes of the strategy Nathan proposes, and current prices (which I pulled from the screen). You have: P = (1.85 - max(0, 142.50 - 165)) - (max(0, 142.50 - 145)) = -$7.80 If the stock goes to $150, the payoff is -$2.80, which isn't quite break even -- but it may have been at the time he was speaking on TV. If the stock goes to $165, the payoff is $12.20. Please do not neglect the cost of the trades! Trading options can be pretty expensive depending on the broker. Had I done this trade (quantity 1) at many popular brokers, I still would've been net negative PnL even if NFLX went to >= $165.",
"title": ""
},
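A small sketch of the payoff formula quoted above, using the same strikes and the short-leg premium it gives. The long-call premium is not stated explicitly in the passage, so the 9.65 below is inferred from its -$7.80 result rather than taken from a real quote:

```python
def bull_call_spread_pnl(S, K_long, K_short, prem_long, prem_short):
    """P/L per share of a long call spread at expiry, ignoring commissions."""
    long_leg = max(0.0, S - K_long) - prem_long     # bought call
    short_leg = prem_short - max(0.0, S - K_short)  # sold call
    return long_leg + short_leg

for S in (142.50, 150.0, 165.0, 180.0):
    pnl = bull_call_spread_pnl(S, K_long=145, K_short=165, prem_long=9.65, prem_short=1.85)
    print(f"Stock at {S:6.2f}: P/L per share = {pnl:+.2f}")
```

This reproduces the -$7.80, -$2.80 and +$12.20 figures in the answer, and shows the profit capping out above the short strike.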
{
"docid": "9e080f52dc5ab00a2c1dee3097206fc9",
"text": "Don´t forget that changing volatility will have an impact on the time value too! So at times it can happen that your time value is increasing instead of decreasing, if the underlying (market) volatility moves up strongly. Look for articles on option greeks, and how they are interdependent. Some are well explaining in simple language.",
"title": ""
},
{
"docid": "057082b5885da5dc1df7c391596501ef",
"text": "I have been looking into CMEs trading tool. I might just play around with futures on it. You make a good point on that though. I am reading Hull's book on options, futures and derivatives, and so far so good. Only thing I would want to test is options on futures, which is missing :( .",
"title": ""
},
{
"docid": "705edc8917c352edfecb5356b6058ef2",
"text": "I'm not entirely sure about some of the details in your question, since I think you meant to use $10,000 as the value of the futures contract and $3 as the value of the underlying stock. Those numbers would make more sense. That being said, I can give you a simple example of how to calculate the profit and loss from a leveraged futures contract. For the sake of simplicity, I'll use a well-known futures contract: the E-mini S&P500 contract. Each E-mini is worth $50 times the value of the S&P 500 index and has a tick size of 0.25, so the minimum price change is 0.25 * $50 = $12.50. Here's an example. Say the current value of the S&P500 is 1,600; the value of each contract is therefore $50 * 1,600 = $80,000. You purchase one contract on margin, with an initial margin requirement1 of 5%, or $4,000. If the S&P 500 index rises to 1,610, the value of your futures contract increases to $50 * 1,610 = $80,500. Once you return the 80,000 - 4,000 = $76,000 that you borrowed as leverage, your profit is 80,500 - 76,000 = $4,500. Since you used $4,000 of your own funds as an initial margin, your profit, excluding commissions is 4,500 - 4,000 = $500, which is a 500/4000 = 12.5% return. If the index dropped to 1,580, the value of your futures contract decreases to $50 * 1,580 = $79,000. After you return the $76,000 in leverage, you're left with $3,000, or a net loss of (3,000 - 4000)/(4000) = -25%. The math illustrates why using leverage increases your risk, but also increases your potential for return. Consider the first scenario, in which the index increases to 1,610. If you had forgone using margin and spent $80,000 of your own funds, your profit would be (80,500 - 80,000) / 80000 = .625%. This is smaller than your leveraged profit by a factor of 20, the inverse of the margin requirement (.625% / .05 = 12.5%). In this case, the use of leverage dramatically increased your rate of return. However, in the case of a decrease, you spent $80,000, but gained $79,000, for a loss of only 1.25%. This is 20 times smaller in magnitude than your negative return when using leverage. By forgoing leverage, you've decreased your opportunity for upside, but also decreased your downside risk. 1) For futures contracts, the margin requirements are set by the exchange, which is CME group, in the case of the E-mini. The 5% in my example is higher than the actual margin requirement, which is currently $3,850 USD per contract, but it keeps the numbers simple. Also note that CME group refers to the initial margin as the performance bond instead.",
"title": ""
},
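A short sketch reproducing the E-mini arithmetic above. The 5% initial margin is the passage's own simplification rather than the exchange's actual performance-bond requirement:

```python
MULTIPLIER = 50          # $50 per S&P 500 index point for the E-mini
entry_index = 1600
margin_rate = 0.05       # simplified initial margin from the passage

contract_value = MULTIPLIER * entry_index        # $80,000
initial_margin = margin_rate * contract_value    # $4,000

for exit_index in (1610, 1580):
    pnl = MULTIPLIER * (exit_index - entry_index)
    print(f"Index {entry_index} -> {exit_index}: P/L ${pnl:+,}, "
          f"return on margin {pnl / initial_margin:+.1%}, "
          f"unleveraged {pnl / contract_value:+.2%}")
```

The output matches the passage: +12.5% or -25% on the margin posted, versus +0.625% or -1.25% had the full contract value been paid in cash.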
{
"docid": "32ca0287dec65ed058c50e3065c832de",
"text": "\"Suppose the stock is $41 at expiry. The graph says I will lose money. I think I paid $37.20 for (net debit) at this price. I would make money, not lose. What am I missing? The `net debit' doesn't have anything to do with your P/L graph. Your graph is also showing your profit and loss for NOW and only one expiration. Your trade has two expirations, and I don't know which one that graph is showing. That is the \"\"mystery\"\" behind that graph. Regardless, your PUTs are mitigating your loss as you would expect, if you didn't have the put you would simply lose more money at that particular price range. If you don't like that particular range then you will have to consider a different contract. it was originally a simple covered call, I added a put to protect from stock going lower.. Your strike prices are all over the place and NBIX has a contract at every whole number.... there is nothing simple about this trade. You typically won't find an \"\"always profitable\"\" combination of options. Also, changes in volatility can distort your projects greatly.\"",
"title": ""
},
{
"docid": "6473d727ce6f8ff477b24768d2c05b49",
"text": "\"Option pricing models used by exchanges to calculate settlement prices (premiums) use a volatility measure usually describes as the current actual volatility. This is a historic volatility measure based on standard deviation across a given time period - usually 30 to 90 days. During a trading session, an investor can use the readily available information for a given option to infer the \"\"implied volatility\"\". Presumably you know the option pricing model (Black-Scholes). It is easy to calculate the other variables used in the pricing model - the time value, the strike price, the spot price, the \"\"risk free\"\" interest rate, and anything else I may have forgotten right now. Plug all of these into the model and solve for volatility. This give the \"\"implied volatility\"\", so named because it has been inferred from the current price (bid or offer). Of course, there is no guarantee that the calculated (implied) volatility will match the volatility used by the exchange in their calculation of fair price at settlement on the day (or on the previous day's settlement). Comparing the implied volatility from the previous day's settlement price to the implied volatility of the current price (bid or offer) may give you some measure of the fairness of the quoted price (if there is no perceived change in future volatility). What such a comparison will do is to give you a measure of the degree to which the current market's perception of future volatility has changed over the course of the trading day. So, specific to your question, you do not want to use an annualised measure. The best you can do is compare the implied volatility in the current price to the implied volatility of the previous day's settlement price while at the same time making a subjective judgement about how you see volatility changing in the future and how this has been reflected in the current price.\"",
"title": ""
},
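A minimal sketch of the "plug everything in and solve for volatility" step described above, using a plain bisection on the Black-Scholes call formula. The quote inputs at the bottom are hypothetical, not real market data:

```python
import math

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call with no dividends."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))  # standard normal CDF
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def implied_vol(price, S, K, T, r, lo=1e-4, hi=5.0, tol=1e-8):
    """Bisection: find sigma such that the model price matches the observed price."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if bs_call(S, K, T, r, mid) > price:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

# Hypothetical quote: spot 100, strike 105, 90 days to expiry, 2% rate, call offered at 2.50
iv = implied_vol(price=2.50, S=100, K=105, T=90 / 365, r=0.02)
print(f"Implied volatility: {iv:.2%}")
```

Running this against yesterday's settlement price and today's bid or offer gives the two implied volatilities the answer suggests comparing.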
{
"docid": "8a6e87ece5bda5dbb3720b8f90837b88",
"text": "\"Here is how I would approach that problem: 1) Find the average ratios of the competitors: 2) Find the earnings and book value per share of Hawaiian 3) Multiply the EPB and BVPS by the average ratios. Note that you get two very different numbers. This illustrates why pricing from ratios is inexact. How you use those answers to estimate a \"\"price\"\" is up to you. You can take the higher of the two, the average, the P/E result since you have more data points, or whatever other method you feel you can justify. There is no \"\"right\"\" answer since no one can accurately predict the future price of any stock.\"",
"title": ""
},
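A small sketch of the peer-multiple approach described above; all ratios and per-share figures are placeholders rather than actual Hawaiian or competitor data:

```python
# Hypothetical peer ratios and per-share figures, purely for illustration.
peer_pe = [9.0, 11.5, 10.0]   # competitors' price/earnings ratios
peer_pb = [1.2, 1.6, 1.4]     # competitors' price/book ratios
eps = 2.10                    # target company's earnings per share
bvps = 14.00                  # target company's book value per share

avg_pe = sum(peer_pe) / len(peer_pe)
avg_pb = sum(peer_pb) / len(peer_pb)

price_from_earnings = avg_pe * eps
price_from_book = avg_pb * bvps

print(f"Implied price from average P/E: ${price_from_earnings:.2f}")
print(f"Implied price from average P/B: ${price_from_book:.2f}")
print(f"Midpoint of the two estimates:  ${(price_from_earnings + price_from_book) / 2:.2f}")
```

The two implied prices usually disagree, which is the answer's point about multiples-based pricing being inexact.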
{
"docid": "3f2d02333fab4076506ce124c981619d",
"text": "If you're talking about just Theta, the amount of decay due to the passage of time (all else being equal), then theoretically, the time value is a continuous function, so it would decay throughout the day (although by the day of expiry the time value is very, very small). Which makes sense, since even with 15 minutes to go, there's still a 50/50 shot of an ATM option expiring in-the-money, so there should be some time value associated with that one-sided probability. The further away from ATM the option is, the smaller the time value will be, and will be virtually zero for options that are deep in- or out-of-the-money. If you're talking about total time value, then yes it will definitely change during the day, since the underlying components (volatility, underlying price, etc.) change more or less continuously.",
"title": ""
},
{
"docid": "0bc45136caca7745acf7af3a33fe7e41",
"text": "I don't think you understand options. If it expires, you can't write a new call for the same expiration date as it expired that day. Also what if the stock price decreases further to $40 or even more? If you think the stock will move in either way greatly, and you wish to be profit from it, look into straddles.",
"title": ""
},
{
"docid": "351f89bd9a41b943744b8ce95e967cdb",
"text": "Excellent, very sharp. No it will not be vega neutral exactly! If you think about it, what does a higher vol imply? That the delta of the option is higher than under BS model. Therefore, the vega should also be greater (simplistic explanation but generally accurate). So no, if you trade a 25-delta risky in equal size per leg, the vega will not be neutral. But, in reality, that is a very small portion of your risk. It plays a part, but in general the vanna position dominates by many many multiples. What do you do that you asked such a question, if you don't mind?",
"title": ""
}
] |
fiqa
|
f8e2bb0c96fd1f0e8ffd646374485001
|
What's the best gold investment strategy for a Singapore resident?
|
[
{
"docid": "1ea028386d7b77f54bba0eb3c5e18b8c",
"text": "With gold at US$1300 or so, a gram is about $40. For your purposes, you have the choice between the GLD ETF, which represents a bit less than 1/10oz gold equivalent per share, or the physical metal itself. Either choice has a cost: the commission on the buy plus, eventually, the sale of the gold. There may be ongoing fees as well (fund fees, storage, etc.) GLD trades like a stock and you can enter limit orders or any other type of order the broker accepts.",
"title": ""
}
] |
[
{
"docid": "4de489ebd03b93df065d778a03d65857",
"text": "I don't see any trading activity on rough rice options, so I'll just default to gold. The initial margin on a gold futures contract is $5,940. An option on a gold futures represents 1 contract. The price of an October gold futures call with a strike of $1310 is currently $22.70. Gold spot is currently $1308.20. The October gold futures price is $1307.40. So, yeah, you can buy 1 option to later control 1 futures for $22.70, but the moment you exercise you must have $5,940 in a margin account to actually use the futures contract. You could also sell the option. I don't know how much you're going to enjoy trading options on futures though -- the price of this option just last week ranged from $13.90 to $26, and last month it ranged from $15.40 to $46.90. There's some crazy leverage involved.",
"title": ""
},
{
"docid": "f6b93d56422824ec67ede47fd8faf611",
"text": "Very interesting. I would like to expand beyond just precious metals and stocks, but I am not ready just to jump in just yet (I am a relatively young investor, but have been playing around with stocks for 4 years on and off). The problem I often find is that the stock market is often too overvalued to play Ben Graham type strategy/ PE/B, so I would like to expand my knowledge of investing so I can invest in any market and still find value. After reading Jim Rogers, I was really interested in commodities as an alternative to stocks, but I like to play really conservative (generally). Thank you for your insight. If you don't mind, I would like to add you as a friend, since you seem quite above average in the strategy department.",
"title": ""
},
{
"docid": "25a38b50c7fa018f6d9168ae1325fc2f",
"text": "\"Since you are going to be experiencing a liquidity crisis that even owning physical gold wouldn't solve, may I suggest bitcoins? You will still be liquid and people anywhere will be able to trade it. This is different from precious metals, whereas even if you \"\"invested\"\" in gold you would waste considerable resources on storage, security and actually making it divisible for trade. You would be illiquid. Do note that the bitcoin currency is currently more volatile than a Greek government bond.\"",
"title": ""
},
{
"docid": "96a7f25ee20dc1b974b4c5e296b433dd",
"text": "if you bought gold in late '79, it would have taken 30 years to break even. Of all this time it was two brief periods the returns were great, but long term, not so much. Look at the ETF GLD if you wish to buy gold, and avoid most of the buy/sell spread issues. Edit - I suggest looking at Compound Annual Growth Rate and decide whether long term gold actually makes sense for you as an investor. It's sold with the same enthusiasm as snake oil was in the 1800's, and the suggestion that it's a storehouse of value seems nonsensical to me.",
"title": ""
},
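For the Compound Annual Growth Rate suggestion above, the calculation itself is one line; the prices used below are rough, purely illustrative figures for a metal bought near a cyclical peak and sold 30 years later at roughly the same price:

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate between two prices."""
    return (end_value / start_value) ** (1 / years) - 1

print(f"{cagr(650, 660, 30):.2%}")   # hypothetical prices -> roughly 0.05% per year
```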
{
"docid": "2a4101d422ea1202cbc43ffd2a8abbf0",
"text": "Are you going to South Africa or from? (Looking on your profile for this info.) If you're going to South Africa, you could do worse than to buy five or six one-ounce krugerrands. Maybe wait until next year to buy a few; you may get a slightly better deal. Not only is it gold, it's minted by that country, so it's easier to liquidate should you need to. Plus, they go for a smaller premium in the US than some other forms of gold. As for the rest of the $100k, I don't know ... either park it in CD ladders or put it in something that benefits if the economy gets worse. (Cheery, ain't I? ;) )",
"title": ""
},
{
"docid": "701044a51a7f47011eb598f92c1ca560",
"text": "Gold's valuation is so stratospheric right now that I wonder if negative numbers (as in, you should short it) are acceptable in the short run. In the long run I'd say the answer is zero. The problem with gold is that its only major fundamental value is for making jewelry and the vast majority is just being hoarded in ways that can only be justified by the Greater Fool Theory. In the long run gold shouldn't return more than inflation because a pile of gold creates no new wealth like the capital that stocks are a claim on and doesn't allow others to create new wealth like money lent via bonds. It's also not an important and increasingly scarce resource for wealth creation in the global economy like oil and other more useful commodities are. I've halfway-thought about taking a short position in gold, though I haven't taken any position, short or long, in gold for the following reasons: Straight up short-selling of a gold ETF is too risky for me, given its potential for unlimited losses. Some other short strategy like an inverse ETF or put options is also risky, though less so, and ties up a lot of capital. While I strongly believe such an investment would be profitable, I think the things that will likely rise when the flight-to-safety is over and gold comes back to Earth (mainly stocks, especially in the more beaten-down sectors of the economy) will be equally profitable with less risk than taking one of these positions in gold.",
"title": ""
},
{
"docid": "3bb6573295f5d3d4689845334f1e5589",
"text": "\"Setting a certain % of income for pension actually depends on person. \"\"Always pay yourself first\"\" This is the quote which I love the most and which I am currently following. If you are planning to do 8%, then why don't you stretch a little bit more to 10%. I suggest you to do monthly review. If you can stretch more, increase % a little more by challenging yourself. This is rewarding. For pension plan, there is SRS Supplementary Retirement Plan where foreigners can also set aside of their money. This is long term plan and you can enjoy tax relief too. The catch is you can only withdraw the money when you reach certain age. Otherwise, you have to pay tax again (certain %) once you decide to withdraw. Serveral banks in Singapore offers to open this account. I suggest to compare pro and cons. If you are planning to work in Singapore for quite long, you may wish to consider this. Useful links http://www.mof.gov.sg/MOF-For/Individuals/Supplementary-Retirement-Scheme-SRS https://blog.moneysmart.sg/budgeting/is-the-supplementary-retirement-scheme-a-waste-of-your-time-and-money/\"",
"title": ""
},
{
"docid": "d63aa09ac7937a4d61812e0a102489b3",
"text": "You can make a start to learn how to make better investing decisions by learning and understanding what your current super funds are invested in. Does the super fund give you choices of where you can invest your funds, and how often does it allow you to change your investment choices each year? If you are interested in one area of investing over others, eg property or shares, then you should learn more on this subject, as you can also start investing outside of superannuation. Your funds in superannuation are taxed less but you are unable to touch them for another 30 to 35 years. You also need to consider investing outside super to help meet your more medium term goals and grow your wealth outside of super as well. If you are interested in shares then I believe you should learn about both fundamental and technical analysis, they can help you to make wiser decisions about what to invest in and when to invest. Above is a chart of the ASX200 over the last 20 years until January 2015. It shows the Rate Of Change (ROC) indicator below the chart. This can be used to make medium to long term decisions in the stock market by investing when the ROC is above zero and getting out of the market when the ROC is below zero. Regarding your aggressiveness in your investments, most would say that yes because you are still young you should be aggressive because you have time on your side, so if there is a downturn in your investments then you still have plenty of time for them to recover. I have a different view, and I will use the stock market as an example. Refer back to the chart above, I would be more aggressive when the ROC is above zero and less aggressive when the ROC is below zero. How can you relate this to your super fund? If it does provide you to change your investment choices, then I would be invested in more aggressive investments like shares when the ROC crosses above zero, and then when the ROC moves below zero take a less aggressive approach by moving your investments in the super fund to a more balanced or capital guaranteed strategy where less of your funds are invested in shares and more are invested in bonds and cash. You can also have a similar approach with property. Learn about the property cycles (remember super funds usually invest in commercial and industrial property rather than houses, so you would need to learn about the commercial and industrial property cycles which would be different to the residential property cycle). Regarding your question about SMSFs, if you can increase your knowledge and skills in investing, then yes switching to a SMSF will give you more control and possibly better returns. However, I would avoid switching your funds to a SMSF right now. Two reasons, firstly you would want to increase your knowledge as mentioned above, and secondly you would want to have at least $300,000 in funds before switching to a SMSF or else the setup and compliance costs would be too high as a percentage of your funds at the moment ($70,000). You do have time on your side, so whilst you are increasing your funds you can use that time to educate yourself in your areas of interest. And remember a SMSF is not only an investment vehicle whilst you are building your funds during your working life, but it is also an investment vehicle when you are retired and it becomes totally tax free during this phase, where any investment returns are tax free and any income you take out is also tax free.",
"title": ""
},
{
"docid": "68307d5be9ffcdcde08545453139e73a",
"text": "\"Buying physical gold: bad idea; you take on liquidity risk. Putting all your money in a German bank account: bad idea; you still do not escape Euro risk. Putting all your money in USD: bad idea; we have terrible, terrible fiscal problems here at home and they're invisible right now because we're in an election year. The only artificially \"\"cheap\"\" thing that is well-managed in your part of the world is the Swiss Franc (CHF). They push it down artificially, but no government has the power to fight a market forever. They'll eventually run out of options and have to let the CHF rise in value.\"",
"title": ""
},
{
"docid": "1020c04a207e3f79fa26ae09276bcb99",
"text": "One option is buying physical gold. I don't know about Irish law -- but from an economic standpoint, putting funds in foreign currencies would also be an option. You could look into buying shares in an ETF tracking foreign currency as an alternative to direct money exchange.",
"title": ""
},
{
"docid": "fdc8b26879a2340e97a9b043f7e3f155",
"text": "My personal gold/metals target is 5.0% of my retirement portfolio. Right now I'm underweight because of the run up in gold/metals prices. (I haven't been selling, but as I add to retirement accounts, I haven't been buying gold so it is going below the 5% mark.) I arrived at this number after reading a lot of different sample portfolio allocations, and some books. Some people recommend what I consider crazy allocations: 25-50% in gold. From what I could figure out in terms of modern portfolio theory, holding some metal reduces your overall risk because it generally has a low correlation to equity markets. The problem with gold is that it is a lousy investment. It doesn't produce any income, and only has costs (storage, insurance, commissions to buy/sell, management of ETF if that's what you're using, etc). The only thing going for it is that it can be a hedge during tough times. In this case, when you rebalance, your gold will be high, you'll sell it, and buy the stocks that are down. (In theory -- assuming you stick to disciplined rebalancing.) So for me, 5% seemed to be enough to shave off a little overall risk without wasting too much expense on a hedge. (I don't go over this, and like I said, now I'm underweighted.)",
"title": ""
},
{
"docid": "613f34cb917a8d2321c89092453f5ebe",
"text": "I think your question is very difficult to answer because it involves speculation. I think the best article describing why or why not to invest in gold in a recent Motley Fool Article.",
"title": ""
},
{
"docid": "e7872e2a2885e23482027b15df8710aa",
"text": "Putting the money in a bank savings account is a reasonably safe investment. Anything other than that will come with additional risk of various kinds. (That's right; not even a bank account is completely free of risk. Neither is withdrawing cash and storing it somewhere yourself.) And I don't know which country you are from, but you will certainly have access to your country's government bonds and the likes. You may also have access to mutual funds which invest in other countries' government bonds (bond or money-market funds). The question you need to ask yourself really is twofold. One, for how long do you intend to keep the money invested? (Shorter term investing should involve lower risk.) Two, what amount of risk (specifically, price volatility) are you willing to accept? The answers to those questions will determine which asset class(es) are appropriate in your particular case. Beyond that, you need to make a personal call: which asset class(es) do you believe are likely to do better or less bad than others? Low risk usually comes at the price of a lower return. Higher return usually involves taking more risk (specifically price volatility in the investment vehicle) but more risk does not necessarily guarantee a higher return - you may also lose a large fraction of or even the entire capital amount. In extreme cases (leveraged investments) you might even lose more than the capital amount. Gold may be a component of a well-diversified portfolio but I certainly would not recommend putting all of one's money in it. (The same goes for any asset class; a portfolio composed exclusively of stocks is no more well-diversified than a portfolio composed exclusively of precious metals, or government bonds.) For some specifics about investing in precious metals, you may want to see Pros & cons of investing in gold vs. platinum?.",
"title": ""
},
{
"docid": "eddf10b9b6dae95cbbd0441684ab2b0a",
"text": "Diversification is an important aspect of precious metals investing. Therefore I would suggest diversifying in a number of different ways:",
"title": ""
},
{
"docid": "eb6cf381a81bcc5bf1f0ada803b42b6f",
"text": "Gold and silver are for after the crisis, not during. Gold and silver are far more likely to be able to be exchanged for things you need, since they are rare, easily divided, etc. Getting land away from where the crap is happening is also good, but it's more than that. Say you have land somewhere. How will the locals view you if you move there to hunker down only when things go bad? They won't really trust you, and you'll inherit a new set of problems. Building relationships in an off-the-beaten-path area requires a time investment. Investing in lifestyle in general is good. Lifestyle isn't just toys, but it's privacy, peace of mind, relationships with people with whom you can barter skills, as well as the skills you might think you'd need to do more than just get by in whatever scenario you envision. For the immediate crisis, you'd better have the things you'll need for a few months. Stores probably won't be supplied on any regular basis, and the shelves will be bare. Trying to use gold or silver during the crisis just makes you a target for theft. With regard to food, it's best to get acclimated to a diet of what you'd have on hand. If you get freeze-dried food, eat it now, so that it's not a shock to your system when you have to eat it. (Can you tell I've been thinking about this? :) )",
"title": ""
}
] |
fiqa
|
89067c9fa46483c092d63c49adab6b8c
|
Historical Stock Price Quote on delisted stock without knowing stock symbol as of quote date
|
[
{
"docid": "b0450d67e8cbf88413d3c97a3f56ac2f",
"text": "You need a source of delisted historical data. Such data is typically only available from paid sources. According to my records 20 Feb 2006 was not a trading day - it was Preisdent's Day and the US exchanges were closed. The prior trading date to this was 17 Feb 2006 where the stock had the following data: Open: 14.40 High 14.46 Low 14.16 Close 14.32 Volume 1339800 (consolidated volume) Source: Symbol NVE-201312 within Premium Data US delisted stocks historical data set available from http://www.premiumdata.net/products/premiumdata/ushistorical.php Disclosure: I am a co-owner of Norgate / Premium Data.",
"title": ""
}
] |
[
{
"docid": "a3e2f1b61d32cacf842186f073f09885",
"text": "\"Note that the series you are showing is the historical spot index (what you would pay to be long the index today), not the history of the futures quotes. It's like looking at the current price of a stock or commodity (like oil) versus the futures price. The prompt futures quote will be different that the spot quote. If you graphed the history of the prompt future you might notice the discontinuity more. How do you determine when to roll from one contract to the other? Many data providers will give you a time series for the \"\"prompt\"\" contract history, which will automatically roll to the next expiring contract for you. Some even provide 2nd prompt, etc. time series. If that is not available, you'd have to query multiple futures contracts and interleave them based on the expiry rules, which should be publicly available. Also is there not a price difference from the contract which is expiring and the one that is being rolled forward to? Yes, since the time to delivery is extended by ~30 days when you roll to the next contract. but yet there are no sudden price discontinuities in the charts. Well, there are, but it could be indistinguishable from the normal volatility of the time series.\"",
"title": ""
},
{
"docid": "fba69109c372ce3a7f882968dd7b3e36",
"text": "Note that your link shows the shares as of March 31, 2016 while http://uniselect.com/content/files/Press-release/Press-Release-Q1-2016-Final.pdf notes a 2-for-1 stock split so thus you have to double the shares to get the proper number is what you are missing. The stock split occurred in May and thus is after the deadline that you quoted.",
"title": ""
},
{
"docid": "d6785de13ddb0dbb31dddee8e6ca16c9",
"text": "Reuters has a service you can subscribe to that will give you lots of Financial information that is not readily available in common feeds. One of the things you can find is the listing/delist dates of stocks. There are tools to build custom reports. That would be a report you could write. You can probably get the data for free through their rss feeds and on their website, but the custom reports is a paid feature. FWIW re-listing(listings that have been delisted but return to a status that they can be listed again) is pretty rare. And I can not think of too many(any actually) penny stocks that have grown to be listed on a major exchange.",
"title": ""
},
{
"docid": "dbefd691fe01fda159ed6044bff5b448",
"text": "Here is what I could find on the net: http://education.wallstreetsurvivor.com/options-symbol-changes-coming-february-12th-2010 So it sounds like it does not affect how you invest in options but only how you look them up. I remember using a Bloomberg terminal and it wasn't clear what the expiry date of the option you were looking at was. It looks like the new quote system addresses this. HTH.",
"title": ""
},
{
"docid": "1fe2c6cb65515b9032aed7caae98453f",
"text": "\"This is the same answer as for your other question, but you can easily do this yourself: ( initial adjusted close / final adjusted close ) ^ ( 1 / ( # of years sampled) ) Note: \"\"# of years sampled\"\" can be a fraction, so the one week # of years sampled would be 1/52. Crazy to say, but yahoo finance is better at quick, easy, and free data. Just pick a security, go to historical prices, and use the \"\"adjusted close\"\". money.msn's best at presenting finances quick, easy, and cheap.\"",
"title": ""
},
{
"docid": "fc995ec5e7c0691a5351985999c81cc2",
"text": "For stock splits, let's say stock XYZ closed at 100 on February 5. Then on February 6, it undergoes a 2-for-1 split and closes the day at 51. In Yahoo's historical prices for XYZ, you will see that it closed at 51 on Feb 6, but all of the closing prices for the previous days will be divided by 2. So for Feb 5, it will say the closing price was 50 instead of 100. For dividends, let's say stock ABC closed at 200 on December 18. Then on December 19, the stock increases in price by $2 but it pays out a $1 dividend. In Yahoo's historical prices for XYZ, you will see that it closed at 200 on Dec 18 and 201 on Dec 19. Yahoo adjusts the closing price for Dec 19 to factor in the dividend.",
"title": ""
},
{
"docid": "2227038c0029b9fdd52d89545028260a",
"text": "The last column in the source data is volume (the number of stocks that was exchanged during the day), and it also has a value of zero for that day, meaning that nobody bought or sold the stocks on that day. And since the prices are prices of transactions (the first and the last one on a particular day, and the ones with the highest/lowest price), the prices cannot be established, and are irrelevant as there was not a single transaction on that day. Only the close price is assumed equal to its previous day counterpart because this is the most important value serving as a basis to determine the daily price change (and we assume no change in this case). Continuous-line charts also use this single value. Bar and candle charts usually display a blank space for a day where no trade occurred.",
"title": ""
},
{
"docid": "8ac3f7737b4923500e318bf9888f039a",
"text": "Your assets are marked to market. If you buy at X, and the market is bidding at 99.9% * X then you've already lost 0.1%. This is a market value oriented way of looking at costs. You could always value your assets with mark to model, and maybe you do, but no one else will. Just because you think the stock is worth 2*X doesn't mean the rest of the world agrees, evidenced by the bid. You surely won't get any margin loans based upon mark to model. Your bankers won't be convinced of the valuation of your assets based upon mark to model. By strictly a market value oriented way of valuing assets, there is a bid/ask cost. more clarification Relative to littleadv, this is actually a good exposition between the differences between cash and accrual accounting. littleadv is focusing completely on the cash cost of the asset at the time of transaction and saying that there is no bid/ask cost. Through the lens of cash accounting, that is 100% correct. However, if one uses accrual accounting marking assets to market (as we all do with marketable assets like stocks, bonds, options, etc), there may be a bid/ask cost. At the time of transaction, the bids used to trade (one's own) are exhausted. According to exchange rules that are now practically uniform: the highest bid is given priority, and if two bids are bidding the exact same highest price then the oldest bid is given priority; therefore the oldest highest bid has been exhausted and removed at trade. At the time of transaction, the value of the asset cannot be one's own bid but the highest oldest bid leftover. If that highest oldest bid is lower than the price paid (even with liquid stocks this is usually the case) then one has accrued a bid/ask cost.",
"title": ""
},
{
"docid": "def659aae548de1cffe0daa87eeb0196",
"text": "I believe that it's not possible for the public to know what shares are being exchanged as shorts because broker-dealers (not the exchanges) handle the shorting arrangements. I don't think exchanges can even tell the difference between a person selling a share that belongs to her vs. a share that she's just borrowing. (There are SEC regulations requiring some traders to declare that trades are shorts, but (a) I don't think this applies to all traders, (b) it only applies to the sells, and (c) this information isn't public.) That being said, you can view the short interest in a symbol using any of a number of tools, such as Nasdaq's here. This is often cited as an indicator similar to what you proposed, though I don't know how helpful it would be from an intra-day perspective.",
"title": ""
},
{
"docid": "06fecd6d3adef976fa7230cdaf8d5f75",
"text": "At the higher level - yes. The value of an OTM (out of the money) option is pure time value. It's certainly possible that when the stock price gets close to that strike, the value of that option may very well offer you a chance to sell at a profit. Look at any OTM strike bid/ask and see if you can find the contract low for that option. Most will show that there was an opportunity to buy it lower at some point in the past. Your trade. Ask is meaningless when you own an option. A thinly traded one can be bid $0 /ask $0.50. What is the bid on yours?",
"title": ""
},
{
"docid": "8479415d2f76ac41122f65caeebe24b2",
"text": "Yahoo Finance's Historical Prices section allows you to look up daily historical quotes for any given stock symbol, you don't have to hit a library for this information. Your can choose a desired time frame for your query, and the dataset will include High/Low/Close/Volume numbers. You can then download a CSV version of this report and perform additional analysis in a spreadsheet of your choice. Below is Twitter report from IPO through yesterday: http://finance.yahoo.com/q/hp?s=TWTR&a=10&b=7&c=2013&d=08&e=23&f=2014&g=d",
"title": ""
},
{
"docid": "77309b603ad362f75b20265cadb82d0a",
"text": "\"As JoeTaxpayer says, there's a lot you can do with just the stock price. Exploring that a bit: Stock prices are a combination of market sentiment and company fundamentals. Options are just a layer on top of that. As such, options are mostly formulaic, which is why you have a hard time finding historical option data -- it's just not that \"\"interesting\"\", technically. \"\"Mostly\"\" because there are known issues with the assumptions the Black-Scholes formula makes. It's pretty good, and importantly, the market relies on it to determine fair option pricing. Option prices are determined by: Relationship of stock price to strike. Both distance and \"\"moneyness\"\". Time to expiration. Dividends. Since dividend payments reduce the intrinsic value of a company, the prospect of dividend payments during the life of a call option depresses the price of the option, as all else equal, without the payments, the stock would be more likely to end up in the money. Reverse the logic for puts. Volatility. Interest rates. But this effect is so tiny, it's safe to ignore. #4, Volatility, is the biggie. Everything else is known. That's why option trading is often considered \"\"volatility trading\"\". There are many ways to skin this cat, but the result is that by using quoted historical values for the stock price, and the dividend payments, and if you like, interest rates, you can very closely determine what the price of the option would have been. \"\"Very closely\"\" depending on your volatility assumption. You could calculate then-historical volatility for each time period, by figuring the average price swing (in either direction) for say the past year (year before the date in question, so you'd do this each day, walking forward). Read up on it, and try various volatility approaches, and see if your results are within a reasonable range. Re the Black-Scholes formula, There's a free spreadsheet downloadable from http://optiontradingtips.com. You might find it useful to grab the concept for coding it up yourself. It's VBA, but you can certainly use that info to translate in your language of choice. Or, if you prefer to read Perl, CPAN has a good module, with full source, of course. I find this approach easier than reading a calculus formula, but I'm a better developer than math-geek :)\"",
"title": ""
},
{
"docid": "6f8f4f0e86dfd43dd70b7d48f6ee9d1f",
"text": "A number of places. First, fast and cheap, you can probably get this from EODData.com, as part of a historical index price download -- they have good customer service in my experience and will likely confirm it for you before you buy. Any number of other providers can get it for you too. Likely Capital IQ, Bloomberg, and other professional solutions. I checked a number of free sites, and Market Watch was the only that had a longer history than a few months.",
"title": ""
},
{
"docid": "d304e33e18f5f22766283a4d16a7ca8b",
"text": "http://finance.yahoo.com/q/hp?s=EDV+Historical+Prices shows this which matches Vanguard: Mar 24, 2014 0.769 Dividend Your download link doesn't specify dates which makes me wonder if it is a cumulative distribution or something else as one can wonder how did you ensure that the URL is specifying to list only the most recent distribution and not something else. For example, try this URL which specifies date information in the a,b,c,d,e,f parameters: http://real-chart.finance.yahoo.com/table.csv?s=EDV&a=00&b=29&c=2014&d=05&e=16&f=2014&g=v&ignore=.csv",
"title": ""
},
{
"docid": "29834763126125feae688d2a6584967f",
"text": "Your question is missing information. The most probable reason is that the company made a split or a dividend paid in stock and that you might be confusing your historical price (which is relevant for tax purposes) with your actual market price. It is VERY important to understand this concepts before trading stocks.",
"title": ""
}
] |
fiqa
|
e7473c5cfc875b01eefc66764686ff76
|
How to file tax for the sale of stocks from form 1099B?
|
[
{
"docid": "0dde42cb2eb328499f4a02f6e692de0e",
"text": "You report each position separately. You do this on form 8949. 7 positions is nothing, it will take you 5 minutes. There's a tip on form 8949 that says this, though: For Part I (short term transactions): Note. You may aggregate all short-term transactions reported on Form(s) 1099-B showing basis was reported to the IRS and for which no adjustments or codes are required. Enter the total directly on Schedule D, line 1a; you are not required to report these transactions on Form 8949 (see instructions). For Part II (long term transactions): Note. You may aggregate all long-term transactions reported on Form(s) 1099-B showing basis was reported to the IRS and for which no adjustments or codes are required. Enter the total directly on Schedule D, line 8a; you are not required to report these transactions on Form 8949 (see instructions). If the 1099B in your case shows basis for each transaction as reported to the IRS - you're in luck, and don't have to type them all in separately.",
"title": ""
},
{
"docid": "87f69bd4a84c17b4ecab98edadb49928",
"text": "\"You can group your like-kind (same symbol, ST/LT) stock positions, just be sure that your totals match the total dollar amounts on the 1099. An inconsistency will possibly result in a letter from IRS to clarify. So, if you sold the 100 shares, and they came from 7 different buys, list it once. The sell price and date is known, and for the buy price, add all the buys and put \"\"Various\"\" for the date. If you have both long term and short term groups as part of those 7 buys, split them into two groups and list them separately.\"",
"title": ""
}
] |
[
{
"docid": "57e727fb40b21bd2c80d0ec6311b1577",
"text": "If the $882 is reported on W2 as your income then it is added to your taxable income on W2 and is taxed as salary. Your basis then becomes $5882. If it is not reported on your W2 - you need to add it yourself. Its salary income. If its not properly reported on W2 it may have some issues with FICA, so I suggest talking to your salary department to verify it is. In any case, this is not short term capital gain. Your broker may or may not be aware of the reporting on W2, and if they report the basis as $5000 on your 1099, when you fill your tax form you can add a statement that it is ESPP reported on W2 and change the basis to correct one. H&R Block and TurboTax both support that (you need to chose the correct type of investment there).",
"title": ""
},
{
"docid": "9c11adb5071b17afcac09a15263f2afe",
"text": "I did this for the last tax year so hopefully I can help you. You should get a 1099-B (around the same time you're getting your W-2(s)) from the trustee (whichever company facilitates the ESPP) that has all the information you need to file. You'll fill out a Schedule D and (probably) a Form 8949 to describe the capital gains and/or losses from your sale(s). It's no different than if you had bought and sold stock with any brokerage.",
"title": ""
},
{
"docid": "e65f6a428a57a6e3118afe397365a752",
"text": "There are two parts in this 1042-S form. The income/dividends go into the Canada T5 form. There will be credit if 1042-S has held money already, so use T2209 to report too.",
"title": ""
},
{
"docid": "ec3d14f8d9e15d3aab6f98d3a9cf46fd",
"text": "If you are tax-resident in the US, then you must report income from sources within and without the United States. Your foreign income generally must be reported to the IRS. You will generally be eligible for a credit for foreign income taxes paid, via Form 1116. The question of the stock transfer is more complicated, but revolves around the beneficial owner. If the stocks are yours but held by your brother, it is possible that you are the beneficial owner and you will have to report any income. There is no tax for bringing the money into the US. As a US tax resident, you are already subject to income tax on the gain from the sale in India. However, if the investment is held by a separate entity in India, which is not a US domestic entity or tax resident, then there is a separate analysis. Paying a dividend to you of the sale proceeds (or part of the proceeds) would be taxable. Your sale of the entity containing the investments would be taxable. There are look-through provisions if the entity is insufficiently foreign (de facto US, such as a Subpart-F CFC). There are ways to structure that transaction that are not taxable, such as making it a bona fide loan (which is enforceable and you must pay back on reasonable terms). But if you are holding property directly, not through a foreign separate entity, then the sale triggers US tax; the transfer into the US is not meaningful for your taxes, except for reporting foreign accounts. Please review Publication 519 for general information on taxation of resident aliens.",
"title": ""
},
{
"docid": "4feee62d05283e344f0ef317796f6d4e",
"text": "Starting of 2011, your broker has to keep track of all the transactions and the cost basis, and it will be reported on your 1099-B. Also, some brokers allow downloading the data directly to your tax software or to excel charts (I use E*Trade, and last year TurboTax downloaded all the transaction directly from them).",
"title": ""
},
{
"docid": "90544e3c1e3bf85fdd78b635d8ba2d0f",
"text": "\"the state of New Mexico provides guidance in this exact situation. On page 4: Gross receipts DOES NOT include: Example: When the seller passes tax to the buyer, the seller should separate, or “back out”, that tax from the total income to arrive at \"\"Gross Receipts,\"\" the amount reported in Column D of the CRS-1 Form. (Please see the example on page 48.) and on page 48: How do I separate (“back out”) gross receipts tax from total gross receipts? See the following examples of how to separate the gross receipts tax: 1) To separate (back out) tax from total receipts at the end of the report period, first subtract deductible and exempt receipts, and then divide total receipts including the tax for the report period by one plus the applicable gross receipts tax rate. For example, if your tax rate is 5.5% and your total receipts including tax are $1,055.00 with no deductions or exemptions, divide $1,055.00 by 1.055. The result is your gross receipts excluding tax (to enter in Column D of the CRS-1 Form) or $1,000. 2) If your tax rate is 5.5%, and your total gross receipts including tax are $1,055.00, and included in that figure are $60 in deductions and another $45 in exemptions: a) Subtract $105 (the sum of your deductions and exemptions) from $1,055. The remainder is $950. This figure still includes the tax you have recovered from your buyers. b) Divide $950 by 1.055 (1 plus the 5.5% tax rate). The result is $900.47. c) In Column D enter the sum of $900.47 plus $60 (the amount of deductible receipts)*, or $960.47. This figure is your gross receipts excluding tax.\"",
"title": ""
},
{
"docid": "923403f0704091c3e4cf237f5f4586ce",
"text": "Elaborating on kelsham's answer: You buy 100 shares XYZ at $1, for a total cost of $100 plus commissions. You sell 100 shares XYZ at $2, for a total income of $200 minus commissions. Exclusive of commissions, your capital gain is $100 for this trade, and you will pay taxes on that. Even if you proceed to buy 200 shares XYZ at $1, reinvesting all your income from the sale, you still owe taxes on that $100 gain. The IRS has met this trick before.",
"title": ""
},
{
"docid": "93b6457e8a48c4363e86f317dbc0934e",
"text": "From 26 CFR 1.1012(c)(1)i): ... if a taxpayer sells or transfers shares of stock in a corporation that the taxpayer purchased or acquired on different dates or at different prices and the taxpayer does not adequately identify the lot from which the stock is sold or transferred, the stock sold or transferred is charged against the earliest lot the taxpayer purchased or acquired to determine the basis and holding period of the stock. From 26 CFR 1.1012(c)(3): (i) Where the stock is left in the custody of a broker or other agent, an adequate identification is made if— (a) At the time of the sale or transfer, the taxpayer specifies to such broker or other agent having custody of the stock the particular stock to be sold or transferred, and ... So if you don't specify, the first share bought (for $100) is the one sold, and you have a capital gain of $800. But you can specify to the broker if you would rather sell the stock bought later (and thus have a lower gain). This can either be done for the individual sale (no later than the settlement date of the trade), or via standing order: 26 CFR 1.1012(c)(8) ... A standing order or instruction for the specific identification of stock is treated as an adequate identification made at the time of sale, transfer, delivery, or distribution.",
"title": ""
},
{
"docid": "a673fcb56b419b6a87c7643e71729396",
"text": "You need to report the income from any work as income, regardless of if you invest it, spend it, or put it in your mattress (ignoring tax advantaged accounts like 401ks). You then also need to report any realized gains or losses from non-tax advantaged accounts, as well as any dividends received. Gains and losses are realized when you actually sell, and is the difference between the price you bought for, and the price you sold for. Gains are taxed at the capital gains rate, either short-term or long-term depending on how long you owned the stock. The tax system is complex, and these are just the general rules. There are lots of complications and special situations, some things are different depending on how much you make, etc. The IRS has all of the forms and rules online. You might also consider having a professional do you taxes the first time, just to ensure that they are done correctly. You can then use that as an example in future years.",
"title": ""
},
{
"docid": "54ff023d50700b8483b49872d5648296",
"text": "Fill out the form manually, using last year's return as an example of how to report these gains. Or experiment with one of the low-priced tax programs; I've been told that they are available for as little as $17, and if your alternative is doing it manually, spending a bit of time checking their results isn't a huge problem. Or run the basic TTax, and tell it to add the appropriate forms manually. It supports them, it just doesn't have the interview sections to handle them. (@DanielCarson's answer has more details about that.) Or...",
"title": ""
},
{
"docid": "200fcef0533e0e0a2d7806632fc623de",
"text": "\"For example, if I have an income of $100,000 from my job and I also realize a $350,000 in long-term capital gains from a stock sale, will I pay 20% on the $350K or 15%? You'll pay 20% assuming filing single and no major offsets to taxable income. Capital gains count towards your income for determining tax bracket. They're on line 13 of the 1040 which is in the \"\"income\"\" section and aren't adjusted out/excluded from your taxable income, but since they are taxed at a different rate make sure to follow the instructions for line 44 when calculating your tax due.\"",
"title": ""
},
{
"docid": "8f5439eccba9927dbad2c3edb01e31dd",
"text": "Such activity is normally referred to as bartering income. From the IRS site - You must include in gross income in the year of receipt the fair market value of goods or services received from bartering. Generally, you report this income on Form 1040, Schedule C (PDF), Profit or Loss from Business (Sole Proprietorship), or Form 1040, Schedule C-EZ (PDF), Net Profit from Business (Sole Proprietorship). If you failed to report this income, correct your return by filing a Form 1040X (PDF), Amended U.S. Individual Income Tax Return. Refer to Topic 308 and Amended Returns for information on filing an amended return.",
"title": ""
},
{
"docid": "dc95981f0c9cdf734451c8280615c376",
"text": "The business and investment would be shown on separate parts of the tax return. (An exception to this is where an investment is related and part of your business, such as futures trading on business products) On the business side of it, you would show the transfer to the stocks as a draw from the business, the amount transferred would then be the cost base of the investment. For taxes, you only have to report gains or losses on investments.",
"title": ""
},
{
"docid": "2ed3c177786d18301727f0854afccc2d",
"text": "\"In the USA there are two ways this situation can be treated. First, if your short position was held less than 45 days. You have to (when preparing the taxes) add the amount of dividend back to the purchase price of the stock. That's called adjusting the basis. Example: short at $10, covered at $8, but during this time stock paid a $1 dividend. It is beneficial for you to add that $1 back to $8 so your stock purchase basis is $9 and your profit is also $1. Inside software (depending what you use) there are options to click on \"\"adjust the basis\"\" or if not, than do it manually specifically for those shares and add a note for tax reviewer. Second option is to have that \"\"dividednd payment in lieu paid\"\" deducted as investment expence. But that option is only available if you hold the shorts for more than 45 days and itemize your deductions. Hope that helps!\"",
"title": ""
},
{
"docid": "3700ea152d1680761ab5001bc0390c48",
"text": "Reading IRS Regulations section 15a.453-1(c) more closely, I see that this was a contingent payment sale with a stated maximum selling price. Therefore, at the time of filing prior years, there was no way of knowing the final contingent payment would not be reached and thus the prior years were filed correctly and should not be amended. Those regulations go on to give an example of a sale with a stated maximum selling price where the maximum was not reached due to contingency and states that in such cases: When the maximum [payment] amount is subsequently reduced, the gross profit ratio will be recomputed with respect to payments received in or after the taxable year in which an event requiring reduction occurs. However, in this case, that would result in a negative gross profit ratio on line 19 of form 6252 which Turbo Tax reports should be a non-negative number. Looking further in the regulations, I found an example which relates to bankruptcy and a resulting loss in a subsequent year: For 1992 A will report a loss of $5 million attributable to the sale, taken at the time determined to be appropriate under the rules generally applicable to worthless debts. Therefore, I used a gross profit ratio of zero on line 19 and entered a separate stock sale not reported on a 1099-B as a worthless stock on Form 8949 as a capital loss based upon the remaining basis in the stock sold in an installment sale. I also included an explanatory statement with my return to the IRS stating: In 2008, I entered into an installment sale of stock. The sale was a contingent payment sale with a stated maximum selling price. The sales price did not reach the agreed upon maximum sales price due to some contingencies not being met. According to the IRS Regulations section 15a.453-1(c) my basis in the stock remains at $500 in 2012 after the final payment. Rather than using a negative gross profit ratio on line 19 of form 6252, I'm using a zero ratio and treating the remaining basis as a schedule-D loss similar to worthless stock since the sale is now complete and my remaining basis is no longer recoverable.",
"title": ""
}
] |
fiqa
|
33a9029c4d263e46a6a6c58632defffc
|
Qualified Stock Options purchased through my Roth IRA
|
[
{
"docid": "9230b874441939256ea7912de4cf896b",
"text": "\"No, you cannot. ISO are given to you in your capacity as an employee (that's why it is \"\"qualified\"\"), while your IRA is not an employee. You cannot transfer property to the IRA, so you cannot transfer them to the IRA once you paid for them as well. This is different from non-qualified stock options (discussed in this question), which I believe technically can be granted to IRA. But as Joe suggests in his answer there - there may be self-dealing issues and you better talk to a licensed tax adviser (EA/CPA licensed in your State) if this is something you're considering to do.\"",
"title": ""
}
] |
[
{
"docid": "7656ef45cba6e4625dec01393a52132b",
"text": "My employer matches 1 to 1 up to 6% of pay. They also toss in 3, 4 or 5 percent of your annual salary depending on your age and years of service. The self-directed brokerage account option costs $20 per quarter. That account only allows buying and selling of stock, no short sales and no options. The commissions are $12.99 per trade, plus $0.01 per share over 1000 shares. I feel that's a little high for what I'm getting. I'm considering 401k loans to invest more profitably outside of the 401k, specifically using options. Contrary to what others have said, I feel that limited options trading (the sale cash secured puts and spreads) can be much safer than buying and selling of stock. I have inquired about options trading in this account, since the trustee's system shows options right on the menus, but they are all disabled. I was told that the employer decided against enabling options trading due to the perceived risks.",
"title": ""
},
{
"docid": "2ea4f500a9647f4a7a6c4586c0066f03",
"text": "Vesting As you may know a stock option is the right to acquire a given amount of stock at a given price. Actually acquiring the stock is referred to as exercising the option. Your company is offering you options over 200,000 shares but not all of those options can be exercised immediately. Initially you will only be able to acquire 25,000 shares; the other 175,000 have conditions attached, the condition in this case presumably being that you are still employed by the company at the specified time in the future. When the conditions attached to a stock option are satisfied that option is said to have vested - this simply means that the holder of the option can now exercise that option at any time they choose and thereby acquire the relevant shares. Dividends Arguably the primary purpose of most private companies is to make money for their owners (i.e. the shareholders) by selling goods and/or services at a profit. How does that money actually get to the shareholders? There are a few possible ways of which paying a dividend is one. Periodically (potentially annually but possibly more or less frequently or irregularly) the management of a company may look at how it is doing and decide that it can afford to pay so many cents per share as a dividend. Every shareholder would then receive that number of cents multiplied by the number of shares held. So for example in 4 years or so, after all your stock options have vested and assuming you have exercised them you will own 200,000 shares in your company. If the board declares a dividend of 10 cents per share you would receive $20,000. Depending on where you are and your exact circumstances you may or may not have to pay tax on this. Those are the basic concepts - as you might expect there are all kinds of variations and complications that can occur, but that's hopefully enough to get you started.",
"title": ""
},
{
"docid": "c7efc2dd021ddf9a2a03b9622a11cf2a",
"text": "I have managed two IRA accounts; one I inherited from my wife's 401K and my own's 457B. I managed actively my wife's 401 at Tradestation which doesn't restrict on Options except level 5 as naked puts and calls. I moved half of my 457B funds to TDAmeritrade, the only broker authorized by my employer, to open a Self Directed account. However, my 457 plan disallows me from using a Cash-secured Puts, only Covered Calls. For those who does not know investing, I resent the contention that participants to these IRAs should not be messing around with their IRA funds. For years, I left my 401k/457B funds with my current fund custodian, Great West Financial. I checked it's current values once or twice a year. These last years, the market dived in the last 2 quarters of 2015 and another dive early January and February of 2016. I lost a total of $40K leaving my portfolio with my current custodian choosing all 30 products they offer, 90% of them are ETFs and the rest are bonds. If you don't know investing, better leave it with the pros - right? But no one can predict the future of the market. Even the pros are at the mercy of the market. So, I you know how to invest and choose your stocks, I don't think your plan administrator has to limit you on how you manage your funds. For example, if you are not allowed to place a Cash-Secured Puts and you just Buy the stocks or EFT at market or even limit order, you buy the securities at their market value. If you sell a Cash-secured puts against the stocks/ETF you are interested in buying, you will receive a credit in fraction of a dollar in a specific time frame. In average, your cost to owning a stock/ETF is lesser if you buy it at market or even a limit order. Most of the participants of the IRA funds rely too much on their portfolio manager because they don't know how to manage. If you try to educate yourself at a minimum, you will have a good understanding of how your IRA funds are tied up to the market. If you know how to trade in bear market compared to bull market, then you are good at managing your investments. When I started contributing to my employer's deferred comp account (457B) as a public employee, I have no idea of how my portfolio works. Year after year as I looked at my investment, I was happy because it continued to grow. Without scrutinizing how much it grew yearly, and my regular payroll contribution, I am happy even it only grew 2% per year. And at this age that I am ready to retire at 60, I started taking investment classes and attended pre-retirement seminars. Then I knew that it was not totally a good decision to leave your retirement funds in the hands of the portfolio manager since they don't really care if it tanked out on some years as long at overall it grew to a meager 1%-4% because they managers are pretty conservative on picking the equities they invest. You can generalize that maybe 90% of IRA investors don't know about investing and have poor decision making actions which securities/ETF to buy and hold. For those who would like to remain as one, that is fine. But for those who spent time and money to study and know how to invest, I don't think the plan manager can limit the participants ability to manage their own portfolio especially if the funds have no matching from the employer like mine. All I can say to all who have IRA or any retirement accounts, educate yourself early because if you leave it all to your portfolio managers, you lost a lot. 
Don't believe much in what those commercial fund managers also show in their presentation just to move your funds for them to manage. Be proactive. If you start learning how to invest now when you are young, JUST DO IT!",
"title": ""
},
{
"docid": "0135bf2ab914c53905961d531f2b4ae1",
"text": "My understanding was that if they cash out they only have to pay capital gains tax on it, which is lower than income tax for their bracket. You also have to think about tax on dividends from these stock options, which is only 15%, which is paltry to regular incometax rate that the rich pay on their salaries. According to Wikipedia: Congress passed the Jobs and Growth Tax Relief Reconciliation Act of 2003 (JGTRRA), which included some of the cuts Bush requested and which he signed into law on May 28, 2003. Under the new law, qualified dividends are taxed at the same rate as long-term capital gains, which is 15 percent for most individual taxpayers Anyways, SOMETHING needs to be done.",
"title": ""
},
{
"docid": "3d1e1dcc1720a7572a82eaa13e92c8cb",
"text": "\"Your employer can require a W8-BEN or W-9 if you are a contractor, and in some special cases. I believe this bank managing your stock options can as well; it's to prove you don't have \"\"foreign status\"\". See the IRS's W-9 instructions for details.\"",
"title": ""
},
{
"docid": "20f01969fc7c5ecc435420d3f8a15930",
"text": "This is not right. Inferring the employee stock pool’s takeaway is not as easy as just taking a fraction of the purchase price. As an example, that wouldn’t account for any preferred returns of other ownership classes, among other things. All considered though, it’s reasonable to assume that the employee stock pool will get some premium. Best of luck.",
"title": ""
},
{
"docid": "0ccf4fabeb824d7b3def25056a99e2f2",
"text": "You also need to remember that stock options usually become valueless if not exercised while an employee of the company. So if there is any chance that you will leave the company before an IPO, the effective value of the stock options is zero. That is the safest and least risky valuation of the stock options. With a Google or Facebook, stock options can be exercised and immediately sold, as they are publicly traded. In fact, they may give stock grants where you sell part of the grant to pay tax withholding. You can then sell the remainder of the grant for money at any time, even after you leave the company. You only need the option/grant to vest to take advantage of it. Valuing these at face value (current stock price) makes sense. That's at least a reasonable guess of future value. If you are absolutely sure that you will stay with the company until the IPO, then valuing the stock based on earnings can make sense. A ten million dollar profit can justify a hundred million dollar IPO market capitalization easily. Divide that by the number of shares outstanding and multiply by how many you get. If anything, that gives you a conservative estimate. I would still favor the big company offers though. As I said, they are immediately tradeable while this offer is effectively contingent on the IPO. If you leave before then, you get nothing. If they delay the IPO, you're stuck. You can't leave the company until then without sacrificing that portion of your compensation. That seems a big commitment to make.",
"title": ""
},
{
"docid": "2095856000a43ba310d2ac61948c6cb0",
"text": "Stuff I wish I had known, based on having done the following: Obtained employment at a startup that grants Incentive Stock Options (ISOs); Early-exercised a portion of my options when fair market value was very close to my strike price to minimize AMT; made a section 83b) election and paid my AMT up front for that tax year. All this (the exercise and the AMT) was done out of pocket. I've never see EquityZen or Equidate mention anything about loans for your exercise. My understanding is they help you sell your shares once you actually own them. Stayed at said startup long enough to have my exercised portion of these ISOs vest and count as long term capital gains; Tried to sell them on both EquityZen and Equidate with no success, due to not meeting their transaction minimums. Initial contact with EquityZen was very friendly and helpful, and I even got a notice about a potential sale, but then they hired an intern to answer emails and I remember his responses being particularly dismissive, as if I was wasting their time by trying to sell such a small amount of stock. So that didn't go anywhere. Equidate was a little more friendly and was open to the option of pooling shares with other employees to make a sale in order to meet their minimum, but that never happened either. My advice, if you're thinking about exercising and you're worried about liquidity on the secondary markets, would be to find out what the minimums would be for your specific company on these platforms before you plunk any cash down. Eventually brought my request for liquidity back to the company who helped connect me with an interested external buyer, and we completed the transaction that way. As for employer approval - there's really no reason or basis that your company wouldn't allow it (if you paid to exercise then the shares are yours to sell, though the company may have a right of first refusal). It's not really in the company's best interest to have their shares be illiquid on the secondary markets, since that sends a bad signal to potential investors and future employees.",
"title": ""
},
{
"docid": "7912721aeec16df874e5977ea2a9eaa0",
"text": "Here's an article on it that might help: http://thefinancebuff.com/restricted-stock-units-rsu-sales-and.html One of the tricky things is that you probably have the value of the vested shares and withheld taxes already on your W-2. This confuses everyone including the IRS (they sent me one of those audits-by-mail one year, where the issue was they wanted to double-count stock compensation that was on both 1099-B and W-2; a quick letter explaining this and they were happy). The general idea is that when you first irrevocably own the stock (it vests) then that's income, because you're receiving something of value. So this goes on a W-2 and is taxed as income, not capital gains. Conceptually you've just spent however many dollars in income to buy stock, so that's your basis on the stock. For tax paid, if your employer withheld taxes, it should be included in your W-2. In that case you would not separately list it elsewhere.",
"title": ""
},
{
"docid": "d7b4e34b04275f2d36fcb863c7e5b369",
"text": "Stock options represent an option to buy a share at a given price. What you have been offered is the option to buy the company share at a given price ($5) starting a given date (your golden handcuffs aka vesting schedule). If the company's value doubles in 1 year and the shares are liquid (i.e. you can sell them) then you've just made $125k of profit. If the company's value has gone to zero in 1 year then you've lost nothing other than your hopes of getting rich. As others have mentioned, the mechanics of exercising the option and selling the shares can typically be accomplished without any cash involved. The broker will do both in a single transaction and use the proceeds of the sale to pay the cost of buying the shares. You should always at least cover the taxable portion of the transaction and typically the broker will withhold that tax anyways. Otherwise you could find yourself in a position where you have actually lost money due to tax being owed while the shares decline in value below that tax. You don't have to worry about that right now. Again as people have mentioned options will typically expire 10 years from vesting or 90 days from leaving your employment with the company. I'm sure there are some variations on the theme. Make sure you ask and all this should be part of some written contract. I'm sure you can ask to see it if you wish. Also typical is that stock option grants have to be approved by the board which is normally a technicality. Some general advice:",
"title": ""
},
{
"docid": "875dad8f95dc8fdbe28434eb61a793ed",
"text": "You only got 75 shares, so your basis is the fair market value of the stock as of the grant date times the number of shares you got: $20*75. Functionally, it's the same thing as if your employer did this: As such, the basis in that stock is $1,500 ($20*75). The other 25 shares aren't yours and weren't ever yours, so they aren't part of your basis (for net issuance; if they were sell to cover, then the end result would be pretty similar, but there'd be another transaction involved, but we won't go there). To put it another way, suppose your employer paid you a $2000 bonus, leaving you with a $1500 check after tax withholding. Being a prudent person and not wishing to blow your bonus on luxury goods, you invest that $1500 in a well-researched investment. You wouldn't doubt that your cost basis in that investment at $1500.",
"title": ""
},
{
"docid": "52ac5428aefb5e55a7576108668702e0",
"text": "Back in the late 80's I had a co-worked do exactly this. In those days you could only do things quarterly: change the percentage, change the investment mix, make a withdrawal.. There were no Roth 401K accounts, but contributions could be pre-tax or post-tax. Long term employees were matched 100% up to 8%, newer employees were only matched 50% up to 8% (resulting in 4% match). Every quarter this employee put in 8%, and then pulled out the previous quarters contribution. The company match continued to grow. Was it smart? He still ended up with 8% going into the 401K. In those pre-Enron days the law allowed companies to limit the company match to 100% company stock which meant that employees retirement was at risk. Of course by the early 2000's the stock that was purchased for $6 a share was worth $80 a share... Now what about the IRS: Since I make designated Roth contributions from after-tax income, can I make tax-free withdrawals from my designated Roth account at any time? No, the same restrictions on withdrawals that apply to pre-tax elective contributions also apply to designated Roth contributions. If your plan permits distributions from accounts because of hardship, you may choose to receive a hardship distribution from your designated Roth account. The hardship distribution will consist of a pro-rata share of earnings and basis and the earnings portion will be included in gross income unless you have had the designated Roth account for 5 years and are either disabled or over age 59 ½. Regarding getting just contributions: What happens if I take a distribution from my designated Roth account before the end of the 5-taxable-year period? If you take a distribution from your designated Roth account before the end of the 5-taxable-year period, it is a nonqualified distribution. You must include the earnings portion of the nonqualified distribution in gross income. However, the basis (or contributions) portion of the nonqualified distribution is not included in gross income. The basis portion of the distribution is determined by multiplying the amount of the nonqualified distribution by the ratio of designated Roth contributions to the total designated Roth account balance. For example, if a nonqualified distribution of $5,000 is made from your designated Roth account when the account consists of $9,400 of designated Roth contributions and $600 of earnings, the distribution consists of $4,700 of designated Roth contributions (that are not includible in your gross income) and $300 of earnings (that are includible in your gross income). See Q&As regarding Rollovers of Designated Roth Contributions, for additional rules for rolling over both qualified and nonqualified distributions from designated Roth accounts.",
"title": ""
},
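The pro-rata arithmetic in the IRS example quoted above is easy to check. Below is a minimal sketch in Python; the function name is mine and the figures are simply the ones from the quoted example, so treat it as an illustration rather than tax advice.

```python
# Pro-rata split of a nonqualified Roth 401(k) distribution, as described in the
# IRS text above: basis and earnings come out in the same ratio as the account.

def split_nonqualified_distribution(distribution, contributions, earnings):
    balance = contributions + earnings
    basis_portion = distribution * contributions / balance    # not included in gross income
    earnings_portion = distribution * earnings / balance      # included in gross income
    return basis_portion, earnings_portion

basis, taxable = split_nonqualified_distribution(5_000, 9_400, 600)
print(basis, taxable)  # 4700.0 300.0, matching the $4,700 / $300 split in the example
```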
{
"docid": "fa4a0c6adca42d26c09ea9e94ba3ad8f",
"text": "I've been offered a package that includes 100k stock options at 5 dollars a share. They vest over 4 years at 25% a year. Does this mean that at the end of the first year, I'm supposed to pay for 25,000 shares? Wouldn't this cost me 125,000 dollars? I don't have this kind of money. At the end of the first year, you will generally have the option to pay for the shares. Yes, this means you have to use your own money. You generally dont have to buy ANY until the whole option vests, after 4 years in your case, at which point you either buy, or you are considered 'vested' (you have equity in the company without buying) or the option expires worthless, with you losing your window to buy into the company. This gives you plenty of opportunity to evaluate the company's growth prospects and viability over this time. Regarding options expiration the contract can have an arbitrarily long expiration date, like 17 years. You not having the money or not isn't a consideration in this matter. Negotiate a higher salary instead. I've told several companies that I don't want their equity despite my interest in their business model and product. YMMV. Also, options can come with tax consequences, or none at all. its not a raw deal but you need to be able to look at it objectively.",
"title": ""
},
{
"docid": "381ac48cf2db90a9ec2b8b900edf4b5c",
"text": "Your question doesn't make much sense. The exceptions are very specific and are listed on this site (IRS.GOV). I can't see how you can use any of the exceptions regularly while still continuing being employed and contributing. In any case, you pay income tax on any distribution that has not been taxed before (which would be a Roth account or a non-deductible IRA contribution). Including the employer's match. Here's the relevant portion: The following additional exceptions apply only to distributions from a qualified retirement plan other than an IRA:",
"title": ""
},
{
"docid": "0113c8ffa8e8339f5813442eaa943034",
"text": "The estimated cost of $200 sqft of living space is achievable by builders who are following one the their standard plans. They build hundreds of homes each year across the region using those standard plans. They have detailed schedules for constructing those homes, and they know exactly how many 2x4s are need to build house X with options A, F, and P. They buy hundreds of dishwashers and get discounts. That price also includes the cost of the raw land and the required improvements of the property. You need to know the zoning for that land. You need to know what you can build by right, and what you can get exceptions for. You don't want to pay $600 K and then find out you can only build a 1 level house and you can only use 1/4 acre. You would need to start with a design and then have the architect and the builder and a real estate lawyer look over the property. Then they can give you an estimate of what it would cost to put that design on that property. 83k sqft? I mean it can accommodate at least 10 houses. It depends on what is the minimum lot size. If the maximum allowable density is three houses per acre you can get 6 in 2.2 acres, but if the minimum lot size is supposed to be 5 acres, then you will need an exception just to build one house. And exceptions involve paperwork, hearings and lawyers.",
"title": ""
}
] |
fiqa
|
56982cf654202f3f55050a086d069951
|
Quarterly dividends to monthly dividends
|
[
{
"docid": "275df9312e040d3309fae20aff051c75",
"text": "Technically you should take the quarterly dividend yield as a fraction, add one, take the cube root, and subtract one (and then multiple by the stock price, if you want a dollar amount per share rather than a rate). This is to account for the fact that you could have re-invested the monthly dividends and earned dividends on that reinvestment. However, the difference between this and just dividing by three is going to be negligible over the range of dividend rates that are realistically paid out by ordinary stocks.",
"title": ""
}
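A quick numeric check of the cube-root conversion described above; the 1% quarterly yield is an assumed figure for illustration only, and the function name is mine.

```python
# Equivalent monthly dividend rate from a quarterly yield, assuming the monthly
# payments could be reinvested, versus the simple divide-by-three approximation.

def monthly_rate_from_quarterly(quarterly_yield):
    return (1.0 + quarterly_yield) ** (1.0 / 3.0) - 1.0

q = 0.01                                  # assumed 1% quarterly yield
print(monthly_rate_from_quarterly(q))     # ~0.003322 (0.3322% per month)
print(q / 3)                              # 0.003333..., the naive estimate
```

As the answer notes, the two results differ only in the fourth decimal place for realistic dividend yields.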
] |
[
{
"docid": "5c4a4c3fcdc71141911a2575338dd386",
"text": "\"Dividends are paid based on who owns the security on a designated day. If a particular security pays once per year, you hold 364 days and sell on the day before the \"\"critical\"\" day, you get no dividend. This is not special to 401(k) or to DRIP. It's just how the system works. The \"\"critical\"\" day is the day before the posted ex-dividend date for the security. If you own at the end of that day, you get the dividend. If you sell on that day or before, you do not. Your company changing providers is not in itself relevant. The important factor is whether you can still hold your same investments in the new plan. If not, you will not get the dividend on anything that you currently hold but \"\"sell\"\" due to the change in providers. If you can, then you potentially get the dividend so long as there's no glitch in the transition. Incidentally, it works the other way too. You might end up getting a dividend through the new plan for something that you did not hold the full year.\"",
"title": ""
},
{
"docid": "af163056cc5badfd493698d5f2da9724",
"text": "The answer to this question requires looking at the mathematics of the Qualified Dividends and Capital Gains Worksheet (QDCGW). Start with Taxable Income which is the number that appears on Line 43 of Form 1040. This is after the Adjusted Gross Income has been reduced by the Standard Deduction or Itemized Deductions as the case may be, as well as the exemptions claimed. Then, subtract off the Qualified Dividends and the Net Long-Term Capital Gains (reduced by Net Short-Term Capital Losses, if any) to get the non-cap-gains part of the Taxable Income. Assigning somewhat different meanings to the numbers in the OPs' question, let's say that the Taxable Income is $74K of which $10K is Long-Term Capital Gains leaving $64K as the the non-cap-gains taxable income on Line 7 of the QDCGW. Since $64K is smaller than $72.5K (not $73.8K as stated by the OP) and this is a MFJ return, $72.5K - $64K = $8.5K of the long-term capital gains are taxed at 0%. The balance $1.5K is taxed at 15% giving $225 as the tax due on that part. The 64K of non-cap-gains taxable income has a tax of $8711 if I am reading the Tax Tables correctly, and so the total tax due is $8711+225 = $8936. This is as it should be; the non-gains income of $64K was assessed the tax due on it, $8.5K of the cap gains were taxed at 0%, and $1.5K at 15%. There are more complications to be worked out on the QDCGW for high earners who attract the 20% capital gains rate but those are not relevant here.",
"title": ""
},
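The 0%/15% split worked through above can be sketched in a few lines. This is only an illustration of the worksheet logic for the figures quoted in that answer (a married-filing-jointly return with a $72,500 top of the 0% bracket); it is not a general tax calculator.

```python
# Split long-term capital gains between the 0% and 15% rates, following the
# Qualified Dividends and Capital Gains Worksheet logic described above.

def lt_cap_gains_tax(taxable_income, lt_gains, zero_bracket_top=72_500, rate=0.15):
    ordinary = taxable_income - lt_gains                        # non-gain taxable income
    gains_at_zero = max(0.0, min(zero_bracket_top - ordinary, lt_gains))
    gains_at_fifteen = lt_gains - gains_at_zero
    return gains_at_fifteen * rate

print(lt_cap_gains_tax(74_000, 10_000))  # 225.0, the tax on the $1.5K taxed at 15%
```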
{
"docid": "5a471ff2224383dc5a4b1d140d6501ee",
"text": "The methodology for divisor changes is based on splits and composition changes. Dividends are ignored by the index. Side note - this is why, in my opinion, that any discussion of the Dow's change over a long term becomes meaningless. Ignoring even a 2% per year dividend has a significant impact over many decades. The divisor can be found at http://wsj.com/mdc/public/page/2_3022-djiahourly.html",
"title": ""
},
{
"docid": "ffcfbbbf77acfc7817be2bc3cc848775",
"text": "\"EPS is often earnings/diluted shares. That is counting shares as if all convertible securities (employee stock options for example) were converted. Looking at page 3 of Q4 2015 Reissued Earnings Press Release we find both basic ($1.13) and diluted EPS ($1.11). Dividends are not paid on diluted shares, but only actual shares. If we pull put this chart @ Yahoo finance, and hovering our mouse over the blue diamond with a \"\"D\"\", we find that Pfizer paid dividends of $0.28, $0.28, $0.28, $0.30 in 2015. Or $1.14 per share. Very close to the $1.13, non-diluted EPS. A wrinkle is that one can think of the dividend payment as being from last quarter, so the first one in 2015 is from 2014. Leaving us with $0.28, $0.28, $0.30, and unknown. Returning to page three of Q4 2015 Reissued Earnings Press Release, Pfizer last $0.03 per share. So they paid more in dividends that quarter than they made. And from the other view, the $0.30 cents they paid came from the prior quarter, then if they pay Q1 2016 from Q4 2015, then they are paying more in that view also.\"",
"title": ""
},
{
"docid": "0d4101687bba339129bacff76ff10e39",
"text": "Your example isn't consistent: Q1 end market value (EMV) is $15,750, then you take out $2,000 and say your Q2 BMV is $11,750? For the following demo calculations I'll assume you mean your Q2 BMV is $13,750, with quarterly returns as stated: 10%, 5%, 10%. The Q2 EMV is therefore $15,125. True time-weighted return :- http://en.wikipedia.org/wiki/True_time-weighted_rate_of_return The following methods have the advantage of not requiring interim valuations. Money-weighted return :- http://en.wikipedia.org/wiki/Rate_of_return#Internal_rate_of_return Logarithmic return :- http://en.wikipedia.org/wiki/Rate_of_return#Logarithmic_or_continuously_compounded_return Modified Dietz return :- http://en.wikipedia.org/wiki/Modified_Dietz_method Backcalculating the final value (v3) using the calculated returns show the advantage of the money-weighted return over the true time-weighted return.",
"title": ""
},
{
"docid": "e598a5e481f764900e0fa46f0aeed3e1",
"text": "This answer contains three assumptions: New Share Price: Old Share Price * 1.0125 Quarterly Dividend: (New Share Price*0.01) * # of Shares in Previous Quarter Number of Shares: Shares from Previous Quarter + Quarterly Dividend/New Share Price For example, starting from right after Quarter One: New share price: $20 * 1.0125 = 20.25 1000 shares @ $20.25 a share yields $20.25 * 0.01 * 1000 = $202.5 dividend New shares: $202.5/20.25 = 10 shares Quarter Two: New share price: $20.503 1010 shares @ 20.503 yields $20.503*0.01*1010 = $207.082 dividend New shares: $207.082/20.503 = 10.1 shares Repeat over many cycles: 8 Quarters (2 years): 1061.52 shares @ $21.548 a share 20 Quarters (5 years): 1196.15 shares @ $25.012 a share 40 Quarters (10 years): 1459.53 shares @ $32.066 a share Graphically this looks like this: It's late enough someone may want to check my math ;). But I'd also assert that a 5% growth rate and a 4% dividend rate is pretty optimistic.",
"title": ""
},
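The quarter-by-quarter table above comes from a simple loop. Here is a sketch under the same three assumptions (1.25% quarterly price growth, a 1% quarterly dividend, dividends reinvested); the function name and output formatting are mine, and long-run figures depend on exactly when you start counting quarters.

```python
# Simulate dividend reinvestment: the price grows 1.25% each quarter, a 1%
# dividend is paid on the new price, and the dividend buys additional shares.

def simulate(quarters, shares=1000.0, price=20.0,
             price_growth=0.0125, dividend_rate=0.01):
    for _ in range(quarters):
        price *= 1 + price_growth
        dividend = price * dividend_rate * shares
        shares += dividend / price          # reinvest at the new price
    return shares, price

print(simulate(1))    # (1010.0, 20.25), matching the first quarter worked above
print(simulate(40))   # a 10-year run for comparison with the table
```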
{
"docid": "7db730d06199ca78710eb4791cf69fe3",
"text": "Daily > Weekly > Monthly. This statement says that if you use daily returns you will get more noise than if you used weekly or monthly returns. Much of the research performed uses monthly returns, although weekly returns have been used as well. For HFT you would need to detrend the data in order to spot true turning points.",
"title": ""
},
{
"docid": "44e44e38fb9e620d14bf154cfd1786bd",
"text": "Ignoring the wildly unreasonable goal, I'll answer just the Headline question asked. It's possible to choose dividend paying stocks so that you receive a dividend check each month. Dividends are typically paid quarterly, so 3 stocks chosen by quality first, but also for their dividend date will do this. To get $2000/mo or $24,000/yr would only take an investment of $600,000 in stocks that are yielding a 4% dividend.",
"title": ""
},
{
"docid": "4cf53539bda07f5efe80c4aa08b8b8f3",
"text": "The dividend quoted on a site like the one you linked to on Yahoo shows what 1 investor owning 1 share received from the company. It is not adjusted at all for taxes. (Actually some dividend quotes are adjusted but not for taxes... see below.) It is not adjusted because most dividends are taxed as ordinary income. This means different rates for different people, and so for simplicity's sake the quotes just show what an investor would be paid. You're responsible for calculating and paying your own taxes. From the IRS website: Ordinary Dividends Ordinary (taxable) dividends are the most common type of distribution from a corporation or a mutual fund. They are paid out of earnings and profits and are ordinary income to you. This means they are not capital gains. You can assume that any dividend you receive on common or preferred stock is an ordinary dividend unless the paying corporation or mutual fund tells you otherwise. Ordinary dividends will be shown in box 1a of the Form 1099-DIV you receive. Now my disclaimer... what you see on a normal stock quote for dividend in Yahoo or Google Finance is adjusted. (Like here for GE.) Many corporations actually pay out quarterly dividends. So the number shown for a dividend will be the most recent quarterly dividend [times] 4 quarters. To find out what you would receive as an actual payment, you would need to divide GE's current $0.76 dividend by 4 quarters... $0.19. So you would receive that amount for each share of stock you owned in GE.",
"title": ""
},
{
"docid": "aa1f9c1214d7c33fb2a1e73c46fcb482",
"text": "\"You don't. No one uses vanilla double entry accounting software for \"\"Held-For-Trading Security\"\". Your broker or trading software is responsible for providing month-end statement of changes. You use \"\"Mark To Market\"\" valuation at the end of each month. For example, if your cash position is -$5000 and stock position is +$10000, all you do is write-up/down the account value to $5000. There should be no sub-accounts for your \"\"Investment\"\" account in GNUCash. So at the end of the month, there would be the following entries:\"",
"title": ""
},
{
"docid": "289270da721e0e136ede814135c932bf",
"text": "\"Re. question 2 If I buy 20 shares every year, how do I get proper IRR? ... (I would have multiple purchase dates) Use the money-weighted return calculation: http://en.wikipedia.org/wiki/Rate_of_return#Internal_rate_of_return where t is the fraction of the time period and Ct is the cash flow at that time period. For the treatment of dividends, if they are reinvested then there should not be an external cash flow for the dividend. They are included in the final value and the return is termed \"\"total return\"\". If the dividends are taken in cash, the return based on the final value is \"\"net return\"\". The money-weighted return for question 2, with reinvested dividends, can be found by solving for r, the rate for the whole 431 day period, in the NPV summation. Now annualising And in Excel\"",
"title": ""
},
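For readers who prefer code to Excel, "solving for r in the NPV summation" can be done with a small root-finder. This is a generic sketch of the money-weighted return, not a reproduction of the 431-day example above; the cash-flow amounts and timing below are hypothetical.

```python
# Money-weighted (internal) rate of return: find the rate r at which the
# discounted external cash flows plus the discounted final value sum to zero.
# Each flow is (t, amount) with t the fraction of the whole period elapsed;
# purchases are negative from the investor's point of view, and the final
# value is taken to sit at t = 1.0 (the end of the period).

def npv(rate, flows, final_value):
    return (sum(amount / (1 + rate) ** t for t, amount in flows)
            + final_value / (1 + rate))

def money_weighted_return(flows, final_value, lo=-0.99, hi=10.0):
    for _ in range(200):                     # bisection: npv falls as the rate rises
        mid = (lo + hi) / 2
        if npv(mid, flows, final_value) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical example: buy at the start and again halfway through the period.
flows = [(0.0, -1000.0), (0.5, -1050.0)]
print(money_weighted_return(flows, final_value=2300.0))  # rate for the whole period
```

Annualising then follows the step the answer describes: raise one plus the period rate to the power 365 divided by the number of days in the period, and subtract one.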
{
"docid": "c8e6b1e733931958f9180e8ad4a2b7d7",
"text": "No, they do not. Stock funds and bonds funds collect income dividends in different ways. Stock funds collect dividends (as well as any capital gains that are realized) from the underlying stocks and incorporates these into the funds’ net asset value, or daily share price. That’s why a stock fund’s share price drops when the fund makes a distribution – the distribution comes out of the fund’s total net assets. With bond funds, the internal accounting is different: Dividends accrue daily, and are then paid out to shareholders every month or quarter. Bond funds collect the income from the underlying bonds and keep it in a separate internal “bucket.” A bond fund calculates a daily accrual rate for the shares outstanding, and shareholders only earn income for the days they actually hold the fund. For example, if you buy a bond fund two days before the fund’s month-end distribution, you would only receive two days’ worth of income that month. On the other hand, if you sell a fund part-way through the month, you will still receive a partial distribution at the end of the month, pro-rated for the days you actually held the fund. Source Also via bogleheads: Most Vanguard bond funds accrue interest to the share holders daily. Here is a typical statement from a prospectus: Each Fund distributes to shareholders virtually all of its net income (interest less expenses) as well as any net capital gains realized from the sale of its holdings. The Fund’s income dividends accrue daily and are distributed monthly. The term accrue used in this sense means that the income dividends are credited to your account each day, just like interest in a savings account that accrues daily. Since the money set aside for your dividends is both an asset of the fund and a liability, it does not affect the calculated net asset value. When the fund distributes the income dividends at the end of the month, the net asset value does not change as both the assets and liabilities decrease by exactly the same amount. [Note that if you sell all of your bond fund shares in the middle of the month, you will receive as proceeds the value of your shares (calculated as number of shares times net asset value) plus a separate distribution of the accrued income dividends.]",
"title": ""
},
{
"docid": "d14fb27da79fc6cbf91391e62d5f4610",
"text": "Ok so I used Excel solver for this but it's on the right track. Latest price = $77.19 Latest div = $1.50 3-yr div growth = 28% g = ??? rs = 14% So we'll grow out the dividend 3 years @ 28%, and then capitalize them into perpetuity using a cap rate of [rs - g], and take the NPV using the rs of 14%. We can set it up and then solve g assuming an NPV of the current share price of $77.19. So it should be: NPV = $77.19 = [$1.50 / (1+0.14)^0 ] + [$1.50 x (1+0.28)^1 / (1+0.14)^1 ] + ... + [$1.50 x (1+0.28)^3 / (1+0.14)^3 ] + [$1.50 x (1+0.28)^3 x (1+g) / (0.14-g) / (1+0.14)^4 ] Which gives an implied g of a little under 9%. Let me know if this makes sense, and definitely check the work...",
"title": ""
},
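If you would rather not use Excel's solver, the same "solve for g" exercise can be scripted. The sketch below prices the dividend stream exactly as the equation above is written (a time-zero dividend, three years of 28% growth, then a growing perpetuity) and bisects on g; since the answer itself says to check the work, treat the numeric output as indicative only.

```python
# Solve for the implied perpetual growth rate g in the multi-stage dividend
# discount setup written out above: NPV(g) should equal the $77.19 share price.

D0, PRICE, RS, HIGH_G, YEARS = 1.50, 77.19, 0.14, 0.28, 3

def npv(g):
    near_term = sum(D0 * (1 + HIGH_G) ** t / (1 + RS) ** t for t in range(YEARS + 1))
    terminal = D0 * (1 + HIGH_G) ** YEARS * (1 + g) / (RS - g)   # perpetuity value at year 3
    return near_term + terminal / (1 + RS) ** (YEARS + 1)

lo, hi = -0.5, RS - 1e-6          # g must stay below rs for the perpetuity to converge
for _ in range(200):
    mid = (lo + hi) / 2
    if npv(mid) < PRICE:          # npv rises with g, so move the lower bound up
        lo = mid
    else:
        hi = mid
print((lo + hi) / 2)              # implied g; sensitive to how the terminal value is set up
```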
{
"docid": "1b6204d3f9eabcbb760debffba4fbe26",
"text": "Why do people talk about stock that pay high dividends? Traditionally people who buy dividend stocks are looking for income from their investments. Most dividend stock companies pay out dividends every quarter ( every 90 days). If set up properly an investor can receive a dividend check every month, every week or as often as they have enough money to stagger the ex-dates. There is a difference in high $$ amount of the dividend and the yield. A $1/share dividend payout may sound good up front, but... how much is that stock costing you? If the stock cost you $100/share, then you are getting 1% yield. If the stock cost you $10/share, you are getting 10% yield. There are a lot of factors that come into play when investing in dividend stocks for cash flow. Keep in mind why are you investing in the first place. Growth or cash flow. Arrange your investing around your major investment goals. Don't chase big dollar dividend checks, do your research and follow a proven investment plan to reach your goals safely.",
"title": ""
},
{
"docid": "137304a6d70a9b27ece9809f15ac64d2",
"text": "I think your math is fine, and also consider insurance costs and the convenience factor of each scenario. Moving a car frequently to avoid parking tickets will become tedious. I'd rather spend an hour renting a car 20 times in a year rather than have to spend 15 minutes moving a car every three days. And if there's no other easy parking, that 15 minutes can take a lot longer. Plus it'll get dirty sitting there, could get vandalized. Yuck. For only 20 days/year, I don't see how owning a car is worth the hassle. I recommend using a credit card that comes with free car rental insurance.",
"title": ""
}
] |
fiqa
|
cc60ff355ecb831584434692ada60a3b
|
I own a mutual fund that owns voting shares, who gets the vote?
|
[
{
"docid": "d74c461745691a73e06d8e065bffe6e0",
"text": "You will not get a vote on any issues of the underlying stock. The mutual fund owner/manager will do the voting. In 2004, the Securities and Exchange Commission (SEC) required that fund companies disclose proxy votes, voting guidelines and conflicts of interest in the voting process. All funds must make these disclosures to the SEC through an N-PX filing, which must either be available to shareholders on the fund company's websites or upon request by telephone. You can also find your fund's N-PX filing on the SEC website. -- http://www.investopedia.com/articles/mutualfund/08/acting-in-interest.asp",
"title": ""
}
] |
[
{
"docid": "2bdde0d4794fe9988782373b8a264726",
"text": "This should all be covered in your stock grant documentation, or the employee stock program of which your grant is a part. Find those docs and it should specify how or when you can sale your shares, and how the money is paid to you. Generally, vested shares are yours until you take action. If instead you have options, then be aware these need to be exercised before they become shares. There is generally a limited time period on how long you can wait to exercise. In the US, 10 years is common. Unvested shares will almost certainly expire upon your departure of the company. Whether your Merrill Lynch account will show this, or show them as never existing, I can't say. But either way, there is nothing you can or should do.",
"title": ""
},
{
"docid": "424e2f75897201bd354f7f3e56b09a66",
"text": "\"Mutual funds invest according to their prospectus. If they declare that they match the investments to a certain index - then that's what they should do. If you don't want to be invested in a company that is part of that index, then don't invest in that fund. Short-selling doesn't \"\"exclude\"\" your investment. You cannot sell your portion of the position in the fund to cover it. Bottom line is that money has no smell. But if you want to avoid investing in a certain company and it is important to you - you should also avoid the funds that invest in it, and companies that own portions of it, and also probably the companies that buy their products or services. Otherwise, its just \"\"nice talk\"\" bigotry.\"",
"title": ""
},
{
"docid": "91c50e774803034969f7d5fb7a32d253",
"text": "\"It is true, as farnsy noted, that you generally do not know when stock that you're holding has been loaned by your broker to someone for a short sale, that you generally consent to that when you sign up somewhere in the small print, and that the person who borrows has to make repay and dividends. The broker is on the hook to make sure that your stock is available for you to sell when you want, so there's limited risk there. There are some risks to having your stock loaned though. The main one is that you don't actually get the dividend. Formally, you get a \"\"Substitute Payment in Lieu of Dividends.\"\" The payment in lieu will be taxed differently. Whereas qualified dividends get reported on Form 1099-DIV and get special tax treatment, substitute payments get reported on Form 1099-MISC. (Box 8 is just for this purpose.) Substitute payments get taxed as regular income, not at the preferred rate for dividends. The broker may or may not give you additional money beyond the dividend to compensate you for the extra tax. Whether or not this tax difference matters, depends on how much you're getting in dividends, your tax bracket, and to some extent your general perspective. If you want to vote your shares and exercise your ownership rights, then there are also some risks. The company only issues ballots for the number of shares issued by them. On the broker's books, however, the short sale may result in more long positions than there are total shares of stock. Financially the \"\"extra\"\" longs are offset by shorts, but for voting this does not balance. (I'm unclear how this is resolved - I've read that the the brokers essentially depend on shareholder apathy, but I'd guess there's more to it than that.) If you want to prevent your broker from loaning out your shares, you have some options:\"",
"title": ""
},
{
"docid": "135120000e9b25f90f97beb69b319bff",
"text": "How to 'use' your shares: If you own common shares in a company (as opposed to a fund) then you have the right (but not the obligation) to excersize one vote per share on questions put before the shareholders. Usually, this occurs once a year. Usually these questions regard approval of auditors. Sometimes they involve officers such as directors on the board. You will be mailed a form to fill out and mail back in. Preferred shares usually are not voting shares,but common shares always are. By the way, I do not recommend owning shares in companies. I recommend funds instead,either ETFs or mutual funds. Owning shares in companies puts you at risk of a failure of that company. Owning funds spreads that risk around,thus reducing your exposure. There are, really, two purposes for owning shares 1) Owning shares gives you the right to declared dividends 2) Owning shares allows you to sell those shares at some time in the future. (Hopefully at a profit) One obscure thing you can do with owned shares is to 'write' (sell) covered put options. But options are not something that you need to concern yourself with at this point. You may find it useful to sign up for a free daily email from www.investorwords.com.",
"title": ""
},
{
"docid": "764546861d56bdb5f695573a8b26477b",
"text": "When you own a share, you also own a vote (in most cases). That vote is your means of controlling the assets and management of the company. If you had enough votes and wanted to trade a share for an iPhone or liquidate the company entirely, you could do it. The only thing that prevents you from doing that is that companies are not set up to handle the transaction that way. Stock holders are usually trying to buy investments, not iPhones. There are companies that have more cash in the bank than the market cap (total value) of their stock. They usually don't remain as public companies for long in that case. An investor or group of investors buy them up and split the cash. If you had enough shares of Apple, you could do that to; or, just trade one for an iPhone.",
"title": ""
},
{
"docid": "a3098a35499b252d57dc59783b87d239",
"text": "If they own enough shares to vote to sell, you will be paid the offer price quoted to you. At that point if you do not wish to sell your only recourse will be to file a lawsuit. This is a common tactic for significant shareholders who have a minority stake and cannot block the sale because they have insufficient voting rights. What usually happens then is that they either settle the lawsuit out of court by paying a little more to the holdouts or the lawsuit is thrown out and they take the original offer from the buyer. Rarely does a lawsuit from a buyout go to trial.",
"title": ""
},
{
"docid": "0b8333e65a4904eda82fab6b725587ca",
"text": "Generally, ETFs and mutual funds don't pay taxes (although there are some cases where they do, and some countries where it is a common case). What happens is, the fund reports the portion of the gain attributed to each investor, and the investor pays the tax. In the US, this is reported to you on 1099-DIV as capital gains distribution, and can be either short term (as in the scenario you described), long term, or a mix of both. It doesn't mean you actually get a distribution, though, but if you don't - it reduces your basis.",
"title": ""
},
{
"docid": "01f1bf7f09638ed1715bea4b8f0846d5",
"text": "I would be nice to live in a world where people voted with their wallets and held businesses managers accountable for their actions. We don't currently live in that world so as long as WF makes money and pays dividends investors are still going to buy their stock.",
"title": ""
},
{
"docid": "ca40f9b445156190dec0799d8d34b5f7",
"text": "\"I always liked the answer that in the short term, the market is a voting machine and in the long term the market is a weighing machine. People can \"\"vote\"\" a stock up or down in the short term. In the long term, typically, the intrinsic value of a company will be reflected in the price. It's a rule of thumb, not perfect, but it is generally true. I think it's from an old investing book that talks about \"\"Mr. Market\"\". Maybe it's from one of Warren Buffet's annual letters. Anyone know? :)\"",
"title": ""
},
{
"docid": "ab9d23b9c64bf48c909c67f1f807bef8",
"text": "\"A mutual fund could make two different kinds of distributions to you: Capital gains: When the fund liquidates positions that it holds, it may realize a gain if it sells the assets for a greater price than the fund purchased them for. As an example, for an index fund, assets may get liquidated if the underlying index changes in composition, thus requiring the manager to sell some stocks and purchase others. Mutual funds are required to distribute most of their income that they generate in this way back to its shareholders; many often do this near the end of the calendar year. When you receive the distribution, the gains will be categorized as either short-term (the asset was held for less than one year) or long-term (vice versa). Based upon the holding period, the gain is taxed differently. Currently in the United States, long-term capital gains are only taxed at 15%, regardless of your income tax bracket (you only pay the capital gains tax, not the income tax). Short-term capital gains are treated as ordinary income, so you will pay your (probably higher) tax rate on any cash that you are given by your mutual fund. You may also be subject to capital gains taxes when you decide to sell your holdings in the fund. Any profit that you made based on the difference between your purchase and sale price is treated as a capital gain. Based upon the period of time that you held the mutual fund shares, it is categorized as a short- or long-term gain and is taxed accordingly in the tax year that you sell the shares. Dividends: Many companies pay dividends to their stockholders as a way of returning a portion of their profits to their collective owners. When you invest in a mutual fund that owns dividend-paying stocks, the fund is the \"\"owner\"\" that receives the dividend payments. As with capital gains, mutual funds will redistribute these dividends to you periodically, often quarterly or annually. The main difference with dividends is that they are always taxed as ordinary income, no matter how long you (or the fund) have held the asset. I'm not aware of Texas state tax laws, so I can't comment on your other question.\"",
"title": ""
},
{
"docid": "25ecfa8f3c795681212ee83de19234fc",
"text": "Private investors as mutual funds are a minority of the market. Institutional investors make up a substantial portion of the long term holdings. These include pension funds, insurance companies, and even corporations managing their money, as well as individuals rich enough to actively manage their own investments. From Business Insider, with some aggregation: Numbers don't add to 100% because of rounding. Also, I pulled insurance out of household because it's not household managed. Another source is the Tax Policy Center, which shows that about 50% of corporate stock is owned by individuals (25%) and individually managed retirement accounts (25%). Another issue is that household can be a bit confusing. While some of these may be people choosing stocks and investing their money, this also includes Employee Stock Ownership Plans (ESOP) and company founders. For example, Jeff Bezos owns about 17% of Amazon.com according to Wikipedia. That would show up under household even though that is not an investment account. Jeff Bezos is not going to sell his company and buy equity in an index fund. Anyway, the most generous description puts individuals as controlling about half of all stocks. Even if they switched all of that to index funds, the other half of stocks are still owned by others. In particular, about 26% is owned by institutional investors that actively manage their portfolios. In addition, day traders buy and sell stocks on a daily basis, not appearing in these numbers. Both active institutional investors and day traders would hop on misvalued stocks, either shorting the overvalued or buying the undervalued. It doesn't take that much of the market to control prices, so long as it is the active trading market. The passive market doesn't make frequent trades. They usually only need to buy or sell as money is invested or withdrawn. So while they dominate the ownership stake numbers, they are much lower on the trading volume numbers. TL;DR: there is more than enough active investment by organizations or individuals who would not switch to index funds to offset those that do. Unless that changes, this is not a big issue.",
"title": ""
},
{
"docid": "86065a94b974b282b797961feefbdebc",
"text": "Vanguard (and probably other mutual fund brokers as well) offers easy-to-read performance charts that show the total change in value of a $10K investment over time. This includes the fair market value of the fund plus any distributions (i.e. dividends) paid out. On Vanguard's site they also make a point to show the impact of fees in the chart, since their low fees are their big selling point. Some reasons why a dividend is preferable to selling shares: no loss of voting power, no transaction costs, dividends may have better tax consequences for you than capital gains. NOTE: If your fund is underperforming the benchmark, it is not due to the payment of dividends. Funds do not pay their own dividends; they only forward to shareholders the dividends paid out by the companies in which they invest. So the fair market value of the fund should always reflect the fair market value of the companies it holds, and those companies' shares are the ones that are fluctuating when they pay dividends. If your fund is underperforming its benchmark, then that is either because it is not tracking the benchmark closely enough or because it is charging high fees. The fact that the underperformance you're seeing appears to be in the amount of dividends paid is a coincidence. Check out this example Vanguard performance chart for an S&P500 index fund. Notice how if you add the S&P500 index benchmark to the plot you can't even see the difference between the two -- the fund is designed to track the benchmark exactly. So when IBM (or whoever) pays out a dividend, the index goes down in value and the fund goes down in value.",
"title": ""
},
{
"docid": "71cc4c1825d9e3abe96891c2fe6102df",
"text": "\"Excellent observation! The short answer is that you don't own the firm, you own the right to your share of the profits (or losses) for the period that you worked there. Technically you also have the right to vote to sell or disband the company (known as demutualization). The workers at Equal Exchange voted in a clause to our bylaws to prevent this--basically a \"\"poison pill.\"\" It says that if we ever sold the company we have to pay off any debts, return any investments (at the price paid), and give away any remaining assets to another company dedicated to Fair Trade. The effect is that there is no incentive for us to sell the company, so we don't worry about all the kinds of things you would if you were focused on an \"\"exit strategy.\"\" But in this sense, \"\"ownership\"\" is even more compromised, right? Back to your question, I think the answer is \"\"It depends on what you mean by ownership.\"\" It is certainly not ownership in the conventional sense. I think of it more like a trusteeship. We are stewards of the enterprise while we have the benefits given to active workers, but we have a responsibility not just to maximize our own well-being, but that of the other stakeholders (our suppliers, consumers, investors, our communities, the environment, etc), including the people who worked there before (and left part of the profits in the company as retained earnings) and those that will come after us.\"",
"title": ""
},
{
"docid": "f824112e5846e465882fb442b9ec6dd2",
"text": "\"As an exercise, I want to give this a shot. I'm not involved in a firm that cares about liquidity so all this stuff is outside my purview. As I understand it, it goes something like this: buy side fund puts an order to the market as a whole (all or most possibly exchanges). HFTs see that order hit the first exchange but have connectivity to exchanges further down the pipe that is faster than the buy side fund. They immediately send their own order in, which reaches exchanges and executes before the buy side fund's order can. They immediately put up an ask, and buy side fund's order hits that ask and is filled (I guess I'm assuming the order was a market order from the beginning). This is in effect the HFT front running the buy side fund. Is this accurate? Even if true, whether I have a genuine issue with this... I'm not sure. Has anyone on the \"\"pro-HFT\"\" side written a solid rebuttal to Lewis and Katsuyama that has solid research behind it?\"",
"title": ""
},
{
"docid": "6ee2225d5933fd06bf0dedbffb1a6fcf",
"text": "I'm a bot, *bleep*, *bloop*. Someone has linked to this thread from another place on reddit: - [/r/talkbusiness] [Which mutual fund have you invested in?](https://np.reddit.com/r/talkbusiness/comments/780emx/which_mutual_fund_have_you_invested_in/) [](#footer)*^(If you follow any of the above links, please respect the rules of reddit and don't vote in the other threads.) ^\\([Info](/r/TotesMessenger) ^/ ^[Contact](/message/compose?to=/r/TotesMessenger))* [](#bot)",
"title": ""
}
] |
fiqa
|
3e8b22d0bbafe1e78cdea617e2e34988
|
ESPP advantages and disadvantages
|
[
{
"docid": "bc5d03f4ae31e5978697ba056decdfcc",
"text": "The typical deal is you can put 10% of your gross pay into the ESPP. The purchase will occur on the last deposit date, usually a 6 month period, at a 15% discount to the market price. So, the math is something like this: Your return if sold the day it's purchased is not 15%, it's 100/85 or 17.6%. Minor nitpick on my part, I suppose. Also the return is not a 6 month return, as the weekly or bi-weekly deductions are the average between the oldest (6 mo) and the most recent (uh, zero time, maybe a week.) This is closer to 3 months. The annualized rate is actually pretty meaningless since you don't have 4 opportunities to achieve this return, it's important only if the cash flow hit causes you to borrow to support the ESPP purchases. The risk is whether the stock drops the 15% before you can execute the sell to take advantage of the gain. Of course the return is gross, you need to net for taxes. Edit to respond to comment below - When I said meaningless, I meant that you can't take the 17.6%, annualize it to 91.2% per year and think your $1000 will compound to $1912. It's as meaningless as when an investor gets a 10% gain on a stock in one day, and (with 250 trading days per year) decides his $1000 will be worth $2 quadrillion dollars after a year. The 17.6% is significant in that it's available twice per year, for a true 38% return over a year, but if borrowing to help the cash flow, that rate is really over 3 months.",
"title": ""
},
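The discount arithmetic in the answer above, as a tiny sketch (illustrative only):

```python
# Return from buying at a 15% ESPP discount and selling at market:
# you pay 85 cents on the dollar, so the gross gain is 100/85 - 1.

discount = 0.15
cost_fraction = 1 - discount
gross_return = 1 / cost_fraction - 1
print(f"{gross_return:.1%}")      # 17.6%, the figure quoted above

# Two offering periods per year at ~17.6% each is roughly the 38% the answer
# mentions, before taxes and before any move in the price between purchase and sale.
```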
{
"docid": "599b71b9f8923614d6c6d5673b90bddb",
"text": "It would be difficult to answer without knowing specifics about a particular offer. In certain cases, it's definitely great and one could become a millionaire [Google for example]. In other cases one could lose money. In most cases one makes a decent return. As the specifics are not available, in general look out for: Most of these would determine if the plan is good for you to get into.",
"title": ""
},
{
"docid": "196928bfe685a39adecb60dcd4ad2cd5",
"text": "Advantage: more money. The financial tradeoff is usually to your benefit: Given these, for having your money locked up for the average length of the vesting periods (some is locked up for 3 months, some is locked up for nearly 0), you get a 10% return. Overall, it's like a 1.5% bonus for the year, assuming you were to sell everything right away. Of course, whether or not you wish to keep the stock depends on how you value MSFT as an investment. The disadvantage lies in a couple parts:",
"title": ""
},
{
"docid": "97bee22e50c5e9e4c608cbaf1cf7febf",
"text": "You should always always enroll in an espp if there is no lockup period and you can finance the contributions at a non-onerous rate. You should also always always sell it right away regardless of your feelings for the company. If you feel you must hold company stock to be a good employee buy some in your 401k which has additional advantages for company stock. (Gains treated as gains and not income on distribution.) If you can't contribute at first, do as much as you can and use your results from the previous offering period to finance a greater contribution the next period. I slowly went from 4% to 10% over 6 offering periods at my plan. The actual apr on a 15% discount plan is ~90% if you are able to sell right when the shares are priced. (Usually not the case, but the risk is small, there usually is a day or two administrative lockup (getting the shares into your account)) even for ESPP's that have no official lockup period. see here for details on the calculation. http://blog.adamnash.com/2006/11/22/your-employee-stock-purchase-plan-espp-is-worth-a-lot-more-than-15/ Just a note For your reference I worked for Motorola for 10 years. A stock that fell pretty dramatically over those 10 years and I always made money on the ESPP and more than once doubled my money. One additional note....Be aware of tax treatment on espp. Specifically be aware that plans generally withhold income tax on gains over the purchase price automatically. I didn't realize this for a couple of years and double taxed myself on those gains. Fortunately I found out my error in time to refile and get the money back, but it was a headache.",
"title": ""
},
{
"docid": "4b818b8a764737d6e5436931b43c3be9",
"text": "The answer is simple. If your employer is offering you a discount, that is free money. You always take free money, always.",
"title": ""
}
] |
[
{
"docid": "8cb3cc79ade469823657cee0a47b0478",
"text": "I have used TurboTax successfully for a couple of years. In addition to things already mentioned, it has some forums where you can get some simple questions answered (with complex ones it's always better to consult the professional) and it can import some data from your salary provider if you're lucky (some companies are supported, some aren't) - then you save time on filling out W2s, and can allow you to track your donations with sister site ItsDeductible.com, compare data with last year, etc. Not sure how desktop software compares. So far I didn't see any downsides except for, of course, the fact that your information is available online. But in our times most companies offer online access to earnings statements, etc., anyway, and so far the weakest link for the financial information has proven to be retailers, not tax preparers.",
"title": ""
},
{
"docid": "65df9092082134e7c1aca2e76080ff15",
"text": "Disadvantages: Advantages: In my opinion, the convenience and price (free!) of online options make doing your taxes online worth the negligible risks.",
"title": ""
},
{
"docid": "82399605716aff3c59f1db5614b26de3",
"text": "I would base my decision off of regulatory climate and look primarily into eastern Europe to tackle the higher growth climate; let's say Estonia or Lithuania. Estonia ranks better in surveys tracking hours senior managers spend dealing with regulatory issues. Lithuania seems to have the edge in terms of just getting the business started, land purchased and enforcing contracts and cross border trade. They probably have better demographics in terms of workforce. Links: http://www.doingbusiness.org/rankings http://www.nationsencyclopedia.com/WorldStats/ESI-senior-management-time-regulation.html",
"title": ""
},
{
"docid": "0e0a17f4cb11fdeada4c57156bbd9bc1",
"text": "No, there is no real advantage. The discrepancies in how they track the index will (generally) be so small that this provides very, very limited diversification, while increasing the complexity of your investments.",
"title": ""
},
{
"docid": "c091e3281e221f90416b841dccd337be",
"text": "Ok maybe I should have went into further detail but I'm not interested in a single point estimate to compare the different options. I want to look at the comparable NPVs for the two different options for a range of exit points (sell property / exit lease and sell equity shares). I want to graph the present values of each (y-axis being the PVs and x-axis being the exit date) and look at the 'cross-over' point where one option becomes better than the other (i'm taking into account all of the up front costs of the real estate purchase which will be a bit different in the first years). i'm also looking to do the same for multiple real estate and equity scenarios, in all likelihood generate a distribution of cross-over points. this is all theoretical, i'm not really going to take the results to heart. merely an exercise and i'm tangling with the discount rates at the moment.",
"title": ""
},
{
"docid": "6522950c19c9bdd002c6744ecb57c923",
"text": "Gold since the ancient time ( at least when it was founded) has kept its value. for example the french franc currency was considered valuable in the years 1400~ but in 1641 lost its value. However who owned Gold back then still got value. The advantage of having gold is you can convert it to cash easily in the world. it hedges against inflation: it is value rise when inflation happend. Gold has no income,no earnings. its not like a stock or a bond. its an alternative way to store value the Disadvantages of investing in Gold Gold doesnt return income , needs physical storage and insurance, Capital gains tax rates are higher on most gold investments. the best way to invest gold when there is inflation is expected. source",
"title": ""
},
{
"docid": "2f23b324328a3959962de22867d43218",
"text": "\"Like many things, there are pros and cons to using credit cards. The other folks on here have discussed the pros and length, so I'll just quickly summarize: Convenience of not having to carry cash. Delay paying your bills for a month with no penalty. Build your credit rating for a time when you need a big loan, like buying a house or starting a business. Provide easy access to credit for emergencies or special situations. Many credit cards provide \"\"rewards\"\" of various sorts that can effectively reduce the cost of what you buy. Protection against fraud. Extended warranty, often up to one year Damage warranty, covering breakage that might be explicitly excluded from normal warranty. But there are also disadvantages: One of the advantages of credit cards -- easy access to credit -- can also be a disadvantage. If you pay with cash, then when you run out of cash, you are forced to stop buying. But when you pay with credit, you can fall into the trap of buying things that you can't afford. You tell yourself that you'll pay for it when you get that next paycheck, but by the time the paycheck arrives, you have bought more things that you can't afford. Then you have to start paying interest on your credit card purchases, so now you have less money left over to pay off the bills. Many, many people have gotten into a death spiral where they keep piling up credit card debt until they are barely able to pay the interest every month, never mind pay off the original bill. And yes, it's easy to say, \"\"Credit cards are great as long as you use them responsibly.\"\" That may well be true. But some people have great difficulty being responsible about it. If you find that having a credit card in your pocket leads you to just not worry about how much you buy or what it costs, because, hey, you'll just put it on the credit card, then you will likely end up in serious trouble. If, on the other hand, you are just as careful about what you buy whether you are paying cash or using credit, and you never put more on the credit card than you can pay off in full when the bill arrives, then you should be fine.\"",
"title": ""
},
{
"docid": "3ab2573cad4bde03574e290f5e8ed6ac",
"text": "\"I think this is a good question with no single right answer. For a conservative investor, possible responses to low rates would be: Probably the best response is somewhere in the middle: consider riskier investments for a part of your portfolio, but still hold on to some cash, and in any case do not expect great results in a bad economy. For a more detailed analysis, let's consider the three main asset classes of cash, bonds, and stocks, and how they might preform in a low-interest-rate environment. (By \"\"stocks\"\" I really mean mutual funds that invest in a diversified mixture of stocks, rather than individual stocks, which would be even riskier. You can use mutual funds for bonds too, although diversification is not important for government bonds.) Cash. Advantages: Safe in the short term. Available on short notice for emergencies. Disadvantages: Low returns, and possibly inflation (although you retain the flexibility to move to other investments if inflation increases.) Bonds. Advantages: Somewhat higher returns than cash. Disadvantages: Returns are still rather low, and more vulnerable to inflation. Also the market price will drop temporarily if rates rise. Stocks. Advantages: Better at preserving your purchasing power against inflation in the long term (20 years or more, say.) Returns are likely to be higher than stocks or bonds on average. Disadvantages: Price can fluctuate a lot in the short-to-medium term. Also, expected returns are still less than they would be in better economic times. Although the low rates may change the question a little, the most important thing for an investor is still to be familiar with these basic asset classes. Note that the best risk-adjusted reward might be attained by some mixture of the three.\"",
"title": ""
},
{
"docid": "40d3eb1c81f085cd157f373631b1f4c2",
"text": "\"The major pros tend to be: The major cons tend to be: Being in California, you've got state income tax to worry about as well. It might be worth using some of that extra cash to hire someone who knows what they're doing to handle your taxes the first year, at least. I've always maxed mine out, because it's always seemed like a solid way to make a few extra dollars. If you can live without the money in your regular paycheck, it's always seemed that the rewards outweighed the risks. I've also always immediately sold the stock, since I usually feel like being employed at the company is enough \"\"eggs in that basket\"\" without holding investments in the same company. (NB: I've participated in several of these ESPP programs at large international US-based software companies, so this is from my personal experience. You should carefully review the terms of your ESPP before signing up, and I'm a software engineer and not a financial advisor.)\"",
"title": ""
},
{
"docid": "82563d9338f0325f339f1d01260121ea",
"text": "There's no best strategy. Options are just pieces of paper, and if the stock price goes below the strike price - they're worthless. Stocks are actual ownership share, whatever the price is - that's what they're worth. So unless you expect the company stock prices to sky-rocket soon, RSU will probably provide better value. You need to do some math and decide whether in your opinion the stock growth in the next few years justifies betting on ESOP. You didn't say what country you're from, but keep in mind that stock options and RSUs are taxed differently and that can affect your end result as well.",
"title": ""
},
{
"docid": "eb88706a12514094ba86384c8658df76",
"text": "Since you work there, you may have some home bias. You should treat that as any other stock. I sell my ESPP stocks periodically to reduce the over allocation of my portfolio while I keep my ESOP for longer periods.",
"title": ""
},
{
"docid": "f82af4d38eca444773bd68289feb1710",
"text": "I think people in general tend to unnecessarily over-complicate this issue. Here's what I think you should do in any situation like this: First and foremost, put all tax considerations aside and decide whether it makes sense to sell the stock now or hold on to it for the long term based on its merits as an investment. Tax considerations have absolutely nothing to do with whether the stock is a good investment. If you consider all non-tax factors and decide to hold on to it for the long term, then you can use the tax considerations as a very minor input to how long you should hold it - in other words, don't set your time horizon to 17.5 months if waiting another 2 weeks gives you better tax treatment. You're going to pay taxes on your gains no matter what. The only difference is whether you pay capital gains tax or income tax. Granted, the income tax rate is higher, but wouldn't it suck if you pay a LOT less tax only because you have a LOT less value in your stock? So to answer your question - I would say, absolutely not, tax consequences do not make it worthwhile to hold on to your ESPP shares. If you decide to hold on to your ESPP for other reasons (and they better be good ones to put that much free profit at risk), only then should you look at the tax consequences to help fine-tune your strategy.",
"title": ""
},
{
"docid": "370a026942c01c105a8f898c44d99b69",
"text": "The main advantage and disadvantage I can see in a scenario like this are - how savvy and good an investor are you? It's a good way to create below-market average returns if you're not that good at investing and returns way above market average if you are...",
"title": ""
},
{
"docid": "e0e1da3c3c3547ae5780093afe39e3fb",
"text": "Without commenting on your view of the TV market: Let's have a look at the main ways to get negative exposure: 1.Short the stocks Pros: Relatively Easy Cons: Interest rate, costs of shorting, linear bet 2.Options a. Write Calls b. Buy puts Pros: Convexity, leveraged, relatively cheap Cons: Zero Sum bet that expires with time, theta 3.Short Stock, Buy Puts, Write Calls Short X Units of each stock, Write calls on them , use call premiums to finance puts. Pros: 3x the power!, high kickout Cons: Unlimited pain",
"title": ""
},
{
"docid": "463d5ca31f9aa13617f4369749831f69",
"text": "No it's not, not until a disposition. Keep track of the CAD value on the day you receive the inheritance and get an average cost. Then every time you go to the US and spend some money, record the CAD value on the day you spend it. The difference is your profit or loss. There is no capital gain as long as you don't spend it. Now this may seem ridiculous, especially since none of this is reported to the CRA. They realize this and say the first $200 profit or loss is not taxable.",
"title": ""
}
] |
fiqa
|
dd6189ca8c6b851a9599db55b7e44e86
|
Where can I get AEX historical data - Amsterdam?
|
[
{
"docid": "684939ebba51de25344e1ff641d21134",
"text": "\"Try the general stock exchange web page. http://www.aex.nl I did a quick trial myself and was able to download historical data for the AEX index for the last few years. To get to the data, I went to the menu point \"\"Koersen\"\" on the main page and chose \"\"Indices\"\". I then entered into the sub page for the AEX index. There is a price chart window in which you have to choose the tab \"\"view data\"\". Now you can choose the date range you need and then download in a table format such as excel or csv. This should be easy to import into any software. This is the direct link to the sub page: http://www.aex.nl/nl/products/indices/NL0000000107-XAMS/quotes\"",
"title": ""
}
] |
[
{
"docid": "60d7316d8c2a91632dccee51d2cf1ca5",
"text": "Buy Data products from NSE. You will get historical order book. The Live order book may not be available. https://www.nseindia.com/supra_global/content/dotex/data_products.htm This link has all the data products that NSE can provide",
"title": ""
},
{
"docid": "8479415d2f76ac41122f65caeebe24b2",
"text": "Yahoo Finance's Historical Prices section allows you to look up daily historical quotes for any given stock symbol, you don't have to hit a library for this information. Your can choose a desired time frame for your query, and the dataset will include High/Low/Close/Volume numbers. You can then download a CSV version of this report and perform additional analysis in a spreadsheet of your choice. Below is Twitter report from IPO through yesterday: http://finance.yahoo.com/q/hp?s=TWTR&a=10&b=7&c=2013&d=08&e=23&f=2014&g=d",
"title": ""
},
{
"docid": "de1433f15a5657ab6d10c2427bdd38b9",
"text": "As @littleadv and @DumbCoder point out in their comments above, Bloomberg Terminal is expensive for individual investors. If you are looking for a free solution I would recommend Yahoo and Google Finance. On the other side, if you need more financial metrics regarding historic statements and consensus estimates, you should look at the iPad solution from Worldcap, which is not free, but significantly cheaper then Bloomberg and Reuters. Disclosure: I am affiliated with WorldCap.",
"title": ""
},
{
"docid": "b8f00666597667cba3f609b5c26ee232",
"text": "Some countries in European Union are starting to implement credit history sharing, for example now history from polish bureau BIK and German Schufa are mutually available. Similar agreements are planned between polish BIK and bureaus in the Netherlands and United Kingdom.",
"title": ""
},
{
"docid": "914a8d1f0698c2ba87071f40992cf1cb",
"text": "Well your gripe is using historic data to estimate VAR. That is separate topic. Either way however something that happens twice a century cant be considered an outlier and if you choose to use historic data then such things need to be included.",
"title": ""
},
{
"docid": "47e01f887e2e09330e8d0a228ce71e54",
"text": "You need a source of delisted historical data. Such data is typically only available from paid sources. According to my records, AULT (Ault Inc) began as an OTC stock in the 1980s prior it having an official NASDAQ listing. It was delisted on 27 Jan 2006. Its final traded price was $2.94. It was taken over at a price of $2.90 per share by SL Industries. Source: Symbol AULT-200601 within Premium Data US delisted stocks historical price data set available from http://www.premiumdata.net/products/premiumdata/ushistorical.php Disclosure: I am a co-owner of Norgate / Premium Data.",
"title": ""
},
{
"docid": "5596b89a7503739bfe1ed3ba97b4b993",
"text": "Robert Shiller has an on-line page with links to download some historical data that may be what you want here. Center for the Research in Security Prices would be my suggestion for another resource here.",
"title": ""
},
{
"docid": "e77cd1d257a008d29e784d3e629b0e6a",
"text": "Trading data can be had cheaply from: http://eoddata.com/products/historicaldata.aspx The SEC will give you machine readable financial statements for American companies for free, but that only goes back 3 or 4 years. Beyond that, you will have to pay for a rather expensive service like CapitalIQ or CRSP or whatever. Note that you will need considerable programming knowledge to pull this off.",
"title": ""
},
{
"docid": "76e622fc225406dbd70fb144752364dc",
"text": "\"You could use any of various financial APIs (e.g., Yahoo finance) to get prices of some reference stock and bond index funds. That would be a reasonable approximation to market performance over a given time span. As for inflation data, just googling \"\"monthly inflation data\"\" gave me two pages with numbers that seem to agree and go back to 1914. If you want to double-check their numbers you could go to the source at the BLS. As for whether any existing analysis exists, I'm not sure exactly what you mean. I don't think you need to do much analysis to show that stock returns are different over different time periods.\"",
"title": ""
},
{
"docid": "61de25b75f779fd3addc7f1515b344a4",
"text": "\"Though you're looking to repeat this review with multiple securities and events at different times, I've taken liberty in assuming you are not looking to conduct backtests with hundreds of events. I've answered below assuming it's an ad hoc review for a single event pertaining to one security. Had the event occurred more recently, your full-service broker could often get it for you for free. Even some discount brokers will offer it so. If the stock and its options were actively traded, you can request \"\"time and sales,\"\" or \"\"TNS,\"\" data for the dates you have in mind. If not active, then request \"\"time and quotes,\"\" or \"\"TNQ\"\" data. If the event happened long ago, as seems to be the case, then your choices become much more limited and possibly costly. Below are some suggestions: Wall Street Journal and Investors' Business Daily print copies have daily stock options trading data. They are best for trading data on actively traded options. Since the event sounds like it was a major one for the company, it may have been actively traded that day and hence reported in the papers' listings. Some of the print pages have been digitized; otherwise you'll need to review the archived printed copies. Bloomberg has these data and access to them will depend on whether the account you use has that particular subscription. I've used it to get detailed equity trading data on defunct and delisted companies on specific dates and times and for and futures trading data. If you don't have personal access to Bloomberg, as many do not, you can try to request access from a public, commercial or business school library. The stock options exchanges sell their data; some strictly to resellers and others to anyone willing to pay. If you know which exchange(s) the options traded on, you can contact the exchange's market data services department and request TNS and / or TNQ data and a list of resellers, as the resellers may be cheaper for single queries.\"",
"title": ""
},
{
"docid": "e05dcedf1a1bea716785027fabcee543",
"text": "\"Considering the fact that you are so unaware of how to find such data, I find it very very hard to believe that you actually need it. \"\"All trade and finance data for as much tickers and markets as possible.\"\" Wtf does that even mean. You could be referencing thousands of different types of data for any given \"\"ticker\"\" with a statement so vague. What are you looking for?\"",
"title": ""
},
{
"docid": "85297a8d9bd54e5aa6f686aafb566160",
"text": "\"You can find gold historical prices on the kitco site. See the \"\"View Data\"\" button.\"",
"title": ""
},
{
"docid": "40307df9c54994ab683105fdb81fdd78",
"text": "Seair Exim is the best portal for looking Tramadol Import Data. Find more details of Tramadol shipment data to India with price, date, HS codes, major Indian ports, countries, importers, buyers in India, quantity and more is also mentioned on the website.",
"title": ""
},
{
"docid": "7eb31c0f654543057ea12f777a712330",
"text": "At indexmundi, they have some historical data which you can grab from their charts: It only has a price on a monthly basis (at least for the 25 year chart). It has a number of things, like barley, oranges, crude oil, aluminum, beef, etc. I grabbed the data for 25 years of banana prices and here's an excerpt (in dollars per metric ton): That page did not appear to have historical prices for gold, though.",
"title": ""
},
{
"docid": "2649f29b989d8e7f895fca5b3d7d7194",
"text": "\"At the bottom of Yahoo! Finance's S & P 500 quote Quotes are real-time for NASDAQ, NYSE, and NYSE MKT. See also delay times for other exchanges. All information provided \"\"as is\"\" for informational purposes only, not intended for trading purposes or advice. Neither Yahoo! nor any of independent providers is liable for any informational errors, incompleteness, or delays, or for any actions taken in reliance on information contained herein. By accessing the Yahoo! site, you agree not to redistribute the information found therein. Fundamental company data provided by Capital IQ. Historical chart data and daily updates provided by Commodity Systems, Inc. (CSI). International historical chart data, daily updates, fund summary, fund performance, dividend data and Morningstar Index data provided by Morningstar, Inc. Orderbook quotes are provided by BATS Exchange. US Financials data provided by Edgar Online and all other Financials provided by Capital IQ. International historical chart data, daily updates, fundAnalyst estimates data provided by Thomson Financial Network. All data povided by Thomson Financial Network is based solely upon research information provided by third party analysts. Yahoo! has not reviewed, and in no way endorses the validity of such data. Yahoo! and ThomsonFN shall not be liable for any actions taken in reliance thereon. Thus, yes there is a DB being accessed that there is likely an agreement between Yahoo! and the providers.\"",
"title": ""
}
] |
fiqa
|
650451df3819a8b771b2a2ab57126bfe
|
When to start investing in an index fund? Wait for a bear market, use dollar cost-averaging, or another approach?
|
[
{
"docid": "a0b6e828cc624c4765047924ac4790ed",
"text": "\"First: what's your risk tolerance? How long is your investment going to last? If it's a short-term investment (a few years) and you expect to break even (or better) then your risk tolerance is low. You should not invest much money in stocks, even index funds and \"\"defensive\"\" stocks. If, however, you're looking for a long-term investment which you will put money into continually over the next 30 years, the amount of stock you purchase at any given time is pretty small, so the money you might lose by timing the market wrong will also be rather small. Also, you probably do a remarkably poor job of knowing when to buy stocks. If you actually knew how to time the market to materially improve your risk-adjusted returns, you've missed your calling; you should be making six figures or more on Wall Street. :)\"",
"title": ""
},
{
"docid": "66918556280be716c310c89ae0a9a672",
"text": "The fact that you are choosing index fund means you are surely not one of those investors who can correctly judge dips. But buying on dips is still important. You can use a method called Dollar Value Averaging. It is better than Dollar Cost Averaging. Just make sure you apply a lower limit and an upper limit to be more predictable. Suppose you have 10000 to invest. Use limits like minimum 200 investment when index is high, maximum 600 investment when index is down and when index gives normal returns, invest 400. Do this for about 2 years. More than 2 years is not recommended. I myself use this method and benefit a lot.",
"title": ""
}
] |
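As a rough illustration of the capped "Dollar Value Averaging" idea described in the second passage above: the 200/600/400 limits come from that passage, while the monthly growth target and the index levels below are invented placeholders. This is only a sketch of the mechanic, not a recommendation.

```python
# Dollar value averaging with caps: aim for the portfolio value to grow by a fixed
# step each month, invest the shortfall, but never less than `low` or more than `high`.
def monthly_contribution(target_value, current_value, low=200.0, high=600.0):
    shortfall = target_value - current_value
    return min(max(shortfall, low), high)

prices = [100, 104, 97, 92, 101, 108]      # hypothetical index levels
units, target, step = 0.0, 0.0, 400.0

for p in prices:
    value = units * p
    target += step                          # the value path we want to be on
    invest = monthly_contribution(target, value)
    units += invest / p
    print(f"price={p:>4} invest={invest:>6.0f} units={units:.2f}")
```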
[
{
"docid": "481cb70786dba38a6d4b93b240f19a87",
"text": "If you're worried about volatility, and you're in mostly long positions, you should be looking to diversify your portfolio (meaning, buying some stocks that will do better in a bear market) if it's not already diverse, but you shouldn't be looking to abandon your positions, unless you anticipate a short-term need for cash. Other than that, you may want to hold off on the short-term positions for a while if you're concerned about volatility, though many traders see volatility as a great time to make money (as there is more movement, there's more opportunity to make money from mispriced stocks in both directions). Unless you think the market will be permanently down due to these reasons, anyway, but I don't see any reason to believe that yet. Even World War Two wasn't enough to permanently hurt the market, after all! Remember that everyone in the market knows what you do. If there were a sure thing that the market was going to crash, it already would have. Conservative positions tend to involve holding onto a well diversified portfolio rather than simply holding onto cash, unless the investor is very conservative (in which case the portfolio should be cash anyway). The fact that you say this is your rainy day fund does make me a little curious, though; typically rainy day funds are better in cash (and not invested) since you might hit that rainy day and need cash quickly (in which case you could take significant losses if the time isn't right).",
"title": ""
},
{
"docid": "4a19eb29e6bbded4886ff2d5b424e236",
"text": "\"I have been considering a similar situation for a while now, and the advice i have been given is to use a concept called \"\"dollar cost averaging\"\", which basically amounts to investing say 10% a month over 10 months, resulting in your investment getting the average price over that period. So basically, option 3.\"",
"title": ""
},
{
"docid": "dd01dc792e5e107c7aa7065b5a85f17e",
"text": "I would read any and all of the John Bogle books. Essentially: We know the market will rise and fall. We just don't know when specifically. For the most part it is impossible to time the market. He would advocate an asset allocation approach to investing. So much to bonds, tbills, S&P500 index, NASDAQ index. In your case you could start out with 10% of your portfolio each in S&P500 and NASDAQ. Had you done that, you would have achieved growth of 17% and 27% respectively. The growth on either one of those funds would have probably dwarfed the growth on the entire rest of your portfolio. BTW 2013 and 2014 were also very good years, with 2015 being mostly flat. In the past you have avoided risk in the market to achieve the detrimental effects of inflation and stagnant money. Don't make the same mistakes going forward.",
"title": ""
},
{
"docid": "589e8e9ab52c413eb5b16076903fd7a3",
"text": "The optimal time period is unambiguously zero seconds. Put it all in immediately. Dollar cost averaging reduces the risk that you will be buying at a bad time (no one knows whether now is a bad or great time), but brings with it reduction in expected return because you will be keeping a lot of money in cash for a long time. You are reducing your risk and your expected return by dollar cost averaging. It's not crazy to trade expected returns for lower risk. People do it all the time. However, if you have a pot of money you intend to invest and you do so over a period of time, then you are changing your risk profile over time in a way that doesn't correspond to changes in your risk preferences. This is contrary to finance theory and is not optimal. The optimal percentage of your wealth invested in risky assets is proportional to your tolerance for risk and should not change over time unless that tolerance changes. Dollar cost averaging makes sense if you are setting aside some of your income each month to invest. In that case it is simply a way of being invested for as long as possible. Having a pile of money sitting around while you invest it little by little over time is a misuse of dollar-cost averaging. Bottom line: forcing dollar cost averaging on a pile of money you intend to invest is not based in sound finance theory. If you want to invest all that money, do so now. If you are too risk averse to put it all in, then decide how much you will invest, invest that much now, and keep the rest in a savings account indefinitely. Don't change your investment allocation proportion unless your risk aversion changes. There are many people on the internet and elsewhere who preach the gospel of dollar cost averaging, but their belief in it is not based on sound principles. It's just a dogma. The language of your question implies that you may be interested in sound principles, so I have given you the real answer.",
"title": ""
},
{
"docid": "6ae1356d942a1f11b3d2191aadab1c0b",
"text": "Placing bets on targeted sectors of the market totally makes sense in my opinion. Especially if you've done research, with a non-biased eye, that convinces you those sectors will continue to outperform. However, the funds you've boxed in red all appear to be actively managed funds (I only double-checked on the first.) There is a bit of research showing that very few active managers consistently beat an index over the long term. By buying these funds, especially since you hope to hold for decades, you are placing bets that these managers maintain their edge over an equivalent index. This seems unlikely to be a winning bet the longer you hold the position. Perhaps there are no sector index funds for the sectors or focuses you have? But if there were, and it was my money that I planned to park for the long term, I'd pick the index fund over the active managed fund. Index funds also have an advantage in costs or fees. They can charge substantially less than an actively managed fund does. And fees can be a big drag on total return.",
"title": ""
},
{
"docid": "972cada0712bdb15c5249e2fca6cd7a2",
"text": "Disclosure - I love Jack Bogle. Jack basically invented the index fund, and as a result, let the common investor have an opportunity to choose a long term return of (S&P-.05%) instead of losing nearly 2% that many funds in that day charged. The use of index investing has saved investors many billions of dollars. The 1% round trip, total cost to buy/sell, was common. Fees for trading have since dropped. I happen to use Schwab who charges $9 for a trade. On $100,000, this is not .5% ($500) but less than .01%. I think it's safe to say that billion dollar mutual funds are paying even less for trades that I do. I believe Jack's example here is a combination of old data and hyperbole. The cost is not so much for the trades, per se, but for the people managing the fund. An index fund has a manager of course, but it's pretty much run by a computer.",
"title": ""
},
{
"docid": "9ba51d2d9ec2c4cf2b1e53d4321ceaf5",
"text": "\"Funds - especially index funds - are a safe way for beginning investors to get a diversified investment across a lot of the stock market. They are not the perfect investment, but they are better than the majority of mutual funds, and you do not spend a lot of money in fees. Compared to the alternative - buying individual stocks based on what a friend tells you or buying a \"\"hot\"\" mutual fund - it's a great choice for a lot of people. If you are willing to do some study, you can do better - quite a bit better - with common stocks. As an individual investor, you have some structural advantages; you can take significant (to you) positions in small-cap companies, while this is not practical for large institutional investors or mutual fund managers. However, you can also lose a lot of money quickly in individual stocks. It pays to go slow and to your homework, however, and make sure that you are investing, not speculating. I like fool.com as a good place to start, and subscribe to a couple of their newsletters. I will note that investing is not for the faint of heart; to do well, you may need to do the opposite of what everybody else is doing; buying when the market is down and selling when the market is high. A few people mentioned the efficient market hypothesis. There is ample evidence that the market is not efficient; the existence of the .com and mortgage bubbles makes it pretty obvious that the market is often not rationally valued, and a couple of hedge funds profited in the billions from this.\"",
"title": ""
},
{
"docid": "afdd5a936be2a9b0e538321fa88b1cd4",
"text": "There are multiple ETFs which inversely track the common indices, though many of these are leveraged. For example, SDS tracks approximately -200% of the S&P 500. (Note: due to how these are structured, they are only suitable for very short term investments) You can also consider using Put options for the various indices as well. For example, you could buy a Put for the SPY out a year or so to give you some fairly cheap insurance (assuming it's a small part of your portfolio). One other option is to invest against the market volatility. As the market makes sudden swings, the volatility goes up; this tends to be true more when it falls than when it rises. One way of invesing in market volatility is to trade options against the VIX.",
"title": ""
},
{
"docid": "364ef9c8cb65d47d63f4f94816cb29d7",
"text": "There are a number of scholarly articles on the subject including a number at the end of the Vanguard article you reference. However, unfortunately like much of financial research you can't look at the articles without paying quite a bit. It is not easy to make a generic comparison between lump-sum and dollar cost averaging because there are many ways to do dollar cost averaging. How long do you average over? Do you evenly average or exponentially put the money to work? The easiest way to think about this problem though is does the extra compounding from investing more of the money immediately outweigh the chance that you may have invested all the money when the market is overvalued. Since the market is usually near the correct value investing in lump sum will usually win out as the Vanguard article suggests. As a side note, while using DCA on a large one time sum of money is generally not optimal, if you have a consistent salary DCA by frequently investing a portion of your salary has been frequently shown to be a very good idea of long periods over saving up a bunch of money and investing it all at once. In this case you get the compounding advantage of investing early and you avoid investing a large chunk of money when the market is overvalued.",
"title": ""
},
{
"docid": "0e8fefe281a9f811bfd8f1f21c19ed49",
"text": "If you define dollar cost cost averaging as investing a specific dollar amount over a certain fixed time frame then it does not work statistically better than any other strategy for getting that money in the market. (IE Aunt Ruth wants to invest $60,000 in the stock market and does it $5000 a month for a year.) It will work better on some markets and worse on others, but on average it won't be any better. Dollar cost averaging of this form is effectively a bet that gains will occur at the end of the time period rather than the beginning, sometimes this bet will pay off, other times it won't. A regular investment contribution of what you can afford over an indefinite time period (IE 401k contribution) is NOT Dollar Cost Averaging but it is an effective investment strategy.",
"title": ""
},
{
"docid": "663374eb1366efd15357a239d1becb56",
"text": "Thanks for the advice. I will look into index funds. The only reason I was interested in this stock in particular is that I used to work for the company, and always kept an eye on the stock price. I saw that their stock prices recently went down by quite a bit but I feel like I've seen this happen to them a few times over the past few years and I think they have a strong catalogue of products coming out soon that will cause their stock to rise over the next few years. After not being able to really understand the steps needed to purchase it though, I think I've learned that I really don't know enough about the stock system in general to make any kind of informed decisions about it and should probably stick to something lower-risk or at least do some research before making any ill-informed decisions.",
"title": ""
},
{
"docid": "99a35d8a21693b605106176989414fed",
"text": "This is Rob Bennett, the fellow who developed the Valuation-Informed Indexing strategy and the fellow who is discussed in the comment above. The facts stated in that comment are accurate -- I went to a zero stock allocation in the Summer of 1996 because of my belief in Robert Shiller's research showing that valuations affect long-term returns. The conclusion stated, that I have said that I do not myself follow the strategy, is of course silly. If I believe in it, why wouldn't I follow it? It's true that this is a long-term strategy. That's by design. I see that as a benefit, not a bad thing. It's certainly true that VII presumes that the Efficient Market Theory is invalid. If I thought that the market were efficient, I would endorse Buy-and-Hold. All of the conventional investing advice of recent decades follows logically from a belief in the Efficient Market Theory. The only problem I have with that advice is that Shiller's research discredits the Efficient Market Theory. There is no one stock allocation that everyone following a VII strategy should adopt any more than there is any one stock allocation that everyone following a Buy-and-Hold strategy should adopt. My personal circumstances have called for a zero stock allocation. But I generally recommend that the typical middle-class investor go with a 20 percent stock allocation even at times when stock prices are insanely high. You have to make adjustments for your personal financial circumstances. It is certainly fair to say that it is strange that stock prices have remained insanely high for so long. What people are missing is that we have never before had claims that Buy-and-Hold strategies are supported by academic research. Those claims caused the biggest bull market in history and it will take some time for the widespread belief in such claims to diminish. We are in the process of seeing that happen today. The good news is that, once there is a consensus that Buy-and-Hold can never work, we will likely have the greatest period of economic growth in U.S. history. The power of academic research has been used to support Buy-and-Hold for decades now because of the widespread belief that the market is efficient. Turn that around and investors will possess a stronger belief in the need to practice long-term market timing than they have ever possessed before. In that sort of environment, both bull markets and bear markets become logical impossibilities. Emotional extremes in one direction beget emotional extremes in the other direction. The stock market has been more emotional in the past 16 years than it has ever been in any earlier time (this is evidenced by the wild P/E10 numbers that have applied for that entire time-period). Now that we are seeing the losses that follow from investing in highly emotional ways, we may see rational strategies becoming exceptionally popular for an exceptionally long period of time. I certainly hope so! The comment above that this will not work for individual stocks is correct. This works only for those investing in indexes. The academic research shows that there has never yet in 140 years of data been a time when Valuation-Informed Indexing has not provided far higher long-term returns at greatly diminished risk. But VII is not a strategy designed for stock pickers. There is no reason to believe that it would work for stock pickers. 
Thanks much for giving this new investing strategy some thought and consideration and for inviting comments that help investors to understand both points of view about it. Rob",
"title": ""
},
{
"docid": "be6485d1e027582bd54cfed4272ca86a",
"text": "\"Hope springs eternal in the human breast. No actively managed fund has beaten the indices over a long period of time, but over shorter periods, actively managed funds have beaten the indices quite often, sometimes quite spectacularly, and sometimes even for many years in a row. Examples from the past include Fidelity Magellan and Legg Mason Value Trust. So people buy actively managed funds hoping to cash in on such good performance. The difficulty is, of course, that many people don't even think about investing in a fund until it is listed in some \"\"Top Forty Funds of last year\"\" compilation, and for many funds, they have already peaked, and new buyers are often disappointed. Some people who invested earlier plan on getting out of the fund before the fund falls flat on its face, and fewer even succeed in doing so. As to why 401k plans often have high-cost actively managed funds, there are several reasons. A most important one is that there are numerous companies that act as administrators of 401k programs and these companies put together package deals of 401k programs (funds, administrative costs etc), and small employers perforce have to choose from one of these packages. Second, there are various rules that have come into existence since the first days of 401k (and 403b) programs such as the investment choices must include funds of different types, and actively managed funds (large cap, small cap etc) are one of the choices that must be offered. Gone are the days when the only choice was a variable annuity offered by the insurance company administering the 401k program. Finally, program participants also have hopes (cf. opening sentence) and used to demand that the 401k program offer a few actively managed funds, not just index funds.\"",
"title": ""
},
{
"docid": "90a0d7e413f92d7ff344b6cf2db64f1f",
"text": "Dollar cost averaging is beneficial if you don't have the money to make large investments but are able to add to your holding over time. If you can buy the same monetary amount at regular intervals over time, your average cost per share will be lower than the stock's average value over that time. This won't necessarily get you the best price, but it will get you, on the whole, a good price and will enable you to increase your holdings over time. If you're doing frequent trading on a highly volatile stock, you don't want to use this method. A better strategy is to buy the dips: Know the range, and place limit orders toward the bottom of the range. Then place limit orders to sell toward the high end of the range. If you do it right, you might be able to build up enough money to buy and sell increasing numbers of shares over time. But like any frequent trader, you'll have to deal with transaction fees; you'll need to be sure the fees don't eat all your profit.",
"title": ""
},
{
"docid": "6e5f2a3a2b0ef383a43d09b194a521ad",
"text": "\"Congratulations on deciding to save money and choosing to invest it. One thing to know about mutual funds including index funds is that they typically require a minimum investment of a few thousand dollars, $3000 being a typical amount, unless the investment is in an IRA in which case $1000 might be a minimum. In some cases, automated monthly investments of $50 or $100 might need to be set up if you are beginning with a small balance. There is nothing wrong with your approach. You now should go and look at the various requirements for specific index funds. The Fidelity and Vanguard families are good choices and both offer very low-cost index funds to choose from, but different funds can have different requirements regarding minimum investments etc. You also have a choice of which index you want to follow, the S&P 500 Index, MidCap Indexes, Small-Cap Indexes, Total Stock Market Indexes etc., but your choice might be limited until you have more money to invest because of minimum investment rules etc. Most important, after you have made your choice, I urge you to not look every day, or even every month, to see how your investment is doing. You will save yourself a lot of anxiety and will save yourself from making wrong decisions. Far too many investors ignore the maxim \"\"Buy Low, Sell High\"\" and pull money out of what should be long-term investments at the first flicker of a downturn and end up buying high and selling low. Finally, the time is approaching when most stock funds will be declaring dividends and capital gains distributions. If you invest now, you may end up with a paper profit on which you will have to pay taxes (in non-tax-advantaged accounts) on your 2012 tax return (this is called \"\"buying a dividend\"\"), and so you might want to spend some time investigating now, but actually make the investment in late December after your chosen fund has made its distributions (the date for this will be on the fund's web site) or in early 2013.\"",
"title": ""
}
] |
fiqa
|
17bdcb47d5338969d30eca1f3c8988b4
|
Calculating a stock's price target
|
[
{
"docid": "7af4f32798568d7e60f0dbc247e02a37",
"text": "The price-earnings ratio is calculated as the market value per share divided by the earnings per share over the past 12 months. In your example, you state that the company earned $0.35 over the past quarter. That is insufficient to calculate the price-earnings ratio, and probably why the PE is just given as 20. So, if you have transcribed the formula correctly, the calculation given the numbers in your example would be: 0.35 * 4 * 20 = $28.00 As to CVRR, I'm not sure your PE is correct. According to Yahoo, the PE for CVRR is 3.92 at the time of writing, not 10.54. Using the formula above, this would lead to: 2.3 * 4 * 3.92 = $36.06 That stock has a 52-week high of $35.98, so $36.06 is not laughably unrealistic. I'm more than a little dubious of the validity of that formula, however, and urge you not to base your investing decisions on it.",
"title": ""
}
] |
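A minimal sketch of the arithmetic quoted in the passage above (annualise the latest quarterly EPS, then multiply by the P/E ratio). The numbers are the ones given in the passage; the function name is just illustrative, and, as the passage itself warns, this is not a formula to base decisions on.

```python
def price_target(quarterly_eps: float, pe_ratio: float) -> float:
    """Annualise the latest quarterly EPS and multiply by the P/E ratio."""
    return quarterly_eps * 4 * pe_ratio

print(price_target(0.35, 20))    # 28.0   -- the first example in the passage
print(price_target(2.3, 3.92))   # ~36.06 -- the CVRR example with Yahoo's P/E
```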
[
{
"docid": "5d876cb085eda6e8eea31f3493f64d58",
"text": "You want to buy when the stock market is at an all-time low for that day. Unfortunately, you don't know the lowest time until the end of the day, and then you, uh can't buy the stock... Now the stock market is not random, but for your case, we can say that effectively, it is. So, when should you buy the stock to hopefully get the lowest price for the day? You should wait for 37% of the day, and then buy when it is lower than it has been for all of that day. Here is a quick example (with fake data): We have 18 points, and 37% of 18 is close to 7. So we discard the first 7 points - and just remember the lowest of those 7. We bear in mind that the lowest for the first 37% was 5. Now we wait until we find a stock which is lower than 5, and we buy at that point: This system is optimal for buying the stock at the lowest price for the day. Why? We want to find the best position to stop automatically ignoring. Why 37%? We know the answer to P(Being in position n) - it's 1/N as there are N toilets, and we can select just 1. Now, what is the chance we select them, given we're in position n? The chance of selecting any of the toilets from 0 to K is 0 - remember we're never going to buy then. So let's move on to the toilets from K+1 and onwards. If K+1 is better than all before it, we have this: But, K+1 might not be the best price from all past and future prices. Maybe K+2 is better. Let's look at K+2 For K+2 we have K/K+1, for K+3 we have K/K+2... So we have: This is a close approximation of the area under 1/x - especially as x → ∞ So 0 + 0 + ... + (K/N) x (1/K + 1/K+1 + 1/K+2 ... + 1/N-1) ≈ (K/N) x ln(N/K) and so P(K) ≈ (K/N) x ln(N/K) Now to simplify, say that x = K/N We can graph this, and find the maximum point so we know the maximum P(K) - or we can use calculus. Here's the graph: Here's the calculus: To apply this back to your situation with the stocks, if your stock updates every 30 seconds, and is open between 09:30 and 16:00, we have 6.5 hours = 390 minutes = 780 refreshes. You should keep track of the lowest price for the first 289 refreshes, and then buy your stock on the next best price. Because x = K/N, the chance of you choosing the best price is 37%. However, the chance of you choosing better than the average stock is above 50% for the day. Remember, this method just tries to mean you don't loose money within the day - if you want to try to minimise losses within the whole trading period, you should scale this up, so you wait 37% of the trading period (e.g. 37% of 3 months) and then select. The maths is taken from Numberphile - Mathematical Way to Choose a Toilet. Finally, one way to lose money a little slower and do some good is with Kiva.org - giving loans to people is developing countries. It's like a bank account with a -1% interest - which is only 1% lower than a lot of banks, and you do some good. I have no affiliation with them.",
"title": ""
},
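The "wait ~37%, then buy the first new low" rule described in the passage above is easy to express in code. This is only an illustrative sketch of that rule (the observation window is 1/e ≈ 36.8% of the ticks, and the price series is a random placeholder), not trading advice.

```python
import math
import random

def buy_index_37_rule(prices):
    """Observe the first ~37% (1/e) of prices, then buy at the first price
    that beats the minimum seen so far; fall back to the final price."""
    n = len(prices)
    k = int(n / math.e)                       # ~37% observation window
    benchmark = min(prices[:k]) if k else float("inf")
    for i in range(k, n):
        if prices[i] < benchmark:
            return i
    return n - 1                              # never beaten: buy at the close

random.seed(0)
prices = [100 + random.uniform(-2, 2) for _ in range(780)]   # 780 thirty-second ticks
i = buy_index_37_rule(prices)
print(f"bought tick {i} at {prices[i]:.2f}; day's true low was {min(prices):.2f}")
```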
{
"docid": "4c0eb9d6fadbe1fe4754c9d470eabf64",
"text": "well there are many papers on power spot price prediction, for example. It depends on what level of methodology you would like to use. Linear regression is one of the basic steps, then you can continue with more advanced options. I'm a phd student studying modelling the energy price (electricity, gas, oil) as stochastic process. Regarding to your questions: 1. mildly speaking, it's really hard, due to its random nature! (http://www.dataversity.net/is-there-such-a-thing-as-predictive-analytics/) 2. well, i would ask what kind of measure of success you mean? what level of predicted interval one could find successful enough? 3. would you like me to send you some of the math-based papers on? 4. as i know, the method is to fully capture all main characteristics of the price. If it's daily power price, then these are mean-reversion effect, high volatility, spike, seasonality (weekly, monthly, yearly). Would you tell me what kind of method you're using? Maybe we can discuss some shared ideas? Anna",
"title": ""
},
{
"docid": "0d008a892deb44faa5fcc7a59cdb2cb0",
"text": "\"I'll give the TLDR answer. 1) You can't forecast the price direction. If you get it right you got lucky. If you think you get it right consistently you are either a statistical anomaly or a victim of confirmation bias. Countless academic studies show that you can not do this. 2) You reduce volatility and, importantly, left-tail risk by going to an index tracking ETF or mutual fund. That is, Probability(Gigantic Loss) is MUCH lower in an index tracker. What's the trade off? The good thing is there is NO tradeoff. Your expected return does not go down in the same way the risk goes down! 3) Since point (1) is true, you are wasting time analysing companies. This has the opportunity cost of not earning $ from doing paid work, which can be thought of as a negative return. \"\"With all the successful investors (including myself on a not-infrequent basis) going for individual companies directly\"\" Actually, academic studies show that individual investors are the worst performers of all investors in the stock market.\"",
"title": ""
},
{
"docid": "7617e14cd3d865fab29e1444486990d8",
"text": "Well i dont know of any calculator but you can do the following 1) Google S&P 500 chart 2) Find out whats the S&P index points (P1) on the first date 3) Find out whats the S&P index points (P2) on the second date 4) P1 - P2 = result",
"title": ""
},
{
"docid": "ad583b8150b66387306f405e29f9831a",
"text": "The average price would be $125 which would be used to compute your basis. You paid $12,500 for the stock that is now worth $4,500 which is a loss of $8,000 overall if you sell at this point.",
"title": ""
},
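A tiny sketch of the cost-basis arithmetic behind those numbers. Only the $125 average, the $12,500 total cost and the $4,500 current value come from the passage; the individual purchase lots and the $45 current price are hypothetical values chosen to be consistent with them.

```python
# Hypothetical lots consistent with the passage: 100 shares for $12,500 total.
lots = [(50, 150.0), (50, 100.0)]             # (shares, price) pairs

shares = sum(q for q, _ in lots)
cost = sum(q * p for q, p in lots)
avg_cost = cost / shares                       # 125.0 -> the basis per share
current_value = shares * 45.0                  # 4,500 at today's (assumed) price
print(avg_cost, cost - current_value)          # 125.0, 8000.0 unrealised loss
```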
{
"docid": "5f99c60c56919e92f08c683b1e2d5532",
"text": "A rough estimate of the money you'd need to take a position in a single stock would be: In the case of your Walmart example, the current share price is 76.39, so assuming your commission is $7, and you'd like to buy, say, 3 shares, then it would cost approximately (76.39 * 3) + 7 = $236.17. Remember that the quoted price usually refers to 100-share lots, and your broker may charge you a higher commission or other fees to purchase an odd lot (less than 100 shares, usually). I say that the equation above gives an approximate minimum because However, I second the comments of others that if you're looking to invest a small amount in the stock market, a low cost mutual fund or ETF, specifically an index fund, is a safer and potentially cheaper option than purchasing individual stocks.",
"title": ""
},
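The same back-of-the-envelope estimate as a small Python helper. The numbers are the ones from the passage; the commission is whatever your broker actually charges, and odd-lot surcharges are ignored here.

```python
def cost_to_buy(share_price: float, shares: int, commission: float) -> float:
    """Rough minimum cash needed: price times shares, plus the flat commission."""
    return share_price * shares + commission

print(cost_to_buy(76.39, 3, 7.00))   # 236.17, the Walmart example
```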
{
"docid": "59cb85ca6365148f787ab8d328ae0bd3",
"text": "\"One idea: If you came up with a model to calculate a \"\"fair price range\"\" for a stock, then any time the market price were to go below the range it could be a buy signal, and above the range it could be a sell signal. There are many ways to do stock valuation using fundamental analysis tools and ratios: dividend discount model, PEG, etc. See Wikipedia - Stock valuation. And while many of the inputs to such a \"\"fair price range\"\" calculation might only change once per quarter, market prices and peer/sector statistics move more frequently or at different times and could generate signals to buy/sell the stock even if its own inputs to the calculation remain static over the period. For multinationals that have a lot of assets and income denominated in other currencies, foreign exchange rates provide another set of interesting inputs. I also think it's important to recognize that with fundamental analysis, there will be extended periods when there are no buy signals for a stock, because the stocks of many popular, profitable companies never go \"\"on sale\"\", except perhaps during a panic. Moreover, during a bull market and especially during a bubble, there may be very few stocks worth buying. Fundamental analysis is designed to prevent one from overpaying for a stock, so even if there is interesting volume and price movement for the stock, there should still be no signal if that action happens well beyond the stock's fair price. (Otherwise, it isn't fundamental analysis — it's technical analysis.) Whereas technical analysis can, by definition, generate far more signals because it largely ignores the fundamentals, which can make even an overvalued stock's movement interesting enough to generate signals.\"",
"title": ""
},
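The buy/sell trigger described above reduces to a simple comparison against a fair-value band. A sketch follows; the band itself would come from whichever valuation model you trust (dividend discount model, PEG, etc.) and is just an input here, and the prices in the example are made up.

```python
def signal(market_price: float, fair_low: float, fair_high: float) -> str:
    """Buy below the fair-price range, sell above it, otherwise do nothing."""
    if market_price < fair_low:
        return "buy"
    if market_price > fair_high:
        return "sell"
    return "hold"

print(signal(42.0, 45.0, 55.0))   # "buy"  -- price has dropped below the band
print(signal(58.0, 45.0, 55.0))   # "sell" -- price has run above the band
```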
{
"docid": "e74ea038c1bca2f3ddaca4d7d7d23a6f",
"text": "\"Finding the \"\"optimal\"\" solution (and even defining what optimal is) would probably take a lot of searching of all the possible combinations of stocks you could buy, given that you can't buy fractional shares. But I'd guess that a simple \"\"greedy\"\" algorithm should get you close enough. For any given portfolio state, look at which stock is furthest below the target size - e.g. in your example, S3 is 3.5% away whereas S1 is only 3.1% away and S2 is over-sized. Then decided to buy one stock of S3, recalculate the current proportions, and repeat until you can't buy more stocks because you've invested all the money. If you have to pay a transaction fee for each kind of stock you purchase you might want to calculate this in larger lot sizes or just avoid making really small purchases.\"",
"title": ""
},
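A minimal Python sketch of the greedy loop described above. The share prices, target weights, and cash amount are placeholders, and transaction fees are ignored (as the passage notes, with fees you may want to batch purchases instead of buying one share at a time).

```python
def greedy_buy(cash, prices, holdings, targets):
    """Repeatedly buy one share of the affordable stock furthest below its target weight."""
    while True:
        affordable = [s for s in prices if prices[s] <= cash]
        if not affordable:
            return holdings, cash
        total = cash + sum(holdings[s] * prices[s] for s in prices)
        # current weight minus target weight: most negative = most underweight
        gaps = {s: holdings[s] * prices[s] / total - targets[s] for s in affordable}
        pick = min(gaps, key=gaps.get)
        holdings[pick] += 1
        cash -= prices[pick]

holdings, cash = greedy_buy(
    cash=1000.0,
    prices={"S1": 50.0, "S2": 20.0, "S3": 80.0},
    holdings={"S1": 0, "S2": 0, "S3": 0},
    targets={"S1": 0.4, "S2": 0.3, "S3": 0.3},
)
print(holdings, round(cash, 2))
```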
{
"docid": "1e090411bf34d3e1a21c664640f3d881",
"text": "Graphs are nothing but a representation of data. Every time a trade is made, a point is plotted on the graph. After points are plotted, they are joined in order to represent the data in a graphical format. Think about it this way. 1.) Walmart shuts at 12 AM. 2.)Walmart is selling almonds at $10 a pound. 3.) Walmart says that the price is going to reduce to $9 effective tomorrow. 4.) You are inside the store buying almonds at 11:59 PM. 5.) Till you make your way up to the counter, it is already 12:01 AM, so the store is technically shut. 6.) However, they allow you to purchase the almonds since you were already in there. 7.) You purchase the almonds at $9 since the day has changed. 8.) So you have made a trade and it will reflect as a point on the graph. 9.) When those points are joined, the curves on the graph will be created. 10.) The data source is Walmart's system as it reflects the sale to you. ( In your case the NYSE exchange records this trade made). Buying a stock is just like buying almonds. There has to be a buyer. There has to be a seller. There has to be a price to which both agree. As soon as all these conditions are met, and the trade is made, it is reflected on the graph. The only difference between the graphs from 9 AM-4 PM, and 4 PM-9 AM is the time. The trade has happened regardless and NYSE(Or any other stock exchange) has recorded it! The graph is just made from that data. Cheers.",
"title": ""
},
{
"docid": "0fabf85cd931ba89b9c27fcb7b04bb9b",
"text": "\"To my knowledge, there's no universal equation, so this could vary by individual/company. The equation I use (outside of sentiment measurement) is the below - which carries its own risks: This equations assumes two key points: Anything over 1.2 is considered oversold if those two conditions apply. The reason for the bear market is that that's the time stocks generally go on \"\"sale\"\" and if a company has a solid balance sheet, even in a downturn, while their profit may decrease some, a value over 1.2 could indicate the company is oversold. An example of this is Warren Buffett's investment in Wells Fargo in 2009 (around March) when WFC hit approximately 7-9 a share. Although the banking world was experiencing a crisis, Buffett saw that WFC still had a solid balance sheet, even with a decrease in profit. The missing logic with many investors was a decrease in profits - if you look at the per capita income figures, Americans lost some income, but not near enough to justify the stock falling 50%+ from its high when evaluating its business and balance sheet. The market quickly caught this too - within two months, WFC was almost at $30 a share. As an interesting side note on this, WFC now pays $1.20 dividend a year. A person who bought it at $7 a share is receiving a yield of 17%+ on their $7 a share investment. Still, this equation is not without its risks. A company may have a solid balance sheet, but end up borrowing more money while losing a ton of profit, which the investor finds out about ad-hoc (seen this happen several times). Suddenly, what \"\"appeared\"\" to be a good sale, turns into a person buying a penny with a dollar. This is why, to my knowledge, no universal equation applies, as if one did exist, every hedge fund, mutual fund, etc would be using it. One final note: with robotraders becoming more common, I'm not sure we'll see this type of opportunity again. 2009 offered some great deals, but a robotrader could easily be built with the above equation (or a similar one), meaning that as soon as we had that type of environment, all stocks fitting that scenario would be bought, pushing up their PEs. Some companies might be willing to take an \"\"all risk\"\" if they assess that this equation works for more than n% of companies (especially if that n% returns an m% that outweighs the loss). The only advantage that a small investor might have is that these large companies with robotraders are over-leveraged in bad investments and with a decline, they can't make the good investments until its too late. Remember, the equation ultimately assumes a person/company has free cash to use it (this was also a problem for many large investment firms in 2009 - they were over-leveraged in bad debt).\"",
"title": ""
},
{
"docid": "26add6882c3b0f92d535fd869f8d55ee",
"text": "\"Market caps is just the share price, multiplied by the number of shares. It doesn't represent any value (if people decide to pay more or less for the shares, the market cap goes up or down). It does represent what people think the company is worth. NAV sounds very much like book value. It basically says \"\"how much cash would we end up with if we sold everything the company owns, paid back all the debt, and closed down the business? \"\" Since closing down the business is rarely a good idea, this underestimates the value of the business enormously. Take a hairdresser who owns nothing but a pair of scissors, but has a huge number of repeat customers, charges $200 for a haircut, and makes tons of money every year. The business has a huge value, but NAV = price of one pair of used scissors.\"",
"title": ""
},
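The two quantities being contrasted above are both one-line calculations; the numbers below are invented, in the spirit of the hairdresser example, just to show how far apart they can be.

```python
# Market cap: what the market currently says the business is worth.
share_price, shares_outstanding = 12.50, 1_000_000
market_cap = share_price * shares_outstanding           # 12,500,000

# NAV / book value: what would be left after selling the assets and repaying the debt.
assets, liabilities = 3_000_000, 2_200_000
nav = assets - liabilities                               # 800,000

print(market_cap, nav, market_cap / nav)                 # the gap is the going-concern value
```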
{
"docid": "eb3b91a7d2eadc3537f0d83721756f61",
"text": "The main question is, how much money you want to make? With every transaction, you should calculate the real price as the price plus costs. For example, if you but 10 GreatCorp stock of £100 each, and the transaction cost is £20 , then the real cost of buying a single share is in fact buying price of stock + broker costs / amount bought, or £104 in this case. Now you want to make a profit so calculate your desired profit margin. You want to receive a sales price of buying price + profit margin + broker costs / amount bought. Suppose that you'd like 5%, then you'll need the price per stock of my example to increase to 100 + 5% + £40 / 10 = £109. So you it only becomes worth while if you feel confident that GreatCorp's stock will rise to that level. Read the yearly balance of that company to see if they don't have any debt, and are profitable. Look at their dividend earning history. Study the stock's candle graphs of the last ten years or so, to find out if there's no seasonal effects, and if the stock performs well overall. Get to know the company well. You should only buy GreatCorp shares after doing your homework. But what about switching to another stock of LovelyInc? Actually it doesn't matter, since it's best to separate transactions. Sell your GreatCorp's stock when it has reached the desired profit margin or if it seems it is underperforming. Cut your losses! Make the calculations for LovelyCorp's shares without reference to GreatCorp's, and decide like that if it's worth while to buy.",
"title": ""
},
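The break-even-plus-margin arithmetic from the passage, written out as a helper. It mirrors the passage's own approximation of spreading the £40 round-trip fee over the 10 shares; a stricter version would also apply the margin to the fees.

```python
def required_sale_price(buy_price, quantity, round_trip_fees, margin):
    """Price per share needed to cover fees and earn the desired margin."""
    return buy_price * (1 + margin) + round_trip_fees / quantity

print(required_sale_price(100.0, 10, 40.0, 0.05))   # 109.0, as in the GreatCorp example
```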
{
"docid": "5aa3f904bf8a057a8e5e4f1f7d9de354",
"text": "There isn't a formula like that, there is only the greed of other market participants, and you can try to predict how greedy those participants will be. If someone decided to place a sell order of 100,000 shares at $5, then you can buy an additional 100,000 shares at $5. In reality, people can infer that they might be the only ones trying to sell 100,000 shares right then, and raise the price so that they make more money. They will raise their sell order to $5.01, $5.02 or as high as they want, until people stop trying to buy their shares. It is just a non-stop auction, just like on ebay.",
"title": ""
},
{
"docid": "a9d3a69f8a6b441e6dc66b013eb677a9",
"text": "id like to start by saying youre still doing this yourself, and i dont actually have all the info required anyway, dont send it but >[3] Descriptive Statistical Measures: Provide a thorough discussion of the meaning and interpretation of the four descriptive >statistical measures required in your analysis: (1) Arithmetic Mean, (2) Variance, (3) Standard Deviation and (4) Coefficient of >Variation. For example, how are these measures related to each other? In order to develop this discussion, you may want to >consult chapters 2 and 3 of your textbook. This topic is an important part of your report. can be easily interpreted, im guessing the mean is simply just the observed (and then projected stock price for future models) the standard deviation determines the interval in which the stock price fluctuates. so you have like a curve, and then on this curve theirs a bunch of normal distributions modeling the variance of the price plotted against the month also the coefficient of variation is just r^2 so just read up on that and relate it to the meaning of it to the numbers you have actually my stats are pretty rusty so make sure you really check into these things but otherwise the formulas for part 4 is simple too. you can compare means of a certain month using certain equations, but there are different ones for certain situations you can test for significance by comparing the differences of the means and if its outside of your alpha level then it probably means your company is significantly different from the SP index. (take mu of SP - mu of callaway) you can also find more info on interpreting the two different coefficients your given if you look up comparing means of linear regression models or something",
"title": ""
},
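For the four descriptive measures being asked about, the calculations themselves are short; here is a sketch with a made-up return series. Note that the coefficient of variation is the standard deviation scaled by the mean (relative dispersion), which is what makes it useful for comparing series with different levels.

```python
import statistics

returns = [0.02, -0.01, 0.03, 0.015, -0.005, 0.01]    # hypothetical monthly returns

mean = statistics.mean(returns)
variance = statistics.pvariance(returns)               # population variance
std_dev = statistics.pstdev(returns)                   # square root of the variance
coeff_of_variation = std_dev / mean                    # relative dispersion

print(mean, variance, std_dev, coeff_of_variation)
```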
{
"docid": "8c4294b7324da19af5e25ba706f728e5",
"text": "Are you sure this is not a scam. It is expensive to transfer 10 EUR by SWIFT. It will cost 30 EUR in Banks fees. If this is genuine ask them to use remittance service or western union or you open a PayPal account and ask them to transfer money.",
"title": ""
}
] |
fiqa
|
b246943cedfec60c96d4e3bbf3ae5ff9
|
Wardrobe: To Update or Not? How-to without breaking the bank
|
[
{
"docid": "c7d168ed78c1c948aafc5d6811738ca9",
"text": "New clothes isn't exactly an emergency expense :) so I would strongly suggest that you budget for it on a monthly basis. This doesn't mean you have to go spend the money every month, just put a reasonable amount of money into the clothes budget/savings every month and when you need a new shirt or two, take the money out of the saved money and go shopping. If you buy a piece or two of good quality clothing at a time you'd also not run into the situation where all your clothes fall apart at the same time.",
"title": ""
},
{
"docid": "3077b7fdcc203875f2c0c50aa165afb4",
"text": "Sounds more of a question for the fine people at StyleForum.net but i would suggest to start looking carefully at the quality of the fabrics: once you start studying the subject you will quickly recognize a solid shirt from a cheap one. That'll help you save money in the long term. Also keeping it simple (by choosing classic color tones and patterns) will make your wardrobe more resistant to the fashion du jour.",
"title": ""
},
{
"docid": "3f44dabb347f4359f735ab41f24a1900",
"text": "\"I buy new clothes when the old ones fall apart, literally. When jeans get holes in the knees, they're relegated to gardening or really messy jobs. Shirts go until they're worn so much that I can't reasonably wear them to work any more. Sounds like your \"\"dress code\"\" at work is about like mine (also a software engineer). I've found that the Dickies jeans and work pants are sturdy, long lasting, fit in reasonably at the workplace, and are very inexpensive. If you know that you're going to need to replace some pants or shirts, wait for a sale to roll around at a local store, and then stock up. I don't specifically budget for clothes since I spend so little. But I'd be at the bottom of anybody's list in terms of giving fashion advice...\"",
"title": ""
},
{
"docid": "b5784f5173fee940085b18abefd8ac43",
"text": "The best way to save on clothes is up to you. I have friends who save all year for two yearly shopping trips to update anything that may need updating at the time. By allowing themselves only two trips, they control the money spent. Bring it in cash and stop buying when you run out. On the other hand in my family we shop sales. When we determine that we need something we wait until we find a sale. When we see an exceptionally good sale on something we know we will need (basic work dress shoes, for example), we'll purchase it and save it until the existing item it is replacing has worn out. Our strategy is to know what we need and buy it when the price is right. We tend to wait on anything that isn't on sale until we can find the right item at a price we like, which sometimes means stretching the existing piece of clothing it is replacing until well after its prime. If you've got a list you're shopping from, you know what you need. The question becomes: how will you control your spending best? Carefully shopping sales and using coupons, or budgeting for a spree within limits?",
"title": ""
},
{
"docid": "696bd093b1a2a3df646fc7d66ad651d1",
"text": "If you budget for cloths and save up the money, you may be able to take advantage of sales when they are on. However only buy what you will use! You need to ask yourself what value you put on cloths compared to other things you can spend the money on. Also would you rather have money in the bank encase you need it rather than lots of cloths in the wardrobe?",
"title": ""
},
{
"docid": "8cd6fe269b21bef280e212439f3a5ae5",
"text": "The way I handle clothing purchases, is I save a little bit with each paycheck but don't commit to spending each month. I wait until I find the exact item I need or know I will need in the near future. I have a list of things to look for so I don't get off track and blow my budget. And each time I consider hitting Starbucks or buying a random something at Target, I think which is a better investment - a great pair of pants that will work for me for a decade, or a latte? Thank you for linking to me. Your question is one many people have. I feel that clothing should be purchased slowly, with care. If you do it this you will buy items that don't need to be replaced every two years, and will maintain style and quality longer. :)",
"title": ""
},
{
"docid": "2f58d0396cd16b27b628a3410ef9fa92",
"text": "We have a ton of student loan debt (mostly mine) and right now, I'm on a strict 'replace' only budget. I have some shirts I put elbow holes in that I'm only keeping around as a reminder to replace them. I wait until there is a deal of some sort (50% off or BOGO Free) unless I really need it - a white dress shirt for job interviews for instance. Outside of that, make it a line item in your budget and decide when you will spend it. For example, budget $60/mo for it, but only spend it when it reaches $180 or $300 or either of those amounts AND a sale (memorial day is the next big shopping sale after Easter). It is totally up to you. Waiting to replace two shirts (gray and green) and a pair of black dress pants.",
"title": ""
}
] |
[
{
"docid": "41b672feae4a9d69a896ca23a684cf0c",
"text": "Your question is rather direct, but I think there is some underlying issues that are worth addressing. One How to save and purchase ~$500 worth items This one is the easy one, since we confront it often enough. Never, ever, ever buy anything on credit. The only exception might be your first house, but that's it. Simply redirect the money you would spend in non necessities ('Pleasure and entertainment') to your big purchase fund (the PS4, in this case). When you get the target amount, simply purchase it. When you get your salary use it to pay for the monthly actual necessities (rent, groceries, etc) and go through the list. The money flow should be like this: Two How to evaluate if a purchase is appropriate It seems that you may be reluctant to spend a rather chunky amount of money on a single item. Let me try to assuage you. 'Expensive' is not defined by price alone, but by utility. To compare the price of items you should take into account their utility. Let's compare your prized PS4 to a soda can. Is a soda can expensive? It quenches your thirst and fills you with sugar. Tap water will take your thirst away, without damaging your health, and for a fraction of the price. So, yes, soda is ridiculously expensive, whenever water is available. Is a game console expensive? Sure. But it all boils down to how much do you end up using it. If you are sure you will end up playing for years to come, then it's probably good value for your money. An example of wrongly spent money on entertainment: My friends and I went to the cinema to see a movie without checking the reviews beforehand. It was so awful that it hurt, even with the discount price we got. Ultimately, we all ended up remembering that time and laughing about how wrong it went. So it was somehow, well spent, since I got a nice memory from that evening. A purchase is appropriate if you get your money's worth of utility/pleasure. Three Console and computer gaming, and commendation of the latter There are few arguments for buying a console instead of upgrading your current computer (if needed) except for playing console exclusives. It seems unlikely that a handful of exclusive games can justify purchasing a non upgradeable platform unless you can actually get many hours from said games. Previous arguments to prefer consoles instead of computers are that they work out of the box, capability to easily connect to the tv, controller support... have been superseded by now. Besides, pc games can usually be acquired for a lower price through frequent sales. More about personal finance and investment",
"title": ""
},
{
"docid": "7b93f475143325a24e6da6926526c528",
"text": "\">a totally unsustainable level I get what you're saying, but this isn't the whole story. For most of the history of civilization, clothes were really, really expensive, so most people had only a few well-made outfits that they repaired and took good care of. It would be unsustainable for our modern lifestyle, but given the scope of human history up until around 80 years ago, totally normal. In fact, you could say that the way the modern clothing industry works is probable \"\"totally unsustainable,\"\" since land is being ruined through cotton production, and in all likelihood we *won't* be able to grow as much cotton as we do now, 50 years from now.\"",
"title": ""
},
{
"docid": "060b0390d33fdce98e2f00e3c14994fc",
"text": "Girls love to [Buy Designer Hangbags Online](http://www.dianaekelly.com) and purses in formal also as informal events. They’re now a required fashion accessory, and girls prefer to buy bags that are complimentary with their wardrobe and personality. Bags can now be purchased at auctions, since celebrities often place their collections up for auction.",
"title": ""
},
{
"docid": "e52bc2668eb149758d54afde4e70f428",
"text": "You need a budget. You need to know how much you make and how much you spend. How much you earn and what you choose to spend you money on is your choice. You have your own tolerance for risk and your own taste and style, so lifestyle and what you own isn't something that we can answer. The key to your budget is to really understand where you money goes. Maybe you are the sort of person who needs to know down to the penny, maybe you are a person who rounds off. Either way you should have some idea. How should I make a budget? and How can I come up with a good personal (daily) budget? Once you know what you budget is, here are some pretty standard steps to get started. Each point is a full question in of itself, but these are to give you a place to start thinking and learning. You might have other priorities like a charity or other organizations that go into your priority like. Regardless of your career path and salary, you will need a budget to understand where you money is, where it goes, and how you can reach your goals and which goals are reasonable to have.",
"title": ""
},
{
"docid": "512437529b166e2053b78bda5b3fc410",
"text": "First, on-line you mostly and vastly buy things that don't need to be tried first to see if they fit. Even in clothing, casual shirts, jeans, socks, underwear and shoes don't need to be tried first. As for shipping costs, not if you are Amazon prime, and if you pay for shipping, it's cheaper than driving your car to the store and possibly paying for parking. Not to mention that time = money. It's the old retailers fault too: very basic selection of only items that sell for high margin and always issues with inventory. **I lost count of how many times I went to a store and they did not have the shirt or pants I wanted in the size I wanted.**",
"title": ""
},
{
"docid": "1ee3149b12c0eb37a8beb933962a0205",
"text": "I recently made the switch to keeping track of my finance (Because I found an app that does almost everything for me). Before, my situation was fairly simple: I was unable to come up with a clear picture of how much I was spending vs saving (altho I had a rough idea). Now I here is what it changes: What I can do now: Is it useful ? Since I don't actually need to save more than I do (I am already saving 60-75% of my income), 1) isn't important. Since I don't have any visibility on my personal situation within a few years, 2) and 3) are not important. Conclusion: Since I don't actually spend any time building theses informations I am happy to use this app. It's kind of fun. If I did'nt had that tool... It would be a waste of time for me. Depends on your situation ? Nb: the app is Moneytree. Works only in Japan.",
"title": ""
},
{
"docid": "f15443c1dd914c79d58468cdfb959590",
"text": "Until we get free returns from all online retailers, I am still shopping at malls and physical stores. Some stuff just doesn't fit you and retailers don't offer enough measurements to really do you justice. The most important thing in clothes is fit, and you have to try something on to validate that.",
"title": ""
},
{
"docid": "de8c18e220f160ff30cd91f8f5309b2e",
"text": "\"There is no objective \"\"should\"\". You need to be clear why you're tracking these numbers, and the right answer will come out of that. I think the main reason an individual would add up their assets and net worth is to get a sense of whether they are \"\"making progress\"\" or whether they are saving enough money, or perhaps whether they are getting close to the net worth at which they can make some life change. Obviously shares or other investment property ought to be counted in that. Buying small-medium consumer goods like furniture or electronics may improve your life but it's not especially improving your financial position. Accounting for them with little $20 or $200 changes every month or year is not necessarily useful. Things like cars are an intermediate case because firstly they're fairly large chunks of money and secondly they commonly are things people sell on for nontrivial amounts of money and you can reasonably estimate the value. If for instance I take $30k out of my bank account and buy a new car, how has my net worth changed? It would be too pessimistic to say I'm $30k worse off. If I really needed the money back, I could go and sell the car, but not for $30k. So, a good way to represent this is an immediate 10-20% cost for off-the-lot depreciation of the car, and then another 12% every year (or 1% every month). If you're tracking lifestyle assets that you want to accumulate, I think monetary worth is not the best scale, because it's only weakly correlated with the value you get out of them. Case in point: you probably wouldn't buy a second-hand mattress, and they have pretty limited resale value. Financially, the value of the mattress collapses as soon as you get it home, but the lifestyle benefit of it holds up just fine for eight years or so. So if there are some major purchases (say >$1000) that you want to make, and you want to track it, what I would do is: make a list of things you want to buy in the future, and then tick them off when you either do buy them, or cross them out when you decide you actually don't want them. Then you have something to motivate saving, and you have a chance to think it over before you make the purchase. You can also look back on what seemed to be important to you in the past and either feel satisfied you achieved what you wanted, or you can discover more about yourself by seeing how your desires change. You probably don't want to so much spend $50k as you want to buy a TV, a dishwasher, a trip to whereever...\"",
"title": ""
},
{
"docid": "0977eb1ea7f87d0209e0dfee94cc32b0",
"text": "Sounds like you're a man, so you're in luck. Our formalwear all looks similar enough that you can get by on a very short rotation. You can buy 1 pair of decent slacks in a versatile color like navy or grey with a pair of brown shoes with matching belt then have as little as 2 button down shirts (white and light blue). You can help keep the button downs clean by wearing an undershirt. This outfit can even overlap your interview outfit if you want to save more (especially if you want a good jacket/sport coat). The real key is to just not pick anything flashy and nobody will ever notice. You'll be running to the dry cleaners every single weekend, but you won't have much in terms of up-front costs. For women though I have no clue how they manage this stuff.",
"title": ""
},
{
"docid": "7b06018eea438bc6fa824eb18425e01f",
"text": "You can save a bit by getting an interior cabin, but it can be a bit weird to be in a room with NO window and NO natural daylight. It's strangely really easy to lose all track of time, especially if you don't set an alarm. We spent very little time in our stateroom outside of sleeping, or changing clothes etc. So IMHO there's no reason to overspend on the stateroom. OTOH, A room along the outside of the ship will cost more, but might be worth the cost difference. You don't need a full balcony or any of that, however a simple window even if partially occluded by something, really changes the feel of the room and makes it a lot less 'cave' like. JohnFx covered just about everything else I'd have had to say.",
"title": ""
},
{
"docid": "d1ec8530127342d42b5dea70184732c4",
"text": "Buy a lot of best quality of products. We are happy to help you. Just visit Budget Closeouts and order any item you love to get it on your doorstep. We have many categorized items of General Merchandise for personal uses, daily uses, apparel, fashioned clothing, watches, kitchen accessories. You can purchase toys and much more for your infant. There is a branding clothing including towel and other wear. New fashioned and artificial jewelry for women available at our site at low cost.",
"title": ""
},
{
"docid": "b6282e3f8f1250824493ca2c1516ab5b",
"text": "Google Maps and Craig’s List are easy wins and free. I would offer free inspections and estimates. What about getting into one of those new mover mailings. That is when most people will be updating their fixtures.",
"title": ""
},
{
"docid": "4015a67ac8479112a93c6116fbb474bf",
"text": "However, we would also like to include on our budget the actual cost of the furniture when we buy it. That would be double-counting. When it's time to buy the new kit, just pay for it directly from savings and then deduct that amount from the Furniture Cash asset that you'd been adding to every month.",
"title": ""
},
{
"docid": "9490f794ef758e3b30cb8fd4480f2d8c",
"text": "Capitalism works best when there is transparency. Your secret formula for wealth in the stocks should be based on a fair and free market, as sdg said, it is your clever interpretation of the facts, not the facts themselves. The keyword is fair. Secrets are useful for manufacturing or production, which is only a small part of capitalism. Even then we had to devise a system to protect ideas (patents, trademarks and copyrights) because as they succeed in the market, their secrecy goes away quickly.",
"title": ""
},
{
"docid": "b2f64b01661f14f9e1080f97219715e8",
"text": "I think it is just semantics, but this example demonstrates what they mean by that: If you put $100 in a CD today, it will grow and you will be able to take out a greater amount plus the original principal at a later time. If you put $100 extra on your house payment today, you may save some money in the long run, but you won't have an asset that you wouldn't otherwise have at the end of the term that you can draw on without selling the property. But of course, you can't live on the street, so you need another house. So ultimately you can't easily realize the investment. If you get super technical, you could probably rationalize it as an investment, just like you could call clipping coupons investing, but it all comes down to what your financial goals are. What the advisers are trying to tell you is that you shouldn't consider paying down your mortgage early as an acceptable substitute for socking away some money for retirement or other future expenses. House payments for a house you live in should be considered expenses, in my opinion. So my view is that paying off a note early is just a way of cutting expenses.",
"title": ""
}
] |
fiqa
|
b836c0f62742be1fa4effe35b70cd3d1
|
What's the smartest way to invest money gifted to a child?
|
[
{
"docid": "16e25911a45c2f58774a7d7359982862",
"text": "I was in a similar situation with my now 6 year old. So I'll share what I chose. Like you, I was already funding a 529. So I opened a custodial brokerage account with Fidelity and chose to invest in very low expense index fund ETFs which are sponsored by Fidelity, so there are no commissions. The index funds have a low turnover as well, so they tend to be minimal on capital gains. As mentioned in the other answer, CDs aren't paying anything right now. And given your long time to grow, investing in the stock market is a decent bet. However, I would steer clear of any insurance products. They tend to be heavy on fees and low on returns. Insurance is for insuring something not for investing.",
"title": ""
},
{
"docid": "9b9c15c76218abb213142e4a14b9442f",
"text": "American Century has their Heritage Fund: https://www.americancentury.com/sd/mobile/fund_facts_jstl?fund=30 It has a good track record. Here are all the mutual funds from American Century: https://www.americancentury.com/content/americancentury/direct/en/fund-performance/performance.html A mutual fund is a good wayway to go as it is not subject to fluctuations throughout the day whereas an ETF is.",
"title": ""
},
{
"docid": "0cce0f6388d7800faa381baa79671493",
"text": "CDs pay less than the going rate so that the banks can earn money. Investing is risky right now due to the inaction of the Fed. Try your independent life insurance agent. You could get endowment life insurance. It would pay out at age 21. If you decide to invest it yourself try to buy a stable equity fund. My 'bedrock' fund is PGF. It pays dividends each month and is currently yealding 5.5% per year. Scottrade has a facility to automatically reinvest the dividend each month at no commission. http://www.marketwatch.com/investing/Fund/PGF?CountryCode=US",
"title": ""
}
] |
[
{
"docid": "e1616d8bf5ea75501f47408abdac52ee",
"text": "\"Although my kid just turned 5, he's learning the value of money now, which should help him in the future. First thing, teach him that you exchange money for goods and services. Let him see the bills, and explain what they're for (i.e. \"\"I pay ISP Co to give us Internet; that lets us watch Youtube and Netflix, as well as play games with Grandma on your GameStation\"\"). After a little while, they will see where it goes, and why. Then you have your automatic bills, such as mortgage payments. I make a habit of taking out the cash after I get paid, and my son comes with me to the bank where I deposit it again (I get paid monthly, so it's only one extra withdraw). He can physically see the money, and understand that if the stack is gone, it's gone. Now that he is understanding things cost money, he wants to make money himself. He volunteers to help clean up the kitchen and vacuum rooms in the house, usually without being asked. I give him a dollar or two for the simple chores like that. Things like cleaning his room or his own mess, he does not get paid for. He puts all his money into his piggy bank, and he has some goals in mind: a big fire truck, a police helicopter, a pool, a monster truck, a boat. Remember he's only 5. He has his goals, and we have the money he's been saving up. We calculate how many times he needs to vacuum the living room, or clean up dishes, to get there, and he realizes it takes a long time. He looks for other ways to make money around the house, and we come up with solutions together. I am hoping in a year or two that I can show him my investments and get him to understand why they make or lose money. I want to get him in to the habit of investing a little bit every few months, then every month, to help his income grow, even if he can't touch the money quite yet.\"",
"title": ""
},
{
"docid": "aea888d082dde7b0ae9ba723fe69f1ca",
"text": "\"given your time frame I'm not sure if investing in a 529 is your best option. If you're investing in a 529 you may have to deal with market volatility and the amount you invest over the course of three years could be worth less than what you had initially invested when it comes to your child's college education. The main idea of starting a plan like a 529 is the time-frame for your investments to grow. You also have the option of \"\"pre-paying\"\" your child's college, but that has restrictions. Most of the state sponsored pre-pay plans limit you to state schools if that wasn't obvious. Also, the current political situation is tricky, and may influence the cost of education in ~3-4 years, but I'm not sure this is the proper place for that discussion. Also, as far as the viability of these, it depends state-by-state. I live in Illinois and don't think I would count on a payout given our current financial situation. You could, however, look into paying tuition now for a state school and it will be risk free in terms of inflation, but again, it's hard to anticipate the political scope of this. They also have private pre-pay plans, but that would limit your child's university options just as the state pre-pay. Check out this investopedia article on 529 plans, it's basic but will give you a high level overview. Bankrate has an overview as well.\"",
"title": ""
},
{
"docid": "8459f004f4e0af10ecbb3300600c0704",
"text": "\"First - for anyone else reading - An IRA that has no beneficiary listed on the account itself passes through the will, and this eliminates the opportunity to take withdrawals over the beneficiaries' lifetimes. There's a five year distribution requirement. Also, with a proper beneficiary set up on the IRA account the will does not apply to the IRA. An IRA with me as sole beneficiary regardless of the will saying \"\"all my assets I leave to the ASPCA.\"\" This is also a warning to keep that beneficiary current. It's possible that one's ex-spouse is still on IRA or 401(k) accounts as beneficiary and new spouse is in for a surprise when hubby/wife passes. Sorry for the tangent, but this is all important to know. The funneling of a beneficiary IRA through a trust is not for amateurs. If set up incorrectly, the trust will not allow the stretch/lifetime withdrawals, but will result in a broken IRA. Trusts are not cheap, nor would I have any faith in any attorney setting it up. I would only use an attorney who specializes in Trusts and Estate planning. As littleadv suggested, they don't have to be minors. It turns out that the expense to set up the trust ($1K-2K depending on location) can help keep your adult child from blowing through a huge IRA quickly. I'd suggest that the trust distribute the RMDs in early years, and a higher amount, say 10% in years to follow, unless you want it to go just RMD for its entire life. Or greater flexibility releasing larger amounts based on life events. The tough part of that is you need a trustee who is willing to handle this and will do it at a low cost. If you go with Child's name only, I don't know many 18/21 year old kids who would either understand the RMD rules on IRAs or be willing to use the money over decades instead of blowing it. Edit - A WSJ article Inherited IRAs: a Sweet Deal and my own On my Death, Please, Take a Breath, an article that suggests for even an adult, education on how RMDs work is a great idea.\"",
"title": ""
},
{
"docid": "aa74f600145202151e5f547f789b0d7d",
"text": "\"Smart money (Merriam-Webster, Wiktionary) is simply a term that refers to the money that successful investors invest. It can also refer to the successful investors themselves. When someone tells you to \"\"follow the smart money,\"\" they are generally telling you to invest in the same things that successful investors invest in. For example, you might decide to invest in the same things that Warren Buffett invests in. However, there are a couple of problems with blindly following someone else's investments without knowing what you are doing. First, you are not in the same situation that the expert is in. Warren Buffett has a lot of money in a lot of places. He can afford to take some chances that you might not be able to take. So if you choose only one of his investments to copy, and it ends up being a loser, he is fine, but you are not. Second, when Warren Buffett makes large investments, he affects the price of stocks. For example, Warren Buffett's company recently purchased $1 Billion worth of Apple stock. As soon as this purchase was announced, the price of Apple stock went up 4% from people purchasing the stock trying to follow Warren Buffett. That having been said, it is a good idea to watch successful investors and learn from what they do. If they see a stock as something worth investing in, find out what it is that they see in that company.\"",
"title": ""
},
{
"docid": "6da4f2f93e76033d15a828d5afbe534e",
"text": "\"First off, leaving money in a 529 account is not that bad, since you may always change the beneficiary to most any blood relative. So if you have leftovers, you don't HAVE to pay the 10% penalty if you have a grandchild, for instance, that can use it. But if you would rather have the money out, then you need a strategy to get it out that is tax efficient. My prescription for managing a situation like this is not to pay directly out of the 529 account, but instead calculate your cost of education up-front and withdraw that money at the beginning of the school year. You can keep it in a separate account, but that's not necessary. The amount you withdraw should be equal to what the education costs, which may be estimated by taking the budget that the school publishes minus grants and scholarships. You should have all of those numbers before the first day of school. This is amount $X. During the year, write all the checks out of your regular account. At the end of the school year, you should expect to have no money left in the account. I presume that the budget is exactly what you will spend. If not, you might need to make a few adjustments, but this answer will presume you spend exactly $X during the fall and the spring of the next year. In order to get more out of the 529 without paying penalties, you are allowed to remove money without penalty, but having the gains taxed ($y + $z). You have the choice of having the 529 funds directed to the educational institution, the student, or yourself. If you direct the funds to the student, the gains portion would be taxed at the student's rate. Everyone's tax situation is different, and of course there is a linkage between the parent's taxes and student's taxes, but it may be efficient to have the 529 funds directed to the student. For instance, if the student doesn't have much income, they might not even be required to file income tax. If that's the case, they may be able to remove an amount, $y, from the 529 account and still not need to file. For instance, let's say the student has no unearned income, and the gains in the 529 account were 50%. The student could get a check for $2,000, $1,000 would be gains, but that low amount may mean the student was not required to file. Or if it's more important to get more money out of the account, the student could remove the total amount of the grants plus scholarships ($y + $z). No penalty would be due, just the taxes on the gains. And at the student's tax rate (generally, but check your own situation). Finally, if you really want the money out of the account, you could remove a check ($y + $z + $p). You'd pay tax on the gains of the sum, but penalty of 10% only on the $p portion. This answer does not include the math that goes along with securing some tax credits, so if those credits still are around as you're working through this, consider this article (which requires site sign-up). In part, this article says: How much to withdraw - ... For most parents, it will be 100% of the beneficiary’s qualified higher education expenses paid this year—tuition, fees, books, supplies, equipment, and room and board—less $4,000. The $4,000 is redirected to the American Opportunity Tax Credit (AOTC),... When to withdraw it - Take withdrawals in the same calendar year that the qualified expenses were paid. .... 
Designating the distributee - Since it is usually best that the Form 1099-Q be issued to the beneficiary, and show the beneficiary’s social security number, I prefer to use either option (2) or (3) [ (2) a check made out to the account beneficiary, or (3) a check made out to the educational institution] What about scholarships? - The 10 percent penalty on a non-qualified distribution from a 529 plan is waived when the excess distribution can be attributed to tax-free scholarships. While there is no direct guidance from the IRS, many tax experts believe the distribution and the scholarship do not have to match up in the same calendar year when applying the penalty waiver. If you're curious about timing (taking non-penalty grants and scholarship money out), there is this link, which says you \"\"probably\"\" are allowed to accumulate grants and scholarship totals, for tax purposes, over multiple years.\"",
"title": ""
},
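The passage above reasons in terms of $X, $y, $z and $p, with tax on the gains portion of a distribution and the 10% penalty only on the slice not covered by expenses or scholarships. A minimal sketch of that arithmetic follows; the 50%-gains, $2,000-check figures come from the passage, while the flat 10% student tax rate and the pro-rata treatment of gains are illustrative assumptions, not tax advice.

```python
# Sketch of the 529 withdrawal arithmetic from the passage above.
# The $2,000 check / 50%-gains figures are from the passage; the 10% student
# tax rate is made up, and gains are assumed to be pro-rated across each
# distribution. Only the earnings share of the non-qualified slice ($p) is
# penalized; the passage describes this loosely as "10% on the $p portion".

def distribution_breakdown(amount, gains_ratio, tax_rate, non_qualified=0.0):
    """Split a 529 distribution into basis, gains, tax, and 10% penalty."""
    gains = amount * gains_ratio
    basis = amount - gains
    tax = gains * tax_rate
    penalty = non_qualified * gains_ratio * 0.10
    return basis, gains, tax, penalty

# The passage's example: a $2,000 check to the student, 50% of it gains.
basis, gains, tax, penalty = distribution_breakdown(2_000, 0.50, tax_rate=0.10)
print(f"basis ${basis:,.0f}, gains ${gains:,.0f}, "
      f"tax ${tax:,.0f}, penalty ${penalty:,.0f}")
# -> basis $1,000, gains $1,000, tax $100, penalty $0
```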
{
"docid": "ef0d0989aefd08ec51ec39b83d0616d8",
"text": "\"If the investments are in a non-retirement, taxable account, there's not much you can do to avoid short-term capital gains if you sell now. Ways to limit short-term capital gains taxes: Donate -- you can donate some of the stock to charity (before selling it). Transfer -- you can give some of the stock to, say, a family member in a lower tax bracket. But there are tons of rules, gift limits, and won't work for little kids or full time students. They would still pay taxes at their own rate. Protect your gains by buying puts. Wait it out until the long-term capital gains rate kicks in. This allows you to lock in your gains now (but you won't benefit from potential future appreciation.) Buying puts also costs $, so do the ROI calculation. (You could also sell a call and buy a put at the same time and lock in your gains for certain, but the IRS often looks at that as locking in the short-term capital gain, so be careful and talk to a tax professional if you are considering that method.) Die. There's a \"\"step-up\"\" basis on capital gains for estates. source: http://www.forbes.com/2010/07/30/avoid-capital-gains-tax-anschutz-personal-finance-baldwin-tax-strategy.html\"",
"title": ""
},
{
"docid": "010f909268669b49b39dba8403b72e70",
"text": "Firstly, there is also a lifetime gift+estate tax allowance. If the father's estate, including other gifts given in his lifetime, is unlikely to exceed that allowance, it might be simplest simply to give the whole amount now and count it against the allowance. Right now the allowance is $5.34M, but that seems quite a big political football and it's the allowance when you die that matters. Looking back at past values for the allowance, $1M seems like a pretty safe amount to bet on. If you want to avoid/minimize the use of that allowance, I would make a loan structured as a mortgage that will have $14K payments each year (which can then be forgiven). The points in Rick Goldstein's answer about an appropriate rate, and being able to give more if more notional donors and recipients can be used, also apply. So for example in the first year hand over $200K at 3.5% and immediately forgive $14K. The next year, forgive the interest charge of $6.5K and capital of $7.5K. Given the age of the daughter, I guess the father might well die before its all paid off that way, leaving some residue to be forgiven by the estate (and thus potentially incurring estate taxes). There might also be state gift/estate taxes to consider. Edited to reflect 2014 gift and lifetime exclusion limits.",
"title": ""
},
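The loan-forgiveness numbers in the passage above ($200K at 3.5%, $14K forgiven per year, roughly $6.5K interest plus $7.5K principal in year two) can be reproduced with a short amortization loop. Only the figures quoted in the passage are used; the interest-first order of forgiveness is an assumption about how the author intends the $14K to be applied.

```python
# The forgiven-loan schedule from the passage: $200K lent at 3.5%,
# $14K forgiven each year, applied to interest first and then principal.
balance = 200_000.0
rate = 0.035
forgiven_per_year = 14_000.0

balance -= forgiven_per_year            # year 1: $14K of principal forgiven up front
for year in range(2, 6):
    interest = balance * rate
    principal = forgiven_per_year - interest
    balance -= principal
    print(f"year {year}: forgive ${interest:,.0f} interest + "
          f"${principal:,.0f} principal, balance ${balance:,.0f}")
# year 2 prints ~$6,510 interest + $7,490 principal, matching the passage's
# "interest charge of $6.5K and capital of $7.5K"
```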
{
"docid": "da4d9bd8bb3891fc78d8965d83723ad1",
"text": "TL:DR: You should read something like The Little Book of Common Sense Investing, and read some of the popular questions on this site. The main message that you will get from that research is that there is an inescapable connection between risk and reward, or to put it another way, volatility and reward. Things like government bonds and money market accounts have quite low risk, but also low reward. They offer a nearly guaranteed 1-3%. Stocks, high-risk bonds, or business ventures (like your soda and vending machine scheme) may return 20% a year some years, but you could also lose money, maybe all you've invested (e.g., what if a vandal breaks one of your machines or the government adds a $5 tax for each can of soda?). Research has shown that the best way for the normal person to use their money to make money is to buy index funds (these are funds that buy a bunch of different stocks), and to hold them for a long time (over 10-15 years). By buying a broad range of stocks, you avoid some of the risks of investing (e.g., if one company's stock tanks, you don't lose very much), while keeping most of the benefits. By keeping them for a long time, the good years more than even out the bad years, and you are almost guaranteed to make ~6-7%/year. Buying individual stocks is a really, really bad idea. If you aren't willing to invest the time to become an expert investor, then you will almost certainly do worse than index funds over the long run. Another option is to use your capital to start a side business (like your vending machine idea). As mentioned before, this still has risks. One of those risks is that it will take more work than you expect (who will find places for your vending machines? Who will fill them? Who will hire those who fill them? etc.). The great thing about an index fund is that it doesn't take work or research. However, if there are things that you want to do, that take capital, this can be a good way to make more income.",
"title": ""
},
{
"docid": "d4204f26bc88bab658ce2be226976e79",
"text": "\"Since I, personally, agree with the investment thesis of Peter Schiff, I would take that sum and put it with him in a managed account, and leave it there. I'm not sure how to find a firm that you like the investment strategy of. I think that it's too complicated to do as a side thing. Someone needs to be spending a lot of time researching various instruments and figuring out what is undervalued or what is exposed to changing market trends or whatever. I basically just want to give my money to someone and say \"\"I agree with your investment philosophy, let me pay you to manage my money, too.\"\" No one knows who is right, of course. I think Schiff is right, so that's where I would put the amount of money you're talking about. If you disagree with his investment philosophy, this doesn't really make any sense to do. For that amount of money, though, I think firms would be willing to sit down with you and sell you their services. You could ask them how they would diversify this money given the goals that you have for it, and pick one that you agree with the most.\"",
"title": ""
},
{
"docid": "7c7dbf0512932aa995f8d4924466f134",
"text": "\"Here's what I suggest... A few years ago, I got a chunk of change. Not from an inheritance, but stock options in a company that was taken private. We'd already been investing by that point. But what I did: 1. I took my time. 2. I set aside a chunk of it (maybe a quarter) for taxes. you shouldn't have this problem. 3. I set aside a chunk for home renovations. 4. I set aside a chunk for kids college fund 5. I set aside a chunk for paying off the house 6. I set aside a chunk to spend later 7. I invested a chunk. A small chunk directly in single stocks, a small chunk in muni bonds, but most just in Mutual Funds. I'm still spending that \"\"spend later\"\" chunk. It's about 10 years later, and this summer it's home maintenance and a new car... all, I figure it, coming out of some of that money I'd set aside for \"\"future spending.\"\"\"",
"title": ""
},
{
"docid": "d88b143f604b061c9ef2d7da84ec1e71",
"text": "\"Others have given some good answers. I'd just like to chime in with one more option: treasury I-series bonds. They're linked to an inflation component, so they won't lose value (in theory). You can file tax returns for your children \"\"paying\"\" taxes (usually 0) on the interest while they're minors, so they appreciate tax-free until they're 18. Some of my relatives have given my children money, and I've invested it this way. Alternatively, you can buy the I-bonds in your own name. Then if you cash them out for your kids' education, the interest is tax-free; but if you cash them out for your own use, you do have to pay taxes on the interest.\"",
"title": ""
},
{
"docid": "98308db7064246b27f37cdf304800bf8",
"text": "There are two types of 529 programs. One where you put money aside each month. The one offered by your state may give you a tax break on you deposits. You can pick the one from any state, if you like their options better. During the next 18 years the focus the investment changes from risky to less risky to no risk. This happens automatically. The money can be used for tuition, room, board, books, fees. The 2nd type of 529 is also offered by a state but it is geared for a big lump sum payment when the child is young. This will cover full tuition and fees (not room and board, or books) at a state school. The deal is not as great if they child wants to go out of state, or you move, or they want to go to a private school. You don't lose everything, but you will have to make up the shortfall at the last minute. There are provisions for scholarship money. If you kid goes to West Point you haven't wasted the money in the 529. The money in either plan is ignored while calculating financial aid. Other options such as the Coverdell Education Savings account also exist. But they don't have the options and state tax breaks. Accounts in the child's name can impact the amount of financial aid offered, plus they could decide to spend the money on a car. The automatic investment shift for most of the state 529 plans does cover your question of how much risk to take. There are also ways to transfer the money to other siblings if one decides not to go to college. Keep in mind that the funds don't have to be spent as soon as they turn 18, they can wait a few years before enrolling in college.",
"title": ""
},
{
"docid": "a4261f1668d674baea43a770ae8649a6",
"text": "If you plan on holding the money for 15 years, until your daughter turns 21, then advanced algebra tells me she is 6 years old. I think the real question is, what do you intend for your daughter to get out of this? If you want her to get a real return on her money, Mike Haskel has laid out the information to get you started deciding on that. But at 6, is part of the goal also teaching her about financial stewardship, principles of saving, etc.? If so, consider the following: When the money was physically held in the piggy bank, your daughter had theoretical control over it. She was exercising restraint, for delayed gratification (even if she did not really understand that yet, and even if she really didn't understand money / didn't know what she would do with it). By taking this money and putting it away for her, you are taking her out of the decision making - unless you plan on giving her access to the account, letting her decide when to take it out. Still, you could talk her through what you're doing, and ask her how she feels about it. But perhaps she is too young to understand what committing the money away until 21 really means. And if, for example, she wants to buy a bike when she is 10, do you want her to see the fruits of her saved money? Finally, consider that if you (or you & your daughter, depending on whether you want her to help in the decision) decide to put the money in a financial institution in some manner, the risk you are taking on may need to be part of the lesson for her. If you want to teach the general principles of saving, then putting it in bonds/CD's/Savings etc., may be sufficient, even if inflation lowers the value of the money. If you want to teach principles of investing, then perhaps consider waiting until she can understand why you are doing that. To a kid, I think the principles of saving & delayed gratification can be taught, but the principles of assuming risk for greater reward, is a bit more complex.",
"title": ""
},
{
"docid": "7ba5c8e77be27b5bbb0c9e0ac99adff3",
"text": "\"@MrChrister - Savings is a great idea. Coudl also give them 1/2 the difference, rather than the whole difference, as then you both get to benefit... Also, a friend of mine had the Bank of Dad, where he'd keep his savings, and Dad would pay him 100% interest every year. Clearly, this would be unsustainable after a while, but something like 10% per month would be a great way to teach the value of compounding returns over a shorter time period. I also think that it's critical how you respond to things like \"\"I want that computer/car/horse/bike/toy\"\". Just helping them to make a plan on how to get there, considering their income (and ways to increase it), savings, spending and so on. Help them see that it's possible, and you'll teach them a worthwhile lesson.\"",
"title": ""
},
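The 10%-per-month "Bank of Dad" rate suggested in the passage above compounds to roughly 214% a year, which is exactly why it makes the lesson vivid. A quick sketch, using a made-up $50 starting balance:

```python
# How a hypothetical $50 piggy-bank deposit grows at the 10%-per-month
# "Bank of Dad" rate mentioned above (compounded monthly, no new deposits).
balance = 50.0
for month in range(1, 13):
    balance *= 1.10
print(f"after 12 months: ${balance:.2f}")   # ~$156.92, i.e. about 214% for the year
```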
{
"docid": "f3bede6ba8aa81ad89f53ed375f4c18d",
"text": "MrChrister makes some good points, but I saw his invitation to offer a counter opinion. First, there is a normal annual deposit limit of $13,000 per parent or donee. This is the gift limit, due to rise to $14,000 in 2013. If your goal is strictly to fund college, and this limit isn't an issue for you, the one account may be fine unless both kids are in school at the same time. In that case, you're going to need to change beneficiaries every year to assign withdrawals properly. But, as you mention, there's gift money that your considering depositing to the account. In this case, there's really a legal issue. The normal 529 allows changes in beneficiary, and gifts to your child need to be held for that child in an irrevocable arrangement such as a UTMA account. There is a 529 flavor that provides for no change of beneficiary, a UTMA 529. Clearly, in that case, you need separate accounts. In conclusion, I think the single account creates more issues than it potentially solves. If the true gift money from others is minimal, maybe you should just keep it in a regular account. Edit - on further reflection, I strongly suggest you keep the relatives' gifts in a separate account, and when the kids are old enough to have legitimate earned income, use this money to open and deposit to Roth IRAs. They can deposit the lesser of their earned income or $5000 in 2012, $5500 in 2013. This serves two goals - avoiding the risk of gift money being 'stolen' from one child for benefit of the other, and putting it into an account that can help your children long term, but not impact college aid as would a simple savings or brokerage account.",
"title": ""
}
] |
fiqa
|
83e72a4a89efc27b47602a6ccdb6369b
|
Superannuation: When low risk options have higher return, what to do?
|
[
{
"docid": "cf07ca33a791c12a4ea5d40efce05453",
"text": "The long term view you are referring to would be over 30 to 40 years (i.e. your working life). Yes in general you should be going for higher growth options when you are young. As you approach retirement you may change to a more balanced or capital guaranteed option. As the higher growth options will have a larger proportion of funds invested into higher growth assets like shares and property, they will be affected by market movements in these asset classes. So when there is a market crash like with the GFC in 2007/2008 and share prices drop by 40% to 50%, then this will have an effect on your superannuation returns for that year. I would say that if your fund was invested mainly in the Australian stock market over the last 7 years your returns would still be lower than what they were in mid-2007, due to the stock market falls in late 2007 and early 2008. This would mean that for the 7 year time frame your returns would be lower than a balanced or capital guaranteed fund where a majority of funds are invested in bonds and other fixed interest products. However, I would say that for the 5 and possibly the 10 year time frames the returns of the high growth options should have outperformed the balanced and capital guaranteed options. See examples below: First State Super AMP Super Both of these examples show that over a 5 year period or less the more aggressive or high growth options performed better than the more conservative options, and over the 7 year period for First State Super the high growth option performed similar to the more conservative option. Maybe you have been looking at funds with higher fees so in good times when the fund performs well the returns are reduced by excessive fees and when the fund performs badly in not so good time the performance is even worse as the fees are still excessive. Maybe look at industry type funds or retail funds that charge much smaller fees. Also, if a fund has relatively low returns during a period when the market is booming, maybe this is not a good fund to choose. Conversely, it the fund doesn't perform too badly when the market has just crashed, may be it is worth further investigating. You should always try to compare the performance to the market in general and other similar funds. Remember, super should be looked at over a 30 to 40 year time frame, and it is a good idea to get interested in how your fund is performing from an early age, instead of worrying about it only a few years before retirement.",
"title": ""
}
] |
[
{
"docid": "cc3b53420f83deaefdcd21bacc9b616d",
"text": "Modern portfolio theory has a strong theoretical background and its conclusions on the risk/return trade-off have a lot of good supporting evidence. However, the conclusions it draws need to be used very carefully when thinking about retirement investing. If you were really just trying to just pick the one investment that you would guess would make you the most money in the future then yes, given no other information, the riskiest asset would be the best one. However, for most people the goal retirement investing is to be as sure as possible to retire comfortably. If you were to just invest in a single, very risky asset you may have the highest expected return, but the risk involved would mean there might be a good chance you money may not be there when you need it. Instead, a broad diversified basket of riskier and safer assets leaning more toward the riskier investments when younger and the safer assets when you get closer to retirement tends to be a better fit with most people's retirement goals. This tends to give (on average) more return when you are young and can better deal with the risk, but dials back the risk later in life when your investment portfolio is a majority of your wealth and you can least afford any major swings. This combines the lessons of MPT (diversity, risk/return trade-off) in a clearer way with common goals of retirement. Caveat: Your retirement goals and risk-tolerance may be very different from other peoples'. It is often good to talk to (fee-only) financial planner.",
"title": ""
},
{
"docid": "44eb02cae8302ba335d2032af7a43460",
"text": "You can only lose your 7%. The idea that a certain security is more volatile than others in your portfolio does not mean that you can lose more than the value of the investment. The one exception is that a short position has unlimited downside, but i dont think there are any straight short mutual funds.",
"title": ""
},
{
"docid": "638e5dffc189949a5b4ba471ef3f81ab",
"text": "First thing to know about investing is that you make money by taking risks. That means the possibility of losing money as well as making it. There are low risk investments that pretty much always pay out but they don't earn much. Making $200 a month on $10,000 is about 26% per year. That's vastly more than you are going to earn on low risk assets. If you want that kind of return, you can invest in a diversified portfolio of equities through an equity index fund. Some years you may make 26% or more. Other years you may make nothing or lose that much or more. On average you may earn maybe 7%-10% hopefully. Overall, investing is a game of making money over long horizons. It's very useful for putting away your $10k now and having hopefully more than that when it comes time to buy a house or retire or something some years into the future. You have to accept that you might also end up with less than $10K in the end, but you are more likely to make money than to use it. What you describe doesn't seem like a possible situation. In developed markets, you can't reliably expect anything close to the return you desire from assets that are unlikely to lose you money. It might be time to re-evaluate your financial goals. Do you want spending money now, or do you want to invest for use down the road?",
"title": ""
},
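The "about 26% per year" figure in the passage above comes from compounding $200 a month on $10,000 (2% per month). A quick check of that arithmetic:

```python
# 2% per month ($200 on $10,000), compounded for a year:
monthly = 200 / 10_000                     # 0.02
annual = (1 + monthly) ** 12 - 1
print(f"{annual:.1%}")                     # ~26.8%, the passage's "about 26%"
# taken as simple (non-compounded) interest it would be 12 * 2% = 24%
```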
{
"docid": "1aa8e87a1881bf344bdfee7c4c4e4eb5",
"text": "For a time period as short as a matter of months, commercial paper or bonds about to mature are the highest returning investments, as defined by Benjamin Graham: An investment operation is one which, upon thorough analysis, promises safety of principal and a satisfactory return. Operations not meeting these requirements are speculative. There are no well-known methods that can be applied to cryptocurrencies or forex for such short time periods to promise safety of principal. The problem is that with $1,500, it will be impossible to buy any worthy credit directly and hold to maturity; besides, the need for liquidity eats up the return, risk-adjusted. The only alternative is a bond ETF which has a high probability of getting crushed as interest rates continue to rise, so that fails the above criteria. The only alternative for investment now is a short term deposit with a bank. For speculation, anything goes... The best strategy is to take the money and continue to build up a financial structure: saving for risk-adjusted and time-discounted future annual cash flows. After the average unemployment cycle is funded, approximately six or so years, then long-term investments should be accumulated, internationally diversified equities.",
"title": ""
},
{
"docid": "114919b2d796acd6c72888553ba2b2f3",
"text": "Sorry to be boring but you have the luxury of time and do not need high-risk investments. Just put the surplus cash into a diversified blue-chip fund, sit back, and enjoy it supporting you in 50 years time. Your post makes me think you're implicitly assuming that since you have a very high risk tolerance you ought to be able to earn spectacular returns. Unfortunately the risks involved are extremely difficult to quantify and there's no guarantee they're fairly discounted. Most people would intuitively realise betting on 100-1 horses is a losing proposition but not realise just how bad it is. In reality far fewer than one in a thousand 100-1 shots actually win.",
"title": ""
},
{
"docid": "de2f8020f2afe5a02fa537ebb9f85250",
"text": "\"To be completely honest, I think that a target of 10-15% is very high and if there were an easy way to attain it, everyone would do it. If you want to have such a high return, you'll always have the risk of losing the same amount of money. Option 1 I personally think that you can make the highest return if you invest in real estate, and actively manage your property(s). If you do this well with short term rental and/or Airbnb I think you can make healthy returns BUT it will cost a lot of time and effort which may diminish its appeal. Think about talking to your estate agent to find renters, or always ensuring your AirBnB place is in good nick so you get a high rating and keep getting good customers. If you're looking for \"\"passive\"\" income, I don't think this is a good choice. Also make sure you take note of karancan's point of costs. No matter what you plan for, your costs will always be higher than you think. Think about water damage, a tenant that breaks things/doesn't take care of stuff etc. Option 2 I think taking a loan is unnecessarily risky if you're in good financial shape (as it seems), unless you're gonna buy a house with a mortgage and live in it. Option 3 I think your best option is to buy bonds and shares. You can follow karancan's 100 minus your age rule, which seems very reasonable (personally I invest all my money in shares because that's how my father brought me up, but it's really a matter of taste. Both can be risky though bonds are usually safer). I think I should note that you cannot expect a return of 10% or more because, as everyone always says, if there were a way to guarantee it, everyone would do it. You say you don't have any idea how this works so I'd go to my bank and ask them. You probably have access to private banking so that should mean someone will be able to sit you down and talk you through. Also look at other banks that have better rates and/or pretend you're leaving your bank to negotiate a better deal. If I were you I'd invest in blue chips (big international companies listed on the main indeces (DAX, FTSE 100, Dow Jones)), or (passively managed) mutual funds/ETFs that track these indeces. Just remember to diversify by country and industry a bit. Note: i would not buy the vehicles/plans that my bank (no matter what they promise, and they promise a lot) suggest because if you do that then the bank always takes a cut off your money. TlDr, dont expect to make 10-15% on a passive investment and do what a lot of others do: shares and bonds. Also make sure you get a lot of peoples opinions :)\"",
"title": ""
},
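The "100 minus your age" rule mentioned under Option 3 above translates directly into a tiny allocation function. This is only the rule of thumb from the passage, not a recommendation:

```python
# The "100 minus your age" rule of thumb from the passage above.
def equity_allocation(age: int) -> float:
    """Fraction of a portfolio held in shares; the rest goes to bonds."""
    return max(0, min(100, 100 - age)) / 100

for age in (25, 40, 60, 75):
    shares = equity_allocation(age)
    print(f"age {age}: {shares:.0%} shares / {1 - shares:.0%} bonds")
# age 25: 75% shares / 25% bonds ... age 75: 25% shares / 75% bonds
```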
{
"docid": "2eb5c9d745da2fe15810ffd0b2fd4451",
"text": "Hedging - You have an investment and are worried that the price might drop in the near future. You don't want to sell as this will realise a capital gain for which you may have to pay capital gains tax. So instead you make an investment in another instrument (sometimes called insurance) to offset falls in your investment. An example may be that you own shares in XYZ. You feel the price has risen sharply over the last month and is due for a steep fall. So you buy some put option over XYZ. You pay a small premium and if the price of XYZ falls you will lose money on the shares but will make money on the put option, thus limiting your losses. If the price continues to go up you will only lose the premium you paid for the option (very similar to an insurance policy). Diversification - This is when you may have say $100,000 to invest and spread your investments over a portfolio of shares, some units in a property fund and some bonds. So you are spreading your risks and returns over a range of products. The idea is if one stock or one sector goes down, you will not lose a large portion of your investment, as it is unlikely that all the different sectors will all go down at the same time.",
"title": ""
},
{
"docid": "e268ba5a1f0eabe9d353910aa6c5858e",
"text": "If you want higher returns you may have to take on more risk. From lowest returns (and usually lower risk) to higher returns (and usually higher risk), Bank savings accounts, term deposits, on-line savings accounts, offset accounts (if you have a mortgage), fixed interest eg. Bonds, property and stock markets. If you want potentially higher returns then you can go for derivatives like options or CFDs, FX or Futures. These usually have higher risks again but as with any investments some risks can be partly managed. Also, CMC Markets charges $11 commission up to $10,000 trade. This is actually quite a low fee - based on your $7,000, $22 for in and out of a position would be less than 0.32% (of course you might want to buy into more than one company - so your brokerage would be slightly higher). Still this is way lower than full service brokerage which could be $100 or more in and then again out again. What ever you decide to do, get yourself educated first.",
"title": ""
},
{
"docid": "073cb8a7fb44788cd73b350958d3e45c",
"text": "\"This is basically what financial advisers have been saying for years...that you should invest in higher risk securities when you are young and lower risk securities when you get older. However, despite the fact that this is taken as truth by so many financial professionals, financial economists have been unable to formulate a coherent theory that supports it. By changing the preferences of their theoretical investors, they can get solutions like putting all your investments in a super safe asset until you get to a minimum survival level for retirement and then investing aggressively and many other solutions. But for none of the typically assumed preferences does investing aggressively when young and becoming more conservative as you near retirement seem to be the solution. I'm not saying there can be no such preferences, but the difficulty in finding them makes me think maybe this idea is not actually correct. Couple of problems with your intuition that you should think about: It's not clear that things \"\"average out\"\" over time. If you lose a bunch of money in some asset, there's no reason to think that by holding that asset for a while you will make back what you lost--prices are not cyclical. Moreover, doesn't your intuition implicitly suggest that you should transition out of risky securities as you get older...perhaps after having lost money? You can invest in safe assets (or even better, the tangency portfolio from your graph) and then lever up if you do want higher risk/return. You don't need to change your allocation to risky assets (and it is suboptimal to do so--you want to move along the CAL, not the curve). The riskiness of your portfolio should generally coincide (negatively) with your risk-aversion. When you are older and more certain about your life expectancy and your assets, are you exposed to more or less risks? In many cases, less risks. This means you would choose a more risky portfolio (because you are more sure you will have enough to live on until death even if your portfolio takes a dive). Your actual portfolio consists both of your investments and your human capital (the present value of your time and skills). When you are young, the value of this capital changes significantly with market performance so you already have background risk. Buying risky securities adds to that risk. When you are old, your human capital is worth little, so your overall portfolio becomes less risky. You might want to compensate by increasing the risk of your investments. EDIT: Note that this point may depend on how risky your human capital is (how likely it is that your wage or job prospects will change with the economy). Overall the answer to your question is not definitively known, but there is theoretical evidence that investing in risky securities when young isn't optimal. Having said that, most people do seem to invest in riskier securities when young and safer when they are older. I suspect this is because with life experience people become less optimistic as they get older, not because it is optimal to do so. But I can't be sure.\"",
"title": ""
},
{
"docid": "050c767b77c61494380662aa4b300d36",
"text": "\"Investing is always a matter of balancing risk vs reward, with the two being fairly strongly linked. Risk-free assets generally keep up with inflation, if that; these days advice is that even in retirement you're going to want something with better eturns for at least part of your portfolio. A \"\"whole market\"\" strategy is a reasonable idea, but not well defined. You need to decide wheher/how to weight stocks vs bonds, for example, and short/long term. And you may want international or REIT in the mix; again the question is how much. Again, the tradeoff is trying to decide how much volatility and risk you are comfortable with and picking a mix which comes in somewhere around that point -- and noting which assets tend to move out of synch with each other (stock/bond is the classic example) to help tune that. The recommendation for higher risk/return when you have a longer horizon before you need the money comes from being able to tolerate more volatility early on when you have less at risk and more time to let the market recover. That lets you take a more aggressive position and, on average, ger higher returns. Over time, you generally want to dial that back (in the direction of lower-risk if not risk free) so a late blip doesn't cause you to lose too much of what you've already gained... but see above re \"\"risk free\"\". That's the theoretical answer. The practical answer is that running various strategies against both historical data and statistical simulations of what the market might do in the future suggests some specific distributions among the categories I've mentioned do seem to work better than others. (The mix I use -- which is basically a whole-market with weighting factors for the categories mentioned above -- was the result of starting with a general mix appropriate to my risk tolerance based on historical data, then checking it by running about 100 monte-carlo simulations of the market for the next 50 years.)\"",
"title": ""
},
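The passage above describes sanity-checking an asset mix by running about 100 Monte Carlo simulations of the market over 50 years. Below is a minimal sketch of that kind of check; the 7% mean return, 15% annual volatility, normal-draw model and $100K starting balance are illustrative assumptions, not the author's actual inputs.

```python
import random

# Toy Monte Carlo check of a portfolio over 50 years, in the spirit of the
# passage above. All numeric assumptions here are made up for illustration.
def simulate_path(start=100_000.0, years=50, mean=0.07, stdev=0.15, rng=random):
    balance = start
    for _ in range(years):
        balance *= 1 + rng.gauss(mean, stdev)   # one random annual return
        balance = max(balance, 0.0)             # can't go below zero
    return balance

random.seed(0)
outcomes = sorted(simulate_path() for _ in range(100))   # ~100 runs, as in the passage
print(f"rough median ending balance: ${outcomes[50]:,.0f}")
print(f"rough 10th percentile:       ${outcomes[10]:,.0f}")
```

Looking at the spread of outcomes, rather than a single average, is the point of the exercise.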
{
"docid": "97d606e1bf5eedca0cde9f1fecfc9618",
"text": "\"This is basically martingale, which there is a lot of research on. Basically in bets that have positive expected value such as inflation hedged assets this works better over the long term, than bets that have negative expected value such as table games at casinos. But remember, whatever your analysis is: The market can stay irrational longer than you can stay solvent. Things that can disrupt your solvency are things such as options expiration, limitations of a company's ability to stay afloat, limitations in a company's ability to stay listed on an exchange, limitations on your borrowings and interest payments, a finite amount of capital you can ever acquire (which means there is a limited amount of times you can double down). Best to get out of the losers and free up capital for the winners. If your \"\"trade\"\" turned into an \"\"investment\"\", ditch it. Don't get married to positions.\"",
"title": ""
},
{
"docid": "338d9cb346b9483fc89bc09494a8563f",
"text": "Mary Holm, who has a column on money in the NZ Herald, had this to say on the matter: Research shows that over and over again, the top dogs in one decade can be bottom dogs the next decade, and vice versa. Past performance really is no guide to the future. ... Fees are much less likely than returns to change over time. And low fees make a big difference to the long-term growth of your account. So just how low are your fund's fees? Perhaps that means the most sensible bet would be to pick a fund with good (but not necessarily best) historical returns but also with low fees. Personally, I've also included ethics as a factor in the decision, and I chose the ethical investment option offered by Superlife, hoping that it makes a small difference as explained here. (I have no affiliation with Superlife)",
"title": ""
},
{
"docid": "a45d1335104ace690d1de07daca77cc3",
"text": "\"I'd question whether a guaranteed savings instrument underperforming the stock market really is a risk, or not? Rather, you reap what you sow. There's a trade-off, and one makes a choice. If one chooses to invest in a highly conservative, low-risk asset class, then one should expect lower returns from it. That doesn't necessarily mean the return will be lower — stock markets could tank and a CD could look brilliant in hindsight — but one should expect lower returns. This is what we learn from the risk-return spectrum and Modern Portfolio Theory. You've mentioned and discounted inflation risk already, and that would've been one I'd mention with respect to guaranteed savings. Yet, one still accepts inflation risk in choosing the 3% CD, because inflation isn't known in advance. If inflation happened to be 2% after the fact, that just means the risk didn't materialize. But, inflation could have been, say, 4%. Nevertheless, I'll try and describe the phenomenon of significantly underperforming a portfolio with more higher-risk assets. I'd suggest one of: Perhaps we can sum those up as: the risk of \"\"investing illiteracy\"\"? Alternatively, if one were actually fully aware of the risk-reward spectrum and MPT and still chose an excessive amount of low-risk investments (such that one wouldn't be able to attain reasonable investing goals), then I'd probably file the risk under psychological risk, e.g. overly cautious / excessive risk aversion. Yet, the term \"\"psychological risk\"\", with respect to investing, encompasses other situations as well (e.g. chasing high returns.) FWIW, the risk of underperformance also came to mind, but I think that's mostly used to describe the risk of choosing, say, an actively-managed fund (or individual stocks) over a passive benchmark index investment more likely to match market returns.\"",
"title": ""
},
{
"docid": "ec424b8304b09e414879c974e3e7db78",
"text": "\"You are conflating two different types of risk here. First, you want to invest money, and presumably you're not looking at the \"\"lowest risk, lowest returns\"\" end of the spectrum. This is an inherently risky activity. Second, you are in a principal-agent relationship with your advisor, and are exposed to the risk of your advisor not maximizing your profits. A lot has been written on principal-agent theory, and while incentive schemes exist, there is no optimal solution. In your case, you hope that your agent will start maximizing your profits if they are 100% correlated with his profits. While this idea is true (at least according to standard economic theory, you could find exceptions in behavioral economics and in reality), it also forces the agent to participate in the first risk. From the point of view of the agent, this does not make sense. He is looking to render services and receive income for it. An agent with integrity is certainly prepared to carry the risk of his own incompetence, just like Apple is prepared to replace your iPhone should it not start one day. But the agent is not prepared to carry additional risks such as the market risk, and should not be compelled to do so. It is your risk, a risk you personally take by deciding to play the investment gamble, and you cannot transfer it to somebody else. Of course, what makes the situation here more difficult than the iPhone example is that market-driven losses cannot be easily distinguished from incompetent-agent losses. So, there is no setup in which you carry the market risk only and your agent carries the incompetence risk only. But as much as you want a solution in which the agent carries all risk, you probably won't find an agent willing to sign such a contract. So you have to simply accept that both the market risk and the incompetence risk are inherent to being an investor. You can try to mitigate your own incompetence by having an advisor invest for you, but then you have to accept the risk of his incompetence. There is no way to depress the total incompetence risk to zero.\"",
"title": ""
},
{
"docid": "c849f182aee1eb0b098b8e7111a4a1b7",
"text": "I think you may be confused on terminology here. Financial leverage is debt that you have taken on, in order to invest. It increases your returns, because it allows you to invest with more money than what you actually own. Example: If a $1,000 mutual fund investment returns $60 [6%], then you could also take on $1,000 of debt at 3% interest, and earn $120 from both mutual fund investments, paying $30 in interest, leaving you with a net $90 [9% of your initial $1,000]. However, if the mutual fund 'takes a nose dive', and loses money, you still need to pay the $30 interest. In this way, using financial leverage actually increases your risk. It may provide higher returns, but you have the risk of losing more than just your initial principle amount. In the example above, imagine if the mutual fund you owned collapsed, and was worth nothing. Now, you would have lost $1,000 from the money you invested in the first place, and you would also still owe $1,000 to the bank. The key take away is that 'no risk' and 'high returns' do not go together. Safe returns right now are hovering around 0% interest rates. If you ever feel you have concocted a mix of options that leaves you with no risk and high returns, check your math again. As an addendum, if instead what you plan on doing is investing, say, 90% of your money in safe(r) money-market type funds, and 10% in the stock market, then this is a good way to reduce your risk. However, it also reduces your returns, as only a small portion of your portfolio will realize the (typically higher) gains of the stock market. Once again, being safer with your investments leads to less return. That is not necessarily a bad thing; in fact investing some part of your portfolio in interest-earning low risk investments is often advised. 99% is basically the same as 100%, however, so you almost don't benefit at all by investing that 1% in the stock market.",
"title": ""
}
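The leverage passage above works through a small arithmetic example ($1,000 of your own money, $1,000 borrowed at 3%, a 6% fund return, a net 9% result). Below is a minimal Python sketch of that same calculation; the function name `levered_return` and the total-loss scenario on the last line are illustrative assumptions, not part of the original answer.

```python
# Minimal sketch of the leverage arithmetic described in the passage above.
# The 6% fund return, 3% borrowing cost, and $1,000 amounts come from that passage.

def levered_return(own_capital, borrowed, asset_return, borrow_rate):
    """Net return on own capital when part of the position is financed with debt."""
    gross_gain = (own_capital + borrowed) * asset_return   # gain on the whole position
    interest_cost = borrowed * borrow_rate                 # interest owed regardless of outcome
    net_gain = gross_gain - interest_cost
    return net_gain / own_capital

print(levered_return(1000, 0, 0.06, 0.03))      # 0.06  -> 6% unlevered
print(levered_return(1000, 1000, 0.06, 0.03))   # 0.09  -> 9% levered, as in the passage
print(levered_return(1000, 1000, -1.00, 0.03))  # -2.03 -> lose more than the initial $1,000
```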
] |
fiqa
|
acfc2ce541a94b3b0b1996682e46d5e4
|
Can I save our credit with a quickie divorce?
|
[
{
"docid": "5016a4a2d397b4ae8ad6ee30a58fc3f1",
"text": "If you're not insolvent, doing something like this is both a moral and legal hazard: When you are insolvent, the tax and moral hazard issues can be a non-issue. Setting up a scenario that makes you appear to be insolvent is where the fraud comes in. If you decide to go down this road, spend a few thousand dollars on competent legal advice.",
"title": ""
},
{
"docid": "26f799670bf8a32dc2cc09fa3609cb0e",
"text": "My advice to you? Act like responsible adults and owe up to your financial commitments. When you bought your house and took out a loan from the bank, you made an agreement to pay it back. If you breach this agreement, you deserve to have your credit score trashed. What do you think will happen to the $100K+ if you decide to stiff the bank? The bank will make up for its loss by increasing the mortgage rates for others that are taking out loans, so responsible borrowers get to subsidize those that shirk their responsibilities. If you were in a true hardship situation, I would be inclined to take a different stance. But, as you've indicated, you are perfectly able to make the payments -- you just don't feel like it. Real estate fluctuates in value, just like any other asset. If a stock I bought drops in value, does the government come and bail me out? Of course not! What I find most problematic about your plan is that not only do you wish to breach your agreement, but you are also looking for ways to conceal your breach. Please think about this. Best of luck with your decision.",
"title": ""
}
] |
[
{
"docid": "cd7b2260cf22b2b28ded192e30046001",
"text": "\"I can only share with you my happened with my wife and I. First, and foremost, if you think you need to protect your assets for some reason then do so. Be open and honest about it. If we get a divorce, X stays with me, and Y stays with you. This seems silly, even when your doing it, but it's important. You can speak with a lawyer about this stuff as you need to, but get it in writing. Now I know this seems like planning for failure, but if you feel that foo is important to you, and you want to retain ownership of foo no mater what, then you have to do this step. It also works both ways. You can use, with some limitations, this to insulate your new family unit from your personal risks. For example, my business is mine. If we break up it stays mine. The income is shared, but the business is mine. This creates a barrier that if someone from 10 years ago sues my business, then my wife is protected from that. Keep in mind, different countries different rules. Next, and this is my advise. Give up on \"\"his and hers\"\" everything. It's just \"\"ours\"\". Together you make 5400€ decide how to spend 5400€ together. Pick your goals together. The pot is 5400€. End of line. It doesn't matter how much from one person or how much from another (unless your talking about mitigating losses from sick days or injuries or leave etc.). All that matters is that you make 5400€. Start your budgeting there. Next setup an equal allowance. That is money, set aside for non-sense reasons. I like to buy video games, my wife likes to buy books. This is not for vacation, or stuff together, but just little, tiny stuff you can do for your self, without asking \"\"permission\"\". The number should be small, and equal. Maybe 50€. Finally setup a budget. House Stuff 200€, Car stuff 400€. etc. etc. then it doesn't matter who bought the house stuff. You only have to coordinate so that you don't both buy house stuff. After some time (took us around 6 months) you will find out how this works and you can add on some rules. For example, I don't go to Best Buy alone. I will spend too much on \"\"house stuff\"\". My wife doesn't like to make the budget, so I handle that, then we go over it. Things like that.\"",
"title": ""
},
{
"docid": "a6635e399ceaee7d6596e7459a9a69b3",
"text": "As per Chad's request, I recommend that you keep at least one card in each name as primary card holder, with the spouse being the secondary card holder, most easily done by each adding the spouse as the secondary holder to his/her own card. Since credit reporting is usually in the name of the primary credit card holder, this allows both to continue to have credit history, important when the marriage ends (in death or divorce as the case may be). When you travel, each should carry only the cards on which he/she is the primary card holder; not all cards. This helps in case of a wallet or purse being stolen; you have to report only one set of cards as lost and request their replacement, and you have a set of cards that you can use in the mean time (as long as you are not in different places when the loss occurs).",
"title": ""
},
{
"docid": "401f7428ed931f735623b09ea8b9897f",
"text": "\"Here's what my wife and I did. First, we stopped using credit cards and got rid of all other expenses that we absolutely didn't need. A few examples: cable TV, home phone, high end internet - all shut off. We changed our cell phone plan to a cheap one and stopped going out to restaurants or bars. We also got rid of the cars that had payments on them and replaced them with ones we paid cash for. Probably the most painful thing for me was selling a 2 year old 'vette and replacing it with a 5 year old random 4 door. Some people might tell you don't do this because older cars need repairs. Fact is, nearly all cars are going to need repairs. It's just a matter of whether you are also making payments on it when they need them and if you can discipline yourself enough to save up a bit to cover those. After doing all this the only payments we had to make were for the house (plus electric/gas/water) and the debt we had accumulated. I'd say that if you have the option to move back into your parent's house then do it. Yes, it will suck for a while but you'll be able to pay everything off so much faster. Just make sure to help around the house. Ignore the guys saying that this tanks your score and will make getting a house difficult. Although they are right that it will drop your score the fact is that you aren't in any position to make large purchases anyway and won't be for quite some time, so it really doesn't matter. Your number one goal is to dig yourself out of this hole, not engage in activity that will keep you in it. Next, if you are only working part time then you need to do one of two things. Either get a full time job or go find a second part time one. The preference is obviously on the first, which you should be able to do in your spare time. If, for some reason, you don't have the tech skills necessary to do this then go find any part time job you can. It took us about 3 years to finally pay everything (except the house) off - we owed a lot. During that time everything we bought was paid for in cash with the vast majority of our money going to pay off those accounts. Once the final account was paid off, I did go ahead and get a credit card. I made very minor purchases on it - mostly just gas - and paid it off a few days before it was due each month. Every 4 months they increased my limit. After around 18 months of using that one card my credit score was back in the 700+ range and with no debt other than the mortgage. *note: I echo what others have said about \"\"Credit Repair\"\" companies. Anything they can do, you can too. It's a matter of cutting costs, living within your means and paying the bills. If the interest rates are killing you, then try to get a consolidation loan. If you can't do that then negotiate settlements with them, just get everything in writing prior to making a payment on it if you go this route. BTW, make sure you actually can't pay them before attempting to settle.\"",
"title": ""
},
{
"docid": "737a84c075b317740b52a0f932e0261a",
"text": "\"It is possible to achieve a substitute for refinancing, but because of the \"\"short\"\" life of cars at least relative to housing, there are no true refinancings. First, the entire loan will not be able to be refinanced. The balance less approximately 80% of the value of the car will have to be repaid. Cars depreciate by something like 20% per year, so $2,000 will have to be repaid. Now, you should be able to get a loan if your boyfriend has good credit, but the interest rate will not drop too much further from the current loan's rate because of your presumably bad credit rating, assumed because of your current interest rate. While this is doable, this is not a good strategy if you intend to have a long term relationship. One of the worst corruptors of a relationship is money. It will put a strain on your relationship and lower the odds of success. The optimal strategy, if the monthly payments are too high, is to try to sell the car so to buy a cheaper car. The difficulty here is that the bank will not allow this if balance of the loan exceeds the proceeds from the sale, so putting as much money towards paying the balance to allow a sale is best. As a side note, please insure your car against occurrences such as theft and damage with a deductible low enough to justify the monthly payment. It is a terrible position to have a loan, no car, and no collateral against the car.\"",
"title": ""
},
{
"docid": "b8168abb311c5dc9717a049d9a4fb9ca",
"text": "I would not be concerned about the impact to your credit rating. You already have an excellent credit score, and the temporary change to your utilization will have minimal impact to your score. If you really need to make this $2500 purchase and you have the money in the bank to pay for it, I would not recommend borrowing this money. Only put it on the credit card if you plan on paying it off in full without paying interest. Let me ask you this: Why do you want to keep this $2500 in the bank? It certainly isn't earning you anything significant. My guess is that you'd like to keep it there for an emergency. Well, is this $2500 purchase an emergency? If it is necessary, then spend the money. If not, then save up the money until you have enough to make the purchase. It doesn't make sense to keep money for an emergency in the bank, but then when one comes up, to leave the money in the bank and pay interest on your emergency purchase. If you make this emergency purchase and another emergency comes up, you can always (if necessary) borrow the money at that time. It doesn't make sense to borrow money before you need it. That having been said, I would encourage you to build up your emergency fund so that you have enough money in there to handle things like this without completely depleting your savings account. 3 to 6 months of expenses is the general recommendation for your emergency fund. Then if something unplanned comes up, you'll have the money in the bank without having to borrow and pay interest.",
"title": ""
},
{
"docid": "89739766c7339ba2a9cc64de0444c12d",
"text": "I know you say you are aware of secured and unsecured debt and you've made your decision. Did you do the numbers? You will pay 44k over the life of the mortgage for that 24k (Based on 4.5% APR mortgage). Once you refinance your mortgage, do you plan on using credit for a while? Lots of Americans are hyperfocused on credit scores. The only times it affects your life are when you finance something, when you apply to rent a house or apartment, and sometimes when you apply for a job. Credit score should not be a factor in this decision. You're borrowing the money at a lower rate to pay off the high rate cards because you want to pay less in interest. Considering #1 is there any reason NOT to pay off the cards immediately, if not sooner?",
"title": ""
},
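The passage above quotes roughly $44k paid over the life of a 4.5% mortgage for $24k of rolled-in card debt. Here is a small sketch of where that figure plausibly comes from; the 30-year term is my assumption, since the passage does not state it.

```python
# Hedged sketch reproducing the "~$44k paid for $24k" figure from the passage above,
# assuming the $24k is rolled into a 30-year mortgage at 4.5% APR.

def total_paid(principal, annual_rate, years):
    r = annual_rate / 12
    n = years * 12
    payment = principal * r / (1 - (1 + r) ** -n)   # standard amortization formula
    return payment * n

print(round(total_paid(24_000, 0.045, 30)))  # ~43,800 -- roughly the $44k quoted
```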
{
"docid": "79febff37005fe840f1be5912c0f914c",
"text": "\"You say Also I have been the only one with an income in our household for last 15 years, so for most of our marriage any debts have been in my name. She has a credit card (opened in 1999) that she has not used for years and she is also a secondary card holder on an American Express card and a MasterCard that are both in my name (she has not used the cards as we try to keep them only for emergencies). This would seem to indicate that the dealer is correct. Your wife has no credit history. You say that you paid off her student loans some years back. If \"\"some years\"\" was more than seven, then they have dropped off her credit report. If that's the most recent credit activity, then she effectively has none. Even if you get past that, note that she also doesn't have any income, which makes her a lousy co-signer. There's no real circumstance where you couldn't pay for the car but she could based on the historical data. She would have to get a job first. Since they had no information on her whatsoever, they probably didn't even get to that.\"",
"title": ""
},
{
"docid": "fdc4fb5e150939da5af1384a61a75eeb",
"text": "On the face, this appears a sound method to manage long run cumulative interest, but there are some caveats. Maxing out credit cards will destroy your credit rating. You will receive no more reasonable offers for credit, only shady ones. Though your credit rating will rise the moment you bring the balance back down to 10%, even with high income, it's easy to overshoot the 8 months, and then a high interest rate kicks in because of the low credit rating. Further, maxing out credit cards will encourage credit card lenders to begin cutting limits and at worse demand early payment. Now, after month 6 hits, your financial payment obligations skyrocket. A sudden jolt is never easy to manage. This will increase risk of missing a payment, a disaster for such hair line financing. In short, the probability of decimating your financial structure is high for very little benefit. If you are confident that you can pay off $4,000 in 8 months then simply apply those payments to the student loan directly, cutting out the middle man. Your creditors will be pleased to see your total liabilities fall at a high rate while your utilization remains small, encouraging them to offer you more credit and lower rates. The ideal credit card utilization rate is 10%, so it would be wise to use that portion to repay the student loans. Building up credit will allow you to use the credit as an auxiliary cushion when financial disaster strikes. Keeping an excellent credit rating will allow you to finance the largest home possible for your money. Every percentage point of mortgage interest can mean the difference between a million USD home and a $750,000 one.",
"title": ""
},
{
"docid": "4515ff7c68751854ae690a9c5f902ff0",
"text": "If you hadn't done it already cut up the cards. Don't close the accounts because it could hurt your credit score even more. Switching some or all of the CC debt onto low rate cards, or a debt consolidation loan is a way that some people use to reduce their credit card payments. The biggest risk is that you become less aggressive with the loan payback. If you were planning on paying $800 in minimum payments,plus $200 extra each month; then still pay $1000 with the new loan and remaining credit cards. Another risk is that you start overusing the credit card again, because you have available credit on the card that was paid off with the loan. The third risk, which you haven't proposed, occurs when people switch unsecured credit card dept, to a secured 2nd mortgage debt. This then puts the family home at risk.",
"title": ""
},
{
"docid": "0c5a5ed7bb766e7dc97275d21ffc8f2e",
"text": "I know one piece of information that can help you (in a macabe sort of way) - from what my wife has told me, if your partner dies, you are not responsible for paying for their debts, especially student loans. I expect the same thing for credit cards - if someone were to happen to charge $2,000 on their credit card and get hit by a bus, the credit card company can cajole and plead for you to pay for it, but you have no legal requirement to do so. Unfortunately I do not have as much information about as if you spouse is living.",
"title": ""
},
{
"docid": "377cac873084e349792849a9b7b8c278",
"text": "Some already mentioned that you could pay with your savings and use the credit card as an emergency buffer. However, if you think there is a reasonable chance that your creditcard gets revoked and that you need cash quickly, here is a simple alternative:",
"title": ""
},
{
"docid": "598153a7fcb075f9ecd75da3e70bcd10",
"text": "Why not use the money and pay the cards off? You say you'll have no money to your name, and while that's true, you do have $36,000 in available credit should an emergency arise. If it were me, I'd pay them off, make every effort to live on the cash I have without using credit and leave the cards open as a source of emergency funds (new home theaters are not emergencies!) until I got enough savings built up to not have to use credit at all.",
"title": ""
},
{
"docid": "b635afd94d43e82e31a07520949534a0",
"text": "This is probably a good time to note that credit is not a liquid asset, and not an emergency fund. Credit can be revoked or denied at any time, and Murphy's law states that you may have issues with credit when everything else goes wrong too.",
"title": ""
},
{
"docid": "93cfc7f27a3b137773cb171345b602eb",
"text": "I doubt it. If you have a good track record with your car loan, that will count for a lot more than the fact that you don't have it anymore. When you look for a house, your debt load will be lower without the car loan, which may help you get the mortgage you want. Just keep paying your credit card bills on time and your credit rating will improve month by month.",
"title": ""
},
{
"docid": "fc8424217a86294ba50e8a485dea0f79",
"text": "\"Pay cash. You have the cash to pay for it now, but God forbid something happen to you or your wife that requires you to dip into that cash in the future. In such an event, you could end up paying a lot more for your home theater than you planned. The best way to keep your consumer credit card debt at zero (and protect your already-excellent credit) is to not add to the number of credit cards you already have. At least in the U.S., interest rates on saving accounts of any sort are so low, I don't think it's worthwhile to include as a deciding factor in whether not you \"\"borrow\"\" at 0% instead of buying in cash.\"",
"title": ""
}
] |
fiqa
|
54fc90af4bc951665cf987c934eac1cb
|
Investment property refinance following a low appraisal?
|
[
{
"docid": "310ce16fe56a397df07b40162c76b9cb",
"text": "Definitely don't borrow from your 401K. If you quit or get laid off, you have to repay the whole amount back immediately, plus you are borrowing from your opportunity cost. The stock market should be good at least through the end of this year. As one of the commentators already stated, have you calculated your net savings by reducing the interest rate? You will be paying closing costs and not all of these are deductible (only the points are). When calculating the savings, you have to ask yourself how long you will be hanging on the property? Are you likely to be long term landlords, or do you have any ideas on selling in the near future? You can reduce the cost and principal by throwing the equivalent of one to two extra mortgage payments a year to get the repayment period down significantly (by years). In this way, you are not married to a higher payment (as you would be if you refinanced to a 15 year term). I would tend to go with a) eat the appraisal cost, not refinance, and b) throw extra money towards principal to get the term of the loan to be reduced.",
"title": ""
},
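The passage above recommends one to two extra mortgage payments a year to shorten the term. A rough sketch of that effect follows; the $200,000 balance, 4.5% rate, and 30-year term are assumptions for illustration only and do not come from the passage.

```python
# Minimal sketch of the "extra payments shorten the term" point in the passage above.
# The $200,000 / 4.5% / 30-year figures are illustrative assumptions.

def months_to_payoff(balance, annual_rate, monthly_payment, extra_per_year=0.0):
    r = annual_rate / 12
    months = 0
    while balance > 0:
        balance *= 1 + r                 # accrue one month of interest
        balance -= monthly_payment
        if months % 12 == 11:            # once a year, apply the extra lump sum
            balance -= extra_per_year
        months += 1
    return months

base_payment = 200_000 * (0.045 / 12) / (1 - (1 + 0.045 / 12) ** -360)  # ~$1,013/mo
print(months_to_payoff(200_000, 0.045, base_payment))                   # about 360 months (full term)
print(months_to_payoff(200_000, 0.045, base_payment,
                       extra_per_year=2 * base_payment))                # noticeably fewer months
```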
{
"docid": "5544fdca206f4e03ede8bb80dc6724ce",
"text": "The new payment on $172,500 3.5% 15yr would be $1233/mo compared to $1614/mo now (26 bi-weekly payments, but 12 months.) Assuming the difference is nearly all interest, the savings is closer to $285/mo than 381. Note, actual savings are different, the actual savings is based on the difference in interest over the year. Since the term will be changing, I'm looking at cash flow, which is the larger concern, in my opinion. $17,000/285 is 60 months. This is your break even time to payoff the $17000, higher actually since the $17K will be accruing interest. I didn't see any mention of closing costs or other expenses. Obviously, that has to be factored in as well. I think the trade off isn't worth it. As the other answers suggest, the rental is too close to break-even now. The cost of repairs on two houses is an issue. In my opinion, it's less about the expenses being huge than being random. You don't get billed $35/mo to paint the house. You wake up, see too many spots showing wear, and get a $3000 bill. Same for all high cost items, Roof, HVAC, etc. You are permitted to borrow 50% of your 401(k) balance, so you have $64K in the account. I don't know your age, this might be great or a bit low. I'd keep saving, not putting any extra toward either mortgage until I had an emergency fund that was more than sufficient. The fund needs to handle the unexpected expenses as well as the months of unemployment. In general, 6-9 months of these expenses is recommended. To be clear, there are times a 401(k) loan can make sense. I just don't see that it does now. (Disclaimer - when analyzing refis there are two approaches. The first is to look at interest saved. After all, interest is the expense, principal payments go right to your balance sheet. The second is purely cash flow, in which case one might justify a higher rate, and going from 15 to 30 years, but freeing up cash that can be better deployed. Even though the rate goes up say 1/2%, the payment drops due to the term. Take that savings and deposit to a matched 401(k) and the numbers may work out very well. I offer this to explain why the math above may not be consistent with other answers of mine.)",
"title": ""
},
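The passage above rests on two quick calculations: the new 15-year payment on $172,500 at 3.5% (about $1,233/mo) and the roughly 60-month break-even on the $17,000 outlay. A small sketch that reproduces both quoted figures:

```python
# Sketch of the two calculations in the passage above; all inputs are the ones quoted there.

def monthly_payment(principal, annual_rate, years):
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

new_payment = monthly_payment(172_500, 0.035, 15)
print(round(new_payment, 2))   # ~1233.25, matching the $1233/mo in the passage
print(round(17_000 / 285))     # ~60 months to recover the $17,000 up-front cost
```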
{
"docid": "2cc46901239f5e135da6f7c06f1514cc",
"text": "If I was you I would not borrow from my 401K and shred the credit card offer. Both are very risky ventures, and you are already in a situation that is risky. Doing either will increase your risk significantly. I'd also consider selling the rental house. You seem to be cutting very close on the numbers if you can't raise 17K in cash to refi the house. What happens if you need a roof on the rental, and an HVAC in your current home? My assumption is that you will not sell the home, okay I get it. I would recommend either giving your tenant a better deal then the have now, or something very similar. Having a good tenant is an asset.",
"title": ""
}
] |
[
{
"docid": "26fe51a5cce3f558975a4af6b421c388",
"text": "It's the physiological impact it has. If you took a loan to buy a home for $500k and you recently had it appraised for $450k you are more inclined to skip out on any home remodeling projects. Also as ShakeyBobWillis pointed out it can be to your financial benefit to just walk away from an underwater mortgage. This creates even more glut in the marketplace.",
"title": ""
},
{
"docid": "53eba26870fc7db41c037231c4ffb043",
"text": "Properties do in fact devaluate every year for several reasons. One of the reasons is that an old property is not the state of the art and cannot therefore compete with the newest properties, e.g. energy efficiency may be outdated. Second reason is that the property becomes older and thus it is more likely that it requires expensive repairs. I have read somewhere that the real value depreciation of properties if left practically unmaintained (i.e. only the repairs that have to absolutely be performed are made) is about 2% per year, but do not remember the source right now. However, Properties (or more accurately, the tenants) do pay you rent, and it is possible in some cases that rent more than pays for the possible depreciation in value. For example, you could ask whether car leasing is a poor business because cars depreciate in value. Obviously it is not, as the leasing payments more than make for the value depreciation. However, I would not recommend properties as an investment if you have only small sums of money. The reasons are manyfold: So, as a summary: for large investors property investments may be a good idea because large investors have the ability to diversify. However, large investors often use debt leverage so it is a very good question why they don't simply invest in stocks with no debt leverage. For small investors, property investments do not often make sense. If you nevertheless do property investments, remember the diversification, also in time. So, purchase different kinds of properties and purchase them in different times. Putting a million USD to properties at one point of time is very risky, because property prices can rise or fall as time goes on.",
"title": ""
},
{
"docid": "a93375dd629bb7b0f7fcb45086cbc5e3",
"text": "You can't transfer mortgages when you purchase a new property. You can purchase a new property now, or you can refinance your current property now and leverage yourself as far as possible while rates are low. The higher rates you are worried about may not be as bad as you think. With higher interest rates, that may put downward pressure on housing prices, or when rates do rise, it may simply move from historic lows to relative lows. I had a mortgage at 4.25% that I never bothered refinancing even though rates went much lower because the savings in interest paid (minus my tax deduction for mortgage interest) didn't amount to more than the cost of refinancing. If rates go back up to 5%, that will still be very affordable.",
"title": ""
},
{
"docid": "001ad7f8030aa55b992aab75c2bd3b7d",
"text": "This is one way in which the scheme could work: You put your own property (home, car, piece of land) as a collateral and get a loan from a bank. You can also try to use the purchased property as security, but it may be difficult to get 100% loan-to-value. You use the money to buy a property that you expect will rise in value and/or provide rent income that is larger than the mortgage payment. Doing some renovations can help the value rise. You sell the property, pay back the loan and get the profits. If you are fast, you might be able to do this even before the first mortgage payment is due. So yeah, $0 of your own cash invested. But if the property doesn't rise in value, you may end up losing the collateral.",
"title": ""
},
{
"docid": "c9d3f1bead17de6945a64498d5259afc",
"text": "When evaluating a refinance, it all comes down to the payback. Refinancing costs money in closing costs. There are different reasons for refinancing, and they all have different methods for calculating payback. One reason to finance is to get a lower interest rate. When determining the payback time, you calculate how long it would take to recover your closing costs with the amount you save in interest. For example, if the closing costs are $2,000, your payback time is 2 years if it takes 2 years to save that amount in interest with the new interest rate vs. the old one. The longer you hold the mortgage after you refinance, the more money you save in interest with the new rate. Generally, it doesn't pay to refinance to a lower rate right before you sell, because you aren't holding the mortgage long enough to see the interest savings. You seem to be 3 years away from selling, so you might be able to see some savings here in the next three years. A second reason people refinance is to lower their monthly payment if they are having trouble paying it. I see you are considering switching from a 15 year to a 30 year; is one of your goals to reduce your monthly payment? By refinancing to a 30 year, you'll be paying a lot of interest in your first few years of payments, extending the payback time of your lower interest rate. A third reason people refinance is to pull cash out of their equity. This applies to you as well. Since you are planning on using it to remodel the home you are trying to sell, you have to ask yourself if the renovations you are planning will payoff in the increased sale price of your home. Often, renovations don't increase the value of their home as much as they cost. You do renovations because you will enjoy living in the renovated home, and you get some of your money back when you sell. But sometimes you can increase the value of your home by enough to cover the cost of the renovation. Talk to a real estate agent in your area to get their advice on how much the renovations you are talking about will increase the value of your home.",
"title": ""
},
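The payback logic described above ($2,000 of closing costs recovered through monthly interest savings over about two years) reduces to a one-line calculation. A minimal sketch; the $85/month savings figure is assumed for illustration and is not in the passage.

```python
import math

# Tiny sketch of the refinance break-even logic in the passage above: months until
# interest savings cover the closing costs. $2,000 matches the passage's example;
# the $85/month saving is an assumed illustrative figure.

def breakeven_months(closing_costs, monthly_interest_savings):
    return math.ceil(closing_costs / monthly_interest_savings)

print(breakeven_months(2_000, 85))  # 24 -> roughly the "2 year" payback in the passage
```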
{
"docid": "942b47628a48cf9290710921e24a9e53",
"text": "Sorry, this isn't terribly helpful and I would post this as a comment but I'm new and apparently can't. Some considerations: 7% seems awfully high. Check SoFi and see if you can't refinance at a rate low(er) enough rate so that you won't be paying so much interest. How does reinvesting 10k into the company compare to paying off loans? 1.5 years in, you've paid down a lot of interest already... We would need a lot of particulars to give you specific advice, probably more than you're willing to give over the internet. Who does the financials for you business? They should be able to give you advice, or at least build the models specific for your situation to help you make a decision.",
"title": ""
},
{
"docid": "27467d97abb9b009915037eef77ffc99",
"text": "Umm actually asking to be refinanced at a lower rate *IS* asking them to forgive/give up part of the mortgage. Peoples greed in getting themselves into upside down mortgages are why we have problems, not the banks not helping them out enough.",
"title": ""
},
{
"docid": "5c43452e1ba8c504782fc670b3a45e16",
"text": "There is no relationship between the government appraisal and the mortgage appraisal. The loan appraisal is done by a lender to determine if the property value is in agreement with the loan amount. The government appraisal is done to determine how much to charge you in taxes. They use the values of residences and commercial property to get their operating budget each year. They also set the rate to generate the amount of income money they need. If they cut all appraisals in half, they would just double the rate. In some jurisdictions the government appraises every year, in other places every three years. Some only when the property is sold. In some jurisdictions the maximum increase or decrease in government appraisal is set by law. But then they reset after the house is sold. That being said. Use this time to review the appraisal from the government. They may have facts wrong. They may think you have a pool, or more bedrooms or a garage, when you don't. Some jurisdictions use an automated process, others do a more detailed/individual process. If there was a mistake ten years ago with the description it will never get caught unless you complain. Check with the governemnt website for how to appeal. Some have windows of opportunity for an appeal.",
"title": ""
},
{
"docid": "83ca3111536cc207caff9c31882d4746",
"text": "Don't buy a house as an investment; buy it if/because it's the housing you want to live in. Don't improve a house as an investment; improve it if/because that makes it more comfortable for you to live in it. It's a minor miracle when a home improvement pays back anything close to what it cost you, unless there are specific things that really need to be done (or undone), or its design has serious cosmetic or functional issues that might drive away potential buyers. A bit of websearching will find you much more realistic estimates of typical/average payback on home improvements. Remember that contractors are tempted to overestimate this. (The contractor I've used, who seems to be fairly trustworthy, doesn't report much more than 60% for any of the common renovations. And yes, that's really 60%, not 160%.)",
"title": ""
},
{
"docid": "4343d69b98c82e18a62d67a5bf7d42d0",
"text": "This shows the impact of the inquiries. It's from Credit Karma, and reflects my inquiries over the past two years. In my case, I refinanced 2 properties and the hit is after this fact, so my score at 766 is lower than when approved. You can go to Credit Karma and see how your score was impacted. If in fact the first inquiry did this, you have cause for action. In court, you get more attention by having sufficient specific data to support your claim, including your exact damages.",
"title": ""
},
{
"docid": "b11a00537c257f650ed6a54ae8d0c128",
"text": "I'm not sure about your first two options. But given your situation, a variant of option three seems possible. That way you don't have to throw away your appraisal, although it's possible that you'll need to get some kind of addendum related to the repairs. You also don't have your liquid money tied up long term. You just need to float it for a month or two while the repairs are being done. The bank should be able to preapprove you for the loan. Note that you might be better off without the loan. You'll have to pay interest on the loan and there's extra red tape. I'd just prefer not to tie up so much money in this property. I don't understand this. With a loan, you are even more tied up. Anything you do, you have to work with the bank. Sure, you have $80k more cash available with the loan, but it doesn't sound like you need it. With the loan, the bank makes the profit. If you buy in cash, you lose your interest from the cash, but you save paying the interest on the loan. In general, the interest rate on the loan will be higher than the return on the cash equivalent. A fourth option would be to pay the $15k up front as earnest money. The seller does the repairs through your chosen contractor. You pay the remaining $12.5k for the downpayment and buy the house with the loan. This is a more complicated purchase contract though, so cash might be a better option. You can easily evaluate the difficulty of the second option. Call a different bank and ask. If you explain the situation, they'll let you know if they can use the existing appraisal or not. Also consider asking the appraiser if there are specific banks that will accept the appraisal. That might be quicker than randomly choosing banks. It may be that your current bank just isn't used to investment properties. Requiring the previous owner to do repairs prior to sale is very common in residential properties. It sounds like the loan officer is trying to use the rules for residential for your investment purchase. A different bank may be more inclined to work with you for your actual purchase.",
"title": ""
},
{
"docid": "bc29100c3e89b4db2e5cfe70a2a70094",
"text": "The loan you will just have to get by applying to a bunch of banks or hiring someone (a broker) to line up bank financing on your behalf for a point on the loan. FHA is for your first house that you live in and allows you to get 97.5% loan to cost financing. That isn't for investment properties. However, FHA loans do exist for multifamily properties under section 207/223F. Your corporations should be SPEs so they don't affect each other. In the end, its up to you if you think it makes sense for all the single family homes to be in one portfolio. May make it easier to refi if you put all the properties in a cross collateralized pool for the bank to lend against. There is also no requirement for how long a corporation has been in existence for a loan. The loan has a claim on the property so it's pretty safe. So long as you haven't committed fraud before, they won't care about credit history.",
"title": ""
},
{
"docid": "0abf18cc25a8320ef87516be5b2300af",
"text": "I would not claim to be a personal expert in rental property. I do have friends and family and acquaintances who run rental units for additional income and/or make a full time living at the rental business. As JoeTaxpayer points out, rentals are a cash-eating business. You need to have enough liquid funds to endure uncertainty with maintenance and vacancy costs. Often a leveraged rental will show high ROI or CAGR, but that must be balanced by your overall risk and liquidity position. I have been told that a good rule-of-thumb is to buy in cash with a target ROI of 10%. Of course, YMMV and might not be realistic for your market. It may require you to do some serious bargain hunting, which seems reasonable based on the stagnant market you described. Some examples: The main point here is assessing the risk associated with financing real estate. The ROI (or CAGR) of a financed property looks great, but consider the Net Income. A few expensive maintenance events or vacancies will quickly get you to a negative cash flow. Multiply this by a few rentals and your risk exposure is multiplied too! Note that i did not factor in appreciation based on OP information. Cash Purchase with some very rough estimates based on OP example Net Income = (RENT - TAX - MAINT) = $17200 per year Finance Purchase rough estimate with 20% down Net Income = (RENT - MORT - TAX - MAINT) = $7500 per year",
"title": ""
},
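The passage above contrasts the net income of a cash purchase ($17,200/yr) with a 20%-down financed purchase ($7,500/yr). Below is a small sketch of the ROI comparison it hints at; the $172,000 purchase price is an assumption chosen so the cash deal exactly meets the passage's 10% rule of thumb, and is not stated in the passage.

```python
# Sketch comparing cash vs. financed ROI using the net-income figures quoted above
# ($17,200 all-cash, $7,500 with 20% down). The $172,000 price is an assumption.

price = 172_000
cash_roi = 17_200 / price               # 0.10  -> 10% on the full purchase price
financed_roi = 7_500 / (0.20 * price)   # ~0.22 -> higher ROI on cash invested, but far less cash flow
print(round(cash_roi, 3), round(financed_roi, 3))
```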
{
"docid": "9682391181e29e0ff28ebdd867c816e5",
"text": "Your credit rating will rise once the loan is repaid or paid regularly (in time). It will not get back to normal instantly. If the property is dead weight you may want to sell it so your credit score will increase in the medium term.",
"title": ""
},
{
"docid": "8b275e691ec7a18fdd938a7a96fa7a9b",
"text": "In regards to the legal recourse, no there is none. Also, despite your frustrations with Citi, it may not be their fault. Mortgage companies are now forced to select appraisers (essentially at random) through 3rd party Appraisal Resource Companies (ARCs). This randomization mandate from the government was issued in order to combat fraud, but it is really causing more trouble for homeowners because it took away appraiser accountability. Basically, there's nothing we can do to fire an appraiser anymore. I've had appraiser do terrible jobs, just blatantly wrong, and have gone the distance with the dispute process only to find they won't change the value. My favorite real-life example came from an appraiser who got the bedroom count wrong (4 instead of 5); yet he took pictures of 5 bedrooms. The one he excluded he stated it shouldn't count because it didn't have a closet. Problem is, it DID have a closet. I had the homeowner take pictures of all of the closets in his house, and send them in. He still refused to change the count. After close to 2 months of the dispute process, the ARC came in and changed the count, but did not chagne the value, stating that the room count didn't increase the sqft, and there would be no adjustment in value. I was floored. The only solution we had was to wait for the appraisal to expire, then order it again; which we did. The new appraiser got the count right, and surprisingly (not really), it came in at the right value... In regards to the value necessary to avoid MI, they are likely using 80%, but it's not based on your current balance vs the value, it's based on the new loan amount (which will include costs, prepaids, skipped mortgage payments, etc) vs the value. Here are your options: Get a new appraisal. If you are confident the value is wrong, go somewhere else and get a new appraisal. Restructure the loan. Any competent Loan Officer would have noticed that you are very close to 80%, and should have offer you the option of splitting the mortgage into a 1st and 2nd loan. Keeping the first loan at 80%, and taking out a 2nd for the difference would avoid MI. Best Regards, Jared Newton",
"title": ""
}
] |
fiqa
|
00db690d858912c13bd2380cfd1a692b
|
The board of directors in companies
|
[
{
"docid": "cc022dee1f20d890acc672671bf68137",
"text": "Boards of Directors are required for corporations by nearly all jurisdictions. Some jurisdictions have almost self-defeating requirements however, such as in tax havens. Boards of Directors are compensated by the company for which they sit. Historically, they have set their own compensation almost always with tight qualitative legal bounds, but in the US, that has now changed, so investors now set Director compensation. Directors are typically not given wages or salary for work but compensation for expenses. For larger companies, this is semantics since compensation averages around one quarter of a million of USD. Regulations almost always proscribe agencies such as other corporations from sitting on boards and individuals convicted of serious crimes as well. Some jurisdictions will even restrict directories to other qualities such as solvency. While directors are elected by shareholders, their obligations are normally to the company, and each jurisdiction has its own set of rules for this. Almost always, directors are forbidden from selling access to their votes. Directors are almost always elected by holders of voting stock after a well-publicized announcement and extended time period. Investors are almost never restricted from sitting on a board so long as they meet the requirements described above.",
"title": ""
}
] |
[
{
"docid": "1ca3c5ec07188a8c92c46fb578d192c7",
"text": "Converting the comment from @MD-Tech into answer How or where could I find info about publicly traded companies about how stock owner friendly their compensation schemes are for their board and officers? This should be available in the annual report, probably in a directors' remunerations section for most companies",
"title": ""
},
{
"docid": "6c1b357082bb4a761064d615f1f858a1",
"text": "Except: it's a material concern at every company. If the senior executives all quit at the same time, this is going to be problematic no matter what company we're discussing. I wouldn't be surprised if most 10-Ks have similar generic language.",
"title": ""
},
{
"docid": "8678ed4f912e6edb926d4ad3c93d5ea7",
"text": "Shareholders have voting rights, and directors have fiduciary obligations to shareholders. Sure, shareholders have rights to the dividends, but stock confers decisionmaking powers. I'm not really sure what your answer to this is, or how you are differentiating the concept of ownership from this.",
"title": ""
},
{
"docid": "3f66d5baa80fec1f570bf779849b435e",
"text": "Also keep note - some companies have a combined CEO/Chairman of the board role. While he/she would not be allowed to negotiate contracts or stock plans, some corporate governance analysts advocate for the separation of the roles to remove any opportunity for the CEO to unduly influence the board. This could be the case for dysfunctional boards. However, the alternate camps will say that the combined role has no negative effect on shareholder returns. SEC regulations require companies to disclose negotiations between the board and CEO (as well as other named executives) for contracts, employee stock plans, and related information. Sometimes reading the proxy statement to find out, for example, how many times the board meets a year, how many other boards a director serves on, and if the CEO sits on any other board (usually discouraged to serve on more than 2) will provide some insight into a well-run (or not well-run) board.",
"title": ""
},
{
"docid": "e1f1fdd5eb17dbef71fb241a2edc2e2f",
"text": "She was there for 5 years. Someone brought in to sell the company is going to do it in 12-18 months, 24 max. She is probably a good #2 but not CEO material. There are many people like that who need some direction from the person above them but aren't good actually being the top person. I'd blame the board for not realizing this as much as I'd blame her.",
"title": ""
},
{
"docid": "00d21b3746e0c66b39ff8538ccd42fcd",
"text": "\"Owning more than 50% of a company's stock normally gives you the right to elect a majority, or even all of a company's (board of) directors. Once you have your directors in place, you can tell them who to hire and fire among managers. There are some things that may stand in the way of your doing this. First, there may be a company bylaw that says that the directors can be replaced only one \"\"class\"\" at a time, with three or four \"\"classes.\"\" Then it could take you two or three years to get control of the company. Second, there may be different classes of shares with different voting rights, so if e.g. \"\"A\"\" shares controlled by the founding family gives them ten votes, and \"\"B\"\" shares owned by the other shareholders, you may have a majority of total shares and be outvoted by the \"\"A\"\" shares.\"",
"title": ""
},
{
"docid": "5ebe9fa4ee74084e85bce4600ba68755",
"text": "Oh the company I work for now. Victim of a poor CEO that has turned into poor leadership across the board. There are directors I'd love to meet, act very interested in why they do things the way they do. The interest wouldn't even be faked. I'm genuinely curious how someone could have so many stupid ideas. Then tell them they suck and they're fired.",
"title": ""
},
{
"docid": "11e1ebe3d71db1e3366bbc19928f5024",
"text": "\"The usual pattern is that shareholders don't run companies in a practical sense, so \"\"if someone was just simply rich to buy > 50%, but does not know how to handle the company\"\" doesn't change anything. In large companies, the involvement of shareholders is limited to a few votes on key issues such as allocating profit (how much to keep in company vs pay in dividends) and choosing board members. And board members also don't run the company - they oversee how the company is being run, and choose executives who will actually run the company. If a rich person simply buys 50% and doesn't desire to get personally involved, then they just vote for whatever board members seem apropriate and forget about it.\"",
"title": ""
},
{
"docid": "440e6d89d62ad2fff4d6628f0c06caf1",
"text": "your request was fine. Business is multi-disciplinary and requires seeing things from many aspects, changing your perspective regularly. Our CEO changes which dimensions to evaluate his business every six months - at the top is porfitable growth, then every aspect of the business that influences that outcome is flipped and re-examined. He's been remarkably successful his entire career",
"title": ""
},
{
"docid": "c0a75c6f74188ba156f3b7ab5fda265f",
"text": "First, the stock does represent a share of ownership and if you have a different interpretation I'd like to see proof of that. Secondly, when the IPO or secondary offering happened that put those shares into the market int he first place, the company did receive proceeds from selling those shares. While others may profit afterward, it is worth noting that more than a few companies will have secondary offerings, convertible debt, incentive stock options and restricted stock that may be used down the road that are all dependent upon the current trading share price in terms of how useful these can be used to fund operations, pay executives and so forth. Third, if someone buys up enough shares of the company then they gain control of the company which while you aren't mentioning this case, it is something to note as some individuals buy stock so that they can take over the company which happens. Usually this has more of an overall plan but the idea here is that getting that 50%+1 control of the company's voting shares are an important piece to things here.",
"title": ""
},
{
"docid": "4d023fb18dfd4ed07201165c868ccdc2",
"text": "\"You own a fractional share of the company, maybe you should care enough to at least read the proxy statements which explain the pro and con position for each of the issues you are voting on. That doesn't seem like too much to ask. On the other hand, if you are saying that the people who get paid to be knowledgeable about that stuff should just go make the decisions without troubling you with the details, then choose the option to go with their recommendations, which are always clearly indicated on the voting form. However, if you do this, it might make sense to at least do some investigation of who you are voting onto that board. I guess, as mpenrow said, you could just abstain, but I'm not sure how that is any different than just trashing the form. As for the idea that proxy votes are tainted somehow, the one missing piece of that conspiracy is what those people have to gain. Are you implying that your broker who has an interest in you making money off your investments and liking them would fraudulently cast proxy votes for you in a way that would harm the company and your return? Why exactly would they do this? I find your stance on the whole thing a bit confusing though. You seem to have some strong opinions on corporate Governance, but at the same time aren't willing to invest any effort in the one place you have any control over the situation. I'm just sayin.... Update Per the following information from the SEC Website, it looks like the meaning of a proxy vote can vary depending on the mechanics of the specific issue you are voting on. My emphasis added. What do \"\"for,\"\" \"\"against,\"\" \"\"abstain\"\"and \"\"withhold\"\" mean on the proxy card or voter instruction form? Depending on what you are voting on, the proxy card or voting instruction form gives you a choice of voting \"\"for,\"\" \"\"against,\"\" or \"\"abstain,\"\" or \"\"for\"\" or \"\"withhold.\"\" Here is an explanation of the differences: Election of directors: Generally, company bylaws or other corporate documents establish how directors are elected. There are two main types of ways to elect directors: plurality vote and majority vote. A \"\"plurality vote\"\" means that the winning candidate only needs to get more votes than a competing candidate. If a director runs unopposed, he or she only needs one vote to be elected, so an \"\"against\"\" vote is meaningless. Because of this, shareholders have the option to express dissatisfaction with a candidate by indicating that they wish to \"\"withhold\"\" authority to vote their shares in favor of the candidate. A substantial number of \"\"withhold\"\" votes will not prevent a candidate from getting elected, but it can sometimes influence future decisions by the board of directors concerning director nominees. A \"\"majority vote\"\" means that directors are elected only if they receive a majority of the shares voting or present at the meeting. In this case, you have the choice of voting \"\"for\"\" each nominee, \"\"against\"\" each nominee, or you can \"\"abstain\"\" from voting your shares. An \"\"abstain\"\" vote may or may not affect a director's election. Each company must disclose how \"\"abstain\"\" or \"\"withhold\"\" votes affect an election in its proxy statement. 
This information is often found toward the beginning of the proxy statement under a heading such as \"\"Votes Required to Adopt a Proposal\"\" or \"\"How Your Votes Are Counted.\"\" Proposals other than an election of directors: Matters other than voting on the election of directors, like voting on shareholder proposals, are typically approved by a vote of a majority of the shares voting or present at the meeting. In this situation, you are usually given the choice to vote your shares \"\"for\"\" or \"\"against\"\" a proposal, or to \"\"abstain\"\" from voting on it. Again, the effect of an \"\"abstain\"\" vote may depend on the specific voting rule that applies. The company's proxy statement should again disclose the effect of an abstain vote.\"",
"title": ""
},
{
"docid": "c783ef9f0ca268bb0df24e9258cb74e7",
"text": "\"The list of the public companies is available on the regulatory agencies' sites usually (for example, in the US, you can look at SEC filings). Otherwise, you can check the stock exchange listings, which show all the public companies traded on that exchange. The shareholders, on the other hand, are normally not listed and not published. You'll have to ask the company, and it probably won't tell you (and won't even know them all as many shares are held in the \"\"street name\"\" of the broker).\"",
"title": ""
},
{
"docid": "480c0c63c7be67c322e10fa1df83fa21",
"text": "In reality, shareholders have very few rights other than the right to profits and the right to vote on a board. In general, a proxy fight to replace the board is complicated and expensive, so unless the interested parties buy close to 50% of the shares it's unlikely to be successful. Furthermore, a lot of the shares are held by insiders and institutions. I suppose if a large group of shareholders got together and demanded this, the existing directors may listen and give in to avoid unhappy shareholders being a general annoyance. That seems pretty unlikely unless the stake gets large. There's a great episode of NPR Planet Money [board games](http://www.npr.org/sections/money/2017/07/19/538141248/episode-594-board-games) which talks about one man's struggle to get the company to take some action.",
"title": ""
},
{
"docid": "2054be436fb48e9b1d7e8b24b853b05c",
"text": "That's not what is entirely happening. It's two separate situations. They don't have equal voting and some are able to vote more than once. The two investors want to keep it that way while the rest want to implement an even voting system. The two investors have been asked to drop their lawsuit against the old CEO since he's no longer with the company but it's implied that they will continue to sue him because he still has influence and the ability to elect new board members which he recently added two. Also it's disengrnous to say just the two investors. They are being asked to do this by the shareholders.",
"title": ""
},
{
"docid": "14619bc463724498d6b497feefe972a7",
"text": "I'm really unsure what you are trying to tell me. I don't see how knowing CEOs would aid me in forming an opinion on this issue. Your second statement is simply foolish, shares of a company, represent ownership. Therefore shareholders are the owners. These shareholders elect a board, this board acts like a proxy between the managers (CEO's) and the owners (shareholders). This is how every public company operates. The problem that arises is that managers have an incentive to act in their own best interests, not in the interests of shareholders. So to solve this manager compensation is aligned with company performance so that if the shareholders are better off the managers are better off.",
"title": ""
}
] |
fiqa
|
f9f69462fe89e7a64f044c556b7b891d
|
Paying extra on a mortgage. How much can I save? [duplicate]
|
[
{
"docid": "49a781a3bae9668300c377a3d032826f",
"text": "Can I pay $12,000 extra once a year or $1000 every month - which option is better? Depends when. If you mean 12K now vs 1K a month over the next 12 months, repeating this each year, now wins. If you mean saving 1K a month for 12 months then doing a lumpsum, the 1K a month wins. Basically, a sooner payment saves you more money than a later payment. The first option does sound better, but for a 30 year mortgage, is it that significant? Your number one issue is that you have a thirty year mortgage. The interest you pay on it is monstrous. For the 30 year term, you pay around 500K in interest. A 15-year mortgage is 300K cheaper (only 200K in interest will be paid). The monthly payment would be 1250 more. How much money and years on a mortgage can I save? When is the best time to pay? At the end of each year? You can knock off about a dozen years. Save I think ~250K. You can find mortgage calculators online or talk to your mortgage advisor to play around with the numbers.",
"title": ""
},
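The "$12,000 once a year vs. $1,000 every month" comparison in the answer above is easy to check with a small simulation. This is only an illustrative sketch added here, not part of the original answer: the $400,000 principal and the 4.375% rate (a figure that appears later in this thread) are assumptions, so treat the printed numbers as relative comparisons rather than exact savings.

```python
def payoff(principal, annual_rate, years, extra_monthly=0.0, extra_annual=0.0):
    """Simulate a fixed-rate mortgage month by month.

    extra_monthly is added to every payment; extra_annual is added once
    every 12th month (e.g. an annual lump sum from a bonus).
    Returns (months_to_payoff, total_interest_paid).
    """
    r = annual_rate / 12.0
    n = years * 12
    base_payment = principal * r / (1 - (1 + r) ** -n)
    balance, month, total_interest = principal, 0, 0.0
    while balance > 0:
        month += 1
        interest = balance * r
        total_interest += interest
        payment = base_payment + extra_monthly
        if extra_annual and month % 12 == 0:
            payment += extra_annual
        balance -= min(payment - interest, balance)
    return month, total_interest

if __name__ == "__main__":
    for label, kwargs in [
        ("no extra payments", {}),
        ("$1,000 extra every month", {"extra_monthly": 1000}),
        ("$12,000 extra once a year", {"extra_annual": 12000}),
    ]:
        months, interest = payoff(400_000, 0.04375, 30, **kwargs)
        print(f"{label:28s} -> {months / 12:.1f} years, ${interest:,.0f} interest")
```

As the answers in this thread argue, the monthly and annual variants land very close to each other; the dominant effect is simply paying the extra money at all, and paying it sooner rather than later.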
{
"docid": "588d7deabaf5f7eb299ccaad1bf4760c",
"text": "\"When is the best time to pay? At the end of each year? If you save $1,000 each month at 1% so as to pay $12,000 at EOY on a 4.75% loan, you've lost \"\"4.75% - 1% = 3.75%\"\" over that year. (And that's presuming you put the money in a \"\"high yield\"\" online savings account.) Thus, the best time to pay is as soon as you have the money. EDIT: This all assumes that you have an emergency fund (more than the bare minimum $1K), zero other debt with a higher rate than 4.75% and that you are getting the full company match from 401(k).\"",
"title": ""
},
{
"docid": "f9e48a11308d97a1a6dc2bc0223e38dd",
"text": "Paying $12,000 in lump sumps annually will mean a difference of about $250 in interest vs. paying $1,000 monthly. If front-load the big payment, that saves ~$250 over paying monthly over the year. If you planned to save that money each month and pay it at the end, then it would cost you ~$250 more in mortgage interest. So that's how much money you would have to make with that saved money to offset the cost. Over the life of the loan the choice between the two equates to less than $5,000. If you pay monthly it's easy to calculate that an extra $1,000/month would reduce the loan to 17 years, 3 months. That would give you a savings of ~$400,000 at the cost of paying $207,000 extra during those 17 years. Many people would suggest that you invest the money instead because the annual growth rates of the stock market are well in excess of your 4.375% mortgage. What you decide is up to you and how conservative your investing strategy is.",
"title": ""
},
{
"docid": "cd51a668e742c6de73d4920cb374457f",
"text": "If you're truly ready to pay an extra $1000 every month, and are confident you'll likely always be able to, you should refinance to a 15 year mortgage. 15 year mortgages are typically sold at around a half a point lower interest rates, meaning that instead of your 4.375% APR, you'll get something like 3.875% APR. That's a lot of money over the course of the mortgage. You'll end up paying around a thousand a month more - so, exactly what you're thinking of doing - and not only save money from that earlier payment, but also have a lower interest rate. That 0.5% means something like $25k less over the life of the mortgage. It's also the difference in about $130 or so a month in your required payment. Now of course you'll be locked into making that larger payment - so the difference between what you're suggesting and this is that you're paying an extra $25k in exchange for the ability to pay it off more slowly (in which case you'd also pay more interest, obviously, but in the best case scenario). In the 15 year scenario you must make those ~$4000 payments. In the 30 year scenario you can pay ~$2900 for a while if you lose your job or want to go on vacation or ... whatever. Of course, the reverse is also true: you'll have to make the payments, so you will. Many people find enforced savings to be a good strategy (myself among them); I have a 15 year mortgage and am happy that I have to make the higher payment, because it means I can't spend that extra money frivolously. So what I'd do if I were you is shop around for a 15 year refi. It'll cost a few grand, so don't take one unless you can save at least half a point, but if you can, do.",
"title": ""
},
{
"docid": "d4b7468c13377d6426ec88fbe8010119",
"text": "How much can I save? Depends on inflation and what other investment opportunities you have. It could end up costing you millions. Can I pay $12,000 extra once a year or $1000 every month - which option is better? It depends on how risk adverse you are. The first option does sound better, but for a 30 year mortgage, is it that significant? How much of your time is it going to cost you to do it every month? What is keeping you from doing it every day? How much is your time worth to you. Giving the bank its money sooner is always better than giving it it's money from a saving interest perspective. When is the best time to pay? See above.",
"title": ""
}
] |
[
{
"docid": "c00d295dd92b63c56bd599f579d7ac83",
"text": "\"So, let's take a mortgage loan that allows prepayment without penalty. Say I have a 30 year mortgage and I have paid it for 15 years. By the 16th year almost all the interest on the 30 year loan has been paid to the bank This is incorrect thinking. On a 30 year loan, at year 15 about 2/3's of the total interest to be paid has been paid, and the principal is about 1/3 lower than the original loan amount. You may want to play with some amortization calculators that are freely available to see this in action. If you were to pay off the balance, at that point, you would avoid paying the remaining 1/3 of interest. Consider a 100K 30 year mortgage at 4.5% In month two the payment breaks down with $132 going to principal, and $374 going to interest. If, in month one, you had an extra $132 and directed it to principal, you would save $374 in interest. That is a great ROI and why it is wonderful to get out of debt as soon as possible. The trouble with this is of course, is that most people can barely afford the mortgage payment when it is new so lets look at the same situation in year 15. Here, $271 would go to principal, and $235 to interest. So you would have to come up with more money to save less interest. It is still a great ROI, but less dramatic. If you understand the \"\"magic\"\" of compounding interest, then you can understand loans. It is just compounding interest in reverse. It works against you.\"",
"title": ""
},
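The month-two split quoted above ($132 to principal, $374 to interest on a $100K, 30-year, 4.5% loan) can be reproduced with the standard amortization formula. A minimal sketch, ignoring escrow, fees, and rounding conventions:

```python
# Check the principal/interest split quoted above: $100,000 at 4.5% over 30 years.
principal, annual_rate, years = 100_000, 0.045, 30
r = annual_rate / 12
n = years * 12
payment = principal * r / (1 - (1 + r) ** -n)   # fixed monthly payment, ~$507

balance = principal
for month in (1, 2):
    interest = balance * r           # interest accrues on the remaining balance
    to_principal = payment - interest
    balance -= to_principal
    print(f"month {month}: principal ${to_principal:.2f}, interest ${interest:.2f}")
```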
{
"docid": "0c1aca7e474f451da3c645ecb68b2c99",
"text": "\"Check to see if you can do this at this point. When I was refinancing the mortgage on my last house, I put a sizable \"\"extra\"\" chunk down like you're planning to do, but the amount of the loan had already gone through the processing (for lack of a more specific term). My extra money went toward principal, but my payment was still the same as if I hadn't put any extra down. If you find out that it's too far along in the process, not to worry: extra emergency fund isn't a bad thing to have.\"",
"title": ""
},
{
"docid": "a5c55a0be58149978b91ecb8eba52a1b",
"text": "To get a good estimate, go here or other similar sites and see. But basically, yes, you can save yourself a whole lot of money just by paying extra every month. One note though, do make sure you are specifying that you want the money to go towards principal, not escrow or toward prepaying interest.",
"title": ""
},
{
"docid": "039cc579a85a6ad914607b922112d2e7",
"text": "A point that hasn't been mentioned is whether paying down the mortgage sooner will get you out of unnecessary additional costs, such as PMI or a lender's requirement that you carry flood insurance on the outstanding mortgage balance, rather than the actual value/replacement cost of the structures. (My personal bugbear: house worth about $100K, while the bare land could be sold for about twice that, so I'm paying about 50% extra for flood insurance.) May not apply to your loan-from-parents situation, but in the general case it should be considered. FWIW, in your situation I'd probably invest the money.",
"title": ""
},
{
"docid": "b5ce0e715bbecbe660d6f410a6281b97",
"text": "There is a way to get a reasonable estimate of what you still owe, and then the way to get the exact value. When the loan started they should have given you amortization table that laid out each payment including the principal, interest and balance for each payment. If there are any other fees included in the payment those also should have been detailed. Determine how may payments you have maid: did you make the first payment on day one, or the start of the next month? Was the last payment the 24th, or the next one? The table will then tell you what you owe after your most recent payment. To get the exact value call the lender. The amount grows between payment due to the interest that is accumulating. They will need to know when the payment will arrive so they can give you the correct value. To calculate how much you will save do the following calculation: payment = monthly payment for principal and interest paymentsmade =Number of payments made = 24 paymentsremaining = Number of payments remaining = 60 - paymentsmade = 60-24 = 36 instantpayoff = number from loan company savings = (payment * paymentsremaining ) - instantpayoff",
"title": ""
},
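The calculation at the end of that answer, restated as a small function. The example inputs are invented placeholders, since the real payment and the instant-payoff figure have to come from your statement and your lender:

```python
def payoff_savings(payment, payments_made, total_payments, instant_payoff):
    """Interest saved by paying the loan off now instead of making the
    remaining scheduled payments: remaining scheduled outlay minus the payoff quote."""
    payments_remaining = total_payments - payments_made   # e.g. 60 - 24 = 36
    return payment * payments_remaining - instant_payoff

# Hypothetical numbers: $450/month payment, 24 of 60 payments made, $14,800 payoff quote.
print(payoff_savings(payment=450, payments_made=24, total_payments=60,
                     instant_payoff=14_800))   # 450 * 36 - 14800 = 1400
```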
{
"docid": "9a74ce917b8bba32d778ccb34fe977c9",
"text": "Depending on your bank you may receive an ACH discount for doing automatic withdrawals from a deposit account at that bank. Now, this depends on your bank and you need to do independent research on that topic. As far as dictating what your extra money goes towards each month (early payments, principal payments, interest payments) you need to discuss that with your bank. I'm sure it's not too difficult to find. In my experience most banks, so long as you didn't sign a contract on your mortgage where you're penalized for sending additional money, will apply extra money toward early payments, and not principal. I would suggest calling them. I know for my student loans I have to send a detailed list of my loans and in what order I want my extra payments toward each, otherwise it will be considered an early payment, or it will be spread evenly among them all.",
"title": ""
},
{
"docid": "1827ae9d7531f5c95ec90dbe4cc78465",
"text": "It will take a bit of sacrifice. First, I'd review spending. Between you and all your family, try to separate the 'needs' from the 'wants' and cut out 75% of the 'wants.' Second, there's almost always part time work that can help raise extra money. Even if it's a small fraction of your current hourly pay, every bit you and your family can throw at that 30% debt will help get rid of it and help you get to the point when you can refinance the mortgage.",
"title": ""
},
{
"docid": "a91162ee9e8f5503c8522a539fd8149b",
"text": "\"Basically, the easiest way to do this is to chart out the \"\"what-ifs\"\". Applying the amortization formula (see here) using the numbers you supplied and a little guesswork, I calculated an interest rate of 3.75% (which is good) and that you've already made 17 semi-monthly payments (8 and a half months' worth) of $680.04, out of a 30-year, 720-payment loan term. These are the numbers I will use. Let's now suppose that tomorrow, you found $100 extra every two weeks in your budget, and decided to put it toward your mortgage starting with the next payment. That makes the semi-monthly payments $780 each. You would pay off the mortgage in 23 years (making 557 more payments instead of 703 more). Your total payments will be $434,460, down from $478.040, so your interest costs on the loan were reduced by $43,580 (but, my mistake, we can't count this amount as money in the bank; it's included in the next amount of money to come in). Now, after the mortgage is paid off, you have $780 semi-monthly for the remaining 73 months of your original 30-year loan (a total of $113,880) which you can now do something else with. If you stuffed it in your mattress, you'd earn 0% and so that's the worst-case scenario. For anything else to be worth it, you must be getting a rate of return such that $100 payments, 24 times a year for a total of 703 payments must equal $113,880. We use the future value annuity formula (here): v = p*((i+1)n-1)/i, plugging in v ($113880, our FV goal), $100 for P (the monthly payment) and 703 for n (total number of payments. We're looking for i, the interest rate. We're making 24 payments per year, so the value of i we find will be 1/24 of the stated annual interest rate of any account you put it into. We find that in order to make the same amount of money on an annuity that you save by paying off the loan, the interest rate on the account must average 3.07%. However, you're probably not going to stuff the savings from the mortgage in your mattress and sleep on it for 6 years. What if you invest it, in the same security you're considering now? That would be 146 payments of $780 into an interest-bearing account, plus the interest savings. Now, the interest rate on the security must be greater, because you're not only saving money on the mortgage, you're making money on the savings. Assuming the annuity APR stays the same now vs later, we find that the APR on the annuity must equal, surprise, 3.75% in order to end up with the same amount of money. Why is that? Well, the interest growing on your $100 semi-monthly exactly offsets the interest you would save on the mortgage by reducing the principal by $100. Both the loan balance you would remove and the annuity balance you increase would accrue the same interest over the same time if they had the same rate. The main difference, to you, is that by paying into the annuity now, you have cash now; by paying into the mortgage now, you don't have money now, but you have WAY more money later. The actual real time-values of the money, however, are the same; the future value of $200/mo for 30 years is equal to $0/mo for 24 years and then $1560/mo for 6 years, but the real money paid in over 30 years is $72,000 vs $112,320. That kind of math is why analysts encourage people to start retirement saving early. One more thing. If you live in the United States, the interest charges on your mortgage are tax-deductible. So, that $43,580 you saved by paying down the mortgage? 
Take 25% of it and throw it away as taxes (assuming you're in the most common wage-earner tax bracket). That's $10895 in potential tax savings that you don't get over the life of the loan. If you penalize the \"\"pay-off-early\"\" track by subtracting those extra taxes, you find that the break-even APR on the annuity account is about 3.095%.\"",
"title": ""
},
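The future-value-of-an-annuity formula used in that answer, v = P*((1+i)^n - 1)/i, plus a simple bisection to back out the break-even rate, in case anyone wants to reproduce the ~3.07% figure. This is an added illustration, not part of the original answer; it takes the answer's own inputs ($100 per period, 703 periods, 24 periods per year, a $113,880 target) and quotes the result as a nominal annual rate.

```python
def fv_annuity(p, i, n):
    """Future value of n level payments of p at per-period rate i."""
    return p * n if i == 0 else p * ((1 + i) ** n - 1) / i

def breakeven_annual_rate(target_fv, p, n, periods_per_year=24):
    """Per-period rate at which the annuity grows to target_fv (found by bisection),
    returned as a nominal annual rate (per-period rate * periods per year)."""
    lo, hi = 0.0, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if fv_annuity(p, mid, n) < target_fv:
            lo = mid
        else:
            hi = mid
    return mid * periods_per_year

# Inputs from the answer above: $100 every half-month, 703 payments, $113,880 goal.
print(f"{breakeven_annual_rate(113_880, 100, 703):.2%}")   # should land near 3.07%
```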
{
"docid": "dc7c29ec9bea9a43301c74368da3f678",
"text": "By rounding my house payments up to the nearest $50.00, my 30 year mortgage was paid off in 7 years. Initially, my mortgage payment was roughly $600, $50 going to principal and $550 going to interest (banker's profits). By paying $650, I was actually doubling the amount I was paying on principal. Since interest is computed as a function (percentage) of the outstanding principal balance, the amount of my fixed payment that went to interest decreased each month, and the amount that went to principal increased. In 7 years I owned my home free and clear, and started putting the money I had been putting into the mortgage payments into investments. A rule of thumb I have discovered is that it takes half the time to save money to meet a goal that it takes to pay off the same debt.",
"title": ""
},
{
"docid": "552cf0cf29a72c23f41e4ca40e19724a",
"text": "At this time there is one advantage of having a 30 year loan right now over a 15 year loan. The down side is you will be paying 1% higher interest rate. So the question is can you beat 1% on the money you save every month. So Lets say instead of going with 15 year mortgage I get a 30 and put the $200 monthly difference in lets say the DIA fund. Will I make more on that money than the interest I am losing? My answer is probably yes. Plus lets factor in inflation. If we have any high inflation for a few years in the middle of that 30 not only with the true value of what you owe go down but the interest you can make in the bank could be higher than the 4% you are paying for your 30 year loan. Just a risk reward thing I think more people should consider.",
"title": ""
},
{
"docid": "8481a2039b2bc140fa374e80e6830c32",
"text": "If there's no prepayment penalty, and if the extra is applied to principal rather than just toward later payments, then paying extra saves you money. Paying more often, by itself, doesn't. Paying early within a single month (ie, paying off the loan at the same average rate) doesn't save enough you be worth considering",
"title": ""
},
{
"docid": "ad33a976edb517e8395a66c4212ed499",
"text": "First of all, you should absolutely put money into savings until you have at least a 6 month cushion, and preferably longer. It doesn't matter if you get 0% interest in your savings and have a high interest rate mortgage, the cushion is still more important. Once you have a nice emergency fund, you can then consider the question of whether to pay more towards the mortgage if the numbers make sense. However, in my opinion, it's not just a straight comparison of interest rates. In other words, if your savings account gives you 1% and your mortgage is 5%, that's still not an automatic win for the mortgage. The reason is that by putting the money into your mortgage, you're locking it up and can't access it. To me, money in the hand is worth a lot more than money that's yours on paper but not easily accessible. I don't know the math well enough, but you don't really need the math. Just keep in mind that you have to weight the present value of putting that money into savings vs the future value of putting it into your mortgage and paying less interest at some point in the future. Do the math and see how much you will save by paying the mortgage down faster, but also keep in mind that future money is worth less than present money. A LOT less if you suddenly have an emergency or decide on a major purchase and need the money, but then have to jump through hoops to get to it. To me, you need to save a considerable amount by paying down the mortgage, and also understand that your money is getting locked away, for it to make sense.",
"title": ""
},
{
"docid": "a61d931bd678a82dee92f5c87219b9da",
"text": "The principal and interest are fixed, no matter how much money you throw at them. This is not correct. If I pay an extra $1000 in principal this month, then my mortgage balance is decreased. So slightly less interest accrues before my next payment. That means my next payment will be slightly more toward principal and slightly less toward interest than it would have been if I hadn't made an extra principal payment. This means that my principal will eventually drop to zero earlier than it would have if I had not made the extra payment, and I will end up making fewer total payments than I would have without the extra principal payments. Of course, the effect is even stronger if I make regular extra payments rather than a single one. Like paying off any debt, you can consider this payment essentially a risk free investment paying whatever is the interest rate on that debt. You know that by making this payment, you reduce your interest payments over the coming years by the interest rate on that amount. Edit: In comments you said, you will pay your mortgage off earlier but you won't drop the amount required to pay each month. Look at a mortgage amortization table to see this. This isn't because of the amortization table, it's because of the contract terms between you and the lender. After you make an extra principal payment, a new amortization schedule has to be calculated one way or another. It would be possible to re-calculate a new reduced monthly payment keeping the number of payments remaining fixed. Or you can calculate a new repayment schedule keeping the total monthly payment fixed and reducing the number of payments. It happens the banks prefer to do the 2nd of these rather than the first, so that's the terms they offer when lending. Perhaps someone more knowledgeable can comment on why they prefer that. In any case, by reducing your principal you improve your personal balance sheet and build equity in the mortgaged property so that, for example, if you sell you'll keep more of the proceeds and use less to pay off your loan.",
"title": ""
},
{
"docid": "ade1a70a1ee0761e9bad174726ff779e",
"text": "\"I've heard that the bank may agree to a \"\"one time adjustment\"\" to lower the payments on Mortgage #2 because of paying a very large payment. Is this something that really happens? It's to the banks advantage to reduce the payments in that situation. If they were willing to loan you money previously, they should still be willing. If they keep the payments the same, then you'll pay off the loan faster. Just playing with a spreadsheet, paying off a third of the mortgage amount would eliminate the back half of the payments or reduces payments by around two fifths (leaving off any escrow or insurance). If you can afford the payments, I'd lean towards leaving them at the current level and paying off the loan early. But you know your circumstances better than we do. If you are underfunded elsewhere, shore things up. Fully fund your 401k and IRA. Fill out your emergency fund. Buy that new appliance that you don't quite need yet but will soon. If you are paying PMI, you should reduce the principal down to the point where you no longer have to do so. That's usually more than 20% equity (or less than an 80% loan). There is an argument for investing the remainder in securities (stocks and bonds). If you itemize, you can deduct the interest on your mortgage. And then you can deduct other things, like local and state taxes. If you're getting a higher return from securities than you'd pay on the mortgage, it can be a good investment. Five or ten years from now, when your interest drops closer to the itemization threshold, you can cash out and pay off more of the mortgage than you could now. The problem is that this might not be the best time for that. The Buffett Indicator is currently higher than it was before the 2007-9 market crash. That suggests that stocks aren't the best place for a medium term investment right now. I'd pay down the mortgage. You know the return on that. No matter what happens with the market, it will save you on interest. I'd keep the payments where they are now unless they are straining your budget unduly. Pay off your thirty year mortgage in fifteen years.\"",
"title": ""
},
{
"docid": "cced62672ce272ec276ac0b921d159f8",
"text": "In general, saving money should be prioritized over extra debt payments. Every dollar that you spend paying down a debt will decrease the amount of principal owed; this will directly decrease the future interest payments you will make. However, as time goes on, you are dealing with a smaller and smaller set of principal; additionally, it is assumed that your income will grow (or at least keep pace with inflation), making the debt more bearable. On the other hand, every dollar you save (or invest) now will increase your future income - also making the future debt more bearable. Not only that, but the longer you save, the more value to you get from having saved, meaning you should save as early as possible. Finally, the benefits of paying down the mortgage early end when the mortgage is completely paid off, while the benefits of saving will continue (and even grow) after the house is owned free and clear. That is, if you have an extra $100,000 to put into the mortgage during the life of the loan, you could sink that into the mortgage and see it disappear, or you could invest it, and reap the dividends for the rest of your life. Caveat emptor: behavior trumps numbers. This only works if you will actually be disciplined about saving the extra money rather than paying off debt. If you're the kind of person for whom money burns a hole in your pocket until you spend it, then use it on debt. But if you are able to save and invest that money, you will be better off in the long run.",
"title": ""
}
] |
fiqa
|
07f5e4a0c29ee6c85ab20d596132234b
|
If I'm cash-flow negative, should I dollar-cost-average the money from my bonus over the entire year?
|
[
{
"docid": "6263bcb569ac81cf55099b6957a8bc54",
"text": "\"Essentially, your question is \"\"lump sum vs DCA\"\" and your tags reflect that. In the long run, lump sum, say a Jan 2 deposit each year, will beat DCA by about 1/2 the average annual market return. $12,000 will see a 10% return, vs, $1,000/month over the year seeing 6%. What hurts is when the market tanks in the first half of the year and you think DCA would have helped. This is a 'feeling' issue, not a math problem. But. By the time you have $100K invested, the difference of DCA vs lump sum with new money fades, as new deposits are small compared to the funds invested. By then, you need to know your target allocation and deposit to keep that allocation with new money.\"",
"title": ""
},
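The "lump sum beats DCA by about half the annual return" rule of thumb above can be illustrated with a toy model in which the market grows at a constant rate all year. Real returns are not constant, so this sketch only shows where the rule of thumb comes from, not what will happen in any given year; the 10% figure is the one assumed in the answer.

```python
annual_return = 0.10                              # assumed, as in the example above
monthly = (1 + annual_return) ** (1 / 12) - 1     # equivalent constant monthly growth

# Lump sum: $12,000 invested at the start of the year, held for all 12 months.
lump = 12_000 * (1 + monthly) ** 12

# DCA: $1,000 at the start of each month m; that deposit compounds for 13 - m months.
dca = sum(1_000 * (1 + monthly) ** (13 - m) for m in range(1, 13))

print(f"lump sum: ${lump:,.0f}  ({lump / 12_000 - 1:.1%} on the $12,000)")
print(f"DCA:      ${dca:,.0f}  ({dca / 12_000 - 1:.1%} on the $12,000)")
```

Under this constant-growth assumption the monthly deposits earn roughly half the annual return on average, because on average each dollar spends about half the year in the market.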
{
"docid": "0aeeee908b0718dd8905df1decf1431b",
"text": "You will maximize your expected wealth by investing all the money you intend to invest, as soon as you have it available. Don't let the mythos of dollar cost averaging induce you to allocate more much money to a savings account than is optimal. If you want the positive expected return of the market, don't put your money in a savings account. That's especially true now, when you are certainly earning a negative real interest rate on your savings account. Dollar cost averaging and putting all your money in at the beginning would have the same expected return except that if you put all your money in earlier, it spends more time in the market, so your expected return is higher. Your volatility is also higher (because your savings account would have very low volatility) but your preference for investment tells me that you view the expected return and volatility tradeoff of the stock market as acceptable. If you need something to help you feel less stress about investing right away, think of it as dollar cost averaging on a yearly basis instead of monthly. Further, you take take comfort in knowing that you have allocated your wealth as you can instead of letting it fizzle away in real terms in a bank account.",
"title": ""
}
] |
[
{
"docid": "dba80ff472f390f5f0c726aae6bb982c",
"text": "Yes, I have done this and did not feel a change in cash flow - but I didn't do it a the age of 23. I did it at a time when it was comfortable to do so. I should have done it sooner and I strongly encourage you to do so. Another consideration: Is your companies program a good one? if it is not among the best at providing good funds with low fees then you should consider only putting 6% into your employer account to get the match. Above that dollar amount start your own ROTH IRA at the brokerage of your choice and invest the rest there. The fee difference can be considerable amounting to theoretically much higher returns over a long time period. If you choose to do the max , You would not want to max out before the end of the year. Calculate your deferral very carefully to make sure you at least put in 6% deferral on every paycheck to the end of the year. Otherwise you may miss out on your company match. It is wise to consider a ROTH but it is extremely tough to know if it will be good for you or not. It all depends on what kind of taxes (payroll, VAT, etc) you pay now and what you will pay in the future. On the other hand the potential for tax-free capital accumulation is very nice so it seems you should trend toward Roth.",
"title": ""
},
{
"docid": "a6f36feca2812f61fd959f5089dbcb7e",
"text": "This is the same as any case where income is variable. How do you deal with the months where expected cash flows are lower than projected? When I got married, my wife was in the habit of allocating money to be spent in the current month from income accrued during the previous month. This is slightly complicated because we account for taxes (and benefit expenses) withheld in the current months' paychecks as current expenses, but we allocate the gross income from that check to the following month for spending. The benefit of spending only money made during the previous month is that income shocks are less shocking. I was working for a start-up and they missed payroll that normally arrived on the first of the month. Most of my co-workers were calling the bank in a panic to avoid over-draft fees with their mortgage payments, but my mortgage payment was already covered. Similarly, when the same start-up had a reduction in force on the first day of a new quarter, I didn't have to pull any money from savings during the 3 weeks I was unemployed. In the end, you're going to have to allocate money to the budget based on the actual income--which is lower than your expectations. What part of the budget should fairly be reduced is a question you and your wife will have to figure out.",
"title": ""
},
{
"docid": "9a5f2fc0186a9439970d88423060556b",
"text": "I think I understand what I am doing wrong. To provide some clarity, I am trying to determine what the value of a project is to a firm. To do this I am taking FCF, not including interest or principal payments, and discounting back to get an NPV enterprise value. I then back off net debt to get to equity value. I believe what I am doing wrong is that I show that initial $50M as a cash outflow in period 0 and then back it off again when I go from enterprise value to equity value. Does this make any sense? Thanks for your help.",
"title": ""
},
{
"docid": "03a0fb7b8594a2f775d15ddeccc01168",
"text": "Brownbag your lunch and make coffee at home. If your current lifestyle includes daily takeout lunches and/or barista-made drinks, a rough estimate is you have a negative cash flow of $8-20 per day, $40-100 per week, $2080-5200 per year. If you have daily smoothies, buy a blender. If you have daily lattes buy an espresso maker. I recently got myself a sodastream and it's been worth it. Until you have a six figure portfolio, you aren't going to swing a comparable annual return differential based on asset allocation.",
"title": ""
},
{
"docid": "877f62b444601a9c72c48c25447bfa9d",
"text": "Balance sheet engineering. You might be right, but it might not be a cost of money issue. It could be a million other things. You might be trying to line up some future ROI metrics because you know something positive or negative about WFM's near term projections. There's many reasons to go one way or another, and future effects on investor sentiment, the balance sheet, various metrics that AMZN has deemed important in past comments/filings, etc.... A lot goes into these decisions.",
"title": ""
},
{
"docid": "99b27b57ce3a120c0ec6eba6980fe7a2",
"text": "Does it make sense to calculate the IRR based on the outstanding value of the project, or just use the cash flows paid out? Let's assume I invest x amount every year for 49 years, and the investment grows at a constant rate, but I do not get dividends before (which will be constant) 50 years later. I assume that the value of the investment will decline as it pays dividend, and will be worth 0 when the dividends stop. Do I calculate the IRR as the negative streams of outflows for the first 49 years and then positive cash inflows from 50 year in the future? If I apply this method, the IRR will be very low, almost equal to the annual expected return. Or based on the current value of the project for each year combined with cash outflows for the first 49 years and dividends from year 50? If I apply this method, the IRR will be a lot higher than the first method.",
"title": ""
},
{
"docid": "1c8bbe9235409f5c606a86859895a345",
"text": "That depends whether you're betting on the market going up, or down, during the year. If you don't like to bet (and I don't), you can take advantage of dollar cost averaging by splitting it up into smaller contributions throughout the year.",
"title": ""
},
{
"docid": "0e8fefe281a9f811bfd8f1f21c19ed49",
"text": "If you define dollar cost cost averaging as investing a specific dollar amount over a certain fixed time frame then it does not work statistically better than any other strategy for getting that money in the market. (IE Aunt Ruth wants to invest $60,000 in the stock market and does it $5000 a month for a year.) It will work better on some markets and worse on others, but on average it won't be any better. Dollar cost averaging of this form is effectively a bet that gains will occur at the end of the time period rather than the beginning, sometimes this bet will pay off, other times it won't. A regular investment contribution of what you can afford over an indefinite time period (IE 401k contribution) is NOT Dollar Cost Averaging but it is an effective investment strategy.",
"title": ""
},
{
"docid": "da7de84904846162a370c77b3517cae3",
"text": "\"So, if I understand the investment program here: You have $100 of tax withheld from your salary at the end of Jan, Feb, Mar... until December. This withholding is in excess of the expected tax for the year. You use the appropriate H&R Block product to file your taxes, and H&R Block gets your refund of $1200 on March 1st. H&R Block adds 10& and give you e-cards for $1320 On the face of it, this represents a return of 15.19% per year, compounded monthly. However, there are a few wrinkles that might make the scheme less inviting: You'll get a receipt for miscellaneous income from H&R Block, and pay tax on the \"\"earnings\"\". The quoted return is only realized if you can use the e-cards immediately. If they sit around for a while, then they aren't earning any interest. If you sell them for cash at a discount (if you even can!) then this reduces the return. If you don't cash them at all, they're a total loss. This offer was announced on Jan 15, 2015. So you can't go back and put it in place for 2014. And if you set it up for withholding in 2015, is there any guarantee that it the same offer will be in place when filing in 2016?\"",
"title": ""
},
{
"docid": "c0aad58dd1f7708fabaa2d5d9a2c7d99",
"text": "> 1)What is the formula to turn the annualized rate into a monthly rate? What do you *think* it is? > 2)What is the formula to find out the NPV of monthly cash flows? Same one as usual. Remember, value can only be summed if it's *at the same point in time.* > For example, if I get $1000, $2000, and $3000 in months 1, 2, and 3, how do I calculate how much each of those are equal to as a present value if the annual discount rate is 8%? Think it through.",
"title": ""
},
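Since the reply above deliberately leaves the formulas as an exercise, here is one common convention for reference. The choice of convention is an assumption added here: this uses the effective (compounding) conversion from an annual to a monthly rate, while some courses simply divide the annual rate by 12, which gives slightly different numbers.

```python
annual_rate = 0.08
# Effective monthly rate consistent with an 8% effective annual discount rate.
monthly_rate = (1 + annual_rate) ** (1 / 12) - 1

# Cash flows of $1000, $2000 and $3000 received at the end of months 1, 2 and 3.
cash_flows = [(1, 1_000), (2, 2_000), (3, 3_000)]
npv = sum(amount / (1 + monthly_rate) ** month for month, amount in cash_flows)

print(f"monthly rate: {monthly_rate:.4%}")
print(f"present value: ${npv:,.2f}")
```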
{
"docid": "e8771dc2165ce076d4b9c06951d94b41",
"text": "\"The best way to do this is to use IRR. It's a complicated calculation, but will take into account multiple in/out cash flows over time along with \"\"idle periods\"\" where your money may not have been doing anything. Excel can calculate it for you using the XIRR function\"",
"title": ""
},
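As that answer says, the calculation is an IRR over dated cash flows, which is what Excel's XIRR does. Below is a minimal pure-Python sketch of the same idea; the cash flows are invented, and it assumes an Actual/365 day count and a single sign change in the flows so that bisection on the rate is safe.

```python
from datetime import date

def xirr(cash_flows):
    """Dollar-weighted (internal) rate of return for dated cash flows.
    cash_flows: list of (date, amount); negative = money invested, positive = money out.
    Solved by bisection on the annual rate."""
    t0 = min(d for d, _ in cash_flows)

    def npv(rate):
        return sum(cf / (1 + rate) ** ((d - t0).days / 365.0) for d, cf in cash_flows)

    lo, hi = -0.99, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(mid) > 0:      # npv falls as the rate rises, so the root is to the right
            lo = mid
        else:
            hi = mid
    return mid

flows = [
    (date(2013, 1, 15), -10_000),   # initial investment
    (date(2013, 7, 1), -2_500),     # additional contribution mid-year
    (date(2014, 6, 30), 14_000),    # ending value treated as a withdrawal
]
print(f"{xirr(flows):.2%}")
```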
{
"docid": "589e8e9ab52c413eb5b16076903fd7a3",
"text": "The optimal time period is unambiguously zero seconds. Put it all in immediately. Dollar cost averaging reduces the risk that you will be buying at a bad time (no one knows whether now is a bad or great time), but brings with it reduction in expected return because you will be keeping a lot of money in cash for a long time. You are reducing your risk and your expected return by dollar cost averaging. It's not crazy to trade expected returns for lower risk. People do it all the time. However, if you have a pot of money you intend to invest and you do so over a period of time, then you are changing your risk profile over time in a way that doesn't correspond to changes in your risk preferences. This is contrary to finance theory and is not optimal. The optimal percentage of your wealth invested in risky assets is proportional to your tolerance for risk and should not change over time unless that tolerance changes. Dollar cost averaging makes sense if you are setting aside some of your income each month to invest. In that case it is simply a way of being invested for as long as possible. Having a pile of money sitting around while you invest it little by little over time is a misuse of dollar-cost averaging. Bottom line: forcing dollar cost averaging on a pile of money you intend to invest is not based in sound finance theory. If you want to invest all that money, do so now. If you are too risk averse to put it all in, then decide how much you will invest, invest that much now, and keep the rest in a savings account indefinitely. Don't change your investment allocation proportion unless your risk aversion changes. There are many people on the internet and elsewhere who preach the gospel of dollar cost averaging, but their belief in it is not based on sound principles. It's just a dogma. The language of your question implies that you may be interested in sound principles, so I have given you the real answer.",
"title": ""
},
{
"docid": "97c5f72c1b553c04307b43372b616452",
"text": "\"I am interested in seeing what happens to your report after you test this, but I don't think it's possible in practice, would not affect your credit score, and also wouldn't be worth it for you to carry a negative balance like that. Having a -1% credit utilization essentially means that you are lending the credit card company money, which isn't really something that the credit card companies \"\"do\"\". They would likely not accept an agreement where you are providing the credit to them. Having credit is a more formal agreement than just 'I paid you too much this month'. Even if your payment does post before the transaction and it says you have a negative balance and gets reported to the credit bureau like that, this would probably get flagged for human review, and a negative credit utilization doesn't really reflect what is happening. Credit utilization is 'how much do you owe / amount of credit available to you', and it's not really correct to say that you owe negative dollars. Carrying a negative balance like that is money that could be invested elsewhere. My guess is that the credit card company is not paying you the APR of your card on the amount they owe you (if they are please provide the name of your card!). They probably don't pay you anything for that negative balance and it's money that's better used elsewhere. Even if it does benefit your credit score you're losing out on any interest (each month!) you could have earned with that money to get maybe 1-2% better rate on your next home or car loan (when will that be?). TLDR: I think credit utilization approaches a limit at 0% because it's based on the amount you owe and you don't really owe negative dollars. I am very interested in seeing the results of this experiment, please update us when you find out!\"",
"title": ""
},
{
"docid": "3a5e579b13be145ba602a0f1c0448c12",
"text": "\"It can be pretty hard to compute the right number. What you need to know for your actual return is called the dollar-weighted return. This is the Internal Rate of Return (IRR) http://en.wikipedia.org/wiki/Internal_rate_of_return computed for your actual cash flows. So if you add $100 per month or whatever, that has to be factored in. If you have a separate account then hopefully your investment manager is computing this. If you just have mutual funds at a brokerage or fund company, computing it may be a bunch of manual labor, unless the brokerage does it for you. A site like Morningstar will show a couple of return numbers on say an S&P500 index fund. The first is \"\"time weighted\"\" and is just the raw return if you invested all money at time A and took it all out at time B. They also show \"\"investor return\"\" which is the average dollar-weighted return for everyone who invested in the fund; so if people sold the fund during a market crash, that would lower the investor return. This investor return shows actual returns for the average person, which makes it more relevant in one way (these were returns people actually received) but less relevant in another (the return is often lower because people are on average doing dumb stuff, such as selling at market bottoms). You could compare yourself to the time-weighted return to see how you did vs. if you'd bought and held with a big lump sum. And you can compare yourself to the investor return to see how you did vs. actual irrational people. .02, it isn't clear that either comparison matters so much; after all, the idea is to make adequate returns to meet your goals with minimum risk of not meeting your goals. You can't spend \"\"beating the market\"\" (or \"\"matching the market\"\" or anything else benchmarked to the market) in retirement, you can only spend cash. So beating a terrible market return won't make you feel better, and beating a great market return isn't necessary. I think it's bad that many investment books and advisors frame things in terms of a market benchmark. (Market benchmarks have their uses, such as exposing index-hugging active managers that aren't earning their fees, but to me it's easy to get mixed up and think the market benchmark is \"\"the point\"\" - I feel \"\"the point\"\" is to achieve your financial goals.)\"",
"title": ""
},
{
"docid": "0a5254a515594b246b95061e5fe235d1",
"text": "It sounds like they are matching your IRA contribution dollar for dollar up to 1% of your salary. Think of that as an instant 100% yield on your investment. (Your money instantly doubles.) My 401(k) has been doing pretty well over the last year, but it will take several years before my money doubles. So you can let it sit in cash for a year, then take some pretty hefty fees and you will probably still come out ahead. (Of course it's hard to say without knowing all of the fees.)",
"title": ""
}
] |
fiqa
|
9eadb6e102373e8c2796a03816dddb23
|
Long-term capital gain taxes on ETFs?
|
[
{
"docid": "215e36b5c385dc311d8f50b10a82be08",
"text": "Generally speaking, each year, mutual funds distribute to their shareholders the dividends that are earned by the stocks that they hold and also the net capital gains that they make when they sell stocks that they hold. If they did not do so, the money would be income to the fund and the fund would have to pay taxes on the amount not distributed. (On the other hand, net capital losses are held by the fund and carried forward to later years to offset future capital gains). You pay taxes on the amounts of the distributions declared by the fund. Whether the fund sold a particular stock for a loss or a gain (and if so, how much) is not the issue; what the fund declares as its distribution is. This is why it is not a good idea to buy a mutual fund just before it makes a distribution; your share price drops by the per-share amount of the distribution, and you have to pay taxes on the distribution.",
"title": ""
},
{
"docid": "0b8333e65a4904eda82fab6b725587ca",
"text": "Generally, ETFs and mutual funds don't pay taxes (although there are some cases where they do, and some countries where it is a common case). What happens is, the fund reports the portion of the gain attributed to each investor, and the investor pays the tax. In the US, this is reported to you on 1099-DIV as capital gains distribution, and can be either short term (as in the scenario you described), long term, or a mix of both. It doesn't mean you actually get a distribution, though, but if you don't - it reduces your basis.",
"title": ""
}
] |
[
{
"docid": "72f42bcf08ff3f5ef942bf06a8ac8b97",
"text": "\"As I recall from the documentation presented to me, any gain over the strike price from an ISO stock option counts as a long term capital gain (for tax purposes) if it's held from 2 years from the date of grant and 1 year from the date of exercise. If you're planning to take advantage of that tax treatment, exercising your options now will start that 1-year countdown clock now as well, and grant you a little more flexibility with regards to when you can sell in the future. Of course, no one's renewed the \"\"Bush tax cuts\"\" yet, so the long-term capital gains rate is going up, and eventually it seems they'll want to charge you Medicare on those gains as well (because they can... ), soo, the benefit of this tax treatment is being reduced... lovely time to be investing, innnit?\"",
"title": ""
},
{
"docid": "398402f51ec457500408822627b1c4f2",
"text": "Here's how capital gains are totaled: Long and Short Term. Capital gains and losses are either long-term or short-term. It depends on how long the taxpayer holds the property. If the taxpayer holds it for one year or less, the gain or loss is short-term. Net Capital Gain. If a taxpayer’s long-term gains are more than their long-term losses, the difference between the two is a net long-term capital gain. If the net long-term capital gain is more than the net short-term capital loss, the taxpayer has a net capital gain. So your net long-term gains (from all investments, through all brokers) are offset by any net short-term loss. Short term gains are taxed separately at a higher rate. I'm trying to avoid realizing a long term capital gain, but at the same time trade the stock. If you close in the next year, one of two things will happen - either the stock will go down, and you'll have short-term gains on the short, or the stock will go up, and you'll have short-term losses on the short that will offset the gains on the stock. So I don;t see how it reduces your tax liability. At best it defers it.",
"title": ""
},
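The netting order described above (losses against gains within each holding period, then the two buckets against each other) can be written as a small helper. This is a deliberately simplified sketch added for illustration: it ignores loss carryovers, the annual $3,000 offset against ordinary income, and the rates applied afterward.

```python
def net_capital_gains(short_term, long_term):
    """Net within each bucket, then offset a net loss in one bucket against a
    net gain in the other; the remainder keeps the character of the larger side.
    Returns (net short-term, net long-term)."""
    st, lt = sum(short_term), sum(long_term)
    if st * lt < 0:                  # one bucket nets to a loss, the other to a gain
        combined = st + lt
        if (combined >= 0) == (st > 0):
            st, lt = combined, 0.0   # remainder keeps short-term character
        else:
            st, lt = 0.0, combined   # remainder keeps long-term character
    return st, lt

# Example: a $4,000 short-term gain and $1,500 short-term loss, plus a $2,000 long-term loss.
print(net_capital_gains([4_000, -1_500], [-2_000]))   # -> (500.0, 0.0): a net short-term gain
```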
{
"docid": "ddeeb269e3a6f6fa27a70fb0ceea2f58",
"text": "The problem there is that there's a tax due on that dividend. So, if you wish, you can buy the ETF and specify to reinvest dividends, but you'll have to pay a bit of tax on them, and keep track of your basis, if the account isn't a retirement account.",
"title": ""
},
{
"docid": "4693068364d85fcfd6bc49a34620ab6e",
"text": "In a taxable account you're going to owe taxes when you sell the shares for a gain. You're also going to owe taxes on any distributions you receive from the holdings in the account; these distributions can happen one or more times a year. Vanguard has a writeup on mutual fund taxation. Note: for a fund like you linked, you will owe taxes annually, regardless of whether you sell it. The underlying assets will pay dividends and those are distributed to you either in cash, or more beneficially as additional shares of the mutual fund (look into dividend reinvestment.) Taking VFIAX's distributions as an example, if you bought 1 share of the fund on March 19, 2017, on March 20th you would have been given $1.005 that would be taxable. You'd owe taxes on that even if you didn't sell your share during the year. Your last paragraph is based on a false premise. The mutual fund does report to you at the end of the year the short and long term capital gains, along with dividends on a 1099-DIV. You get to pay taxes on those transactions, that's why it's advantageous to hold low turnover mutual funds in taxable accounts.",
"title": ""
},
{
"docid": "4925a42610d9d45797fcb67ad5c8a122",
"text": "I agree, one should not let the tax tail wag the investing dog. The only question should be whether he'd buy the stock at today's price. If he wishes to own it long term, he keeps it. To take the loss this year, he'd have to sell soon, and can't buy it back for 30 days. If, for whatever reason, the stock comes back a bit, he's going to buy in higher. To be clear, the story changes for ETFs or mutual funds. You can buy a fund to replace one you're selling, capture the loss, and easily not run afoul of wash sale rules.",
"title": ""
},
{
"docid": "d3aace6c8e0679a8a4f70b3956a899c4",
"text": "Your tax efficient reasoning is solid for where you want to distribute your assets. ETFs are often more tax efficient than their equivalent mutual funds but the exact differences would depend on the comparison between the fund and ETF you were considering. The one exception to this rule is Vanguard funds and ETFs which have the exact same tax-efficiency because ETFs are a share class of the corresponding mutual fund.",
"title": ""
},
{
"docid": "623a7cf06315a9a2d497d5ccec710152",
"text": "For a long term gain you must hold the stock a year and a day, so, the long term hold period will fall into 2015 regardless. This is the only tax related issue that occurs to me, did you have something else in mind? Welcome to Money.SE.",
"title": ""
},
{
"docid": "c5578afe7b8b8fea73e4f1a44aea7c7e",
"text": "To try to answer the three explicit questions: Every share of stock is treated proportionately: each share is assigned the same dollar amount of investment (1/176th part of the contribution in the example), and has the same discount amount (15% of $20 or $25, depending on when you sell, usually). So if you immediately sell 120 shares at $25, you have taxable income on the gain for those shares (120*($25-$17)). Either selling immediately or holding for the long term period (12-18 mo) can be advantageous, just in different ways. Selling immediately avoids a risk of a decline in the price of the stock, and allows you to invest elsewhere and earn income on the proceeds for the next 12-18 months that you would not otherwise have had. The downside is that all of your gain ($25-$17 per share) is taxed as ordinary income. Holding for the full period is advantageous in that only the discount (15% of $20 or $25) will be taxed as ordinary income and the rest of the gain (sell price minus $20 or $25) will be taxed at long-term capital gain tax rates, which generally are lower than ordinary rates (all taxes are due in the year you do sell). The catch is you will sell at different price, higher or lower, and thus have a risk of loss (or gain). You will never be (Federally) double taxed in any scenario. The $3000 you put in will not be taxed after all is sold, as it is a return of your capital investment. All money you receive in excess of the $3000 will be taxed, in all scenarios, just potentially at different rates, ordinary or capital gain. (All this ignores AMT considerations, which you likely are not subject to.)",
"title": ""
},
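The two cases in that answer, written out with its own numbers ($17 purchase price after the 15% discount, a $20 grant-date price, selling 120 shares at $25). This is a simplification added here: real ESPP rules set the ordinary-income piece of a qualifying disposition to the lesser of the actual gain and the grant-date discount, and plan terms vary, so treat it as an illustration only.

```python
shares, basis, grant_price, sale_price = 120, 17.00, 20.00, 25.00
discount_rate = 0.15

# Disqualifying disposition (sell immediately): the entire spread is ordinary income.
ordinary_now = shares * (sale_price - basis)                 # 120 * (25 - 17) = 960

# Qualifying disposition (held 2 years from grant, 1 year from purchase), assuming the
# later sale also happens at $25 so the two cases are directly comparable:
ordinary_later = shares * grant_price * discount_rate        # the discount, taxed as ordinary income
capital_gain_later = shares * (sale_price - basis) - ordinary_later  # taxed at long-term rates

print(f"sell now : ${ordinary_now:,.2f} ordinary income")
print(f"hold     : ${ordinary_later:,.2f} ordinary income + ${capital_gain_later:,.2f} long-term gain")
```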
{
"docid": "8f05a577b8e104eb30a83b40795f6836",
"text": "\"This answer is about the USA. Each time you sell a security (a stock or a bond) or some other asset, you are expected to pay tax on the net gain. It doesn't matter whether you use a broker or mutual fund to make the sale. You still owe the tax. Net capital gain is defined this way: Gross sale prices less (broker fees for selling + cost of buying the asset) The cost of buying the asset is called the \"\"basis price.\"\" You, or your broker, needs to keep track of the basis price for each share. This is easy when you're just getting started investing. It stays easy if you're careful about your record keeping. You owe the capital gains tax whenever you sell an asset, whether or not you reinvest the proceeds in something else. If your capital gains are modest, you can pay all the taxes at the end of the year. If they are larger -- for example if they exceed your wage earnings -- you should pay quarterly estimated tax. The tax authorities ding you for a penalty if you wait to pay five- or six-figure tax bills without paying quarterly estimates. You pay NET capital gains tax. If one asset loses money and another makes money, you pay on your gains minus your losses. If you have more losses than gains in a particular year, you can carry forward up to $3,000 (I think). You can't carry forward tens of thousands in capital losses. Long term and short term gains are treated separately. IRS Schedule B has places to plug in all those numbers, and the tax programs (Turbo etc) do too. Dividend payments are also taxable when they are paid. Those aren't capital gains. They go on Schedule D along with interest payments. The same is true for a mutual fund. If the fund has Ford shares in it, and Ford pays $0.70 per share in March, that's a dividend payment. If the fund managers decide to sell Ford and buy Tesla in June, the selling of Ford shares will be a cap-gains taxable event for you. The good news: the mutual fund managers send you a statement sometime in February or March of each year telling what you should put on your tax forms. This is great. They add it all up for you. They give you a nice consolidated tax statement covering everything: dividends, their buying and selling activity on your behalf, and any selling they did when you withdrew money from the fund for any purpose. Some investment accounts like 401(k) accounts are tax free. You don't pay any tax on those accounts -- capital gains, dividends, interest -- until you withdraw the money to live on after you retire. Then that money is taxed as if it were wage income. If you want an easy and fairly reliable way to invest, and don't want to do a lot of tax-form scrambling, choose a couple of different mutual funds, put money into them, and leave it there. They'll send you consolidated tax statements once a year. Download them into your tax program and you're done. You mentioned \"\"riding out bad times in cash.\"\" No, no, NOT a good idea. That investment strategy almost guarantees you will sell when the market is going down and buy when it's going up. That's \"\"sell low, buy high.\"\" It's a loser. Not even Warren Buffett can call the top of the market and the bottom. Ned Johnson (Fidelity's founder) DEFINITELY can't.\"",
"title": ""
},
{
"docid": "3a9a2887e88a59612d0e83c08cffd926",
"text": "Capital gains tax is an income tax upon your profit from selling investments. Long-term capital gains (investments you have held for more than a year) are taxed significantly less than short-term gains. It doesn't limit how many shares you can sell; it does discourage selling them too quickly after buying. You can balance losses against gains to reduce the tax due. You can look for tax-advantaged investments (the obvious one being a 401k plan, IRA, or equivalent, though those generally require leaving the money invested until retirement). But in the US, most investments other than the house you are living in (which some of us argue isn't really an investment) are subject to capital gains tax, period.",
"title": ""
},
{
"docid": "5cc6b7105374e03fb5d2d30f87ce6e3e",
"text": "I believe the answer to your question boils down to a discussion of tax strategies and personal situation, both now and in the future. As a result, it's pretty hard to give a concrete example to the question as asked right now. For example, if your tax rate now is likely to be higher than your tax rate at retirement (it is for most people), than putting the higher growth ETF in a retirement fund makes some sense. But even then, there are other considerations. However, if the opposite is true (which could happen if your income is growing so fast that your retirement income looks like it will be higher than your current income), than you might want the flexibility of holding all your ETFs in your non-tax advantaged brokerage account so that IF you do incur capital gains they are paid at prevailing, presumably lower tax rates. (I assume you meant a brokerage account rather than a savings account since you usually can't hold ETFs in a savings account.) I also want to mention that a holding in a corp account isn't necessarily taxed twice. It depends on the corporation type and the type of distribution. For example, S corps pay no federal income tax themselves. Instead the owners pay taxes when money is distributed to them as personal income. Which means you could trickle out the earnings from an holdings there such that it keeps you under any given federal tax bracket (assuming it's your only personal income.) This might come in handy when retired for example. Also, distribution of the holdings as dividends would incur cap gains tax rates rather than personal income tax rates. One thing I would definitely say: any holdings in a Roth account (IRA, 401k) will have no future taxes on earnings or distributions (unless the gov't changes its mind.) Thus, putting your highest total return ETF there would always be the right move.",
"title": ""
},
{
"docid": "70b4d269329f80b378156a5cb2f432ac",
"text": "\"The tax comes when you close the position. If the option expires worthless it's as if you bought it back for $0. There's a short-term capital gain for the difference between your short-sale price and your buyback price on the option. I believe the capital gain is always short-term because short sales are treated as short-term even if you hold them open more than one year. If the option is exercised (calling away your stock) then you add the premium to your sale price on the stock and then compute the capital gain. So in this case you can end up treating the premium as a long-term capital gain. See IRS pub 550 http://www.irs.gov/publications/p550/ch04.html#en_US_2010_publink100010619 Search for \"\"Writers of puts and calls\"\"\"",
"title": ""
},
{
"docid": "82f690a6970b4b385556ab21e8dbe8ad",
"text": "Fidelity has a good explanation of Restricted Stock Awards: For grants that pay in actual shares, the employee’s tax holding period begins at the time of vesting, and the employee’s tax basis is equal to the amount paid for the stock plus the amount included as ordinary compensation income. Upon a later sale of the shares, assuming the employee holds the shares as a capital asset, the employee would recognize capital gain income or loss; whether such capital gain would be a short- or long-term gain would depend on the time between the beginning of the holding period at vesting and the date of the subsequent sale. Consult your tax adviser regarding the income tax consequences to you. So, you would count from vesting for long-term capital gains purposes. Also note the point to include the amount of income you were considered to have earned as a result of the original vesting [market value then - amount you paid]. (And of course, you reported that as income in 2015/2016, right?) So if you had 300 shares of Stock ABC granted you in 2014 for a price of $5/share, and in 2015 100 of those shares vested at FMV $8/share, and in 2016 100 of those shares vested, current FMV $10/share, you had $300 in income in 2015 and $500 of income in 2016 from this. Then in 2017 you sold 200 shares for $15/share:",
"title": ""
},
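To finish the arithmetic the passage above sets up in its last sentence, here is a hedged sketch. The $5 grant price, the vest-date values of $8 and $10, and the $15 sale price come from the passage; the per-lot loop is my own illustration, not tax advice.

```python
# Hedged sketch of the restricted-stock example above (not tax advice).
# Basis per share = $5 paid + compensation income recognized at vesting = FMV at vest.
lots = [(100, 8.00),   # vested 2015 -> (8 - 5) * 100 = $300 ordinary income that year
        (100, 10.00)]  # vested 2016 -> (10 - 5) * 100 = $500 ordinary income that year

sale_price = 15.00     # 200 shares sold in 2017

total_gain = 0.0
for shares, basis_per_share in lots:
    gain = shares * (sale_price - basis_per_share)
    total_gain += gain
    print(f"{shares} shares: basis ${basis_per_share:.2f}/sh, gain ${gain:,.2f}")

print(f"Total capital gain on the 2017 sale: ${total_gain:,.2f}")   # $1,200
```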
{
"docid": "97614544e35e57ca982ce71562c3803a",
"text": "\"You cannot get \"\"your investment\"\" out and \"\"leave only the capital gains\"\" until they become taxable at the long-term rate. When you sell some shares after holding them for less than a year, you have capital gains on which you will have to pay taxes at the short-term capital gains rate (that is, at the same rate as ordinary income). As an example, if you bought 100 shares at $70 for a net investment of $7000, and sell 70 of them at $100 after five months to get your \"\"initial investment back\"\", you will have short-term capital gains of $30 per share on the 70 shares that you sold and so you have to pay tax on that $30x70=$2100. The other $4900 = $7000-$2100 is \"\"tax-free\"\" since it is just your purchase price of the 70 shares being returned to you. So after paying the tax on your short-term capital gains, you really don't have your \"\"initial investment back\"\"; you have something less. The capital gains on the 30 shares that you continue to hold will become (long-term capital gains) income to you only when you sell the shares after having held them for a full year or more: the gains on the shares sold after five months are taxable income in the year of sale.\"",
"title": ""
},
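A short sketch of the partial-sale arithmetic above, reusing its figures (100 shares bought at $70, 70 of them sold at $100 after five months); commissions are ignored, as in the passage.

```python
# Hedged sketch of the partial-lot sale described above.
shares_bought, cost_per_share = 100, 70.00
shares_sold, sale_price = 70, 100.00

short_term_gain = shares_sold * (sale_price - cost_per_share)   # taxed at ordinary rates
returned_capital = shares_sold * cost_per_share                 # your own money back, not taxed

print(f"Short-term gain:   ${short_term_gain:,.2f}")    # $2,100
print(f"Returned capital:  ${returned_capital:,.2f}")   # $4,900
```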
{
"docid": "c98556545e99fddd04d0a07dcf079005",
"text": "Not illegal. With respect to littleadv response, the printing of a check isn't illegal. I can order checks from cheap check printers, and they have no relationship to any bank, so long as they have my routing number and checking account number, they print. Years ago (25+) I wrote my account details on a shirt in protest to owing the IRS money, and my bank cashed it. They charged a penalty of some nominal amount, $20 or so for 'non-standard check format' or something like that. But, in fact, stupid young person rants aside, you may write a check out by hand on a piece of paper and it should clear. The missing factor is the magnetic ink. But, I often see a regular check with a strip taped to the bottom when the mag strip fails, proving that bad ink will not prevent a check from clearing. So long as the person trying to send you the funds isn't going to dispute the transaction (and the check is made out to you, so I suppose they couldn't even do that) this process should be simple. I see little to no risk so long as the image isn't intercepted along the way.",
"title": ""
}
] |
fiqa
|
2b54c878663207fc7745b9befdd917bd
|
gnucash share fractions
|
[
{
"docid": "b3ff2d91d58df55f959c18195cd1b5d0",
"text": "As BrenBarn stated, tracking fractional transactions beyond 8 decimal places makes no sense in the context of standard stock and mutual fund transactions. This is because even for the most expensive equities, those fractional shares would still not be worth whole cent amounts, even for account balances in the hundreds of thousands of dollars. One important thing to remember is that when dealing with equities the total cost, number of shares, and share price are all 3 components of the same value. Thus if you take 2 of those values, you can always calculate the third: (price * shares = cost, cost / price = shares, etc). What you're seeing in your account (9 decimal places) is probably the result of dividing uneven values (such as $9.37 invested in a commodity which trades for $235.11, results in 0.03985368550891072264046616477394 shares). Most brokerages will round this value off somewhere, yours just happens to include more decimal places than your financial software allows. Since your brokerage is the one who has the definitive total for your account balance, the only real solution is to round up or down, whichever keeps your total balance in the software in line with the balance shown online.",
"title": ""
}
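A hedged sketch of the fractional-share reconciliation described above, reusing the passage's $9.37 / $235.11 example. The 8-decimal rounding mirrors the ledger-software limit the passage mentions; the exact rounding mode GnuCash uses is an assumption here.

```python
# Hedged sketch: round a brokerage's long fractional share count to 8 decimal places
# and let price * shares reconcile the account balance.
from decimal import Decimal, ROUND_HALF_UP

cash_invested = Decimal("9.37")
price_per_share = Decimal("235.11")

raw_shares = cash_invested / price_per_share   # long, unrounded fraction
shares_8dp = raw_shares.quantize(Decimal("0.00000001"), rounding=ROUND_HALF_UP)

print(f"Raw shares:     {raw_shares}")
print(f"Rounded (8 dp): {shares_8dp}")
print(f"Implied value:  {(shares_8dp * price_per_share).quantize(Decimal('0.01'))}")
```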
] |
[
{
"docid": "7dc3912bdb7e7a71ae405133330accb6",
"text": "\"Some companies issue multiple classes of shares. Each share may have different ratios applied to ownership rights and voting rights. Some shares classes are not traded on any exchange at all. Some share classes have limited or no voting rights. Voting rights ratios are not used when calculating market cap but the market typically puts a premium on shares with voting rights. Total market cap must include ALL classes of shares, listed or not, weighted according to thee ratios involved in the company's ownership structure. Some are 1:1, but in the case of Berkshire Hathaway, Class B shares are set at an ownership level of 1/1500 of the Class A shares. In terms of Alphabet Inc, the following classes of shares exist as at 4 Dec 2015: When determining market cap, you should also be mindful of other classes of securities issued by the company, such as convertible debt instruments and stock options. This is usually referred to as \"\"Fully Diluted\"\" assuming all such instruments are converted.\"",
"title": ""
},
{
"docid": "909417d8d10021a49861245cd34381e3",
"text": "\"Not to detract from the other answers at all (which are each excellent and useful in their own right), but here's my interpretation of the ideas: Equity is the answer to the question \"\"Where is the value of the company coming from?\"\" This might include owner stakes, shareholder stock investments, or outside investments. In the current moment, it can also be defined as \"\"Equity = X + Current Income - Current Expenses\"\" (I'll come back to X). This fits into the standard accounting model of \"\"Assets - Liabilities = Value (Equity)\"\", where Assets includes not only bank accounts, but also warehouse inventory, raw materials, etc.; Liabilities are debts, loans, shortfalls in inventory, etc. Both are abstract categories, whereas Income and Expense are hard dollar amounts. At the end of the year when the books balance, they should all equal out. Equity up until this point has been an abstract concept, and it's not an account in the traditional (gnucash) sense. However, it's common practice for businesses to close the books once a year, and to consolidate outstanding balances. When this happens, Equity ceases to be abstract and becomes a hard value: \"\"How much is the company worth at this moment?\"\", which has a definite, numeric value. When the books are opened fresh for a new business year, the Current Income and Current Expense amounts are zeroed out. In this situation, in order for the big equation to equal out: Assets - Liabilities = X + Income - Expeneses the previous net value of the company must be accounted for. This is where X comes in, the starting (previous year's) equity. This allows the Assets and Liabilities to be non-zero, while the (current) Income and Expenses are both still zeroed out. The account which represents X in gnucash is called \"\"Equity\"\", and encompasses not only initial investments, but also the net increase & decreases from previous years. While the name would more accurately be called \"\"Starting Equity\"\", the only problem caused by the naming convention is the confusion of the concept Equity (X + Income - Expenses) with the account X, named \"\"Equity\"\".\"",
"title": ""
},
{
"docid": "47ae96508ca08a01b1c2432172264fb7",
"text": "I just decided to start using GnuCash today, and I was also stuck in this position for around an hour before I figured out what to do exactly. The answer by @jldugger pointed me partially on the right track, so this answer is intended to help people waste less time in the future. (Note: All numbers have been redacted for privacy issues, but I hope the images are sufficient to allow you to understand what is going on. ) Upon successfully importing your transactions, you should be able to see your transactions in the Checking Account and Savings Account (plus additional accounts you have imported). The Imbalance account (GBP in my case) will be negative of whatever you have imported. This is due to the double-entry accounting system that GnuCash uses. Now, you will have to open your Savings Account. Note that except for a few transactions, most of them are going to Imbalance. These are marked out with the red rectangles. What you have to do, now, is to click on them individually and sort them into the correct account. Unfortunately (I do not understand why they did this), you cannot move multiple transactions at once. See also this thread. Fortunately, you only have to do this once. This is what your account should look like after it is complete. After this is done, you should not have to move any more accounts, since you can directly enter the transactions in the Transfer box. At this point, your Accounts tab should look like this: Question solved!",
"title": ""
},
{
"docid": "1592c9cd0da3961ba90df07a51f28241",
"text": "Instead of gnucash i suggest you to use kmymoney. It's easier",
"title": ""
},
{
"docid": "3c8e61f363e1965f429f120675535e36",
"text": "The $47.67 per share figure is the trading price, or fair market value, of the OLD Johnson Controls, and should not be used to figure your gain nor to figure your basis in the new Johnson Controls International. Your new basis is the total of the gross proceeds received; that is, the cash plus the fair market value of the new shares, which was $45.69 per share. (I am not referring to cash-in-lieu for fractional shares, but the $5.7293 per share received upon the merger.) A person holding 100 shares of the old Johnson Controls would have received $572.93, plus 83.57 shares of the new company. Ignoring the fractional share, for simplicity's sake, gross proceeds would equal 83 x $45.69 = $3792.72 in fair market value of shares, plus the cash of $572.93, for a total of $4365.20. This is your basis in the 83 new shares. Regarding the fractional share, since new basis is at fair market value, there should be no gain or loss recognized upon its sale.",
"title": ""
},
{
"docid": "275df9312e040d3309fae20aff051c75",
"text": "Technically you should take the quarterly dividend yield as a fraction, add one, take the cube root, and subtract one (and then multiple by the stock price, if you want a dollar amount per share rather than a rate). This is to account for the fact that you could have re-invested the monthly dividends and earned dividends on that reinvestment. However, the difference between this and just dividing by three is going to be negligible over the range of dividend rates that are realistically paid out by ordinary stocks.",
"title": ""
},
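A small sketch comparing the compounding-correct cube-root conversion described above with the divide-by-three shortcut; the 0.6% quarterly yield is an invented illustrative figure.

```python
# Hedged sketch: convert a quarterly dividend yield to a monthly equivalent.
quarterly_yield = 0.006   # assumed 0.6% per quarter, purely illustrative

monthly_exact = (1 + quarterly_yield) ** (1 / 3) - 1   # cube-root method
monthly_approx = quarterly_yield / 3                   # simple shortcut

print(f"Cube-root method: {monthly_exact:.6%}")
print(f"Divide-by-three:  {monthly_approx:.6%}")
print(f"Difference:       {abs(monthly_exact - monthly_approx):.6%}")   # negligible, as noted above
```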
{
"docid": "10d9f9670fe70075b14cc479478ba1a2",
"text": "No, GnuCash doesn't specifically provide a partner cash basis report/function. However, GnuCash reports are fairly easy to write. If the data was readily available in your accounts it shouldn't be too hard to create a cash basis report. The account setup is so flexible, you might actually be able to create accounts for each partner, and, using standard dual-entry accounting, always debit and credit these accounts so the actual cash basis of each partner is shown and updated with every transaction. I used GnuCash for many years to manage my personal finances and those of my business (sole proprietorship). It really shines for data integrity (I never lost data), customer management (decent UI for managing multiple clients and business partners) and customer invoice generation (they look pretty). I found the user interface ugly and cumbersome. GnuCash doesn't integrate cleanly with banks in the US. It's possible to import data, but the process is very clunky and error-prone. Apparently you can make bank transactions right from GnuCash if you live in Europe. Another very important limitation of GnuCash to be aware of: only one user at a time. Period. If this is important to you, don't use GnuCash. To really use GnuCash effectively, you probably have to be an actual accountant. I studied dual-entry accounting a bit while using GnuCash. Dual-entry accounting in GnuCash is a pain in the butt. Accurately recording certain types of transactions (like stock buys/sells) requires fiddling with complicated split transactions. I agree with Mariette: hire a pro.",
"title": ""
},
{
"docid": "bf0540111a2051185227f72005547c32",
"text": "\"Generally if you are using FIFO (first in, first out) accounting, you will need to match the transactions based on the number of shares. In your example, at the beginning of day 6, you had two lots of shares, 100 @ 50 and 10 @ 52. On that day you sold 50 shares, and using FIFO, you sold 50 shares of the first lot. This leaves you with 50 @ 50 and 10 @ 52, and a taxable capital gain on the 50 shares you sold. Note that commissions incurred buying the shares increase your basis, and commissions incurred selling the shares decrease your proceeds. So if you spent $10 per trade, your basis on the 100 @ 50 lot was $5010, and the proceeds on your 50 @ 60 sale were $2990. In this example you sold half of the lot, so your basis for the sale was half of $5010 or $2505, so your capital gain is $2990 - 2505 = $485. The sales you describe are also \"\"wash sales\"\", in that you sold stock and bought back an equivalent stock within 30 days. Generally this is only relevant if one of the sales was at a loss but you will need to account for this in your code. You can look up the definition of wash sale, it starts to get complex. If you are writing code to handle this in any generic situation you will also have to handle stock splits, spin-offs, mergers, etc. which change the number of shares you own and their cost basis. I have implemented this myself and I have written about 25-30 custom routines, one for each kind of transaction that I've encountered. The structure of these deals is limited only by the imagination of investment bankers so I think it is impossible to write a single generic algorithm that handles them all, instead I have a framework that I update each quarter as new transactions occur.\"",
"title": ""
},
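A minimal sketch of the FIFO lot matching with commissions described above, reproducing the passage's numbers (100 @ $50, 10 @ $52, $10 per trade, sell 50 @ $60). As the passage warns, a real implementation would also need to handle splits, spin-offs, mergers, and wash sales; this only covers the simple case.

```python
# Hedged sketch of FIFO lot matching with commissions folded into basis and proceeds.
def fifo_sell(lots, shares_to_sell, gross_proceeds, sell_commission):
    """lots: list of (shares, total cost incl. buy commission), oldest first.
    Returns (capital_gain, remaining_lots)."""
    proceeds_per_share = (gross_proceeds - sell_commission) / shares_to_sell
    gain, remaining = 0.0, []
    for shares, cost in lots:
        if shares_to_sell <= 0:
            remaining.append((shares, cost))
            continue
        sold = min(shares, shares_to_sell)
        cost_per_share = cost / shares
        gain += sold * (proceeds_per_share - cost_per_share)
        if sold < shares:  # part of the lot is left over
            remaining.append((shares - sold, (shares - sold) * cost_per_share))
        shares_to_sell -= sold
    return gain, remaining

lots = [(100, 100 * 50 + 10), (10, 10 * 52 + 10)]     # example from the passage
gain, rest = fifo_sell(lots, 50, 50 * 60, 10)
print(f"Capital gain: ${gain:.2f}")                    # about $485, matching the passage
print("Remaining lots:", rest)
```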
{
"docid": "1577e21bf4ad3391c4631197ed104014",
"text": "I would say when starting with Gnucash to start with the level of granularity you are comfortable with while sticking to the double entry bookkeeping practices. So going through each one: Refund for Parking Pass. Assuming you treat the Parking Pass as a sunk cost, i.e. an Expense account, its just a negative entry in the Expense account which turns into a positive one in your Bank account. Yes it may look weird, and if you don't like it you can always 'pay from Equity' the prior month, or your Bank Account if you're backfilling old statements. Selling physical items. If you sold it on eBay and the value is high enough you'll get tax forms indicating you've earned x. Even if its small or not done via eBay, treat it the same way and create a 'Personal Items/Goods' Income account to track all of it. So the money you get in your Bank account would have come from there. Found jacket money would be an Equity entry, either Opening Balances into Cash or Bank account. Remember you are treating Equity / Opening Balances as the state before you started recording every transaction so both the value going into Assets (Banks,Stock,Mutual Funds) and Liabilities (Mortgage, Student Debt, Credit Card Debt) originate from there.",
"title": ""
},
{
"docid": "46bc1213fb52a6c9ecdc1047f6d59daa",
"text": "For double entry bookkeeping, personal or small business, GnuCash is very good. Exists for Mac Os.",
"title": ""
},
{
"docid": "d204e5a191765d7f582e25039e810cc9",
"text": "\"To keep it simple, let's say that A shares trade at 500 on average between April 2nd 2014 and April 1st 2015 (one year anniversary), then if C shares trade on average: The payment will be made either in cash or in shares within 90 days. The difficulties come from the fact that the formula is based on an average price over a year, which is not directly tradable, and that the spread is only covered between 1% and 5%. In practice, it is unlikely that the market will attribute a large premium to voting shares considering that Page&Brin keep the majority and any discount of Cs vs As above 2-3% (to include cost of trading + borrowing) will probably trigger some arbitrage which will prevent it to extend too much. But there is no guarantee. FYI here is what the spread has looked like since April 3rd: * details in the section called \"\"Class C Settlement Agreement\"\" in the S-3 filing\"",
"title": ""
},
{
"docid": "0f8bff4246bf5e8c9e8ded7affa5caa8",
"text": "\"Gnucash is first and foremost just a general ledger system. It tracks money in accounts, and lets you make transactions to transfer money between the accounts, but it has no inherent concept of things like taxes. This gives you a large amount of flexibility to organize your account hierarchy the way you want, but also means that it sometimes can take a while to figure out what account hierarchy you want. The idea is that you keep track of where you get money from (the Income accounts), what you have as a result (the Asset accounts), and then track what you spent the money on (the Expense accounts). It sounds like you primarily think of expenses as each being for a particular property, so I think you want to use that as the basis of your hierarchy. You probably want something like this (obviously I'm making up the specifics): Now, when running transaction reports or income/expense reports, you can filter to the accounts (and subaccounts) of each property to get a report specific to that property. You mention that you also sometimes want to run a report on \"\"all gas expenses, regardless of property\"\", and that's a bit more annoying to do. You can run the report, and when selecting accounts you have to select all the Gas accounts individually. It sounds like you're really looking for a way to have each transaction classified in some kind of two-axis system, but the way a general ledger works is that it's just a tree, so you need to pick just one \"\"primary\"\" axis to organize your accounts by.\"",
"title": ""
},
{
"docid": "80a85c95c7462ad01c4b710df507a311",
"text": "\"Hello! I am working on a project where I am trying to determine the profit made by a vendor if they hold our funds for 5 days in order to collect the interest on those funds during that period before paying a third party. Currently I am doing \"\"Amount x(Fed Funds Rate/365)x5\"\" but my output seems too low. Any advice?\"",
"title": ""
},
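A hedged sketch of the simple-interest formula quoted in the question above. The dollar amount and rate below are placeholders, not real figures; note the rate must be entered as a decimal, and small results are expected because only 5/365 of a year elapses.

```python
# Hedged sketch of the float-interest calculation: amount * (annual_rate / 365) * days_held.
def float_interest(amount, annual_rate, days_held):
    # annual_rate is a decimal, e.g. 0.05 for 5%
    return amount * (annual_rate / 365) * days_held

amount = 1_000_000       # assumed funds held by the vendor for 5 days
annual_rate = 0.05       # assumed 5% annual rate, illustrative only
print(f"Interest earned over 5 days: ${float_interest(amount, annual_rate, 5):,.2f}")
```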
{
"docid": "4864753b99d7a96b7700b749d5cb8693",
"text": "The solution to this problem is somewhat like grading on a curve. Use the consumption ratio multiplied by the attendance (which is also a ratio, out of 100 days) to calculate how much each person owes. This will leave you short. Then add together all of the shares in a category, determine the % increase required to get to the actual cost of that category, and increase all the shares by that %.",
"title": ""
},
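A short sketch of the "grade on a curve" split described above: raw shares are consumption ratio times attendance, then every share is scaled up so the category total matches the actual cost. All inputs are invented for illustration.

```python
# Hedged sketch of the cost-splitting approach described above.
def split_category(actual_cost, people):
    raw = {name: ratio * (days / 100) for name, (ratio, days) in people.items()}
    scale = actual_cost / sum(raw.values())   # the % increase needed to reach the real total
    return {name: round(share * scale, 2) for name, share in raw.items()}

people = {                  # name: (consumption ratio, days attended out of 100)
    "alice": (1.0, 100),
    "bob":   (0.5, 80),
    "carol": (1.5, 60),
}
print(split_category(actual_cost=230.00, people=people))   # {'alice': 100.0, 'bob': 40.0, 'carol': 90.0}
```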
{
"docid": "2a77a8dbaf8b3e7646f8946e1be9dede",
"text": "Usually five shares and some cash.",
"title": ""
}
] |
fiqa
|
ebcc44d123862c35c909135878666f03
|
Do ETF dividends make up for fees?
|
[
{
"docid": "ed0ed68df5683cfbdc67e5ce8577bcd3",
"text": "Any ETF has expenses, including fees, and those are taken out of the assets of the fund as spelled out in the prospectus. Typically a fund has dividend income from its holdings, and it deducts the expenses from the that income, and only the net dividend is passed through to the ETF holder. In the case of QQQ, it certainly will have dividend income as it approximates a large stock index. The prospectus shows that it will adjust daily the reported Net Asset Value (NAV) to reflect accrued expenses, and the cash to pay them will come from the dividend cash. (If the dividend does not cover the expenses, the NAV will decline away from the modeled index.) Note that the NAV is not the ETF price found on the exchange, but is the underlying value. The price tends to track the NAV fairly closely, both because investors don't want to overpay for an ETF or get less than it is worth, and also because large institutions may buy or redeem a large block of shares (to profit) when the price is out of line. This will bring the price closer to that of the underlying asset (e.g. the NASDAQ 100 for QQQ) which is reflected by the NAV.",
"title": ""
},
{
"docid": "6579527da0c9cfbe380c490ca49de854",
"text": "\"It depends. Dividends and fees are usually unrelated. If the ETF holds a lot of stocks which pay significant dividends (e.g. an S&P500 index fund) these will probably cover the cost of the fees pretty readily. If the ETF holds a lot of stocks which do not pay significant dividends (e.g. growth stocks) there may not be any dividends - though hopefully there will be capital appreciation. Some ETFs don't contain stocks at all, but rather some other instruments (e.g. commodity-trust ETFs which hold precious metals like gold and silver, or daily-leveraged ETFs which hold options). In those cases there will never be any dividends. And depending on the performance of the market, the capital appreciation may or may not cover the expenses of the fund, either. If you look up QQQ's financials, you'll find it most recently paid out a dividend at an annualized rate of 0.71%. Its expense ratio is 0.20%. So the dividends more than cover its expense ratio. You could also ask \"\"why would I care?\"\" because unless you're doing some pretty-darned-specific tax-related modeling, it doesn't matter much whether the ETF covers its expense ratio via dividends or whether it comes out of capital gains. You should probably be more concerned with overall returns (for QQQ in the most recent year, 8.50% - which easily eclipses the dividends.)\"",
"title": ""
}
] |
[
{
"docid": "56572fa8686195ba428b686c2c1bfa5e",
"text": "Victor, Yes the drop in price does completely cancel the dividend at first. However, as others have noted, there are other forces working on the price as well. If dividends were pointless then the following scenario would be true: Let's assume, hypothetically, two identical stocks, only one of which pays a 2% annual dividend quarterly. At the end of the year we would expect the share price of the dividend stock to be 2% lower than the non-dividend stock. And an equal investment in both stocks would yield exactly the same amount of money. So that is a hypothetical, and here is real market example: I compared, i.e. took the ratio of Vanguard's S&P 500 ETF (VOO) closing price to the S&P 500 Index closing price from sep 9, (2010-2014), after accounting for the VOO 2013 split. The VOO pays a quarterly dividend(about 2%/year), the S&P is an index, hence no dividend. The VOO share price, reduced each quarter by the dividend, still grew more than the S&P each year except 2012 to 2013, but looking at the entire 4yr period the VOO share price grew 80.3987% while S&P grew 80.083% (1/3 of 1% more for VOO). VOO does drop about 1/2% relative to S&P on every ex date, but obviously it makes it up. There are other forces working on VOO. VOO is trade-able, therefore subject to supply/demand pressures, while the S&P 500 is not. So for the VOO ETF the data does not indicate pointless dividends but instead implies dividends are free money. StockCharts.com supports this. S&P500 for last 1244 days (9/8/2010) shows 90% growth http://stockcharts.com/freecharts/perf.php?%24SPX while VOO for last 1244 days shows 105% growth http://stockcharts.com/freecharts/perf.php?VOO",
"title": ""
},
{
"docid": "266ea3f62c2d646c5f51f9743c8634ca",
"text": "I asked this question in this weeks question thread. But I'll ask it here too. If the buy side here is an etf and has an expense ratio of x, does the rebate offset the expense or is accounted some other way? Another way, Is the rebate profit for the fund manager or the fund? Does it get disclosed somehow?",
"title": ""
},
{
"docid": "e60c30bb513745a94722a086cfa2fad4",
"text": "\"What you seem to want is a dividend reinvestment plan (DRIP). That's typically offered by the broker, not by the ETF itself. Essentially this is a discounted purchase of new shares when you're dividend comes out. As noted in the answer by JoeTaxpayer, you'll still need to pay tax on the dividend, but that probably won't be a big problem unless you've got a lot of dividends. You'll pay that out of some other funds when it's due. All DRIPs (not just for ETFs) have potential to complicate computation of your tax basis for eventual sale, so be aware of that. It doesn't have to be a show-stopper for you, but it's something to consider before you start. It's probably less of a problem now than it used to be since brokers now have to report your basis on the 1099-B in the year of sale, reducing your administrative burden (if you trust them to get it right). Here's a list of brokerages that were offering this from a top-of-the-search-list article that I found online: Some brokerages, including TD Ameritrade, Vanguard, Scottrade, Schwab and, to a lesser extent, Etrade, offer ETF DRIPs—no-cost dividend reinvestment programs. This is very helpful for busy clients. Other brokerages, such as Fidelity, leave ETF dividend reinvestment to their clients. Source: http://www.etf.com/sections/blog/23595-your-etf-has-drip-drag.html?nopaging=1 Presumably the list is not constant. I almost didn't included but I thought the wide availability (at least as of the time of the article's posting) was more interesting than any specific broker on it. You'll want to do some research before you choose a broker to do this. Compare fees for sure, but also take into account other factors like how soon after the dividend they do the purchase (is it the ex-date, the pay date, or something else?). A quick search online should net you several decent articles with more information. I just searched on \"\"ETF DRIP\"\" to check it out.\"",
"title": ""
},
{
"docid": "d37d9a994626f347749725d7d6066a17",
"text": "With the disclaimer that I am not a technician, I'd answer yes, it does. SPY (for clarification, an ETF that reflects the S&P 500 index) has dividends, and earnings, therefore a P/E and dividend yield. It would follow that the tools technicians use, such as moving averages, support and resistance levels also apply. Keep in mind, each and every year, one can take the S&P stocks and break them up, into quintiles or deciles based on return and show that not all stock move in unison. You can break up by industry as well which is what the SPDRs aim to do, and observe the movement of those sub-groups. But, no, not all the stocks will perform the way the index is predicted to. (Note - If a technician wishes to correct any key points here, you are welcome to add a note, hopefully, my answer was not biased)",
"title": ""
},
{
"docid": "c382ab89f323f5aa80febf3f096bc883",
"text": "A DRIP plan with the ETF does just that. It provides cash (the dividends you are paid) back to the fund manager who will accumulate all such reinvested dividends and proportionally buy more shares of stock in the ETF. Most ETFs will not do this without your approval, as the dividends are taxed to you (you must include them as income for that year if this is in a taxable account) and therefore you should have the say on where the dividends go.",
"title": ""
},
{
"docid": "2d6c3b768179744cbae7673ecd47ecee",
"text": "The expense ratio is stated as an annual figure but is often broken down to be taken out periodically of the fund's assets. In traditional mutual funds, there will be a percent of assets in cash that can be nibbled to cover the expenses of running the fund and most deposits into the fund are done in cash. In an exchange-traded fund, new shares are often created through creation/redemption units which are baskets of securities that make things a bit different. In the case of an ETF, the dividends may be reduced by the expense ratio as the trading price follows the index usually. Expense ratios can vary as in some cases there may be initial waivers on new funds for a time period to allow them to build an asset base. There is also something to be said for economies of scale that allow a fund to have its expense ratio go down over time as it builds a larger asset base. These would be noted in the prospectus and annual reports of the fund to some degree. SPDR Annual Report on page 312 for the Russell 3000 ETF notes its expense ratio over the past 5 years being the following: 0.20% 0.20% 0.22% 0.20% 0.21% Thus, there is an example of some fluctuation if you want a real world example.",
"title": ""
},
{
"docid": "7c73f3efee233cebf09efa70a897dd2c",
"text": "It may be true for a bond fund. But it is not true for bond etf. Bond etf will drop by the same amount when it distribute dividend on ex-dividend date.",
"title": ""
},
{
"docid": "360b618f715186825da5a27f9163b026",
"text": "Your ETF will return the interest as dividends. If you hold the ETF on the day before the Ex-Dividend date, you will get the dividend. If you sell before that, you will not. Note that at least one other answer to this question is wrong. You do NOT need to hold on the Record date. There is usually 2 days (or so) between the ex-date and the record date, which corresponds to the number of days it takes for your trade to settle. See the rules as published by the SEC: http://www.sec.gov/answers/dividen.htm",
"title": ""
},
{
"docid": "55a0bf6bc65d807b555cb98d1d2a6053",
"text": "Your best bet is to remove the excess contribution. Your broker should have forms to do that. There is a 6% tax on the excess contributions for each year that it remains uncorrected. It would be better to just eat the $25 fee and get rid of any future headaches.",
"title": ""
},
{
"docid": "43c7802718feab88d1054220636e2c0d",
"text": "Some other suggestions: Index-tracking mutual funds. These have the same exposure as ETFs, but may have different costs; for example, my investment manager (in the UK) charges a transaction fee on ETFs, but not funds, but caps platform fees on ETFs and not funds! Target date funds. If you are saving for a particular date (often retirement, but could also be buying a house, kids going to college, mid-life crisis motorbike purchase, a luxury cruise to see an eclipse, etc), these will automatically rebalance the investment from risk-tolerant (ie equities) to risk-averse (ie fixed income) as the date approaches. You can get reasonably low fees from Vanguard, and i imagine others. Income funds/ETFs, focusing on stocks which are expected to pay a good dividend. The idea is that a consistent dividend helps smooth out volatility in prices, giving you a more consistent return. Historically, that worked pretty well, but given fees and the current low yields, it might not be smart right now. That said Vanguard Equity Income costs 0.17%, and i think yields 2.73%, which isn't bad.",
"title": ""
},
{
"docid": "418c1aba4dd73fbeabded92cc00ddb0c",
"text": "The question is valid, you just need to work backwards. After how much money-time will the lower expense offset the one time fee? Lower expenses will win given the right sum of money and right duration for the investment.",
"title": ""
},
{
"docid": "21b0a09f26272db9528e08a4a7e3437a",
"text": "\"This has been answered countless times before: One example you may want to look at is DGRO. It is an iShares ETF that many discount brokers trade for free. This ETF: offers \"\"exposure to U.S. stocks focused on dividend growth\"\".\"",
"title": ""
},
{
"docid": "0b8333e65a4904eda82fab6b725587ca",
"text": "Generally, ETFs and mutual funds don't pay taxes (although there are some cases where they do, and some countries where it is a common case). What happens is, the fund reports the portion of the gain attributed to each investor, and the investor pays the tax. In the US, this is reported to you on 1099-DIV as capital gains distribution, and can be either short term (as in the scenario you described), long term, or a mix of both. It doesn't mean you actually get a distribution, though, but if you don't - it reduces your basis.",
"title": ""
},
{
"docid": "95bd051eec913747fac08c2007034758",
"text": "\"Dividends can also be automatically reinvested in your stock holding through a DRIP plan (see the wikipedia link for further details, wiki_DRIP). Rather than receiving the dividend money, you \"\"buy\"\" additional stock shares your with dividend money. The value in the DRIP strategy is twofold. 1) your number of shares increases without paying transaction fees, 2) you increase the value of your holding by increasing number of shares. In the end, the RIO can be quite substantial due to the law of compounding interest (though here in the form of dividends). Talk with your broker (brokerage service provider) to enroll your dividend receiving stocks in a DRIP.\"",
"title": ""
},
{
"docid": "6dbb192aac9096a004b081e5518c1263",
"text": "There are a few ETFs that fall into the money market category: SHV, BIL, PVI and MINT. What normally looks like an insignificant expense ratio looks pretty big when compared to the small yields offered by these funds. The same holds for the spread and transaction fees. For that reason, I'm not sure if the fund route is worth it.",
"title": ""
}
] |
fiqa
|
31eb3d8bcb97f13930c318e7fd8222db
|
IRR vs. Interest Rates
|
[
{
"docid": "57d4127f36956d651366ce1fbfaec39e",
"text": "Yes, if your IRR is 5% per annum after three years then the total return (I prefer total rather than your use of actual) over those three years is 15.76%. Note that if you have other cashflows in and out, it gets a bit more complicated (e.g. using the XIRR function in Excel), but the idea is to find an effective annual percentage return that you're getting for your money.",
"title": ""
},
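A one-line check of the figure quoted above: a 5% annual IRR compounded over three years gives roughly a 15.76% total return.

```python
# Hedged sketch: compound a 5% annual IRR over 3 years into a total return.
annual_irr = 0.05
years = 3
total_return = (1 + annual_irr) ** years - 1
print(f"Total return over {years} years: {total_return:.2%}")   # about 15.76%
```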
{
"docid": "d9a638edb28c13980a548d2a47c26aad",
"text": "IRR is not subjective, this is a response to @Laythesmack, to his remark that IRR is subjective. Not that I feel a need to defend my position, but rather, I'm going to explain his. My company offered stock at a 15% discount. We would have money withheld from pay, and twice per year buy at that discount. Coworkers said it was a 15% gain. I offered some math. I started by saying that 100/85 was 17.6%, and that was in fact, the gain. But, the funds were held by the company for an average of 3 months, not 6, so that gain occurred in 3 months and I did the math 1.176^4 and resulted in 91.5% annual return. This is IRR. It's not that it's subjective, but it assumes the funds continue to be invested fully during the time. In our case the 91.5% was real in one sense, yet no one doubled their money in just over a year. Was the 91% useless? Not quite. It simply meant to me that coworkers who didn't participate were overlooking the fact that if they borrowed money at a reasonable rate, they'd exceed that rate, especially for the fact that credit lines are charged day to day. Even if they borrowed that money on a credit card, they'd come out ahead. IRR is a metric. It has no emotion, no personality, no goals. It's a number we can calculate. It's up to you to use it correctly.",
"title": ""
},
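A sketch of the annualization in the anecdote above: a 15% discount is a 100/85 − 1 ≈ 17.6% gain earned over roughly three months, and compounding it four times a year gives the roughly 91–92% annualized figure the passage quotes.

```python
# Hedged sketch of the ESPP annualization described above.
discount = 0.15
gain_per_period = 1 / (1 - discount) - 1        # ~17.6% over the ~3-month holding period
annualized = (1 + gain_per_period) ** 4 - 1     # four 3-month periods per year

print(f"Gain per 3-month period: {gain_per_period:.1%}")
print(f"Annualized (IRR-style):  {annualized:.1%}")
```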
{
"docid": "5c5700d815d1ffe9510d788c7d2f1a85",
"text": "Yes, assuming that your cash flow is constantly of size 5 and initial investment is 100, the following applies: IRR of 5% over 3 years: Value of CashFlows: 4.7619 + 4.5351 + 4.3192 = 13.6162 NPV: 100 - 13.6162 = 86.3838 Continuous compounding: 86.3838 * (1.05^3) = 100",
"title": ""
},
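A short check of the discounting above, with the same 5% rate and cash flows of 5 per year for three years; the layout of the printout is my own.

```python
# Hedged sketch of the present-value arithmetic above.
rate, cashflow, years = 0.05, 5.0, 3

pv_each = [cashflow / (1 + rate) ** t for t in range(1, years + 1)]
pv_total = sum(pv_each)                     # ~13.6162
remaining = 100 - pv_total                  # ~86.3838
print([round(v, 4) for v in pv_each], round(pv_total, 4))
print(f"Remaining principal grown 3 years at 5%: {remaining * (1 + rate) ** years:.2f}")   # ~100.00
```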
{
"docid": "a8f5c67f878e8aa0682ee18e0675e321",
"text": "IRR is subjective, if you could provide another metric instead of the IRR; then this would make sense. You can't spend IRR. For example, you purchase a property with a down payment; and the property provides cash-flow; you could show that your internal rate of return is 35%, but your actual rate of importance could be the RoR, or Cap Rate. I feel that IRR is very subjective. IRR is hardly looked at top MBA programs. It's studied, but other metrics are used, such as ROI, ROR, etc. IRR should be a tool that you visually compare to another metric. IRR can be very misleading, for example it's like the cash on cash return on an investment.",
"title": ""
}
] |
[
{
"docid": "993e74f21978e5fdadfd067d7ee9cd47",
"text": "According to my wife who used to work in the industry, since an investment mortgage is more likely to fail (they are just riskier) there are higher loan to value requirements and higher interest rates. They are just different products for different situations.",
"title": ""
},
{
"docid": "e25e337420c113aef3d69ee5b4815c3f",
"text": "Interest rates are always given annually, to make them comparable. If you prefer to calculate the rate or the total interest for the complete time, like 10 years or 15 years or 30 years, it is simple math, and it tells you the total you will pay, but it is not helpful for picking the better or even the right offer for your situation. Compare it to your car's gas mileage- what sense does it make to provide the information that a car will use 5000 gallons of gas over its lifetime? Is that better than a car that uses 6000 gallons (but may live 2 years longer?)",
"title": ""
},
{
"docid": "de786917a9584835bf7c24d2ad3ed4be",
"text": "\"I did a rough model and in terms of total $$ paid (interest + penalty - alternative investment income) both options are almost the same with the \"\"paying it all upfront\"\" being perhaps a $300 or so better ($9200 vs $8900) However, that doesn't factor in inflation or tax considerations. Personally I'd go with the \"\"no-penalty\"\" scenario since you have more flexibility and can adjust along the way if anything else comes up in the meantime.\"",
"title": ""
},
{
"docid": "6b949e7c4d790bedfd8add9e42eca3b2",
"text": "Generically, interest rates being charged are driven in large part by the central bank's rate and competition tends to keep similar loans priced fairly close to each other. Interest rates being paid are driven by what's needed to get folks to lend you their money (deposit in bank, purchase bonds) so it's again related. There certainly isn't very direct coupling, but in general interest rates of all sorts do tend to swing (very) roughly in the same direction at (very) roughly the same time... so the concept that interest rates of all types are rising or falling at any given moment is a simplification but not wholly unreasonable. If you want to know which interest rates a particular person is citing to back up their claim you really need to ask them.",
"title": ""
},
{
"docid": "ca9ff7c27a27a446f5031e35247d5294",
"text": "Asset prices are inversely related to interest rates. If you're valuing a business or a bond, if you use a lower interest rate you get a higher valuation. Historic equity returns benefit from a falling interest rate environment which won't be repeated as interest rates can only go so low. edit: typo",
"title": ""
},
{
"docid": "c19193e24bda7e5901b24d261c9f47e6",
"text": "What is much more likely is immediate or close to immediate investment. but this is exactly my point of contention with how they do things. I know for a fact that the money is immediately invested, which is why i find it wrong that interest for money collected in a given financial year is announced after the end of the next financial year. i was wondering if this was a common practice in other countries.",
"title": ""
},
{
"docid": "3ab2573cad4bde03574e290f5e8ed6ac",
"text": "\"I think this is a good question with no single right answer. For a conservative investor, possible responses to low rates would be: Probably the best response is somewhere in the middle: consider riskier investments for a part of your portfolio, but still hold on to some cash, and in any case do not expect great results in a bad economy. For a more detailed analysis, let's consider the three main asset classes of cash, bonds, and stocks, and how they might preform in a low-interest-rate environment. (By \"\"stocks\"\" I really mean mutual funds that invest in a diversified mixture of stocks, rather than individual stocks, which would be even riskier. You can use mutual funds for bonds too, although diversification is not important for government bonds.) Cash. Advantages: Safe in the short term. Available on short notice for emergencies. Disadvantages: Low returns, and possibly inflation (although you retain the flexibility to move to other investments if inflation increases.) Bonds. Advantages: Somewhat higher returns than cash. Disadvantages: Returns are still rather low, and more vulnerable to inflation. Also the market price will drop temporarily if rates rise. Stocks. Advantages: Better at preserving your purchasing power against inflation in the long term (20 years or more, say.) Returns are likely to be higher than stocks or bonds on average. Disadvantages: Price can fluctuate a lot in the short-to-medium term. Also, expected returns are still less than they would be in better economic times. Although the low rates may change the question a little, the most important thing for an investor is still to be familiar with these basic asset classes. Note that the best risk-adjusted reward might be attained by some mixture of the three.\"",
"title": ""
},
{
"docid": "9dc01201aa4269618c5e42e2e8990c96",
"text": "Both are correct depending on what you are really trying to evaluate. If you only want to understand how that particular investment you were taking money in and out of did by itself than you would ignore the cash. You might use this if you were thinking of replacing that particular investment with another but keeping the in/out strategy. If you want to understand how the whole investment strategy worked (both the in/out motion and the choice of investment) than you would definitely want to include the cash component as that is necessary for the strategy and would be your final return if you implemented that strategy. As a side note, neither IRR or CAGR are not great ways to judge investment strategies as they have some odd timing issues and they don't take into account risk.",
"title": ""
},
{
"docid": "1ce467dafc6ecdc97e17f36d18c0971e",
"text": "The year over year compounding in India has the potential to make up for interest rate parity. But 3% isn't really going to create convenient amounts of earnings either until you get to larger amounts.",
"title": ""
},
{
"docid": "992674f8684d5708dcff9648a574e10e",
"text": "I think sometimes this is simply ignorance. If my marginal tax rate is 25%, then I can either pay tax deductible interest of $10K or pay income tax of $2.5K. I think most americans don't realize that paying $10K of tax deductible interest (think mortgage) only saves them $2.5K in taxes. In other words, I'd be $7.5K ahead if I didn't have the debt, but did pay higher taxes.",
"title": ""
},
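A quick sketch of the deduction arithmetic above: at a 25% marginal rate, $10,000 of deductible interest only saves $2,500 in tax, leaving you $7,500 worse off than having no debt at all.

```python
# Hedged sketch of the tax-deduction arithmetic described above.
marginal_rate = 0.25
deductible_interest = 10_000

tax_saved = deductible_interest * marginal_rate
net_cost_of_debt = deductible_interest - tax_saved

print(f"Tax saved:            ${tax_saved:,.0f}")        # $2,500
print(f"Net cost of interest: ${net_cost_of_debt:,.0f}") # $7,500
```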
{
"docid": "aa5e66223e0d31931d90f77af9794929",
"text": "Q1: Which is better option and provide good returns between FD and RD ? There is no right or wrong answer here and depends on rates, convenience, exactness of duration, etc and other things. In general an FD would give you better return than RD. Q2: Am I liable to pay income tax on the interest earned ? If you have a NRE/NRO account you are not liable for tax on interest in India. Note you may still be liable to pay tax on this in the US",
"title": ""
},
{
"docid": "7f40db430f8f8ee4666972af247ef285",
"text": "\"they said the expected returns from the stock market are around 7-9%(ish). (emphasis added) The key word in your quote is expected. On average \"\"the market\"\" gains in the 7-9% range (more if you reinvest dividends), but there's a great deal of risk too, meaning that in any given year the market could be down 20% or be up 30%. Your student loan, on the other hand, is risk free. You are guaranteed to pay (lose) 4% a year in interest. You can't directly compare the expected return of a risk-free asset with the expected return of a risky asset. You can compare the risks of two assets with equal expected returns, and the expected returns of assets with equal risks, but you can't directly compare returns of assets with different risks. So in two years, you might be better off if you had invested the money versus paying the loan, or you might be much worse off. In ten years, your chances of coming out ahead are better, but still not guaranteed. What's confusing is I've heard that if you're investing, you should be investing in both stocks and bonds (since I'm young I wouldn't want to put much in bonds, though). So how would that factor in? Bonds have lower risk (uncertainty) than stocks, but lower expected returns. If you invest in both, your overall risk is lower, since sometimes (not always) the gain in stocks are offset by losses in bonds). So there is value in diversifying, since you can get better expected returns from a diversified portfolio than from a single asset with a comparable amount of risk. However, there it no risk-free asset that will have a better return than what you're paying in student loan interest.\"",
"title": ""
},
{
"docid": "cd32495b2fc65a7b03e82757110cf866",
"text": "\"The CBOE states, in an investor's guide to Interest Rate Options: The Options’ Underlying Values Underlying values for the option contracts are 10 times the underlying Treasury yields (rates)— 13-week T-bill yield (for IRX), 5-year T-note yield (for FVX), 10-year T-note yield (for TNX) and 30-year T-bond yield (for TYX). The Yahoo! rate listed is the actual Treasury yield; the Google Finance and CBOE rates reflect the 10 times value. I don't think there's a specific advantage to \"\"being contrary\"\", more likely it's a mistake, or just different.\"",
"title": ""
},
{
"docid": "85ec14f084fa130fad51e5b6d27fee4a",
"text": "> Does it make sense to calculate the IRR based on the outstanding value of the project, or just use the cash flows paid out? What is the outstanding value of the project based on? I'm guessing it is the PV of net cash flow? The timing of each cash outflow (i.e. investment) is crucial to calculating a proper IRR because of time value of money. Putting in $x each year for 49 years will give you a different figure from putting in $49x in the first year and zero for the next 48 years because a larger figure is tied up for a longer time period.",
"title": ""
},
{
"docid": "b18dfb2f980c7c6e0d47ae978440fba3",
"text": "\"The definition you cite is correct, but obscure. I prefer a forward looking definition. Consider the real investment. You make an original investment at some point in time. You make a series of further deposits and withdrawals at specified times. At some point after the last deposit/withdrawal, (the \"\"end\"\") the cash value of the investment is determined. Now, find a bank account that pays interest compounded daily. Possibly it should allow overdrafts where it charges the same interest rate. Make deposits and withdrawals to/from this account that match the investment payments in amount and date. At the \"\"end\"\" the value in this bank account is the same as the investment. The bank interest rate that makes this happen is the IRR for the investment...\"",
"title": ""
}
] |
fiqa
|
db2840834d82bcf6aec899932368298b
|
Why do many British companies offer a scrip dividend option in lieu of cash?
|
[
{
"docid": "28417c330fec9be359de5236701d740d",
"text": "There are quite a few reasons that a company may choose to pay dividends rather than hold cash [increasing the share value]. Of couse there are equally other set of reasons why a company may not want to give dividends and hold on to cash. Related question here Please explain the relationship between dividend amount, stock price, and option value?",
"title": ""
}
] |
[
{
"docid": "4fd5b5ed5fb4dd0fcbd1af43c9b6ee97",
"text": "Some investors (pension funds or insurance companies) need to pay out a certain amount of money to their clients. They need cash on a periodical basis, and thus prefer dividend paying stock more.",
"title": ""
},
{
"docid": "8251000cc2c3e8b95abfb04205e6fcc7",
"text": "\"The answer is Discounted Cash Flows. Companies that don't pay dividends are, ostensibly reinvesting their cash at returns higher than shareholders could obtain elsewhere. They are reinvesting in productive capacity with the aim of using this greater productive capacity to generate even more cash in the future. This isn't just true for companies, but for almost any cash-generating project. With a project you can purchase some type of productive assets, you may perform some kind of transformation on the good (or not), with the intent of selling a product, service, or in fact the productive mechanism you have built, this productive mechanism is typically called a \"\"company\"\". What is the value of such a productive mechanism? Yes, it's capacity to continue producing cash into the future. Under literally any scenario, discounted cash flow is how cash flows at distinct intervals are valued. A company that does not pay dividends now is capable of paying them in the future. Berkshire Hathaway does not pay a dividend currently, but it's cash flows have been reinvested over the years such that it's current cash paying capacity has multiplied many thousands of times over the decades. This is why companies that have never paid dividends trade at higher prices. Microsoft did not pay dividends for many years because the cash was better used developing the company to pay cash flows to investors in later years. A companies value is the sum of it's risk adjusted cash flows in the future, even when it has never paid shareholders a dime. If you had a piece of paper that obligated an entity (such as the government) to absolutely pay you $1,000 20 years from now, this $1,000 cash flows present value could be estimated using Discounted Cash Flow. It might be around $400, for example. But let's say you want to trade this promise to pay before the 20 years is up. Would it be worth anything? Of course it would. It would in fact typically go up in value (barring heavy inflation) until it was worth very close to $1,000 moments before it's value is redeemed. Imagine that this \"\"promise to pay\"\" is much like a non-dividend paying stock. Throughout its life it has never paid anyone anything, but over the years it's value goes up. It is because the discounted cash flow of the $1,000 payout can be estimated at almost anytime prior to it's payout.\"",
"title": ""
},
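A small sketch of the discounted-cash-flow point above: a promise of $1,000 in 20 years is worth roughly $400 today at an assumed discount rate of about 4.7% (that rate is chosen purely to reproduce the passage's "around $400" figure), and its value drifts up toward $1,000 as the payout date approaches.

```python
# Hedged sketch of discounting the $1,000-in-20-years promise described above.
def present_value(future_value, annual_rate, years):
    return future_value / (1 + annual_rate) ** years

assumed_rate = 0.047   # illustrative discount rate, not a recommendation
for years_left in (20, 10, 1, 0):
    pv = present_value(1000, assumed_rate, years_left)
    print(f"{years_left:2d} years to payout: worth about ${pv:,.0f} today")
```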
{
"docid": "abb4cdd47e8ddd5e34572e51cc065730",
"text": "Shareholders can [often] vote for management to pay dividends Shareholders are sticking around if they feel the company will be more valuable in the future, and if the company is a target for being bought out. Greater fool theory",
"title": ""
},
{
"docid": "cc91ea4c757c7222136a6d2fab185128",
"text": "Typically, preferred shares come with one or both different benefits - a disproportionate share of votes, say 10 votes per share vs the normal 1, or a preferred dividend. The vote preference is great for the owner(s) looking to go public, but not lose control of the company. Say, I am a Walton (of Walmart fame) and when I went public, I sold 80% of the (1000 share total) company. But, in creating the share structure, 20% of shares were assigned 10 votes each. 800 shares now trade with 800 votes, 200 shares have 10 votes each or 2000 votes. So, there are still the 1000 shares but 2800 votes. The 20% of shares now have 2000/2800 or 71% of the total votes. So, my shares are just less than half ownership, but over 78% of votes. Preferred dividend is as simple as that, buy Stock A for ownership, or (same company) Stock A preferred shares which have ownership and $1/yr dividend. Edited to show a bit more math. I use a simple example to call out a total 1000 shares. The percentages would be the same for a million or billion shares if 20% were a 10 vote preferred.",
"title": ""
},
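A short check of the dual-class vote arithmetic in the example above (1,000 total shares, 200 founder shares carrying 10 votes each, 800 public shares with 1 vote each).

```python
# Hedged sketch of the dual-class voting arithmetic described above.
total_shares = 1000
founder_shares = 200                                # 20% of the equity, 10 votes per share
public_shares = total_shares - founder_shares       # 1 vote per share

founder_votes = founder_shares * 10
total_votes = founder_votes + public_shares

print(f"Founder ownership:    {founder_shares / total_shares:.0%}")   # 20%
print(f"Founder voting power: {founder_votes / total_votes:.1%}")     # ~71.4%
```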
{
"docid": "f3efe3ae43a81233cc493fe7893ce776",
"text": "My answer is not specific, or even maybe applicable, to Microsoft. Companies don't want to cut dividends. So they have a fixed expense, but the cashflow that funds it might be quite lumpy, or cyclical, depending on the industry. Another, more general, issue is that taking on debt to retire shares is a capital allocation decision. A company needs capital to operate. This is why they went public in the first place, to raise capital. Debt is a cheaper form of capital than equity. Equity holders are last in line in a bankruptcy. Bondholders are at the front of the line. To compensate for this, equity holders require a larger return -- often called a hurdle rate. So why doesn't a company just use cheaper equity, and no debt? Some do. But consider that equity holders participate in the earnings, where bondholders just get the interest, nothing more. And because lenders don't participate in the potential upside, they introduce conditions (debt covenants) to help control their downside exposure. For a company, it's a balance, very much the same as personal finances. A reasonable amount of debt provides low-cost capital, which can be used to produce greater returns. But too much debt, and the covenants are breached, the debt is called due immediately, there's no cash to cover, and wham! bankruptcy. A useful measure, if a bit difficult to calculate, is a company's cost of capital, and the return on that capital. Cost of capital is a blended number taking both equity and debt into account. Good companies earn a return that is greater than their cost of capital. Seems obvious, but many companies don't succeed at this. In cases where this is persistent, the best move for shareholders would be for the company to dissolve and return all the capital. Unfortunately, as in the Railroad Tycoon example above, managers' incentives aren't always well aligned with shareholders, and they allocate capital in ways advantageous to themselves, and not the company.",
"title": ""
},
{
"docid": "1c0db525e29b179ccc16e47e332b7f84",
"text": "Numerous studies have actually shown that companies who pay dividends are much more reckless financially with returning capital to shareholders because they want to save face and maintain/grow the dividend. Buybacks are much more flexible and probably lead to better capital allocation decisions, in my opinion.",
"title": ""
},
{
"docid": "187da176de28134ca36a1b9726d3e13a",
"text": "The shareholders have a claim on the profits, but they may prefer that claim to be exercised in ways other than dividend payments. For example, they may want the company to invest all of its profits in growth, or they may want it to buy back shares to increase the value of the remaining shares, especially since dividends are generally taxed as income while an increase in the share price is generally taxed as a capital gain, and capital gains are often taxed at a lower rate than income.",
"title": ""
},
{
"docid": "48c4c09393444db71c1e41e3da89a24d",
"text": "\"A company has 100,000 shares and 100,000 unexercised call options (company issued). Share price and strike price both at $1. What country is this related to? I ask because, in the US, most people I know associate a \"\"call\"\" option with the instrument that is equivalent to 100 shares. So 100,000 calls would be 10,000,000 shares, which exceeds the number of shares you're saying the company has. I don't know if that means you pulled the numbers out of thin air, or whether it means you're thinking of a different type of option? Perhaps you meant incentive stock options meant to be given to employees? Each one of those is equivalent to a single share. They just aren't called \"\"call options\"\". In the rest of my answer, I'm going to assume you meant stock options. I assume the fact that these options exist will slow any price increases on the underlying shares due to potential dilution? I don't think the company can just create stock options without creating the underlying shares in the first place. Said another way, a more likely scenario is that company creates 200,000 shares and agrees to float 50% of them while reserving the other 50% as the pool for incentive employee stock. They then choose to give the employees options on the stock in the incentive pool, rather than outright grants of the stock, for various reasons. (One of which is being nice to the employees in regards to taxes since there is no US tax due at grant time if the strike price is the current price of the underlying stock.) An alternative scenario when the company shares are liquidly traded is that the company simply plans to buy back shares from the market in order to give employees their shares when options are exercised. In this case, the company needs the cash on hand, or cash flow to take money from, to buy those shares at current prices. Anyway, in either case, there is no dilution happening WHEN the options get exercised. Any dilution happened before or at the time the options were created. Meaning, the total number of shares in the company was already pre-set at an earlier time. As a result, the fact that the options exist in themselves will not slow price changes on the stock. However, price changes will be impacted by the total float of shares in the company, or the impact to cash flow if the company has to buy shares to redeem its option commitments. This is almost the same thing you're asking about, but it is technically different as to timing. If this is the case, can this be factored into any option pricing models like black-scholes? You're including the effect just by considering the total float of shares and net profits from cash flow when doing your modelling.\"",
"title": ""
},
{
"docid": "a93bf6e73cac1f5c7e8a250f6a3dae72",
"text": "Am I correct in understanding that a Scrip Dividend involves the issue of new shares instead of the purchase of existing shares? Yes. Instead of paying a cash dividend to shareholders, the company grants existing shareholders new shares at a previously determined price. This allows shareholders who join the program to obtain new shares without incurring transaction costs that would normally occur if they purchased these shares in the market. Does this mean that if I don't join this program, my existing shares will be diluted every time a Scrip Dividend is paid? Yes, because the number of shares has increased, so the relative percentage of shares in the company you hold will decrease if you opt-out of the program. The price of the existing shares will adjust so that the value of the company is essentially unchanged (similar to a stock split), but the number of outstanding shares has increased, so the relative weight of your shares declines if you opt out of the program. What is the benefit to the company of issuing Scrip Dividends? Companies may do this to conserve their cash reserves. Also, by issuing a scrip dividend, corporations could avoid the Advanced Corporation Tax (ACT) that they would normally pre-pay on their distributions. Since the abolition of the ACT in 1999, preserving cash reserves is the primary reason for a company to issue scrip dividends, as far as I know. Whether or not scrip dividends are actually a beneficial strategy for a company is debatable (this looks like a neat study, even though I've only skimmed it). The issue may be beneficial to you, however, because you might receive a tax benefit. You can sell the scrip dividend in the market; the capital gain from this sale may fall below the annual tax-free allowance for capital gains, in which case you don't pay any capital gains tax on that amount. For a cash dividend, however, there isn't a minimum taxable amount, so you would owe dividend tax on the entire dividend (and may therefore pay more taxes on a cash dividend).",
"title": ""
},
{
"docid": "edc7ef593efc8e63c3943b0bccda0122",
"text": "Instead of giving part of their profits back as dividends, management puts it back into the company so the company can grow and produce higher profits. When these companies do well, there is high demand for them as in the long term higher profits equates to a higher share price. So if a company invests in itself to grow its profits higher and higher, one of the main reasons investors will buy the shares, is in the expectation of future capital gains. In fact just because a company pays a dividend, would you still buy it if the share price kept decreasing year after year? Lets put it this way: Company A makes record profits year after year, continually keeps beating market expectations, its share price keeps going up, but it pays no dividend instead reinvests its profits to continually grow the business. Company B pays a dividend instead of reinvesting to grow the business, it has been surprising the market on the downside for a few years now, it has had some profit warnings lately and its share price has consistently been dropping for over a year. Which company would you be interested in buying out of the two? I know I would be interested in buying Company A, and I would definitely stay away from Company B. Company A may or may not pay dividends in the future, but if Company B continues on this path it will soon run out of money to pay dividends. Most market gains are made through capital gains rather than dividends, and most people invest in the hope the shares they buy go up in price over time. Dividends can be one attractant to investors but they are not the only one.",
"title": ""
},
{
"docid": "fa8e0c64174269d2bd8ace9c51271d15",
"text": "The upvoted answers fail to note that dividends are the only benefit that investors collectively receive from the companies they invest in. If you purchase a share for $100, and then later sell it for $150, you should note that there is always someone that purchases the same share for $150. So, you get $150 immediately, but somebody else has to pay $150 immediately. So, investors collectively did not receive any money from the transaction. (Yes, share repurchase can be used instead of dividends, but it can be considered really another form of paying dividends.) The fair value of a stock is the discounted value of all future dividends the stock pays. It is so simple! This shows why dividends are important. Somebody might argue that many successful companies like Berkshire Hathaway do not pay dividend. Yes, it is true that they don't pay dividend now but they will eventually have to start paying dividend. If they reinvest potential dividends continuously, they will run out of things to invest in after several hundred years has passed. So, even in this case the value of the stock is still the discounted value of all future dividends. The only difference is that the dividends are not paid now; the companies will start to pay the dividends later when they run out of things to invest in. It is true that in theory a stock could pay an unsustainable amount of dividend that requires financing it with debt. This is obviously not a good solution. If you see a company that pays dividend while at the same time obtaining more cash from taking more debt or from share issues, think twice whether you want to invest in such a company. What you need to do to valuate companies fairly is to estimate the amount of dividend that can sustain the expected growth rate. It is typically about 60% of the earnings, because a part of the earnings needs to be invested in future growth, but the exact figure may vary depending on the company. Furthermore, to valuate a company, you need the expected growth rate of dividends and the discount rate. You simply discount all future dividends, correcting them up by the expected dividend growth rate and correcting them down by the discount rate.",
"title": ""
},
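The last two sentences of the passage above describe discounting a growing dividend stream. A minimal Python sketch of that idea, using the standard Gordon growth simplification P = D1 / (r - g); the dividend, growth rate, and discount rate below are hypothetical numbers, not figures from the passage.

def gordon_growth_price(next_year_dividend: float, discount_rate: float, growth_rate: float) -> float:
    """Present value of a dividend stream growing forever at a constant rate."""
    if discount_rate <= growth_rate:
        raise ValueError("the discount rate must exceed the dividend growth rate")
    return next_year_dividend / (discount_rate - growth_rate)

# e.g. a sustainable $3.00 dividend expected next year, growing 4%/yr, discounted at 9%/yr
print(f"fair value estimate: ${gordon_growth_price(3.00, 0.09, 0.04):.2f}")  # $60.00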
{
"docid": "2b6cfd7b5ea58d48dc171d9ede3d46f0",
"text": "Often buyouts are paid for by the buyer issuing a load of new shares and giving those to the seller to pay them. Sometimes it could be all shares, sometimes all cash, or any mix in-between. If you believe in the future of the buyers' business model, you'll often get a load of shares at a discounted rate this way. If you do not believe in the buyers' future then you're getting shares that you think may be worth little or nothing some day, so cash would be better.",
"title": ""
},
{
"docid": "9ff4b83c8e5627b710d84964fc9b0a85",
"text": "\"This answer will expand a bit on the theory. :) A company, as an entity, represents a pile of value. Some of that is business value (the revenue stream from their products) and some of that is assets (real estate, manufacturing equipment, a patent portfolio, etc). One of those assets is cash. If you own a share in the company, you own a share of all those assets, including the cash. In a theoretical sense, it doesn't really matter whether the company holds the cash instead of you. If the company adds an extra $1 billion to its assets, then people who buy and sell the company will think \"\"hey, there's an extra $1 billion of cash in that company; I should be willing to pay $1 billion / shares outstanding more per share to own it than I would otherwise.\"\" Granted, you may ultimately want to turn your ownership into cash, but you can do that by selling your shares to someone else. From a practical standpoint, though, the company doesn't benefit from holding that cash for a long time. Cash doesn't do much except sit in bank accounts and earn pathetically small amounts of interest, and if you wanted pathetic amounts of interests from your cash you wouldn't be owning shares in a company, you'd have it in a bank account yourself. Really, the company should do something with their cash. Usually that means investing it in their own business, to grow and expand that business, or to enhance profitability. Sometimes they may also purchase other companies, if they think they can turn a profit from the purchase. Sometimes there aren't a lot of good options for what to do with that money. In that case, the company should say, \"\"I can't effectively use this money in a way which will grow my business. You should go and invest it yourself, in whatever sort of business you think makes sense.\"\" That's when they pay a dividend. You'll see that a lot of the really big global companies are the ones paying dividends - places like Coca-Cola or Exxon-Mobil or what-have-you. They just can't put all their cash to good use, even after their growth plans. Many people who get dividends will invest them in the stock market again - possibly purchasing shares of the same company from someone else, or possibly purchasing shares of another company. It doesn't usually make a lot of sense for the company to invest in the stock market themselves, though. Investment expertise isn't really something most companies are known for, and because a company has multiple owners they may have differing investment needs and risk tolerance. For instance, if I had a bunch of money from the stock market I'd put it in some sort of growth stock because I'm twenty-something with a lot of savings and years to go before retirement. If I were close to retirement, though, I would want it in a more stable stock, or even in bonds. If I were retired I might even spend it directly. So the company should let all its owners choose, unless they have a good business reason not to. Sometimes companies will do share buy-backs instead of dividends, which pays money to people selling the company stock. The remaining owners benefit by reducing the number of shares outstanding, so they own more of what's left. They should only do this if they think the stock is at a fair price, or below a fair price, for the company: otherwise the remaining owners are essentially giving away cash. (This actually happens distressingly often.) 
On the other hand, if the company's stock is depressed but it subsequently does better than the rest of the market, then it is a very good investment. The one nice thing about share buy-backs in general is that they don't have any immediate tax implications for the company's owners: they simply own a stock which is now more valuable, and can sell it (and pay taxes on that sale) whenever they choose.\"",
"title": ""
},
{
"docid": "ac18a23cf30f659b257d22786cc092b5",
"text": "\"As I understand it, a company raises money by sharing parts of it (\"\"ownership\"\") to people who buy stocks from it. It's not \"\"ownership\"\" in quotes, it's ownership in a non-ironic way. You own part of the company. If the company has 100 million shares outstanding you own 1/100,000,000th of it per share, it's small but you're an owner. In most cases you also get to vote on company issues as a shareholder. (though non-voting shares are becoming a thing). After the initial share offer, you're not buying your shares from the company, you're buying your shares from an owner of the company. The company doesn't control the price of the shares or the shares themselves. I get that some stocks pay dividends, and that as these change the price of the stock may change accordingly. The company pays a dividend, not the stock. The company is distributing earnings to it's owners your proportion of the earnings are equal to your proportion of ownership. If you own a single share in the company referenced above you would get $1 in the case of a $100,000,000 dividend (1/100,000,000th of the dividend for your 1/100,000,000th ownership stake). I don't get why the price otherwise goes up or down (why demand changes) with earnings, and speculation on earnings. Companies are generally valued based on what they will be worth in the future. What do the prospects look like for this industry? A company that only makes typewriters probably became less valuable as computers became more prolific. Was a new law just passed that would hurt our ability to operate? Did a new competitor enter the industry to force us to change prices in order to stay competitive? If we have to charge less for our product, it stands to reason our earnings in the future will be similarly reduced. So what if the company's making more money now than it did when I bought the share? Presumably the company would then be more valuable. None of that is filtered my way as a \"\"part owner\"\". Yes it is, as a dividend; or in the case of a company not paying a dividend you're rewarded by an appreciating value. Why should the value of the shares change? A multitude of reasons generally revolving around the company's ability to profit in the future.\"",
"title": ""
},
{
"docid": "4275283d6083a46b9904a9ea71360cbc",
"text": "Firstly a stock split is easy, for example each unit of stock is converted into 10 units. So if you owned 1% of the company before the stock split, you will still own 1% after the stock split, but have 10 times the number of shares. The company does not pay out any money when doing this and there is no effect on tax for the company or the share holder. Now onto stock dividend… When a company make a profit, the company gives some of the profit to the share holders as a dividend; this is normally paid in cash. An investor may then wish to buy more shares in the company using the money from the dividend. However buying shares used to have a large cost in broker charges etc. Therefore some companies allowed share holders to choose to have the dividend paid as shares. The company buys enough of their own shares to cover the payout, only having one set of broker charges and then sends the correct number of shares to each share holder that has opted for a stock dividend. (Along with any cash that was not enough to buy a complete share.) This made since when you had paper shares and admin costs where high for stock brokers. It does not make sense these days. A stock dividend is taxed as if you had been paid the dividend in cash and then brought the stock yourself.",
"title": ""
}
] |
fiqa
|
6dec3fca180a4e6ab9cd0400c4faaf8f
|
Are quarterly earnings released first via a press release on the investor website, via conference call, or does it vary by company?
|
[
{
"docid": "2b91ea9ba00641d019c71d2986da2f19",
"text": "the financial information is generally filed via SEDAR (Canada) or SEC (US) before the conference call with the investment community. This can take before either before the market opens or after the market closes. The information is generally distribute to the various newswire service and company website at the same time the filing is made with SEDAR/SEC.",
"title": ""
},
{
"docid": "5f5d0b22cf78bf5aa71207de175bdba3",
"text": "Companies typically release their earnings before the market opens, and then later host an analyst/investor conference call to discuss the results. Here's a link to an interesting article abstract on the subject: Disclosure Rules For Earnings Releases And Calls | Bowne Digest. Excerpt: In the aftermath of the Sarbanes-Oxley Act, the SEC changed regulations to bring quarterly earnings announcements in line with the generally heightened sensitivity to adequate disclosure. New regulations required that issuers file or furnish their earnings press releases on Form 8-K and conduct any related oral presentations promptly thereafter, to avoid a second 8-K. [...] Sample from a news release by The Coca Cola Company: ATLANTA, September 30, 2009 - The Coca-Cola Company will release third quarter and year-to-date 2009 financial results on Tuesday, October 20, before the stock market opens. The Company will host an investor conference call at 9:30 a.m. ( EDT ), on October 20. [...] Sample from a news release by Apple, Inc.: CUPERTINO, California—January 21, 2009—Apple® today announced financial results for its fiscal 2009 first quarter ended December 27, 2008. The Company posted record revenue of [...] Apple will provide live streaming of its Q1 2009 financial results conference call utilizing QuickTime®, Apple’s standards-based technology for live and on-demand audio and video streaming. The live webcast will begin at [...]",
"title": ""
},
{
"docid": "ae140b6307f4e621e74d4a2184730378",
"text": "Companies release their earnings reports over news agencies like Reuters, Dow Jones and Bloomberg before putting them on their website (which usually occurs a few minutes after the official dissemination of the report). This is because they have to make sure that all investors get the news at the same time (which is kind of guaranteed when official news channels are used). The conference call is usually a few hours after the earnings report release to discuss the results with analysts and investors.",
"title": ""
}
] |
[
{
"docid": "f0681a6e39199fc97f9881b1bd449ca6",
"text": "In the U.S., publicly traded companies are under the rules of Regulation Fair Disclosure, which says that a company must release information to all investors at the same time. The company website and social media both count as fair disclosure, because every investor has access to those outlets, but a press release newswire service could also be the first outlet. (What is forbidden by this regulation is the practice of releasing news first to the brokers, who could inform certain customers of the news early.) I think that the first outlet for press releases could be different for each company, depending on the internal procedures of the company. Some would update their website first, and others would wait to update the site until the press release hits the newswire first.",
"title": ""
},
{
"docid": "4289ceb981b00debab6378001ff515e2",
"text": "\"Share sales & purchases are accounted only on the balance sheet & cash flow statement although their effects are seen on the income statement. Remember, the balance sheet is like a snapshot in time of all accrued accounts; it's like looking at a glass of water and noting the level. The cash flow and income statements are like looking at the amount of water, \"\"actually\"\" and \"\"imaginary\"\" respectively, pumped in and out of the glass. So, when a corporation starts, it sells shares to whomever. The amount of cash received is accounted for in the investing section of the cash flow statement under the subheading \"\"issuance (retirement) of stock\"\" or the like, so when shares are sold, it is \"\"issuance\"\"; when a company buys back their shares, it's called \"\"retirement\"\", as cash inflows and outflows respectively. If you had a balance sheet before the shares were sold, you'd see under the \"\"equity\"\" heading a subheading common stock with a nominal (irrelevant) par value (this is usually something obnoxiously low like $0.01 per share used for ease of counting the shares from the Dollar amount in the account) under the subaccount almost always called \"\"common stock\"\". If you looked at the balance sheet after the sale, you'd see the number of shares in a note to the side. When shares trade publicly, the corporation usually has very little to do with it unless if they are selling or buying new shares under whatever label such as IPO, secondary offering, share repurchase, etc, but the corporation's volume from such activity would still be far below the activity of the third parties: shares are trading almost exclusively between third parties. These share sales and purchases will only be seen on the income statement under earnings per share (EPS), as EPS will rise and fall with stock repurchases and sales assuming income is held constant. While not technically part of the income statement but printed with it, the \"\"basic weighted average\"\" and \"\"diluted weighted average\"\" number of shares are also printed which are the weighted average over the reporting period of shares actually issued and expected if all promises to issue shares with employee stock options, grants, convertibles were made kept. The income statement is the accrual accounts of the operations of the company. It has little detail on investing (depreciation & appreciation) or financing (interest expenses & preferred dividends).\"",
"title": ""
},
{
"docid": "138081ec8dc672510864b024303858ca",
"text": "Whilst it is true that they do not have a conference call every time a rating is produced, the parameters of a natural oligopoly do indicate that there are negative effects of deviating too much from the other members of an oligopoly. There are instances of rating agencies (Moody's) giving lower ratings to punish the issuer for going elsewhere (Re Hannover), but usually a slightly lower rating may be acceptable and is usually corrected to be in line with the competitor shortly afterwards. The power, arguably, is with the issuer in this sense because they can take their business to the 3rd member (Fitch) if the rating is too low from one of the Big Two. The preservation of the 'Big Two', for so long, is arguably testament to the S&P and Moody's understanding of these parameters If the answer is not micromanaging, what do you think it is out of interest?",
"title": ""
},
{
"docid": "72f8406a31741459ff9869a0c5d52123",
"text": "\"Does your job give you access to \"\"confidential information\"\", such that you can only buy or sell shares in the company during certain windows? Employees with access to company financial data, resource planning databases, or customer databases are often only allowed to trade in company securities (or derivatives thereof) during certain \"\"windows\"\" a few days after the company releases its quarterly earnings reports. Even those windows can be cancelled if a major event is about to be announced. These windows are designed to prevent the appearance of insider trading, which is a serious crime in the United States. Is there a minimum time that you would need to hold the stock, before you are allowed to sell it? Do you have confidence that the stock would retain most of its value, long enough that your profits are long-term capital gains instead of short-term capital gains? What happens to your stock if you lose your job, retire, or go to another company? Does your company's stock price seem to be inflated by any of these factors: If any of these nine warning flags are the case, I would think carefully before investing. If I had a basic emergency fund set aside and none of the nine warning flags are present, or if I had a solid emergency fund and the company seemed likely to continue to justify its stock price for several years, I would seriously consider taking full advantage of the stock purchase plan. I would not invest more money than I could afford to lose. At first, I would cash out my profits quickly (either as quickly as allowed, or as quickly as lets me minimize my capital gains taxes). I would reinvest in more shares, until I could afford to buy as many shares as the company would allow me to buy at the discount. In the long-run, I would avoid having more than one-third of my net worth in any single investment. (E.g., company stock, home equity, bonds in general, et cetera.)\"",
"title": ""
},
{
"docid": "08c3f5e83dd7e845ab352290781bcd70",
"text": "Dividends are not paid immediately upon reception from the companies owned by an ETF. In the case of SPY, they have been paid inconsistently but now presumably quarterly.",
"title": ""
},
{
"docid": "5680b160ef451d1256d0d99b6011ba1a",
"text": "Look at the how the income statement is built. The stock price is nowhere on it. The net income is based on the revenue (money coming in) and expenses (money going out). Most companies do not issue stock all that often. The price you see quoted is third parties selling the stock to each other.",
"title": ""
},
{
"docid": "01dc0a97e9737837fc1a151aacdca3fe",
"text": "If the period is consistent for company X, but occurs in a different month as Company Y, it might be linked to the release of their annual report, or the payment of their annual dividend. Companies don't have to end their fiscal year near the end of the Calendar year, therefore these end of year events could occur in any month. The annual report could cause investors to react to the hard numbers of the report compared to what wall street experts have been predicting. The payment of an annual dividend will also cause a direct drop in the price of the stock when the payment is made. There will also be some movement in prices as the payment date approaches.",
"title": ""
},
{
"docid": "7a1af1f518ca2fda333f2639837459d9",
"text": "PE ratio is the current share price divided by the prior 4 quarters earnings per share. Any stock quote site will report it. You can also compute it yourself. All you need is an income statement and a current stock quote.",
"title": ""
},
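As a concrete illustration of the definition above, a few lines of Python; the share price and quarterly EPS figures are made up for the example.

# Trailing P/E: current price divided by the sum of the prior four quarters' EPS.
quarterly_eps = [1.10, 0.95, 1.25, 1.05]   # hypothetical figures
share_price = 87.40                        # hypothetical quote
trailing_pe = share_price / sum(quarterly_eps)
print(f"trailing P/E = {trailing_pe:.1f}") # ~20.1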
{
"docid": "332205f27c25ae4259976051970c26c8",
"text": "\"Filter by the filings when you look at the search results. The 10-K will include the annual report, which included fiscal year-end financial statements. Quarterly reports and statements are in the 10-Q filing. The filing will include a LOT of other information, but there should be a section called \"\"Financial Statements\"\" or something similar that will include all pertinent financials statements. You can also find \"\"normalized\"\" balance sheets and income statements on the \"\"finance\"\" pages of the main web search sites (Google, Yahoo, MSN) and other sites that provide stock quotes. If you're looking to do basic comparisons versus in-depth statement analysis those may be sufficient for you.\"",
"title": ""
},
{
"docid": "fba69109c372ce3a7f882968dd7b3e36",
"text": "Note that your link shows the shares as of March 31, 2016 while http://uniselect.com/content/files/Press-release/Press-Release-Q1-2016-Final.pdf notes a 2-for-1 stock split so thus you have to double the shares to get the proper number is what you are missing. The stock split occurred in May and thus is after the deadline that you quoted.",
"title": ""
},
{
"docid": "1215709f7759651dfa4fa316b87bc917",
"text": "The websites of the most publicly traded companies publish their quarterly and annual financials. Check the investor relations sections out at the ones you want to look at.",
"title": ""
},
{
"docid": "2f06e5113e47302d55798c67dc6474c7",
"text": "I would look on http://seekingalpha.com/currents/earnings. You can also get copies of the conference calls for each company you are looking at. What you referred to is the conference call. The people who usually ask questions are professional analysts. I would recommend getting the transcript as it is easier to highlight and keep records of. I hope that helps",
"title": ""
},
{
"docid": "ffcfbbbf77acfc7817be2bc3cc848775",
"text": "\"EPS is often earnings/diluted shares. That is counting shares as if all convertible securities (employee stock options for example) were converted. Looking at page 3 of Q4 2015 Reissued Earnings Press Release we find both basic ($1.13) and diluted EPS ($1.11). Dividends are not paid on diluted shares, but only actual shares. If we pull put this chart @ Yahoo finance, and hovering our mouse over the blue diamond with a \"\"D\"\", we find that Pfizer paid dividends of $0.28, $0.28, $0.28, $0.30 in 2015. Or $1.14 per share. Very close to the $1.13, non-diluted EPS. A wrinkle is that one can think of the dividend payment as being from last quarter, so the first one in 2015 is from 2014. Leaving us with $0.28, $0.28, $0.30, and unknown. Returning to page three of Q4 2015 Reissued Earnings Press Release, Pfizer last $0.03 per share. So they paid more in dividends that quarter than they made. And from the other view, the $0.30 cents they paid came from the prior quarter, then if they pay Q1 2016 from Q4 2015, then they are paying more in that view also.\"",
"title": ""
},
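A few lines of Python reproducing the comparison in the passage above, using only the numbers the passage itself quotes (the 2015 quarterly dividends and basic EPS).

# Numbers quoted in the passage above: Pfizer's 2015 quarterly dividends and basic EPS.
dividends_2015 = [0.28, 0.28, 0.28, 0.30]
basic_eps_2015 = 1.13

total_dividends = sum(dividends_2015)
print(f"dividends paid per share in 2015: ${total_dividends:.2f}")                  # $1.14
print(f"payout relative to basic EPS:     {total_dividends / basic_eps_2015:.0%}")  # ~101%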
{
"docid": "5a9de080444de75c710b8e60527623c7",
"text": "\"I'm trying to understand how an ETF manager optimized it's own revenue. Here's an example that I'm trying to figure out. ETF firm has an agreement with GS for blocks of IBM. They have agreed on daily VWAP + 1% for execution price. Further, there is a commission schedule for 5 mils with GS. Come month end, ETF firm has to do a monthly rebalance. As such must buy 100,000 shares at IBM which goes for about $100 The commission for the trade is 100,000 * 5 mils = $500 in commission for that trade. I assume all of this is covered in the expense ratio. Such that if VWAP for the day was 100, then each share got executed to the ETF at 101 (VWAP+ %1) + .0005 (5 mils per share) = for a resultant 101.0005 cost basis The ETF then turns around and takes out (let's say) 1% as the expense ratio ($1.01005 per share) I think everything so far is pretty straight forward. Let me know if I missed something to this point. Now, this is what I'm trying to get my head around. ETF firm has a revenue sharing agreement as well as other \"\"relations\"\" with GS. One of which is 50% back on commissions as soft dollars. On top of that GS has a program where if you do a set amount of \"\"VWAP +\"\" trades you are eligible for their corporate well-being programs and other \"\"sponsorship\"\" of ETF's interests including helping to pay for marketing, rent, computers, etc. Does that happen? Do these disclosures exist somewhere?\"",
"title": ""
},
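A short Python sketch of the (corrected) cost-basis arithmetic in the question above; all figures are the question's own hypothetical numbers.

# Hypothetical numbers from the question above.
shares = 100_000
vwap = 100.00
execution_price = vwap * 1.01          # agreed execution at VWAP + 1%
commission_per_share = 0.005           # "5 mils" = half a cent per share

cost_basis_per_share = execution_price + commission_per_share
total_commission = shares * commission_per_share

print(f"cost basis per share: ${cost_basis_per_share:.3f}")   # $101.005
print(f"total commission:     ${total_commission:,.2f}")      # $500.00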
{
"docid": "fdc2ab0a1a171b76514f8a99687db810",
"text": "I don't see how allowing usage of your vehicle is less support than giving money to buy their own vehicle. If that's the only vehicle your mother has - then you're supporting her. Quantifying that support may be difficult though, but if you are providing her all of her needs - it doesn't matter. If she does have income of her own, I do not think that you can put the actual amount you're paying as part of the calculation towards the 50% rule since she would otherwise have bought a much cheaper car. But if you pass the 50% threshold even without the car payments - then you're fine either way.",
"title": ""
}
] |
fiqa
|
4f97454f084b6430e31ae0a78b02a768
|
How and where can I deposit money to generate future payments / income?
|
[
{
"docid": "eacf7563b0bb74ec08e6a109be9e26ff",
"text": "Reversing your math, I am assuming you have $312K to work with. In that case, I would simply shop around your local banks and/or credit unions and have them compete for your money and you might be quite surprised how much they are willing to pay. A couple of months ago, you would be able to get about 4.25% from Israel Bonds in Canada on 5 years term (the Jubilee product, with minimum investment of $25K). It's a bit lower now, but you should still be able to get very good rates if you shop around tier-2 banks or credit unions (who are more hungry for capital than the well-funded tier-1 banks). Or you could look at preferred shares of a large corporation. They are different from common shares in the sense they are priced according to the payout rate (i.e. people buy it for the dividend). A quick screen from your favorite stock exchange ought to find you a few options. Another option is commercial bonds. You should be able to get that kind of return from investment grade (BBB- and higher) bonds on large corporations these days. I just did a quick glance at MarketWatch's Bond section (http://cxa.marketwatch.com/finra/BondCenter/Default.aspx) and found AAA grade bonds that will yield > 5%. You will need to investigate their underlying fundamentals, coupon rate and etc before investing (second thought, grab a introduction to bonds book from Chapters first). Hope these helps.",
"title": ""
},
{
"docid": "c842e205a840bf448e0a7df3c75c5bbe",
"text": "If you're in the USA and looking to retire in 10 years, pay your Social Security taxes? :P Just kidding. Do a search for Fixed Rate Annuities.",
"title": ""
}
] |
[
{
"docid": "c255f9fe7a02eec2d330e649199f09dc",
"text": "Unfortunately, in this market environment your goal is not very realistic. At the moment real interest rates are negative (and have been for some time). This means if you invest in something that will pay out for sure, you can expect to earn less than you lose through inflation. In other words, if you save your $50K, when you withdraw it in a few years you will be able to buy less with it then than you can now. You can invest in risky securities like stocks or mutual funds. These assets can easily generate 10% per year, but they can (and do) also generate negative returns. This means you can and likely will lose money after investing in them. There's an even better chance that you will make money, but that varies year by year. If you invest in something that expects to make 10% per year (meaning it makes that much on average), it will be extremely risky and many years it will lose money, perhaps a lot of it. That's the way risk is. Are you comfortable taking on large amounts of risk (good chances of losing a lot of your money)? You could make some kind of real investment. $50K is a little small to buy real estate, but you may be able to find something like real estate that can generate income, especially if you use it as a down payment to borrow from the bank. There is risk in being a landlord as well, of course, and a lot of work. But real investments like that are a reasonable alternative to financial markets for some people. Another possibility is to just keep it in your bank account or something else with no risk and take $5000 out per year. It will only last you 10 years that way, but if you are not too young, that will be a significant portion of your life. If you are young, you can work and add to it. Unfortunately, financial markets don't magically make people rich. If you make a lot of money in the market, it's because you took a risk and got lucky. If you make a comfortable amount with no risk, it means you invested in a market environment very different from what we see today. --------- EDIT ------------ To get an idea of what risk free investments (after inflation) earn per year at various horizons see this table at the treasury. At the time of this writing you would have to invest in a security with maturity almost 10 years in order to break even with inflation. Beating it by 10% or even 3% per year with minimal risk is a pipe dream.",
"title": ""
},
{
"docid": "a849b511991ca24f1b68207ffef4b33a",
"text": "How do I direct deposit my paycheck into a high yield financial vehicle, like lottery tickets? And can I roll over my winnings into more lottery tickets? I want to wait until I have a few billion before touching it, maybe in a year or two.",
"title": ""
},
{
"docid": "9b57b79376f59df43a6a51ee2b861ac6",
"text": "A credit card is essentially a contract where they will loan you money in an on demand basis. It is not a contract for you to loan them money. The money that you have overpaid is generally treated as if it is a payment for a charge that you have made that has not been processed yet. The bank can not treat that money as a deposit and thus leverage it make money them selves. You can open an account and get a debit card. This would allow you to accrue interest for your deposit while using your money. But if you find one willing to pay you 25% interest please share with the rest of us :)",
"title": ""
},
{
"docid": "95f8b0a9613586413cfb36902c06e781",
"text": "Genius answer: Don't spend more than you make. Pay off your outstanding debts. Put plenty away towards savings so that you don't need to rely on credit more than necessary. Guaranteed to work every time. Answer more tailored to your question: What you're asking for is not realistic, practical, logical, or reasonable. You're asking banks to take a risk on you, knowing based on your credit history that you're bad at managing debt and funds, solely based on how much cash you happen to have on hand at the moment you ask for credit or a loan or based on your salary which isn't guaranteed (except in cases like professional athletes where long-term contracts are in play). You can qualify for lower rates for mortgages with a larger down-payment, but you're still going to get higher rate offers than someone with good credit. If you plan on having enough cash around that you think banks would consider making you credit worthy, why bother using credit at all and not just pay for things with cash? The reason banks offer credit or low interest on loans is because people have proven themselves to be trustworthy of repaying that debt. Based on the information you have provided, the bank wouldn't consider you trustworthy yet. Even if you have $100,000 in cash, they don't know that you're not just going to spend it tomorrow and not have the ability to repay a long-term loan. You could use that $100,000 to buy something and then use that as collateral, but the banks will still consider you a default risk until you've established a credit history to prove them otherwise.",
"title": ""
},
{
"docid": "4767150d12ae946f266ade3beae6a7b0",
"text": "You could keep an eye on BankSimple perhaps? I think it looks interesting at least... too bad I don't live in the US... They are planning to create an API where you can do everything you can do normally. So when that is released you could probably create your own command-line interface :)",
"title": ""
},
{
"docid": "870f7f11ad028d9c36b07164d1596f6f",
"text": "\"> My issue understanding this is I've been told that banks actually don't hold 10% of the cash and lend the other 90% but instead hold the full 100% in cash and lend 900%. Is this accurate? That's the money multiplier effect being poorly described. You take a loan out, but that loan eventually makes its way to other banks as cash deposits, which then are loaned out, and go to other banks, and loaned, etc., so that the economy is \"\"running\"\" on 10x cash, where 1x is in physical cash, and the other 9x is in this deposit-loan-deposit phenomenon. > The issue I see with it is that it becomes exponential growth that is uncapped. Not true. If there is $1B outstanding \"\"physical\"\" cash (the money supply) with a 10% reserve, then the maximum amount of \"\"money\"\" flowing through is $1B / 10% = $10B. This assumes EVERYTHING legally possible is loaned out or saved in the banking system. As such, it represents a cap. If you have an Excel spreadsheet handy, you can easily model this out in four columns. Label the first row as follows: Deposit, Reserve, Cash Reserve, Loan Amount A2 will be your money supply. For simplicity, put $100. B2, your reserve column, will be 10%. C2 should be =A2 * B2, which will be the cash reserve in the bank. D2 should be A2 - C2, which is the new loan amount extended. A3 should be = D2, as the loans extended from the last step become deposits in the next. B3 = B2. Now, drag the formulas down, say, 500 rows. If you then sum the \"\"deposits\"\" column, it'll total $1,000. The cash reserve will total $100, and the loan amount will be $900. Thus, there is a cap.\"",
"title": ""
},
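The spreadsheet walkthrough above translates directly into a short loop. Below is a Python sketch under the passage's own assumptions ($100 initial deposit, 10% reserve ratio, 500 iterations); it illustrates the passage's simplified model, not how real banks account for reserves.

# Reproducing the spreadsheet described above: each loan returns to the banking
# system as a new deposit, 10% is held back as reserve, and the rest is loaned again.
initial_deposit = 100.0
reserve_ratio = 0.10

total_deposits = total_reserves = total_loans = 0.0
deposit = initial_deposit
for _ in range(500):
    reserve = deposit * reserve_ratio
    loan = deposit - reserve
    total_deposits += deposit
    total_reserves += reserve
    total_loans += loan
    deposit = loan  # the loan comes back as the next row's deposit

print(f"total deposits: ${total_deposits:,.2f}")  # ~$1,000.00
print(f"total reserves: ${total_reserves:,.2f}")  # ~$100.00
print(f"total loans:    ${total_loans:,.2f}")     # ~$900.00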
{
"docid": "e3597d5686151e5780cf14fe1fd20ac7",
"text": "Perfect super clear, thank you /u/xlct2 So it is like you buy a bond for $X, start getting interest, sell bond for $X :) I was thinking there could be a possibility of a bond working like a loan from a bank, that you going paying as time goes by :D",
"title": ""
},
{
"docid": "ac326aca2189c78f3ed3457661c6f291",
"text": "My daughter is two, and she has a piggy bank that regularly dines on my pocket change. When that bank is worth $100 or so I will make it a regular high yield savings account. Then I will either setup a regular $10/month transfer into it, or something depending on what we can afford. My plan is then to offer my kid an allowance when she can understand the concept of money. My clever idea is I will offer her a savings plan with the Bank of Daddy. If she lets me keep her allowance for the week, I will give her double the amount plus a percentage the next week. If she does it she will soon see the magic of saving money and how banks pay your for the privilege. I don't know when I will give her access to the savings account with actual cash. I will show it to her, and review it with her so she can track her money, but I need to know that she has some restraint before I open the gates to her.",
"title": ""
},
{
"docid": "308f51e6fffb971b0f16420cd23e042f",
"text": "For this scheme to work, you would require an investment with no chance of a loss. Money market accounts and short-term t-bills are about your only options. The other thing is that you will need to be very careful to never miss the payment date. One month's late charges will probably wipe out a few months' profit. The only other caveat, which I'm sure you've considered, is that having your credit maxed out will hurt your credit score.",
"title": ""
},
{
"docid": "92bc54545894a84958a397e020d8c194",
"text": "\"Nowadays, some banks in some countries offer things like temporary virtual cards for online payments. They are issued either free of charge or at a negligible charge, immediately, via bank's web interface (access to which might either be free or not, this varies). You get a separate account for the newly-issued \"\"card\"\" (the \"\"card\"\" being just a set of numbers), you transfer some money there (same web-interface), you use it to make payment(s), you leave $0 on that \"\"card\"\" and within a day or a month, it expires. Somewhat convenient and your possible loss is limited tightly. Check if your local banks offer this kind of service.\"",
"title": ""
},
{
"docid": "d8209f4c9de8d573f190b134f7b2fb0b",
"text": "\"What are the options available for safe, short-term parking of funds? Savings accounts are the go-to option for safely depositing funds in a way that they remain accessible in the short-term. There are many options available, and any recommendations on a specific account from a specific institution depend greatly on the current state of banks. As you're in the US, If you choose to save funds in a savings account, it's important that you verify that the account (or accounts) you use are FDIC insured. Also be aware that the insurance limit is $250,000, so for larger volumes of money you may need to either break up your savings into multiple accounts, or consult a Accredited Investment Fiduciary (AIF) rather than random strangers on the internet. I received an inheritance check... Money is a token we exchange for favors from other people. As their last act, someone decided to give you a portion of their unused favors. You should feel honored that they held you in such esteem. I have no debt at all and aside from a few deferred expenses You're wise to bring up debt. As a general answer not geared toward your specific circumstances: Paying down debt is a good choice, if you have any. Investment accounts have an unknown interest rate, whereas reducing debt is guaranteed to earn you the interest rate that you would have otherwise paid. Creating new debt is a bad choice. It's common for people who receive large windfalls to spend so much that they put themselves in financial trouble. Lottery winners tend to go bankrupt. The best way to double your money is to fold it in half and put it back in your pocket. I am not at all savvy about finances... The vast majority of people are not savvy about finances. It's a good sign that you acknowledge your inability and are willing to defer to others. ...and have had a few bad experiences when trying to hire someone to help me Find an AIF, preferably one from a largish investment firm. You don't want to be their most important client. You just want them to treat you with courtesy and give you simple, and sound investment advice. Don't be afraid to shop around a bit. I am interested in options for safe, short \"\"parking\"\" of these funds until I figure out what I want to do. Apart from savings accounts, some money market accounts and mutual funds may be appropriate for parking funds before investing elsewhere. They come with their own tradeoffs and are quite likely higher risk than you're willing to take while you're just deciding what to do with the funds. My personal recommendation* for your specific circumstances at this specific time is to put your money in an Aspiration Summit Account purely because it has 1% APY (which is the highest interest rate I'm currently aware of) and is FDIC insured. I am not affiliated with Aspiration. I would then suggest talking to someone at Vanguard or Fidelity about your investment options. Be clear about your expectations and don't be afraid to simply walk away if you don't like the advice you receive. I am not affiliated with Vanguard or Fidelity. * I am not a lawyer, fiduciary, or even a person with a degree in finances. For all you know I'm a dog on the internet.\"",
"title": ""
},
{
"docid": "3b92ddb76f337b877c0bd43c2cf267c2",
"text": "Another option is the new 'innovative finance isa' that allow you to put a wrapper round peer to peer lending platform investments. See Zopa, although I don't think they have come out with an ISA yet.",
"title": ""
},
{
"docid": "3fefad3681891f2aff20504b8134d854",
"text": "\"Yes, you can usually deposit/pay money into a credit card account in advance. They'll use it to pay any open debt; if there's money left over they'll carry it as a credit towards future changes. (\"\"Usually\"\" added in response to comments that some folks have been unable to do this -- though whether that was really policy or just limitation if web interface is unclear. Could be tested by simply sending them an overpayment as your next check and seeing whether they carry it as a credit or return the excess.)\"",
"title": ""
},
{
"docid": "6e6eb756cc10517e78138928fe576fa8",
"text": "\"Depositum irregulare is a Latin phrase that simply means \"\"irregular deposit.\"\" It's a concept from ancient Roman contract law that has a very narrow scope and doesn't actually apply to your example. There are two distinct parts to this concept, one dealing with the notion of a deposit and the other with the notion of irregularity. I'll address them both in turn since they're both relevant to the tax issue. I also think that this is an example of the XY problem, since your proposed solution (\"\"give my money to a friend for safekeeping\"\") isn't the right solution to your actual problem (\"\"how can I keep my money safe\"\"). The currency issue is a complication, but it doesn't change the fact that what you're proposing probably isn't a good solution. The key word in my definition of depositum irregulare is \"\"contract\"\". You don't mention a legally binding contract between you and your friend; an oral contract doesn't qualify because in the event of a breach, it's difficult to enforce the agreement. Legally, there isn't any proof of an oral agreement, and emotionally, taking your friend to court might cost you your friendship. I'm not a lawyer, but I would guess that the absence of a contract implies that even though in the eyes of you and your friend, you're giving him the money for \"\"safekeeping,\"\" in the eyes of the law, you're simply giving it to him. In the US, you would owe gift taxes on these funds if they're higher than a certain amount. In other words, this isn't really a deposit. It's not like a security deposit, in which the money may be held as collateral in exchange for a service, e.g. not trashing your apartment, or a financial deposit, where the money is held in a regulated financial institution like a bank. This isn't a solution to the problem of keeping your money safe because the lack of a contract means you incur additional risk in the form of legal risk that isn't present in the context of actual deposits. Also, if you don't have an account in the right currency, but your friend does, how are you planning for him to store the money anyway? If you convert your money into his currency, you take on exchange rate risk (unless you hedge, which is another complication). If you don't convert it and simply leave it in his safe, house, car boot, etc. you're still taking on risk because the funds aren't insured in the event of loss. Furthermore, the money isn't necessarily \"\"safe\"\" with your friend even if you ignore all the risks above. Without a written contract, you have little recourse if a) your friend decides to spend the money and not return it, b) your friend runs into financial trouble and creditors make claim to his assets, or c) you get into financial trouble and creditors make claims to your assets. The idea of giving money to another individual for safekeeping during bankruptcy has been tested in US courts and ruled invalid. If you do decide to go ahead with a contract and you do want your money back from your friend eventually, you're in essence loaning him money, and this is a different situation with its own complications. Look at this question and this question before loaning money to a friend. Although this does apply to your situation, it's mostly irrelevant because the \"\"irregular\"\" part of the concept of \"\"irregular deposit\"\" is a standard feature of currencies and other legal tender. 
It's part of the fungibility of modern currencies and doesn't have anything to do with taxes if you're only giving your friend physical currency. If you're giving him property, other assets, etc. for \"\"safekeeping\"\" it's a different issue entirely, but it's still probably going to be considered a gift or a loan. You're basically correct about what depositum irregulare means, but I think you're overestimating its reach in modern law. In Roman times, it simply refers to a contract in which two parties made an agreement for the depositor to deposit money or goods with the depositee and \"\"withdraw\"\" equivalent money or goods sometime in the future. Although this is a feature of the modern deposit banking system, it's one small part alongside contract law, deposit insurance, etc. These other parts add complexity, but they also add security and risk mitigation. Your arrangement with your friend is much simpler, but also much riskier. And yes, there probably are taxes on what you're proposing because you're basically giving or loaning the money to your friend. Even if you say it's not a loan or a gift, the law may still see it that way. The absence of a contract makes this especially important, because you don't have anything speaking in your favor in the event of a legal dispute besides \"\"what you meant the money to be.\"\" Furthermore, the money isn't necessarily safe with your friend, and the absence of a contract exacerbates this issue. If you want to keep your money safe, keep it in an account that's covered by deposit insurance. If you don't have an account in that currency, either a) talk to a lawyer who specializes in situation like this and work out a contract, or b) open an account with that currency. As I've stated, I'm not a lawyer, so none of the above should be interpreted as legal advice. That being said, I'll reiterate again that the concept of depositum irregulare is a concept from ancient Roman law. Trying to apply it within a modern legal system without a contract is a potential recipe for disaster. If you need a legal solution to this problem (not that you do; I think what you're looking for is a bank), talk to a lawyer who understands modern law, since ancient Roman law isn't applicable to and won't pass muster in a modern-day court.\"",
"title": ""
},
{
"docid": "b37c9c4fd5f5cccfc979693e5c5889fa",
"text": "\"This is a supplement to the additional answers. One way to generate \"\"passive\"\" income is by taking advantage of high interest checking / saving accounts. If you need to have a sum of liquid cash readily available at all times, you might as well earn the most interest you can while doing so. I'm not on any bank's payroll, so a Google search can yield a lot on this topic and help you decide what's in your best interest (pun intended). More amazingly, some banks will reward you straight in cash for simply using their accounts, barring some criteria. There's one promotion I've been taking advantage of which provides me $20/month flat, irrespective of my account balance. Again, I am not on anyone's payroll, but a Google search can be helpful here. I'd call these passive, as once you meet the promotion criteria, you don't need to do anything else but wait for your money. Of course, none of this will be enough to live off of, but any extra amount with minimal to zero time investment seems to be a good deal. (if people do want links for the claims I make, I will put these up. I just do not want to advertise directly for any banks or companies.)\"",
"title": ""
}
] |
fiqa
|
65442d79c091c092e9688c5022f60a88
|
Why buying an inverse ETF does not give same results as shorting the ETF
|
[
{
"docid": "b6d4a65012a0447327893fd782a79b46",
"text": "Suppose that the ETF is currently at a price of $100. Suppose that the next day it moves up 10% (to a price of $110) and the following day it moves down 5% (to a price of $104.5). Over these two days the ETF has had a net gain of 4.5% from its original price. The inverse ETF reverses the daily gains/losses of the base ETF. Suppose for simplicity that the inverse ETF also starts out at a price of $100. So on the first day it goes down 10% (to $90) and on the second day it goes up 5% (to $94.5). Thus over the two days the inverse ETF has had a net loss of 5.5%. The specific dollar amounts do not matter here. The result is that the ETF winds up at 110%*95% = 104.5% of its original price and the inverse ETF is at 90%*105% = 94.5% of its original price. A similar example is given here. As suggested by your quote, this is due to compounding. A gain of X% followed by a loss of Y% (compounded on the gain) is not in general the same as a loss of X% followed by a gain of Y% (compounded on the loss). Or, more simply put, if something loses 10% of its value and then gains 10% of its new value, it will not return to its original value, because the 10% it gained was 10% of its decreased value, so it's not enough to bring it all the way back up. Likewise if it gains 10% and then loses 10%, it will go slightly below its original value (since it lost 10% of its newly increased value).",
"title": ""
},
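The two-day arithmetic in the passage above (a 10% up day followed by a 5% down day) can be checked with a short script. This is a minimal sketch, not tied to any real fund; the $100 starting prices and the daily moves are the hypothetical values from the answer.

```python
# Reproduce the two-day example: the base ETF gains 10%, then loses 5%;
# the inverse ETF applies the opposite return each day.
daily_returns = [0.10, -0.05]          # hypothetical daily moves of the base ETF

etf = 100.0                            # base ETF starting price
inverse = 100.0                        # inverse ETF starting price (same for simplicity)

for r in daily_returns:
    etf *= (1 + r)                     # compound the base ETF's daily return
    inverse *= (1 - r)                 # inverse ETF reverses each daily return

print(f"Base ETF:    {etf:.2f}  ({etf - 100:+.1f}%)")        # 104.50 -> +4.5%
print(f"Inverse ETF: {inverse:.2f}  ({inverse - 100:+.1f}%)")  # 94.50 -> -5.5%
```

A static short of the same size would only have lost the 4.5% the base ETF gained, which is exactly the asymmetry the answer describes.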
{
"docid": "458035a733cd81b90a29dc552cdd3bfb",
"text": "\"The most fundamental answer is that when you short a stock (or an ETF), you short a specific number of shares on a specific day, and you probably don't adjust this much as the price wobbles goes up and down. But an inverse fund is not tied to a specific start date, like your own transaction is. It adjusts on an ongoing basis to maintain its full specified leverage at all times. If the underlying index goes up, it has to effectively \"\"buy in\"\" because its collateral is no longer sufficient to support its open position. On the other hand, if the underlying index goes down, that frees up collateral which is used to effectively short-sell more of the underlying. So by design it will buy high and sell low, and so any volatility will pump money out of the fund. I say \"\"effectively\"\" because inverse funds use derivatives and contracts, rather than actually shorting the underlying security. Which brings up the less fundamental issue. These derivatives and contracts are relatively opaque; the counter-parties are in it for their own benefit, not yours; and the people who run the fund get their expenses regardless of how you do, and they are hard for you to monitor. This is a hazardous combination.\"",
"title": ""
}
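The daily-reset behaviour described above can also be sketched with made-up random returns, comparing an idealized inverse fund (which resets to -1x exposure every day, ignoring fees and financing costs) with a static short position opened on day one. All numbers here are invented for illustration.

```python
import random

random.seed(42)
days = 250
# Hypothetical, volatile but trendless daily returns for the underlying index.
returns = [random.gauss(0.0, 0.02) for _ in range(days)]

index = 100.0
inverse_fund = 100.0   # idealized fund: resets to -1x exposure each day, no fees

for r in returns:
    index *= (1 + r)
    inverse_fund *= (1 - r)

# A static short: sell $100 of the index on day one, buy it back at the end.
static_short_pnl = 100.0 - index

print(f"Index:             {index:8.2f}")
print(f"Inverse fund:      {inverse_fund:8.2f}  (started at 100)")
print(f"Static short P&L:  {static_short_pnl:8.2f}  (per $100 of initial exposure)")
```

Because each day contributes a pair of factors whose product is (1 + r)(1 - r) = 1 - r² ≤ 1, the product of the index price and the inverse fund's price can only drift down over time; that is the "volatility pumps money out of the fund" effect the passage describes.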
] |
[
{
"docid": "962ea288290efde34f5522ca7d5171a9",
"text": "Michael gave a good answer describing the transaction but I wanted to follow up on your questions about the lender. First, the lender does charge interest on the borrowed securities. The amount of interest can vary based on a number of factors, such as who is borrowing, how much are they borrowing, and what stock are they trying to borrow. Occasionally when you are trying to short a stock you will get an error that it is hard to borrow. This could be for a few reasons, such as there are already a large amount of people who have shorted your broker's shares, or your broker never acquired the shares to begin with (which usually only happens on very small stocks). In both cases the broker/lender doesnt have enough shares and may be unwilling to get more. In that way they are discriminating on what they lend. If a company is about to go bankrupt and a lender doesnt have any more shares to lend out, it is unlikely they will purchase more as they stand to lose a lot and gain very little. It might seem like lending is a risky business but think of it as occurring over decades and not months. General Motors had been around for 100 years before it went bankrupt, so any lender who had owned and been lending out GM shares for a fraction of that time likely still profited. Also this is all very simplified. JoeTaxpayer alluded to this in the comments but in actuality who is lending stock or even who owns stock is much more complicated and probably doesnt need to be explained here. I just wanted to show in this over-simplified explanation that lending is not as risky as it may first seem.",
"title": ""
},
{
"docid": "9fe25aa1854aa86f6ddc2763e3e763ba",
"text": "In addition to the higher risk as pointed out by @JamesRoth, you also need to consider that there are regulations against 'naked shorting' so you generally need to either own the security, or have someone that is willing to 'loan' the security to you in order to sell short. If you own a stock you are shorting, the IRS could view the transaction as a Sell followed by a buy taking place in a less than 30 day period and you could be subject to wash-sale rules. This added complexity (most often the finding of someone to loan you the security you are shorting) is another reason such trades are considered more advanced. You should also be aware that there are currently a number of proposals to re-instate the 'uptick rule' or some circuit-breaker variant. Designed to prevent short-sellers from driving down the price of a stock (and conducting 'bear raids etc) the first requires that a stock trade at the same or higher price as prior trades before you can submit a short. In the latter shorting would be prohibited after a stock price had fallen a given percentage in a given amount of time. In either case, should such a rule be (re)established then you could face limitations attempting to execute a short which you would not need to worry about doing simple buys or sells. As to vehicles that would do this kind of thing (if you are convinced we are in a bear market and willing to take the risk) there are a number of ETF's classified as 'Inverse Exchange Traded Funds (ETF's) for a variety of markets that via various means seek to deliver a return similar to that of 'shorting the market' in question. One such example for a common broad market is ticker SH the ProShares Short S&P500 ETF, which seeks to deliver a return that is the inverse of the S&P500 (and as would be predicted based on the roughly +15% performance of the S&P500 over the last 12 months, SH is down roughly -15% over the same period). The Wikipedia article on inverse ETF's lists a number of other such funds covering various markets. I think it should be noted that using such a vehicle is a pretty 'aggressive bet' to take in reaction to the belief that a bear market is imminent. A more conservative approach would be to simply take money out of the market and place it in something like CD's or Treasury instruments. In that case, you preserve your capital, regardless of what happens in the market. Using an inverse ETF OTOH means that if the market went bull instead of bear, you would lose money instead of merely holding your position.",
"title": ""
},
{
"docid": "8669ca18d1876d62225229827919ee84",
"text": "\"Depends on how far down the market is heading, how certain you are that it is going that way, when you think it will fall, and how risk-averse you are. By \"\"better\"\" I will assume you are trying to make the most money with this information that you can given your available capital. If you are very certain, the way that makes the most money for the least investment from the options you provided is a put. If you can borrow some money to buy even more puts, you will make even more. Use your knowledge of how far and when the market will fall to determine which put is optimal at today's prices. But remember that if the market stays flat or goes up you lose everything you put in and may owe extra to your creditor. A short position in a futures contract is also an easy way to get extreme leverage. The extremity of the leverage will depend on how much margin is required. Futures trade in large denominations, so think about how much you are able to put to risk. The inverse ETFs are less risky and offer less reward than the derivative contracts above. The levered one has twice the risk and something like twice the reward. You can buy those without a margin account in a regular cash brokerage, so they are easier in that respect and the transactions cost will likely be lower. Directly short selling an ETF or stock is another option that is reasonably accessible and only moderately risky. On par with the inverse ETFs.\"",
"title": ""
},
{
"docid": "ebe9f271f525dee886a9bb437c6430a4",
"text": "You can't make money on the way down if it was your money that bought the shares when the market was up. When you sell short, borrowing lets you tap into the value without paying for it. That way, when the price (hopefully) drops you profit from the difference. In your example, if you hadn't paid the £20 in the first place, then you would actually be up £5. But since you started with £20, you still show loss. As others said, borrowing is the definition of selling short. It is also simply the only way the math works. Of course, there is a large risk you must assume to enjoy benefiting from something you do not own!",
"title": ""
},
{
"docid": "dfca697bdc900ed9568f9ebb0b06581a",
"text": "It's actually quite simple. You're actually confusing two concept. Which are taking a short position and short selling itself. Basically when taking a short position is by believing that the stock is going to drop and you sell it. You can or not buy it back later depending on the believe it grows again or not. So basically you didn't make any profit with the drop in the price's value but you didn't lose money either. Ok but what if you believe the market or specific company is going to drop and you want to profit on it while it's dropping. You can't do this by buying stock because you would be going long right? So back to the basics. To obtain any type of profit I need to buy low and sell high, right? This is natural for use in long positions. Well, now knowing that you can sell high at the current moment and buy low in the future what do you do? You can't sell what you don't have. So acquire it. Ask someone to lend it to you for some time and sell it. So selling high, check. Now buying low? You promised the person you would return him his stock, as it's intangible he won't even notice it's a different unit, so you buy low and return the lender his stock. Thus you bought low and sold high, meaning having a profit. So technically short selling is a type of short position. If you have multiple portfolios and lend yourself (i.e. maintaining a long-term long position while making some money with a short term short-term strategy) you're actually short selling with your own stock. This happens often in hedge funds where multiple strategies are used and to optimise the transaction costs and borrowing fees, they have algorithms that clear (match) long and short coming in from different traders, algorithms, etc. Keep in mind that you while have a opportunities risk associated. So basically, yes, you need to always 'borrow' a product to be able to short sell it. What can happen is that you lend yourself but this only makes sense if:",
"title": ""
},
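For concreteness, here is a tiny numeric sketch of the borrow-sell-buy-back sequence described in the passage above, with made-up prices and ignoring borrow fees and margin interest.

```python
# Hypothetical short sale: borrow 10 shares, sell at 50, buy back at 40.
shares = 10
sell_price = 50.0        # price at which the borrowed shares are sold
buyback_price = 40.0     # price paid later to repurchase and return the shares

proceeds = shares * sell_price            # cash received up front
cost_to_cover = shares * buyback_price    # cash paid to buy the shares back

profit = proceeds - cost_to_cover         # 100.0 if the price fell as hoped
print(f"Short-sale profit: {profit:.2f}")
```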
{
"docid": "5dee461595cf49c9e87ebe15415b1ee2",
"text": "This is dumb on many levels. First are they really trying to blame the Flash Crash for investors leaving the market. It's 2014 and they claim in 17 of 25 months since the 2010 crash investors have withdrawn (not really sure what this means either, since shares don't disappear- does this just mean the market is up? It's unclear). Either way the article quoted is from 2012! And for a research team creating an empirical model, this is a very scary usage of small sample size, mistaking correlation for causation, etc. And the algo, though they don't give much info, seems to be the very essence of data mining. Pick a million signals and filter on those that are empirically the best. If you are an individual investor or a small fund thinking of purchasing something like this, first consider: if it really worked, why are the founders selling it when they could just use it themselves and profit from it? It's almost guaranteed not to work out of sample",
"title": ""
},
{
"docid": "6910613137c444c85fb4e476e25872dc",
"text": "I have heard of this, but then the broker is short the shares if they weren't selling them out of inventory, so they still want to accumulate the shares or a hedge before EOD most likely - In that case it may not be the client themselves, but that demand is hitting the market at some point if there isn't sufficient selling volume. Whether or not the broker ends up getting all of them below VWAP is a cost of marketing for them, how they expect to reliably get real size below vwap is my question.",
"title": ""
},
{
"docid": "6515286ee6ba2472db2ccfacf71192a3",
"text": "They're exchange traded debt, basically, not funds. E.g. from the NYSE: An exchange-traded note (ETN) is a senior unsecured debt obligation designed to track the total return of an underlying market index or other benchmark, minus investor fees. Whereas an ETF, in some way or another, is an equity product - which doesn't mean that they can only expose you to equity, but that they themselves are a company that you buy shares in. FCOR for example is a bond ETF, basically a company whose sole purpose is to own a basket of bonds. Contrast that to DTYS, a bear Treasury ETN, which is described as The ETNs are unsecured debt obligations of the issuer, Barclays Bank PLC, and are not, either directly or indirectly, an obligation of or guaranteed by any third party. Also from Barclays site: Because the iPath ETNs are debt securities, they do not have any voting rights. FCOR on the other hand is some sort of company owned/managed by a Fidelity trust, though my EDGAR skills are rusty. AGREEMENT made this 18th day of September, 2014, by and between Fidelity Merrimack Street Trust, a Massachusetts business trust which may issue one or more series of shares of beneficial interest (hereinafter called the Trust), on behalf of Fidelity Corporate Bond ETF (hereinafter called the Fund), and Fidelity Investments Money Management, Inc., a New Hampshire corporation (hereinafter called the Adviser) as set forth in its entirety below.",
"title": ""
},
{
"docid": "9381d589de0907189c958cae99ba34b6",
"text": "The ETF supply management policy is arcane. ETFs are not allowed to directly arbitrage their holdings against the market. Other firms must handle redemptions & deposits. This makes ETFs slightly costlier than the assets held. For ETFs with liquid holdings, its price will rarely vary relative to the holdings, slippage of the ETF's holdings management notwithstanding. This is because the firms responsible for depositing & redeeming will arbitrage their equivalent holdings of the ETF assets' prices with the ETF price. For ETFs with illiquid holdings, such as emerging markets, the ETF can vary between trades of the holdings. This will present sometimes large variations between the last price of the ETF vs the last prices of its holdings. If an ETF is shunned, its supply of holdings will simply drop and vice versa.",
"title": ""
},
{
"docid": "7e6f4f331cde178e6cbfb007797db5f9",
"text": "The risk of the particular share moving up or down is same for both. however in terms of mitigating the risk, Investor A is conservative on upside, ie will exit if he gets 10%, while is ready to take unlimited downside ... his belief is that things will not go worse .. While Investor B is wants to make at least 10% less than peak value and in general is less risk averse as he will sell his position the moment the price hits 10% less than max [peak value] So it more like how do you mitigate a risk, as to which one is wise depends on your belief and the loss appetite",
"title": ""
},
{
"docid": "40828f57fcd22be1419564583875d92f",
"text": "There are rules that prevent two of the reactive measures you suggest from occurring. First, on the date of and shortly following an IPO, there is no stock available to borrow for shorting. Second, there are no put options available for purchase. At least, none that are listed, of the sort you probably have in mind. In fact, within a day or two of the LinkedIn IPO, most (all?) of the active equity traders I know were bemoaning the fact that they couldn't yet do exactly what you described i.e. buying puts, or finding shares to sell short. There was a great deal of conviction that LinkedIn shares were overpriced, but scant means available to translate that market assessment into an influence of market value. This does not mean that the Efficient Markets Hypothesis is deficient. Equilibrium is reached quickly enough, once the market is able to clear as usual.",
"title": ""
},
{
"docid": "9ea59d67dcb34045c7694a346a08d840",
"text": "SeekingAlpha has a section dedicated to Short ETFs as well as others. In there you will find SH, and SDS. Both of which are inverse to the S&P 500. Edit: I linked to charts that compare SH and SDS to SPY.",
"title": ""
},
{
"docid": "44e190a836dc931608ac49786bc1e595",
"text": "The product type and transaction direction are not linked. When you short sell a stock, you are still dealing with shares of an equity security. When you take delivery of a bond, you are buying that bond and holding a long position even though it's a debt security.",
"title": ""
},
{
"docid": "c2071f544cb210bcd86f115eb46929cc",
"text": "There is no margin call. Inverse ETFs use derivatives that would lose value in the case you describe though this doesn't force a margin call as you may be misunderstanding how these funds are constructed.",
"title": ""
},
{
"docid": "928751b619c8acb14b05b9788c7ed7fb",
"text": "It's not quite identical, due to fees, stock rights, and reporting & tax obligations. But the primary difference is that a person could have voting rights in a company while maintaining zero economic exposure to the company, sometimes known as empty voting. As an abstract matter, it's identical in that you reduce your financial exposure whether you sell your stock or short it. So the essence of your question is fundamentally true. But the details make it different. Of course there are fee differences in how your broker will handle it, and also margin requirements for shorting. Somebody playing games with overlapping features of ownership, sales, and purchases, may have tax and reporting obligations for straddles, wash sales, and related issues. A straight sale is generally less complicated for tax reporting purposes, and a loss is more likely to be respected than someone playing games with sales and purchases. But the empty voting issue is an important difference. You could buy stock with rights such as voting, engage in other behavior such as forwards, shorts, or options to negate your economic exposure to the stock, while maintaining the right to vote. Of course in some cases this may have to be disclosed or may be covered by contract, and most people engaging in stock trades are unlikely to have meaningful voting power in a public company. But the principle is still there. As explained in the article by Henry Hu and Bernie Black: Hedge funds have been especially creative in decoupling voting rights from economic ownership. Sometimes they hold more votes than economic ownership - a pattern we call empty voting. In an extreme situation, a vote holder can have a negative economic interest and, thus, an incentive to vote in ways that reduce the company's share price. Sometimes investors hold more economic ownership than votes, though often with morphable voting rights - the de facto ability to acquire the votes if needed. We call this situation hidden (morphable) ownership because the economic ownership and (de facto) voting ownership are often not disclosed.",
"title": ""
}
] |
fiqa
|
d5d4bf11537fb4275fb563d940587994
|
What is an effective way to convert large sums of US based investments to foreign currencies?
|
[
{
"docid": "0242f3b75c5f03501851593ad39b712b",
"text": "A stock, bond or ETF is basically a commodity. Where you bought it does not really matter, and it has a value in USD only inasmuch as there is a current market price quoted at an American exchange. But nothing prevents you from turning around and selling it on a European exchange where it is also listed for an equivalent amount of EUR (arbitrage activities of investment banks ensure that the price will be equivalent in regard to the current exchange rate). In fact, this can be used as a cheap form of currency conversion. For blue chips at least this is trivial; exotic securities might not be listed in Europe. All you need is a broker who allows you to trade on European exchanges and hold an account denominated in EUR. If necessary, transfer your securities to a broker who does, which should not cost more than a nominal fee. Mutual funds are a different beast though; it might be possible to sell shares on an exchange anyway, or sell them back to the issuer for EUR. It depends. In any case, however, transferring 7 figure sums internationally can trigger all kinds of tax events and money laundering investigations. You really need to hire a financial advisor who has international investment experience for this kind of thing, not ask a web forum!",
"title": ""
}
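The cross-listing idea in the passage above can be illustrated with a small sketch. The prices, exchange rate, and share count are made up; real quotes would also carry commissions and a bid-ask spread.

```python
# Hypothetical cross-listed blue chip: quoted at $120 in New York and,
# with EUR/USD at 1.20, at about 100 EUR on a European exchange
# (arbitrage keeps the two quotes in line with the exchange rate).
eur_usd = 1.20                      # assumed exchange rate (USD per EUR)
price_usd = 120.00                  # assumed US-exchange quote
price_eur = price_usd / eur_usd     # equivalent quote abroad, ~100 EUR

shares = 5_000
usd_value = shares * price_usd      # what the position is worth in USD
eur_value = shares * price_eur      # what selling abroad would raise in EUR

print(f"Position value: ${usd_value:,.2f} ~= {eur_value:,.2f} EUR")
```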
] |
[
{
"docid": "8ed8bf7342dacdca59824555d53f7ff7",
"text": "The reason it's not automatic is that Questrade doesn't want to force you to convert in margin accounts at the time of buying the stock. What if you bought a US stock today and the exchange rate happened to be very unfavorable (due to whatever), wouldn't you rather wait a few days to exchange the funds rather than lose on conversion right away? In my opinion, Questrade is doing you a favor by letting you convert at your own convenience.",
"title": ""
},
{
"docid": "3f556ec1a4b3445c80dd443fbfc037af",
"text": "I prefer to use a Foreign Exchange transfer service. You will get a good exchange rate (better than from Paypal or from your bank) and it is possible to set it up with no transfer fees on both ends. You can use an ACH transfer from your US bank account to the FX's bank account and then a SEPA transfer in Europe to get the funds into your bank account. Transfers can also go in the opposite direction (Europe to USA). I've used XE's service (www.xe.com) and US Forex's service (www.usforex.com). Transferwise (www.transferwise.com) is another popular service. US Forex's service calls you to confirm each transfer. They also charge a $5 fee on transfers under $1000. XE's service is more convenient: they do not charge fees for small transfers and do not call you to confirm the tramsfer. However, they will not let you set up a free ACH transfer from US bank accounts if you set up your XE account outside the US. In both cases, the transfer takes a few business days to complete. EDIT: In my recent (Summer 2015) experience, US Forex has offered slightly better rates than XE. I've also checked out Transferwise, and for transfers from the US it seems to be a bit of a gimmick with a fee added late in the process. For reference, I just got quotes from the three sites for converting 5000 USD to EUR:",
"title": ""
},
{
"docid": "3d2aa8a4521aa7f29a2be50ecb07d790",
"text": "\"In most countries, you are deemed to dispose of all your assets at the fair value at that time, at the moment you are considered no longer a resident. ie: on the day your friend leaves Brazil, Brazil will likely consider him to have sold his BTC for $1M. The Brazilian government will then likely want him to calculate how much it cost him to mine/buy it, so that they can tax him on the gain. No argument about how BTC isn't \"\"Fiat money\"\" matters here; tax laws will typically apply to all investments in a way similar to stocks etc.. The US will likely be very suspicious of such a large amount of money without some level of traceability including that he paid taxes on any relevant gains in other countries. By showing the US that he paid appropriate 'expatriate taxes' in Brazil (if they exist; I am speaking generally and have no knowledge of Brazilian taxes), he is helping to prove that he does not need to pay any taxes on that money in the US. Typically the BTC then is valued for US tax purposes as the $1M it was worth when he entered the US becoming a resident there [This may require tax planning prior to entering the US] [see additional answer here: https://money.stackexchange.com/a/48031/44232]. Any attempt to bring the BTC into the US without paying appropriate Brazilian / US taxes [as applicable, I'm not 100% on either; check with a tax lawyer knowledgeable on both US & Brazilian tax law, because the amount of money is material] will likely be considered fraud. 'How to commit fraud' is not entertained as valid subject matter on this site.\"",
"title": ""
},
{
"docid": "4f83fd4e12068a3dd80172e8afb3afef",
"text": "In addition to TransferWise that @miernik answered with and that I successfully used, I found CurrencyFair which looks to be along similar lines and also supports US$.",
"title": ""
},
{
"docid": "bf9423b9d4b925b1d38d1d09b0f2d4a8",
"text": "\"My solution when I lived in Singapore was to open an account with HSBC, who at the time also had branches in the US. When I was home, I used the same debit card, and the bank only charged a nominal currency exchange fee (since it never had to leave their system, it was lower than had it left their system). Another option, though slightly more costly, is to use Paypal. A third option is to cash-out in CAD and convert to USD at a \"\"large\"\" institution - the larger your deposit/conversion balance, the better the rate you can get. To the best of my knowledge, this shouldn't be taxable - presuming you've paid the taxes on it to start with, and you've been filing your IRS returns every year you've been in Canada.\"",
"title": ""
},
{
"docid": "d67d3a9f9940d33d75c8fbfa7f854d74",
"text": "The general idea is that if the statement wasn't true there would be an arbitrage opportunity. You'll probably want to do the math yourself to believe me. But theoretically you could borrow money in country A at their real interest rate, exchange it, then invest the money in the other country at Country B's interest rate. Generating a profit without any risk. There are a lot of assumptions that go along with the statement (like borrowing and lending have the same costs, but I'm sure that is assumed wherever you read that statement.)",
"title": ""
},
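The "do the math yourself" step from the passage above, sketched numerically: borrow in country A, convert, invest in country B, convert back. The interest rates, exchange rates, and one-year horizon are all made up; in practice the forward exchange rate moves to eliminate exactly this profit (covered interest parity).

```python
# Hypothetical one-year "arbitrage" if exchange rates stayed fixed.
rate_a = 0.02        # borrowing rate in country A
rate_b = 0.05        # deposit rate in country B
spot = 10.0          # units of B-currency per unit of A-currency (assumed)
forward = 10.0       # assume (unrealistically) the rate is unchanged in a year

borrowed_a = 1_000_000                  # borrow in currency A
invested_b = borrowed_a * spot          # convert and deposit in country B
proceeds_b = invested_b * (1 + rate_b)  # value after one year in currency B
proceeds_a = proceeds_b / forward       # convert back to currency A
repayment = borrowed_a * (1 + rate_a)   # what the loan costs to repay

print(f"Risk-free profit: {proceeds_a - repayment:,.0f} (currency A)")
# Under interest rate parity the forward rate adjusts to roughly
# spot * (1 + rate_b) / (1 + rate_a), and this profit goes to zero.
```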
{
"docid": "6505237c2b3c1430722426bc5eb31baa",
"text": "\"If all of the money needs to be liquid, T-Bills from a broker are the way to go. Treasury Direct is a little onerous -- I'm not sure that you could actually get money out of there in a week. If you can sacrifice some liquidity, I'd recommend a mix of treasury, brokered CDs, agency and municipal securities. The government has implicitly guaranteed that \"\"too big to fail\"\" entities are going to be backed by the faith & credit of the United States, so investments in general obligation bonds from big states like New York, California and Florida and cities like New York City will yield you better returns, come with significant tax benefits, and represent only marginal additional short-term risk.\"",
"title": ""
},
{
"docid": "90da52d0db0ff30eb04f78eb18a7a3d0",
"text": "While most all Canadian brokers allow us access to all the US stocks, the reverse is not true. But some US brokers DO allow trading on foreign exchanges. (e.g. Interactive Brokers at which I have an account). You have to look and be prepared to switch brokers. Americans cannot use Canadian brokers (and vice versa). Trading of shares happens where-ever two people get together - hence the pink sheets. These work well for Americans who want to buy-sell foreign stocks using USD without the hassle of FX conversions. You get the same economic exposure as if the actual stock were bought. But the exchanges are barely policed, and liquidity can dry up, and FX moves are not necessarily arbitraged away by 'the market'. You don't have the same safety as ADRs because there is no bank holding any stash of 'actual' stocks to backstop those traded on the pink sheets.",
"title": ""
},
{
"docid": "d72f65a044b71a3bd6360019255b8039",
"text": "Part 1 Quite a few [or rather most] countries allow USD account. So there is no conversion. Just to illustrare; In India its allowed to have a USD account. The funds can be transfered as USD and withdrawn as USD, the interest is in USD. There no conversion at any point in time. Typically the rates for CD on USD account was Central Bank regulated rate of 5%, recently this was deregulated, and some banks offer around 7% interest. Why is the rate high on USD in India? - There is a trade deficit which means India gets less USD and has to pay More USD to buy stuff [Oil and other essential items]. - The balance is typically borrowed say from IMF or other countries etc. - Allowing Banks to offer high interest rate is one way to attract more USD into the country in short term. [because somepoint in time they may take back the USD out of India] So why isn't everyone jumping and making USD investiments in India? - The Non-Residents who eventually plan to come back have invested in USD in India. - There is a risk of regulation changes, ie if the Central Bank / Country comes up pressure for Forex Reserves, they may make it difficut to take back the USD. IE they may impose charges / taxes or force conversion on such accounts. - The KYC norms make it difficult for Indian Bank to attract US citizens [except Non Resident Indians] - Certain countries would have explicit regulations to prevent Other Nationals from investing in such products as they may lead to volatility [ie all of them suddenly pull out the funds] - There would be no insurance to foreign nationals. Part 2 The FDIC insurance is not the reason for lower rates. Most countires have similar insurance for Bank deposits for account holdes. The reason for lower interst rate is all the Goverments [China etc] park the excess funds in US Treasuries because; 1. It is safe 2. It is required for any international purchase 3. It is very liquid. Now if the US Fed started giving higher interest rates to tresaury bonds say 5%, it essentially paying more to other countries ... so its keeping the interest rates low even at 1% there are enough people [institutions / governemnts] who would keep the money with US Treasury. So the US Treasury has to make some revenue from the funds kept at it ... it lends at lower interest rates to Bank ... who in turn lend it to borrowers [both corporate and retail]. Now if they can borrow cheaply from Fed, why would they pay more to Individual Retail on CD?, they will pay less; because the lending rates are low as well. Part 3 Check out the regulations",
"title": ""
},
{
"docid": "a6f3673e71cdfeb5998f0abfae96975d",
"text": "In general, to someone in a similar circumstance I might suggest that the lowest-risk option is to immediately convert your excess currency into the currency you will be spending. Note that 'risk' here refers only to the variance in possible outcomes. By converting to EUR now (assuming you are moving to an EU country using the EUR), you eliminate the chance that the GBP will weaken. But you also eliminate the chance that the GBP will strengthen. Thus, you have reduced the variance in possible outcomes so that you have a 'known' amount of EUR. To put money in a different currency than what you will be using is a form of investing, and it is one that can be considered high risk. Invest in a UK company while you plan on staying in the UK, and you take on the risk of stock ownership only. But invest in a German company while you plan on staying in the UK, you take on the risk of stock ownership + the risk of currency volatility. If you are prepared for this type of risk and understand it, you may want to take on this type of risk - but you really must understand what you're getting into before you do this. For most people, I think it's fair to say that fx investing is more accurately called gambling [See more comments on the risk of fx trading here: https://money.stackexchange.com/a/76482/44232]. However, this risk reduction only truly applies if you are certain that you will be moving to an EUR country. If you invest in EUR but then move to the US, you have not 'solved' your currency volatility problem, you have simply replaced your GBP risk with EUR risk. If you had your plane ticket in hand and nothing could stop you, then you know what your currency needs will be in 2 years. But if you have any doubt, then exchanging currency now may not be reducing your risk at all. What if you exchange for EUR today, and in a year you decide (for all the various reasons that circumstances in life may change) that you will stay in the UK after all. And during that time, what if the GBP strengthened again? You will have taken on risk unnecessarily. So, if you lack full confidence in your move, you may want to avoid fully trading your GBP today. Perhaps you could put away some amount every month into EUR (if you plan on moving to an EUR country), and leave some/most in GBP. This would not fully eliminate your currency risk if you move, but it would also not fully expose yourself to risk if you end up not moving. Just remember that doing this is not a guarantee that the EUR will strengthen and the GBP will weaken.",
"title": ""
},
{
"docid": "2570c173435745bdfc94803f83bc1151",
"text": "Take a look at Transferwise. I find them good for currency conversions and paying people in India from a US bank account.",
"title": ""
},
{
"docid": "3da6581a70d5dbae8ecdb677ea0df69d",
"text": "\"The Option 2 in your answer is how most of the money is moved cross border. It is called International Transfer, most of it carried out using the SWIFT network. This is expensive, at a minimum it costs in the range of USD 30 to USD 50. This becomes a expensive mechanism to transfer small sums of money that individuals are typically looking at. Over a period of years, the low value payments by individuals between certain pair of countries is quite high, example US-India, US-China, Middle-East-India, US-Mexico etc ... With the intention to reduce cost, Banks have built a different work-flow, this is the Option 1. This essentially works on getting money from multiple individuals in EUR. The aggregated sum is converted into INR, then transferred to partner Bank in India via Single SWIFT. Alongside the partner bank is also sent a file of instructions having the credit account. The Partner Bank in India will use the local clearing network [these days NEFT] to credit the funds to the Indian account. Option 3: Other methods include you writing a check in EUR and sending it over to a friend/relative in India to deposit this into Indian Account. Typically very nominal costs. Typically one month of timelines. Option 4: Another method would be to visit an Indian Bank and ask them to issue a \"\"Rupee Draft/Bankers Check\"\" payable in India. The charges for this would be higher than Option 3, less than Option 1. Mail this to friend/relative in India to deposit this into Indian Account. Typically couple of days timelines for transfer to happen.\"",
"title": ""
},
{
"docid": "51876fb7fa8f2f1b1c5fc654650a5ef4",
"text": "The other obvious suggestion I guess is to buy cheap stocks and bonds (maybe in a dollar denominated fund). If the US dollar rises you'd then get both the fund's US gains plus currency gains. However, no guarantee the US dollar will rise or when. Perhaps a more prudent approach is to simply diversify. Buy both domestic and foreign stocks and bonds. Rebalance regularly.",
"title": ""
},
{
"docid": "ca5d202b93c164af5f61d58a5cd0aa01",
"text": "Here's what the GnuCash documentation, 10.5 Tracking Currency Investments (How-To) has to say about bookkeeping for currency exchanges. Essentially, treat all currency conversions in a similar way to investment transactions. In addition to asset accounts to represent holdings in Currency A and Currency B, have an foreign exchange expenses account and a capital gains/losses account (for each currency, I would imagine). Represent each foreign exchange purchase as a three-way split: source currency debit, foreign exchange fee debit, and destination currency credit. Represent each foreign exchange sale as a five-way split: in addition to the receiving currency asset and the exchange fee expense, list the transaction profit in a capital gains account and have two splits against the asset account of the transaction being sold. My problems with this are: I don't know how the profit on a currency sale is calculated (since the amount need not be related to any counterpart currency purchase), and it seems asymmetrical. I'd welcome an answer that clarifies what the GnuCash documentation is trying to say in section 10.5.",
"title": ""
},
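The three-way purchase split described in the passage above can be sketched as plain data rather than in GnuCash itself. The account names, amounts, and exchange rate are invented; the only property being illustrated is that the splits balance in the transaction's own currency, double-entry style.

```python
# Hypothetical three-way split for buying 1,000 EUR with USD at 1.10,
# plus a $10 exchange fee.  Amounts are recorded in the transaction
# currency (USD) so the splits sum to zero.
splits = [
    ("Assets:Checking (USD)",     -1110.00),  # USD paid out
    ("Assets:Cash (EUR)",          1100.00),  # 1,000 EUR valued at 1.10 USD/EUR
    ("Expenses:Foreign Exchange",    10.00),  # conversion fee
]

total = sum(amount for _, amount in splits)
for account, amount in splits:
    print(f"{account:<30} {amount:>10.2f}")
print(f"{'Balance (should be 0)':<30} {total:>10.2f}")
```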
{
"docid": "a8d2b79642f69b96d682fd6049896ed9",
"text": "I won't think so. Too much trouble for the compliance and internal audit team. Unless you are moving money from Russia, Iran or those non-FATCA countries.",
"title": ""
}
] |
fiqa
|
0af2a87ef5142a009a6e9bb33b0b4f10
|
How late is Roth (rather than pretax) still likely to help?
|
[
{
"docid": "96cebd4831ce216b7c00f7a039a8691c",
"text": "My simplest approach is to suggest that people go Roth when in the 15% bracket, and use pre-tax to avoid 25%. I outlined that strategy in my article The 15% solution. The monkey wrench that gets thrown in to this is the distortion of the other smooth marginal tax curve caused by the taxation of social security. For those who can afford to, it makes the case to lean toward Roth as much as possible. I'd suggest always depositing pretax, and using conversions to better control the process. Two major benefits to this. It's less a question of too late than of what strategy to use.",
"title": ""
},
{
"docid": "8a62de7c839adaec6cb463239c9d06ab",
"text": "Years before retirement isn't related at all to the Pretax IRA/Roth IRA decision, except insomuch as income typically trends up over time for most people. If tax rates were constant (both at income levels and over time!), Roth and Pretax would be identical. Say you designate 100k for contribution, 20% tax rate. 80k contributed in Roth vs. 100k contributed in Pretax, then 20% tax rate on withdrawal, ends up with the same amount in your bank account after withdrawal - you're just moving the 20% tax grab from one time to another. If you choose Roth, it's either because you like some of the flexibility (like taking out contributions after 5 years), or because you are currently paying a lower marginal rate than you expect you will be in the future - either because you aren't making all that much this year, or because you are expecting rates to rise due to political changes in our society. Best is likely a diversified approach - some of your money pretax, some posttax. At least some should be in a pretax IRA, because you get some tax-free money each year thanks to the personal exemption. If you're working off of 100% post-tax, you are paying more tax than you ought unless you're getting enough Social Security to cover the whole 0% bucket (and probably the 10% bucket, also). So for example, you're thinking you want 70k a year. Assuming single and ignoring social security (as it's a very complicated issue - Joe Taxpayer has a nice blog article regarding it that he links to in his answer), you get $10k or so tax-free, then another $9k or so at 10% - almost certainly lower than what you pay now. So you could aim to get $19k out of your pre-tax IRA, then, and 51k out of your post-tax IRA, meaning you only pay $900 in taxes on your income. Of course, if you're in the 25% bucket now, you may want to use more pretax, since you could then take that out - all the way to around $50k (standard exemption + $40k or so point where 25% hits). But on the other hand, Social Security would probably change that equation back to using primarily Roth if you're getting a decent Social Security check.",
"title": ""
}
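The two claims in the passage above - that Roth and pretax come out identical at a constant tax rate, and that the first dollars of pretax withdrawals are taxed very lightly - can be checked with a sketch using the answer's own round numbers ($100k designated, a 20% flat rate, then roughly $10k tax-free plus $9k at 10% in retirement). The growth multiple is arbitrary.

```python
growth = 3.0          # arbitrary growth multiple over the holding period
tax_rate = 0.20       # assumed constant marginal rate, as in the example

# Pretax: contribute the full 100k, pay 20% on the way out.
pretax_after_tax = 100_000 * growth * (1 - tax_rate)

# Roth: pay 20% first, contribute 80k, withdraw tax-free.
roth_after_tax = 100_000 * (1 - tax_rate) * growth

print(f"Pretax: {pretax_after_tax:,.0f}   Roth: {roth_after_tax:,.0f}")  # both 240,000

# Retirement-side wrinkle: the first dollars withdrawn from a pretax account
# fill the 0% and 10% "buckets" (roughly $10k tax-free + $9k at 10% in the
# answer's example), so the effective rate on a $19k withdrawal is low.
withdrawal = 19_000
tax = 0.00 * 10_000 + 0.10 * 9_000
print(f"Tax on first ${withdrawal:,}: ${tax:,.0f} "
      f"({tax / withdrawal:.1%} effective)")
```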
] |
[
{
"docid": "71146df668f12b055a8d5912ca96a59b",
"text": "It depends on the relative rates and relative risk. Ignore the deduction. You want to compare the rates of the investment and the mortgage, either both after-tax or both before-tax. Your mortgage costs you 5% (a bit less after-tax), and prepayments effectively yield a guaranteed 5% return. If you can earn more than that in your IRA with a risk-free investment, invest. If you can earn more than that in your IRA while taking on a degree of risk that you are comfortable with, invest. If not, pay down your mortgage. See this article: Mortgage Prepayment as Investment: For example, the borrower with a 6% mortgage who has excess cash flow would do well to use it to pay down the mortgage balance if the alternative is investment in assets that yield 2%. But if two years down the road the same assets yield 7%, the borrower can stop allocating excess cash flow to the mortgage and start accumulating financial assets. Note that he's not comparing the relative risk of the investments. Paying down your mortgage has a guaranteed return. You're talking about CDs, which are low risk, so your comparison is simple. If your alternative investment is stocks, then there's an element of risk that it won't earn enough to outpace the mortgage cost. Update: hopefully this example makes it clearer: For example, lets compare investing $100,000 in repayment of a 6% mortgage with investing it in a fund that pays 5% before-tax, and taxes are deferred for 10 years. For the mortgage, we enter 10 years for the period, 3.6% (if that is the applicable rate) for the after tax return, $100,000 as the present value, and we obtain a future value of $142,429. For the alternative investment, we do the same except we enter 5% as the return, and we get a future value of $162,889. However, taxes are now due on the $62,889 of interest, which reduces the future value to $137,734. The mortgage repayment does a little better. So if your marginal tax rate is 30%, you have $10k extra cash to do something with right now, mortgage rate is 5%, IRA CD APY is 1%, and assuming retirement in 30 years: If you want to plug it into a spreadsheet, the formula to use is (substitute your own values): (Note the minus sign before the cash amount.) Make sure you use after tax rates for both so that you're comparing apples to apples. Then multiply your IRA amount by (1-taxrate) to get the value after you pay future taxes on IRA withdrawals.",
"title": ""
},
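The quoted example in the passage above can be reproduced as a sketch: $100,000 either prepays a 6% mortgage (3.6% after tax, assuming deductible interest as the quoted article does) or is invested at 5% with tax deferred for 10 years and then taxed on the gain. The 40% tax rate is implied by the example's own 6% pre-tax / 3.6% after-tax figures rather than stated outright.

```python
principal = 100_000
years = 10
tax_rate = 0.40          # implied by the example's 6% pre-tax -> 3.6% after-tax

# Option 1: prepay a 6% mortgage -> a guaranteed after-tax 3.6% "return".
mortgage_fv = principal * (1 + 0.06 * (1 - tax_rate)) ** years

# Option 2: invest at 5%, tax-deferred, then pay tax on the gain at the end.
invest_fv_pretax = principal * 1.05 ** years
invest_fv_aftertax = invest_fv_pretax - (invest_fv_pretax - principal) * tax_rate

print(f"Prepay mortgage: {mortgage_fv:,.0f}")         # ~142,429
print(f"Invest at 5%:    {invest_fv_aftertax:,.0f}")  # ~137,734
```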
{
"docid": "d0e622644fac5c51c872683f0cc8e444",
"text": "Also, consider the possibility of early withdrawal penalties. Regular 401k early withdrawal (for non-qualified reasons) gets you a 10% penalty, in addition to tax, on the entire amount, even if you're just withdrawing your own contributions. Withdrawing from a Roth 401k can potentially mean less penalties (if it's been in place 5 years, and subject to a bunch of fine print of course).",
"title": ""
},
{
"docid": "311624613cc87899692c9eddabdeb721",
"text": "Fast Forward 40 - 45 years, you're 70.5. You must take out ~5% from your Traditional IRA. If that was a Roth, you take out as much as you need (within reason) when you need it with zero tax consequences. I don't know (and don't care) whether they'll change the Roth tax exclusion in 40 years. It's almost guaranteed that the rate on the Roth will be less than the regular income status of a Traditional IRA. Most likely we'll have a value added tax (sales tax) then. Possibly even a Wealth Tax. The former doesn't care where the money comes from (source neutral) the latter means you loose more (probably) of that 2.2 MM than the 1.7. Finally, if you're planning on 10%/yr over 40 yrs, good luck! But that's crazy wild speculation and you're likely to be disappointed. If you're that good at picking winners, then why stop at 10%? Money makes money. Your rate of return should increase as your net worth increases. So, you should be able to pick better opportunities with 2.2 million than with a paltry 1.65 MM.",
"title": ""
},
{
"docid": "810eceab7edb6216ea4133d029874089",
"text": "\"I humbly disagree with #2. the use of Roth or pre-tax IRA depends on your circumstance. With no match in the 401(k), I'd start with an IRA. If you have more than $5k to put in, then some 401(k) would be needed. Edit - to add detail on Roth decision. I was invited to write a guest article \"\"Roth IRAs and your retirement income\"\" some time ago. In it, I discuss the large amount of pretax savings it takes to generate the income to put you in a high bracket in retirement. This analysis leads me to believe the risk of paying tax now only to find tHat you are in a lower bracket upon retiring is far greater than the opposite. I think if there were any generalization (I hate rules of thumb, they are utterly pick-apartable) to be made, it's that if you are in the 15% bracket or lower, go Roth. As your income puts you into 25%, go pretax. I believe this would apply to the bulk of investors, 80%+.\"",
"title": ""
},
{
"docid": "03a994a5087593a76b53c9ac7b8de476",
"text": "\"(I'm expanding on what @BrenBarn had added to his answer.) The assumption of \"\"same tax bracket in retirement\"\" is convenient, but simplistic. If you are in, for instance, the second-lowest bracket now, and happen to remain in the second-lowest bracket for retirement, then Roth and traditional account options may seem equal — and your math backs that up, on the surface — but that's making an implicit assumption that tax rates will be constant. Yet, tax brackets and rates can change. And they do. The proof. i.e. Your \"\"15% bracket\"\" could become, say, the \"\"17% bracket\"\" (or, perhaps, the \"\"13% bracket\"\") All the while you might remain in the second-lowest bracket. So, given the potential for fluctuating tax rates, it's easy to see that there can be a case where a traditional tax-deferred account can yield more after-tax income than a Roth post-tax account, even if you remain in the same bracket: When your tax bracket's tax rate declines. So, don't just consider what bracket you expect to be in. Consider also whether you expect tax rates to go up, down, or remain the same. For twenty-something young folk, retirement is a long way away (~40 years) and I think in that time frame it is far more likely that the tax brackets won't have the same underlying tax rates that they have now. Of course, we can't know for sure which direction tax rates will head in, but an educated guess can help. Is your government deep in debt, or flush with extra cash? On the other hand, if you don't feel comfortable making predictions, much better than simply assuming \"\"brackets and rates will stay the same as now, so it doesn't matter\"\" is to instead hedge your bets: save some of your retirement money in a Roth-style account, and some in a traditional pre-tax account. Consider it tax diversification. See also my answer at this older but related question:\"",
"title": ""
},
{
"docid": "909eae1d15d84e2380144c2af50e1f14",
"text": "My observations is that this seems like hardly enough to kill inflation. Is he right? Or are there better ways to invest? The tax deferral part of the equation isn't what dominates regarding whether your 401k beats 30 years of inflation; it is the return on investment. If your 401k account tanks due to a prolonged market crash just as you retire, then you might have been better off stashing the money in the bank. Remember, 401k money at now + 30 years is not a guaranteed return (though many speak as though it were). There is also the question as to whether fees will eat up some of your return and whether the funds your 401k invests in are good ones. I'm uneasy with the autopilot nature of the typical 401k non-strategy; it's too much the standard thing to do in the U.S., it's too unconscious, and strikes me as Ponzi-like. It has been a winning strategy for some already, sure, and maybe it will work for the next 30-100 years or more. I just don't know. There are also changes in policy or other unknowns that 30 years will bring, so it takes faith I don't have to lock away a large chunk of my savings in something I can't touch without hassle and penalty until then. For that reason, I have contributed very little to my 403b previously, contribute nothing now (though employer does, automatically. I have no match.) and have built up a sizable cash savings, some of which may be used to start a business or buy a house with a small or no mortgage (thereby guaranteeing at least not paying mortgage interest). I am open to changing my mind about all this, but am glad I've been able to at least save a chunk to give me some options that I can exercise in the next 5-10 years if I want, instead of having to wait 25 or more.",
"title": ""
},
{
"docid": "5edca99d5d18ea6c96437d83eef4b26b",
"text": "\"The biggest and primary question is how much money you want to live on within retirement. The lower this is, the more options you have available. You will find that while initially complex, it doesn't take much planning to take complete advantage of the tax system if you are intending to retire early. Are there any other investment accounts that are geared towards retirement or long term investing and have some perk associated with them (tax deferred, tax exempt) but do not have an age restriction when money can be withdrawn? I'm going to answer this with some potential alternatives. The US tax system currently is great for people wanting to early retire. If you can save significant money you can optimize your taxes so much over your lifetime! If you retire early and have money invested in a Roth IRA or a traditional 401k, that money can't be touched without penalty until you're 55/59. (Let's ignore Roth contributions that can technically be withdrawn) Ok, the 401k myth. The \"\"I'm hosed if I put money into it since it's stuck\"\" perspective isn't true for a variety of reasons. If you retire early you get a long amount of time to take advantage of retirement accounts. One way is to primarily contribute to pretax 401k during working years. After retiring, begin converting this at a very low tax rate. You can convert money in a traditional IRA whenever you want to be Roth. You just pay your marginal tax rate which.... for an early retiree might be 0%. Then after 5 years - you now have a chunk of principle that has become Roth principle - and can be withdrawn whenever. Let's imagine you retire at 40 with 100k in your 401k (pretax). For 5 years, you convert $20k (assuming married). Because we get $20k between exemptions/deduction it means you pay $0 taxes every year while converting $20k of your pretax IRA to Roth. Or if you have kids, even more. After 5 years you now can withdraw that 20k/year 100% tax free since it has become principle. This is only a good idea when you are retired early because you are able to fill up all your \"\"free\"\" income for tax conversions. When you are working you would be paying your marginal rate. But your marginal rate in retirement is... 0%. Related thread on a forum you might enjoy. This is sometimes called a Roth pipeline. Basically: assuming you have no income while retired early you can fairly simply convert traditional IRA money into Roth principle. This is then accessible to you well before the 55/59 age but you get the full benefit of the pretax money. But let's pretend you don't want to do that. You need the money (and tax benefit!) now! How beneficial is it to do traditional 401ks? Imagine you live in a state/city where you are paying 25% marginal tax rate. If your expected marginal rate in your early retirement is 10-15% you are still better off putting money into your 401k and just paying the 10% penalty on an early withdrawal. In many cases, for high earners, this can actually still be a tax benefit overall. The point is this: just because you have to \"\"work\"\" to get money out of a 401k early does NOT mean you lose the tax benefits of it. In fact, current tax code really does let an early retiree have their cake and eat it too when it comes to the Roth/traditional 401k/IRA question. Are you limited to a generic taxable brokerage account? Currently, a huge perk for those with small incomes is that long term capital gains are taxed based on your current federal tax bracket. 
If your federal marginal rate is 15% or less you will pay nothing for long term capital gains, until this income pushes you into the 25% federal bracket. This might change, but right now means you can capture many capital gains without paying taxes on them. This is huge for early retirees who can manipulate income. You can have significant \"\"income\"\" and not pay taxes on it. You can also stack this with before mentioned Roth conversions. Convert traditional IRA money until you would begin owing any federal taxes, then capture long term capital gains until you would pay tax on those. Combined this can represent a huge amount of money per year. So littleadv mentioned HSAs but.. for an early retiree they can be ridiculously good. What this means is you can invest the maximum into your HSA for 10 years, let it grow 100% tax free, and save all your medical receipts/etc. Then in 10 years start withdrawing that money. While it sucks healthcare costs so much in America, you might as well take advantage of the tax opportunities to make it suck slightly less. There are many online communities dedicated to learning and optimizing their lives in order to achieve early retirement. The question you are asking can be answered superficially in the above, but for a comprehensive plan you might want other resources. Some you might enjoy:\"",
"title": ""
},
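The conversion pipeline described in the passage above can be sketched with its own numbers: a $100k pretax balance converted at $20k per year, where the tax-free room (standard deduction plus exemptions for a married couple) is taken as a flat $20k as in the example, and investment growth is ignored to keep the arithmetic visible.

```python
pretax_balance = 100_000   # traditional 401(k)/IRA money at early retirement
tax_free_room = 20_000     # assumed deductions/exemptions for a married couple
years_to_wait = 5          # converted amounts become withdrawable principal after 5 years

converted = {}             # year -> amount converted to Roth that year
year = 0
while pretax_balance > 0:
    year += 1
    amount = min(tax_free_room, pretax_balance)
    pretax_balance -= amount
    converted[year] = amount
    # With no other income, the conversion fits inside the tax-free room,
    # so the tax bill for the year is $0.
    print(f"Year {year}: convert {amount:,} at $0 tax")

for y, amount in converted.items():
    print(f"Year {y + years_to_wait}: {amount:,} of Roth principal becomes penalty-free")
```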
{
"docid": "348fe523b39695c11e4bb9e24392e524",
"text": "If you're getting the same total amount of money every year, then the main issue is psychological. I mean, you may find it easier to manage your money if you get it on one schedule rather than another. It's generally better to get money sooner rather than later. If you can deposit it into an account that pays interest or invest it between now and when you need it, then you'll come out ahead. But realistically, if we're talking about getting money a few days or a week or two sooner, that's not going to make much difference. If you get a paycheck just before the end of the year versus just after the end of the year, there will be tax implications. If the paycheck is delayed until January, then you don't have to pay taxes on it this year. Of course you'll have to pay the taxes next year, so that could be another case of sooner vs later. But it can also change your total taxes, because, in the US and I think many other countries, taxes are not a flat percentage, but the more you make, the higher the tax rate. So if you can move income to a year when you have less total income, that can lower your total taxes. But really, the main issue would be how it affects your budgeting. Others have discussed this so I won't repeat.",
"title": ""
},
{
"docid": "a448d95f22d848cd9953392e69d8a3c6",
"text": "If you exceed the income limit for deducting a traditional IRA (which is very low if you are covered by a 401(k) ), then your IRA options are basically limited to a Roth IRA. The Cramer person probably meant to compare 401(k) and IRA from the same pre-/post-tax-ness, so i.e. Traditional 401(k) vs. Traditional IRA, or Roth 401(k) vs. Roth IRA. Comparing a Roth investment against a Traditional investment goes into a whole other topic that only confuses what is being discussed here. So if deducting a traditional IRA is ruled out, then I don't think Cramer's advice can be as simply applied regarding a Traditional 401(k). (However, by that logic, and since most people on 401(k) have Traditional 401(k), and if you are covered by a 401(k) then you cannot deduct a Traditional IRA unless you are super low income, that would mean Cramer's advice is not applicable in most situations. So I don't really know what to think here.)",
"title": ""
},
{
"docid": "51ec965a4eec4d21850e5055c1062b74",
"text": "\"This is an excellent topic as it impacts so many in so many different ways. Here are some thoughts on how the accounts are used which is almost as important as the as calculating the income or tax. The Roth is the best bang for the buck, once you have taken full advantage of employer matched 401K. Yes, you pay taxes upfront. All income earned isn't taxed (under current tax rules). This money can be passed on to family and can continue forever. Contributions can be funded past age 70.5. Once account is active for over 5 years, contributions can be withdrawn and used (ie: house down payment, college, medical bills), without any penalties. All income earned must be left in the account to avoid penalties. For younger workers, without an employer match this is idea given the income tax savings over the longer term and they are most likely in the lowest tax bracket. The 401k is great for retirement, which is made better if employer matches contributions. This is like getting paid for retirement saving. These funds are \"\"locked\"\" up until age 59.5, with exceptions. All contributed funds and all earnings are \"\"untaxed\"\" until withdrawn. The idea here is that at the time contributions are added, you are at a higher tax rate then when you expect to withdrawn funds. Trade Accounts, investments, as stated before are the used of taxed dollars. The biggest advantage of these are the liquidity.\"",
"title": ""
},
{
"docid": "8139827df5aa181c2aa883974232b178",
"text": "Something that's come up in comments and been alluded to in answers, but not explicit as far as I can tell: Even if your marginal tax rate now were equal to your marginal tax rate in retirement, or even lower, a traditional IRA may have advantages. That's because it's your effective tax rate that matters on withdrawls. (Based on TY 2014, single person, but applies at higher numbers for other arrangements): You pay 0 taxes on the first $6200 of income, and then pay 10% on the next $9075, then 15% on $27825, then 25% on the total amount over that up to $89530, etc. As such, even if your marginal rate is 25% (say you earn $80k), your effective rate is much less: for example, $80k income, you pay taxes on $73800. That ends up being $14,600, for an effective rate in total of 17.9%. Let's say you had the same salary, $80k, from 20 to 65, and for 45 years saved up 10k a year, plus earned enough returns to pay you out $80k a year in retirement. In a Roth, you pay 25% on all $10k. In a traditional, you save that $2500 a year (because it comes off the top, the amount over $36900), and then pay 17.9% during retirement (your effective tax rate, because it's the amount in total that matters). So for Roth you had 7500*(returns), while for Traditional the correct amount isn't 10k*(returns)*0.75, but 10k*(returns)*0.821. You make the difference between .75 and .82 back even with the identical income. [Of course, if your $10k would take you down a marginal bracket, then it also has an 'effective' tax rate of something between the two rates.] Thus, Roth makes sense if you expect your effective tax rate to be higher in retirement than it is now. This is very possible, still, because for people like me with a mortgage, high property taxes, two kids, and student loans, my marginal tax rate is pretty low - even with a reasonably nice salary I still pay 15% on the stuff that's heading into my IRA. (Sadly, my employer has only a traditional 401k, but they also contribute to it without requiring a match so I won't complain too much.) Since I expect my eventual tax rate to be in that 18-20% at a minimum, I'd benefit from a Roth IRA right now. This matters more for people in the middle brackets - earning high 5 figure salaries as individuals or low 6 figure as a couple - because the big difference is relevant when a large percentage of your income is in the 15% and below brackets. If you're earning $200k, then so much of your income is taxed at 28-33% it doesn't make nearly as much of a difference, and odds are you can play various tricks when you're retiring to avoid having as high of a tax rate.",
"title": ""
},
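The marginal-versus-effective distinction in the passage above is easy to verify numerically. Below is a minimal Python sketch, assuming the 2014 single-filer brackets and the $6,200 standard deduction the passage cites (figures taken from that year's published tables, not from this dataset); the personal exemption is ignored, exactly as the passage does.

```python
# Rough sketch: effective vs. marginal tax rate, 2014 single filer (assumed figures).
BRACKETS = [            # (upper bound of taxable income, rate)
    (9_075, 0.10),
    (36_900, 0.15),
    (89_350, 0.25),
    (186_350, 0.28),
    (float("inf"), 0.33),   # simplified top end
]
STANDARD_DEDUCTION = 6_200  # personal exemption ignored, as in the passage

def income_tax(gross):
    taxable = max(0, gross - STANDARD_DEDUCTION)
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if taxable > lower:
            tax += (min(taxable, upper) - lower) * rate
        lower = upper
    return tax

gross = 80_000
tax = income_tax(gross)
print(f"tax on ${gross:,}: ${tax:,.0f}")
print(f"effective rate: {tax / gross:.1%}")  # ~17.9%, well below the 25% marginal rate
```

Running it reproduces the passage's ~17.9% effective rate on an $80k salary.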
{
"docid": "2ccd5eb1d0b5465caec02197574beaf4",
"text": "This all comes down to time: You can spend the maximum on taxes and penalties and have your money now. Or you can wait about a decade and not pay a cent in taxes or penalties. Consider (assuming no other us income and 2017 tax brackets which we know will change): Option 1 (1 year): Take all the money next year and pay the taxes and penalty: Option 2 (2 years): Spread it out to barely exceed the 10% bracket: Option 3 (6 years): Spread it out to cover your Standard Deduction each year: Option 4 (6-11 years): Same as Option 3 but via a Roth Conversion Ladder:",
"title": ""
},
{
"docid": "f08c6c36927d6dfa44a0d15516a956a5",
"text": "Why not just deposit to a Traditional IRA, and convert it to Roth? If you have pretax IRA money, you need to pay prorated tax (on what wasn't yet taxed) but that's it. It rarely makes sense to ask for a lower wage. Does your company offer a 401(k) account? To clarify, the existing Traditional IRA balance is the problem. The issue arises when you have a new deposit that otherwise isn't deductible and try to convert it. Absent that existing IRA, the immediate conversion is tax free. Now, with that IRA in place the conversion prorates some of that pretax money, and you are subject to a tax bill.",
"title": ""
},
{
"docid": "980789da5abf6464c0e7ff07ef72bc5e",
"text": "\"You have several questions in your post so I'll deal with them individually: Is taking small sums from your IRA really that detrimental? I mean as far as tax is concerned? Percentage wise, you pay the tax on the amount plus a 10% penalty, plus the opportunity cost of the gains that the money would have gotten. At 6% growth annually, in 5 years that's more than a 34% loss. There are much cheaper ways to get funds than tapping your IRA. Isn't the 10% \"\"penalty\"\" really to cover SS and the medicare tax that you did not pay before putting money into your retirement? No - you still pay SS and medicare on your gross income - 401(k) contributions just reduce how much you pay in income tax. The 10% penalty is to dissuade you from using retirement money before you retire. If I ... contributed that to my IRA before taxes (including SS and medicare tax) that money would gain 6% interest. Again, you would still pay SS and Medicare, and like you say there's no guarantee that you'll earn 6% on your money. I don't think you can pay taxes up front when making an early withdrawal from an IRA can you? This one you got right. When you file your taxes, your IRA contributions for the year are totaled up and are deducted from your gross income for tax purposes. There's no tax effect when you make the contribution. Would it not be better to contribute that $5500 to my IRA and if I didn't need it, great, let it grow but if I did need it toward the end of the year, do an early withdrawal? So what do you plan your tax withholdings against? Do you plan on keeping it there (reducing your withholdings) and pay a big tax bill (plus possibly penalties) if you \"\"need it\"\"? Or do you plan to take it out and have a big refund when you file your taxes? You might be better off saving that up in a savings account during the year, and if at the end of the year you didn't use it, then make an IRA contribution, which will lower the taxes you pay. Don't use your IRA as a \"\"hopeful\"\" savings account. So if I needed to withdrawal $5500 and I am in the 25% tax bracket, I would owe the government $1925 in taxes+ 10% penalty. So if I withdrew $7425 to cover the tax and penalty, I would then be taxed $2600 (an additional $675). Sounds like a cat chasing it's tail trying to cover the tax. Yes if you take a withdrawal to pay the taxes. If you pay the tax with non-retirement money then the cycle stops. how can I make a withdrawal from an IRA without having to pay tax on tax. Pay cash for the tax and penalty rather then taking another withdrawal to pay the tax. If you can't afford the tax and penalty in cash, then don't withdraw at all. based on this year's W-2 form, I had an accountant do my taxes and the $27K loan was added as earned income then in another block there was the $2700 amount for the penalty. So you paid 25% in income tax for the earned income and an additional 10% penalty. So in your case it was a 35% overall \"\"tax\"\" instead of the 40% rule of thumb (since many people are in 28% and 35% tax brackets) The bottom line is it sounds like you are completely unorganized and have absolutely no margin to cover any unexpected expenses. I would stop contributing to retirement today until you can get control of your spending, get on a budget, and stop trying to use your IRA as a piggy bank. If you don't plan on using the money for retirement then don't put it in an IRA. Stop borrowing from it and getting into further binds that force you to make bad financial decisions. 
You don't go into detail about any other aspects (mortgage? car loans? consumer debt?) to even begin to know where the real problem is. So you need to write everything down that you own and you owe, write out your monthly expenses and income, and figure out what you can cut if needed in order to build up some cash savings. Until then, you're driving across country in a car with no tires, worrying about which highway will give you the best gas mileage.\"",
"title": ""
},
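A hedged sketch of the early-withdrawal arithmetic in the passage above: an assumed 25% bracket plus the 10% penalty, and five years of forgone growth at 6%. The figures are illustrations, not tax advice.

```python
# Sketch: the all-in cost of an early traditional-IRA withdrawal (assumed rates).
def early_withdrawal_cost(amount, tax_rate=0.25, penalty=0.10, growth=0.06, years=5):
    """Immediate tax + penalty, plus the growth the money would otherwise have earned."""
    immediate = amount * (tax_rate + penalty)
    forgone = amount * ((1 + growth) ** years - 1)
    return immediate, forgone

amount = 5_500
immediate, forgone = early_withdrawal_cost(amount)
grossed_up = amount / (1 - 0.35)  # withdraw enough to net $5,500 after a 35% hit
print(f"tax + penalty now:      ${immediate:,.0f}")   # ~$1,925
print(f"5 years of lost growth: ${forgone:,.0f}")     # ~$1,860
print(f"withdraw to net $5,500: ${grossed_up:,.0f}")  # ~$8,462 -- the 'tax on tax' spiral
```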
{
"docid": "2eb5e5bdd4912cf03a38d7a6987476bd",
"text": "\"Your real question, \"\"why is this not discussed more?\"\" is intriguing. I think the media are doing a better job bringing these things into the topics they like to ponder, just not enough, yet. You actually produced the answer to How are long-term capital gains taxed if the gain pushes income into a new tax bracket? so you understand how it works. I am a fan of bracket topping. e.g. A young couple should try to top off their 15% bracket by staying with Roth but then using pretax IRA/401(k) to not creep into 25% bracket. For this discussion, 2013 numbers, a blank return (i.e. no schedule A, no other income) shows a couple with a gross $92,500 being at the 15%/25% line. It happens that $20K is exactly the sum of their standard deduction, and 2 exemptions. The last clean Distribution of Income Data is from 2006, but since wages haven't exploded and inflation has been low, it's fair to say that from the $92,000 representing the top 20% of earners, it won't have many more than top 25% today. So, yes, this is a great opportunity for most people. Any married couple with under that $92,500 figure can use this strategy to exploit your observation, and step up their basis each year. To littleadv objection - I imagine an older couple grossing $75K, by selling stock with $10K in LT gains just getting rid of the potential 15% bill at retirement. No trading cost if a mutual fund, just $20 or so if stocks. The more important point, not yet mentioned - even in a low cost 401(k), a lifetime of savings results in all gains being turned in ordinary income. And the case is strong for 'deposit to the match but no no more' as this strategy would let 2/3 of us pay zero on those gains. (To try to address the rest of your questions a bit - the strategy applies to a small sliver of people. 25% have income too high, the bottom 50% or so, have virtually no savings. Much of the 25% that remain have savings in tax sheltered accounts. With the 2013 401(k) limit of $17,500, a 40 year old couple can save $35,000. This easily suck in most of one's long term retirement savings. We can discuss demographics all day, but I think this addresses your question.) If you add any comments, I'll probably address them via edits, avoiding a long dialog below.\"",
"title": ""
}
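The bracket-topping idea can be sketched with the 2013 married-filing-jointly figures the passage uses. The $72,500 15%-bracket top and the $20,000 of standard deduction plus two exemptions are assumptions taken from that year's tables, and stacking long-term gains on top of ordinary income is simplified here.

```python
# Sketch: long-term gain a couple could realize at the 0% rate (2013 MFJ, assumed figures).
TOP_OF_15_PCT_BRACKET = 72_500      # taxable income
DEDUCTION_PLUS_EXEMPTIONS = 20_000  # standard deduction + 2 exemptions

def zero_rate_ltcg_headroom(gross_ordinary_income):
    taxable = max(0, gross_ordinary_income - DEDUCTION_PLUS_EXEMPTIONS)
    return max(0, TOP_OF_15_PCT_BRACKET - taxable)

print(zero_rate_ltcg_headroom(75_000))  # 17500 -> that much LT gain would be taxed at 0%
print(zero_rate_ltcg_headroom(92_500))  # 0 -> already sitting on the 15%/25% line
```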
] |
fiqa
|
4fe15cc2b6efa6b3f58eaa096abc0ad4
|
How can I figure out how much to bid on a parking space?
|
[
{
"docid": "13eebc93749f883f4ed2b7a6c5550e65",
"text": "If the cash flow information is complete, the valuation can be determined with relative accuracy and precision. Assuming the monthly rent is correct, the annual revenue is $1,600 per year, $250/mo * 12 months - $1,400/year in taxes. Real estate is best valued as a perpetuity where P is the price, i is the income, and r is the rate of interest. Theoreticians would suggest that the best available rate of interest would be the risk free rate, a 30 year Treasury rate ~3.5%, but the competition can't get these rates, so it is probably unrealistic. Anways, aassuming no expenses, the value of the property is $1,600 / 0.035 at most, $45,714.29. This is the general formula, and it should definitely be adjusted for expenses and a more realistic interest rate. Now, with a better understanding of interest rates and expenses, this will predict the most likely market value; however, it should be known that whatever interest rate is applied to the formula will be the most likely rate of return received from the investment. A Graham-Buffett value investor would suggest using a valuation no less than 15% since to a value investor, there's no point in bidding unless if the profits can be above average, ~7.5%. With a 15% interest rate and no expenses, $1,600 / .15, is $10,666.67. On average, it is unlikely that a bid this low will be successful; nevertheless, if multiple bids are placed using this similar methodology, by the law of small numbers, it is likely to hit the lottery on at most one bid.",
"title": ""
},
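The perpetuity formula P = i / r from the passage above takes only a couple of lines to apply; the 3.5% and 15% rates are the passage's own illustrations, not recommendations.

```python
# Sketch: pricing the parking space as a perpetuity, P = i / r.
def perpetuity_price(net_annual_income, required_return):
    return net_annual_income / required_return

net_income = 250 * 12 - 1_400  # $1,600/yr after the $1,400 of annual costs
print(round(perpetuity_price(net_income, 0.035), 2))  # 45714.29 at a ~3.5% rate
print(round(perpetuity_price(net_income, 0.15), 2))   # 10666.67 at a 15% hurdle
```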
{
"docid": "7a4517829633220b631b2b74684ce8d1",
"text": "\"Scenario 1: Assume that you plan to keep the parking space for the rest of your life and collect the income from the rental. You say these spaces rent for $250 per month and there are fees of $1400 per year. Are there any other costs? Like would you be responsible for the cost of repaving at some point? But assuming that's covered in the $1400, the net profit is 250 x 12 - 1400 = $1600 per year. So now the question becomes, what other things could you invest your money in, and what sort of returns do those give? If, say, you have investments in the stock market that are generating a 10% annual return and you expect that rate of return to continue indefinitely, than if you pay a price that gives you a return of less than 10%, i.e. if you pay more than $16,000, then you would be better off to put the money in the stock market. That is, you should calculate the fair price \"\"backwards\"\": What return on investment is acceptable, and then what price would I have to pay to get that ROI? Oh, you should also consider what the \"\"occupancy rate\"\" on such parking spaces is. Is there enough demand that you can realistically expect to have it rented out 100% of the time? When one renter leaves, how long does it take to find another? And do you have any information on how often renters fail to pay the rent? I own a house that I rent out and I had two tenants in a row who failed to pay the rent, and the legal process to get them evicted takes months. I don't know what it takes to \"\"evict\"\" someone from a parking space. Scenario 2: You expect to collect rent on this space for some period of time, and then someday sell it. In that case, there's an additional piece of information you need: How much can you expect to get for this property when you sell it? This is almost surely highly speculative. But you could certainly look at past pricing trends. If you see that the value of a parking space in your area has been going up by, whatever, say 4% per year for the past 20 years, it's reasonable to plan on the assumption that this trend will continue. If it's been up and down and all over the place, you could be taking a real gamble. If you pay $30,000 for it today and when the time comes to sell the best you can get is $15,000, that's not so good. But if there is some reasonable consistent average rate of growth in value, you can add this to the expected rents. Like if you can expect it to grow in value by $1000 per year, then the return on your investment is the $1600 in rent plus $1000 in capital growth equals $2600. Then again do an ROI calculation based on potential returns from other investments.\"",
"title": ""
}
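The second passage prices the space backwards from a target return, optionally adding expected appreciation. A hedged sketch follows, treating appreciation as a fixed dollar amount per year (a simplification of the passage's percentage-growth example):

```python
# Sketch: highest price that still meets a target return from rent (plus assumed appreciation).
def max_price(net_annual_rent, target_return, expected_annual_appreciation=0.0):
    return (net_annual_rent + expected_annual_appreciation) / target_return

net_rent = 250 * 12 - 1_400  # $1,600/yr
print(max_price(net_rent, 0.10))         # 16000.0 -> ceiling if you demand 10% from rent alone
print(max_price(net_rent, 0.10, 1_000))  # 26000.0 -> with ~$1,000/yr of expected growth added in
```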
] |
[
{
"docid": "137304a6d70a9b27ece9809f15ac64d2",
"text": "I think your math is fine, and also consider insurance costs and the convenience factor of each scenario. Moving a car frequently to avoid parking tickets will become tedious. I'd rather spend an hour renting a car 20 times in a year rather than have to spend 15 minutes moving a car every three days. And if there's no other easy parking, that 15 minutes can take a lot longer. Plus it'll get dirty sitting there, could get vandalized. Yuck. For only 20 days/year, I don't see how owning a car is worth the hassle. I recommend using a credit card that comes with free car rental insurance.",
"title": ""
},
{
"docid": "822e1f9492535c3f6384740dce620347",
"text": "If the company that owns the lot is selling them it is doing so because it feels it will make more money doing so. You need to read carefully what it is you are getting and what the guarantees are from the owner of the property and the parking structure. I have heard from friends in Chicago that said there are people who will sell spaces they do not own as a scam. There are also companies that declare bankruptcy and go out of business after signing long term leases for their spots. They sell the lot to another company(which they have an interest in) and all the leases that they sold are now void so they can resell the spots. Because of this if I were going to invest in a parking space, I would make sure: The company making the offer is reputable and solvent Check for plans for major construction/demolition nearby that would impact your short and long term prospects for rent. Full time Rental would Recoup my investment in less than 5 years. Preferably 3 years. The risk on this is too high for me with out that kind of return.",
"title": ""
},
{
"docid": "0e8002a8483e94f44f69a314c387ea4a",
"text": "I believe @Dilip addressed your question alread, I am going to focus on your second question: What are the criteria one should use for estimating the worth of the situation? The criteria are: I hope this helps.",
"title": ""
},
{
"docid": "ca5eeab62ad25a710f6f6d4e5a082e79",
"text": "No, this is misbehavior of sales software that tries to automatically find the price point which maximizes profit. There have been much worse examples. Ignore it. The robot will eventually see that no sales occurred and try a more reasonable price.",
"title": ""
},
{
"docid": "70591461ef9fce7e7b32b7b259bf14f6",
"text": "The quant aspect '''''. This is the kind of math I was wondering if it existed, but now it sounds like it is much more complex in reality then optimizing by evaluating different cost of capital. Thank you for sharing",
"title": ""
},
{
"docid": "52e40fd08cb30cf52d054148af711b47",
"text": "\"I read a really good tract that my credit union gave me years ago written by a former car salesman about negotiation tactics with car dealers. Wish I could find it again, but I remember a few of the main points. 1) Never negotiate based on the monthly payment amount. Car salesmen love to get you into thinking about the monthly loan payment and often start out by asking what you can afford for a payment. They know that they can essentially charge you whatever they want for the car and make the payments hit your budget by tweaking the loan terms (length, down payment, etc.) 2) (New cars only) Don't negotiate on the price directly. It is extremely hard to compare prices between dealerships because it is very hard to find exactly the same combination of options. Instead negotiate the markup amount over dealer invoice. 3) Negotiate one thing at a time A favorite shell game of car dealers is to get you to negotiate the car price, trade-in price, and financing all at one time. Unless you are a rain-man mathematical genius, don't do it. Doing this makes it easy for them to make concessions on one thing and take them right back somewhere else. (Minus $500 on the new car, plus $200 through an extra half point on financing, etc). 4) Handling the Trade-In 5) 99.9999% of the time the \"\"I forgot to mention\"\" extra items are a ripoff They make huge bonuses for selling this extremely overpriced junk you don't need. 6) Scrutinize everything on the sticker price I've seen car dealers have the balls to add a line item for \"\"Marketing Costs\"\" at around $500, then claim with a straight face that unlike OTHER dealers they are just being upfront about their expenses instead of hiding them in the price of the car. Pure bunk. If you negotiate based on an offset from the invoice instead of sticker price it helps you avoid all this nonsense since the manufacturer most assuredly did not include \"\"Marketing costs\"\" on the dealer invoice. 7) Call Around before closing the deal Car dealers can be a little cranky about this, but they often have an \"\"Internet sales person\"\" assigned to handle this type of deal. Once you know what you want, but before you buy, get the model number and all the codes for the options then call 2-3 dealers and try to get a quote over the phone or e-mail on that exact car. Again, get the quote in terms of markup from dealer invoice price, not sticker price. Going through the Internet sales guy doesn't at all mean you have to buy on the Internet, I still suggest going down to the dealership with the best price and test driving the car in person. The Internet guy is just a sales guy like all the rest of them and will be happy to meet with you and talk through the deal in-person. Update: After recently going through this process again and talking to a bunch of dealers, I have a few things to add: 7a) The price posted on the Internet is often the dealer's bottom line number. Because of sites like AutoTrader and other car marketplaces that let you shop the car across dealerships, they have a lot of incentive to put their rock-bottom prices online where they know people aggressively comparison shop. 7b) Get the price of the car using the stock number from multiple sources (Autotrader, dealer web site, eBay Motors, etc.) and find the lowest price advertised. Then either print or take a screenshot of that price. Dealers sometimes change their prices (up or down) between the time you see it online and when you get to the dealership. 
I just bought a car where the price went up $1,000 overnight. The sales guy brought up the website and tried to convince me that I was confused. I just pulled up the screenshot on my iPhone and he stopped arguing. I'm not certain, but I got the feeling that there is some kind of bait-switch law that says if you can prove they posted a price they have to honor it. In at least two dealerships they got very contrite and backed away slowly from their bargaining position when I offered proof that they had posted the car at a lower price. 8) The sales guy has ultimate authority on the deal and doesn't need approval Inevitably they will leave the room to \"\"run the deal by my boss/financing guy/mom\"\" This is just a game and negotiating trick to serve two purposes: - To keep you in the dealership longer not shopping at competitors. - So they can good-cop/bad-cop you in the negotiations on price. That is, insult your offer without making you upset at the guy in front of you. - To make it harder for you to walk out of the negotiation and compromise more readily. Let me clarify that last point. They are using a psychological sales trick to make you feel like an ass for wasting the guy's time if you walk out on the deal after sitting in his office all afternoon, especially since he gave you free coffee and sodas. Also, if you have personally invested a lot of time in the deal so far, it makes you feel like you wasted your own time if you don't cross the goal line. As soon as one side of a negotiation forfeits the option to walk away from the deal, the power shifts significantly to the other side. Bottom line: Don't feel guilty about walking out if you can't get the deal you want. Remember, the sales guy is the one that dragged this thing out by playing hide-and-seek with you all day. He wasted your time, not the reverse.\"",
"title": ""
},
{
"docid": "e750f12f5683c48b851b165badc91522",
"text": "\"Do some homework to determine what is really a fair price for the house. Zillow helps. County tax records help, including last sale price and mortgage, if any (yes, it's public). Start at the low end of fair. Don't rely on the Realtor. He gets paid only if a sale occurs, and he's already coaxing you closer to a paycheck. He might be right with the numbers, though, so check for yourself. When you get within a thousand or two of acceptance, \"\"shut up\"\". I don't mean that in a rude way. A negotiating class I took taught me how effective silence can be, at the right time. The other side knows you're close and the highest you've offered. If they would be willing to find a way to come down to that, this is the time. The awkward silence is surprisingly effective.\"",
"title": ""
},
{
"docid": "e513a42cc62175045e50d61a634a5d83",
"text": "If an offered price is below what people are willing to sell for, it is simply ignored. (What happens if I offer to buy lots of cars as long as I only have to pay $2 each? Same thing.)",
"title": ""
},
{
"docid": "7e5b4f091f7a0e9f2328d42e944873bc",
"text": "I don't believe you would be able to with only Net Sales and COGS. Are you talking about trying to estimate them? Because then I could probably come up with an idea based on industry averages, etc. I think you would need to know the average days outstanding, inventory turnover and the terms they're getting from their vendors to calculate actuals. There may be other ways to solve the problem you're asking but thats my thoughts on it.",
"title": ""
},
{
"docid": "9a52969d6de27e78057142e53b34db9c",
"text": "You're realizing the perils of using a DCF analysis. At best, you can use them to get a range of possible values and use them as a heuristic, but you'll probably find it difficult to generate a realistic estimate that is significantly different than where the price is already.",
"title": ""
},
{
"docid": "1423a5b34e0ba05d007a623a2b02f8ec",
"text": "To calculate you take the Price and divide it by the Earnings, or by the Sales, or by the Free Cash Flow. Most of these calculations are done for you on a lot of finance sites if the data is available. Such sites as Yahoo Finance and Google Finance as well as my personal favorite: Morningstar",
"title": ""
},
{
"docid": "c18cae75fef4be13785d41f25b2afd15",
"text": "The usual lazy recommendation: See what similar objects, in similar condition, of similar age, have sold for recently on eBay. That establishes a fair market value by directly polling the market.",
"title": ""
},
{
"docid": "adbf875f8d2517033d641b19a42c1ad0",
"text": "\"1) Get some gold. 2) Walk around, yelling, \"\"Hey, I have some gold, who wants to buy it?\"\" 3) Once you have enough interested parties, hold an auction and see who will give you the most dollars for it. 4) Trade the gold for that many dollars. 5) You have just measured the value of your gold.\"",
"title": ""
},
{
"docid": "70d0915408fb98db5d2f5e7cb0c31731",
"text": "Assuming cell A1 contains the number of trades: will price up to A1=100 at 17 each, and the rest at 14 each. The key is the MAX and MIN. They keep an item from being counted twice. If X would end up negative, MAX(0,x) clamps it to 0. By extension, if X-100 would be negative, MAX(0, X-100) would be 0 -- ie: that number doesn't increase til X>100. When A1=99, MIN(a1,100) == 99, and MAX(0,a1-100) == 0. When A1=100, MIN(a1,100) == 100, and MAX(0,a1-100) == 0. When A1=101, MIN(a1,100) == 100, and MAX(0,a1-100) == 1. Of course, if the 100th item should be $14, then change the 100s to 99s.",
"title": ""
},
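The spreadsheet formula reconstructed above translates directly into code. Here is a small Python equivalent of the same two-tier pricing, using the passage's 17/14 prices and 100-unit breakpoint:

```python
# Python equivalent of =MIN(A1,100)*17 + MAX(0,A1-100)*14
def tiered_cost(trades, tier1_price=17, tier2_price=14, tier1_size=100):
    tier1 = min(trades, tier1_size) * tier1_price      # first 100 at 17 each
    tier2 = max(0, trades - tier1_size) * tier2_price  # anything beyond 100 at 14 each
    return tier1 + tier2

for n in (99, 100, 101):
    print(n, tiered_cost(n))  # 1683, 1700, 1714
```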
{
"docid": "cf436e92c85791cdbc4cce4ca62c946d",
"text": "\"I think there's a measure of confirmation bias here. If you talk to somebody that started a successful business and got a million out of it, he'd say \"\"it's easy, just do this and that, like I did\"\". If you consider this as isolated incident, you would ignore thousands of others that did exactly the same and still struggle to break even, or are earning much less, or just went broke and moved on long time ago. You will almost never hear about these as books titled \"\"How I tried to start a business and failed\"\" sell much worse than success stories. So I do not think there's a guaranteed easy way - otherwise we'd have much more millionaires than we do now :) However, it does not mean any of those ways is not worth trying - whatever failure rate there is, it's less than 100% failure rate of not trying anything. You have to choose what fits your abilities and personality best - frugality, risk, inventiveness? Then hope you get as lucky as those \"\"it's easy\"\" people are, I guess.\"",
"title": ""
}
] |
fiqa
|
93a12d33cf794e2d196def845b67078b
|
Are distributions from an S corp taxable as long term capital gains?
|
[
{
"docid": "57390fc75c7c0b3a47269f7ea8e90c07",
"text": "\"If you have an S-Corp with several shareholders - you probably also have a tax adviser who suggested using S-Corp to begin with. You're probably best off asking that adviser about this issue. If you decided to use S-Corp for multiple shareholders without a professional guiding you, you should probably start looking for such a professional, or you may get yourself into trouble. That said, and reminding you that: 1. Free advice on the Internet is worth exactly what you paid for it, and 2. I'm not a tax professional or tax adviser, you should talk to a EA/CPA licensed in your state, here's this: Generally S-Corps are disregarded entities for tax purposes and their income flows to their shareholders individual tax returns through K-1 forms distributed by the S-Corp yearly. The shareholders don't have to actually withdraw the profits, but if not withdrawing - they're added to their cost bases in the shares. I'm guessing your corp doesn't distribute the net income, but keeps it on the corporate account, only distributing enough to cover the shareholders' taxes on their respective income portion. In this case - the amount not distributed is added to their basis, the amount distributed has already been taxed through K-1. If the corporation distributes more than the shareholder's portion of net income, then there can be several different choices, depending on the circumstances: The extra distribution will be treated as salary to the shareholder and a deduction to the corporation (i.e.: increasing the net income for the rest of the shareholders). The extra distribution will be treated as return of investment, reducing that shareholder's basis in the shares, but not affecting the other shareholders. If the basis is 0 then it is treated as income to the shareholder and taxed at ordinary rates. The extra distribution will be treated as \"\"buy-back\"\" - reducing that shareholder's ownership stake in the company and reallocating the \"\"bought-back\"\" portion among the rest of the shareholders. In this case it is treated as a sale of stock, and the gain is calculated as with any other stock sale, including short-term vs. long-term taxation (there's also Sec. 1244 that can come in handy here). The extra distribution will be treated as dividend. This is very rare for S-Corp, but can happen if it was a C-Corp before. In that case it will be taxed as dividends. Note that options #2, #3 and #4 subject the shareholder to the NIIT, while option #1 subjects the shareholder to FICA/Self Employment tax (and subjects the company to payroll taxes). There might be other options. Your licensed tax adviser will go with you through all the facts and circumstances and will suggest the best way to proceed.\"",
"title": ""
}
] |
[
{
"docid": "04cfc11786b1d6c8709679a6c244060f",
"text": "Assuming that you have capital gains, you can expect to have to pay taxes on them. It might be short term, or long term capital gains. If you specify exactly which shares to sell, it is possible to sell mostly losers, thus reducing or eliminating capital gains. There are separate rules for 401K and other retirement programs regarding down payments for a house. This leads to many other issues such as the hit your retirement will take.",
"title": ""
},
{
"docid": "49b16b6e3edc3ebf47f9e78c863c098a",
"text": "Does the corporation need the money for its ongoing business? If so, don't transfer it. If not, feel free. This decision has nothing to do with whether the corporation made money in any particular year.",
"title": ""
},
{
"docid": "3ecb3c403e3a3186ddfa2c51db2b0c14",
"text": "Yes. The S-Corp can deduct up to the amount it actually incurred in expenses. If your actual expenses to build the carport were $1000, then the $1000 would be deductible, and your business should be able to show $1000 in receipts or inventory changes. Note you cannot deduct beyond your actual expenses even if you would normally charge more. For example, suppose you invoiced the non-profit $2000 for the carport, and once the bill was paid you turned around and donated the $2000 back to the non-profit. In that case you would be deducting $1000 for your cost + $2000 donation for a total of $3000. But, you also would have $2000 in income so in the end you would end up with a $1000 loss which is exactly what your expenses were to begin with. It would probably be a good idea to be able to explain why you did this for free. If somehow you personally benefit from it then it could possibly be considered income to you, similar to if you bought a TV for your home with company funds. It would probably be cleaner from an accounting perspective if you followed through as described above- invoice the non-profit and then donate the payment back to them. Though not necessary, it could lesson any doubt about your motives.",
"title": ""
},
{
"docid": "5ee5f967f040a013fe5a5188ca5f7d40",
"text": "Capital gain distribution is not capital gain on sale of stock. If you have stock sales (Schedule D) you should be filing 1040, not 1040A. Capital gain distributions are distributions from mutual funds/ETFs that are attributed to capital gains of the funds (you may not have actually received the distribution, but you still may have gain attributed to you). It is reported on 1099-DIV, and if it is 0 - then you don't have any. If you sold a stock, your broker should have given you 1099-B (which is not the same as 1099-DIV, but may be consolidated by your broker into one large PDF and not provided separately). On 1099-B the sales proceeds are recorded, and if you purchased the stock after 2011 - the cost basis is also recorded. The difference between the proceeds and the cost basis is your gain (or loss, if it is negative). Fees are added to cost basis.",
"title": ""
},
{
"docid": "3fa31b1975e0d7a3e9f65372d31635a5",
"text": "Capital losses do mirror capital gains within their holding periods. An asset or investment this is certainly held for a year into the day or less, and sold at a loss, will create a short-term capital loss. A sale of any asset held for over a year to your day, and sold at a loss, will create a loss that is long-term. When capital gains and losses are reported from the tax return, the taxpayer must first categorize all gains and losses between long and short term, and then aggregate the sum total amounts for every single regarding the four categories. Then the gains that are long-term losses are netted against each other, therefore the same is done for short-term gains and losses. Then your net gain that is long-term loss is netted against the net short-term gain or loss. This final net number is then reported on Form 1040. Example Frank has the following gains and losses from his stock trading for the year: Short-term gains - $6,000 Long-term gains - $4,000 Short-term losses - $2,000 Long-term losses - $5,000 Net short-term gain/loss - $4,000 ST gain ($6,000 ST gain - $2,000 ST loss) Net long-term gain/loss - $1,000 LT loss ($4,000 LT gain - $5,000 LT loss) Final net gain/loss - $3,000 short-term gain ($4,000 ST gain - $1,000 LT loss) Again, Frank can only deduct $3,000 of final net short- or long-term losses against other types of income for that year and must carry forward any remaining balance.",
"title": ""
},
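Frank's netting steps can be checked mechanically. Below is a short sketch using the passage's numbers; note that the $3,000 annual limit only comes into play when the final figure is a loss.

```python
# Sketch: netting short- and long-term capital gains/losses (Frank's figures).
st_gain, st_loss = 6_000, 2_000
lt_gain, lt_loss = 4_000, 5_000

net_st = st_gain - st_loss  # +4,000 net short-term gain
net_lt = lt_gain - lt_loss  # -1,000 net long-term loss
final = net_st + net_lt     # +3,000, short-term in character here

print(net_st, net_lt, final)
# If `final` were negative, only $3,000/yr could offset other income; the rest carries forward.
```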
{
"docid": "a0d77534de7a82cb11f9ea7f796d372f",
"text": "However, you might have to pay taxes on capital gains if these stocks were acquired during your prior residency.",
"title": ""
},
{
"docid": "ebfdc556e16641b35c2d76abcb6f55c6",
"text": "\"If you elect to have the company treated as an S corp, the profits/losses of the company will pass through to the shareholders (i.e. you) on a Schedule K-1 form every year. These amounts on the Schedule K-1 are taxable whether or not the company actually distributed the money to you. Typically, the company will distribute profits to the shareholders because they will have to pay taxes on this amount. https://turbotax.intuit.com/tax-tools/tax-tips/Small-Business-Taxes/What-is-a-Schedule-K-1-Tax-Form-/INF19204.html So the money held in the company's bank accounts won't appear on your taxes per se, but the profits/losses as reported on the company's tax return will pass through to you on the Schedule K-1. Typically these amounts are taxed as income. Your tax accountant can advise you on how much money you can/should take through regular payroll and how much can be distributed as a shareholder, as well as help you prepare the corporate tax returns and schedule(s) K-1 every year. There are tax advantages to taking money out of the company through distributions instead of payroll, but the amounts can be scrutinized and subject to a criterion of \"\"reasonable compensation\"\", hence my recommendation for a tax accountant.\"",
"title": ""
},
{
"docid": "bb2a49abc7f38198e5ab51a513439f22",
"text": "You could use HBB and other similar funds that exchange distributions for capital gains. There's HXT and HXS which is Canada and US equity markets. The swap fee + mer is a little more than some funds except for HXT which is very cheap. There's a risk for long term holders that this may eventually get banned and you're forced to sell with a gain at the wrong time, but this won't matter much if you're planning on selling in a few years. You have to pay the capital gains tax eventually. Note, the tax on distributions is really a long term drag on performance and won't make a big difference in the short term.",
"title": ""
},
{
"docid": "7656f373c9e4cfffccc92e080131a065",
"text": "If the charity accepts stock, you can avoid the tax on the long term cap gain when you donate it. e.g. I donate $10,000 in value of Apple. I write off $10,000 on my taxes, and benefit with a $2500 refund. If I sold it, I'd have nearly a $1500 tax bill (bought long enough ago, the basis is sub $100). Any trading along the way, and it's on you. Gains long or short are taxed on you. It's only the final donation that matters here. Edit - to address Anthony's comment on other answer - I sell my Apple, with a near $10,000 gain (it's really just $9900) and I am taxed $1500. Now I have $8500 cash I donate and get $2125 back in a tax refund. By donating the stock I am ahead nearly $375, and the charity, $1500.",
"title": ""
},
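The donate-the-shares-versus-sell-first comparison is easy to reproduce. A hedged sketch using the passage's assumed 25% income-tax and 15% long-term-gain rates and a near-zero basis:

```python
# Sketch: donate appreciated stock directly vs. sell, pay LTCG tax, then donate the cash.
value, basis = 10_000, 100
income_rate, ltcg_rate = 0.25, 0.15

# Route 1: give the shares themselves.
charity_gets_1 = value
donor_refund_1 = value * income_rate           # $2,500 deduction value

# Route 2: sell first, pay capital-gains tax, donate what's left.
cap_gain_tax = (value - basis) * ltcg_rate     # ~$1,485
charity_gets_2 = value - cap_gain_tax
donor_refund_2 = charity_gets_2 * income_rate  # ~$2,129

print(f"donor ahead by   ${donor_refund_1 - donor_refund_2:,.0f}")   # ~$371, the passage's 'nearly $375'
print(f"charity ahead by ${charity_gets_1 - charity_gets_2:,.0f}")   # ~$1,485, the passage's ~$1,500
```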
{
"docid": "a8ea55b8b623ba0c931af98338036e0b",
"text": "\"In the United States, with an S-Corp, you pay yourself a salary from company earnings. That portion is taxed at an individual rate. The rest of the company earnings are taxed as a corporation, which often have great tax benefits. If you are making over $80K/year, the difference can be substantial. A con is that there is more paperwork and you have to create a \"\"board\"\" of advisors.\"",
"title": ""
},
{
"docid": "28736c47950db9528b1fd9ac554aa8c6",
"text": "If you have held the stocks longer than a year, then there is no tax apart from the STT that is already deducted when you sell the shares. If you have held the stock for less than a year, you would have to pay short term capital gains at the rate of 15% on the profit. Edit: If you buy different shares from the total amount or profits, it makes no difference to taxes.",
"title": ""
},
{
"docid": "69cae92454c28e2e4d04cda5494408f7",
"text": "That's really not something that can be answered based on the information provided. There are a lot of factors involved: type of income, your wife's tax bracket, the split between Federal and State (if you're in a high bracket in a high income-tax rate State - it may even be more than 50%), etc etc. The fact that your wife didn't withdraw the money is irrelevant. S-Corp is a pass-through entity, i.e.: owners are taxed on the profits based on their personal marginal tax rates, and it doesn't matter what they did with the money. In this case, your wife re-invested it into the corp (used it to pay off corp debts), which adds back to her basis. You really should talk to a tax adviser (EA/CPA licensed in your State) to learn how S-Corps work and how to use them properly. Your wife, actually, as she's the owner.",
"title": ""
},
{
"docid": "c9d9db846af1c499fad68e351b194adb",
"text": "I realize this is a dated question, but for anyone interested in this subject please be aware of the availability of IRC § 1235 and capital gain treatment for the sale of patents. When the holder of a patent transfers all substantial rights to an unrelated person, it can qualify for long-term capital gain treatment. That can be a meaningful tax savings relative to ordinary income treatment. There are a number of specific provisions and requirements to access § 1235. The holder must be the creator or someone unrelated (and not the creator's employer) who purchased the patent from the creator. The holder must transfer all substantial rights to the patent (not a licensing), or sell an undivided portion of all substantial rights (partial sale, again not a license). The benefit of § 1235 is that long-term treatment will apply even for patents with holding periods under 1 year. Other rules and permutations of course also apply. Those who fail § 1235 may still qualify their assets as capital under § 1221 or § 1231. A patent held by its creator will often qualify as a capital asset. It may not make any sense to sell your business as a whole, particularly if all a purchaser wants is a patent or group of patents. Of course, if the patent was held by its creator in a single-member LLC or other disregarded entity sold to a buyer, then the tax treatment is still treated as the sale of a long-term capital asset.",
"title": ""
},
{
"docid": "ab9d23b9c64bf48c909c67f1f807bef8",
"text": "\"A mutual fund could make two different kinds of distributions to you: Capital gains: When the fund liquidates positions that it holds, it may realize a gain if it sells the assets for a greater price than the fund purchased them for. As an example, for an index fund, assets may get liquidated if the underlying index changes in composition, thus requiring the manager to sell some stocks and purchase others. Mutual funds are required to distribute most of their income that they generate in this way back to its shareholders; many often do this near the end of the calendar year. When you receive the distribution, the gains will be categorized as either short-term (the asset was held for less than one year) or long-term (vice versa). Based upon the holding period, the gain is taxed differently. Currently in the United States, long-term capital gains are only taxed at 15%, regardless of your income tax bracket (you only pay the capital gains tax, not the income tax). Short-term capital gains are treated as ordinary income, so you will pay your (probably higher) tax rate on any cash that you are given by your mutual fund. You may also be subject to capital gains taxes when you decide to sell your holdings in the fund. Any profit that you made based on the difference between your purchase and sale price is treated as a capital gain. Based upon the period of time that you held the mutual fund shares, it is categorized as a short- or long-term gain and is taxed accordingly in the tax year that you sell the shares. Dividends: Many companies pay dividends to their stockholders as a way of returning a portion of their profits to their collective owners. When you invest in a mutual fund that owns dividend-paying stocks, the fund is the \"\"owner\"\" that receives the dividend payments. As with capital gains, mutual funds will redistribute these dividends to you periodically, often quarterly or annually. The main difference with dividends is that they are always taxed as ordinary income, no matter how long you (or the fund) have held the asset. I'm not aware of Texas state tax laws, so I can't comment on your other question.\"",
"title": ""
},
{
"docid": "0a0abff4a29bb7980683feabb76108a1",
"text": "\"While @JB's \"\"yes\"\" is correct, a few more points to consider: There is no tax penalty for withdrawing any time from a taxable investment, that is, one not using specific tax protections like 401k/IRA or ESA or HSA. But you do pay tax on any income or gain distributions you receive from a taxable investment in a fund (except interest on tax-exempt aka \"\"municipal\"\" bonds), and any net capital gains you realize when selling (or technically redeeming for non-ETF funds). Just like you do for dividends and interest and gains on non-fund taxable investments. Many funds have a sales charge or \"\"load\"\" which means you will very likely lose money if you sell quickly typically within at least several months and usually a year or more, and even some no-load funds, to discourage rapid trading that makes their management more difficult (and costly), have a \"\"contingent sales charge\"\" if you sell after less than a stated period like 3 months or 6 months. For funds that largely or entirely invest in equities or longer term bonds, the share value/price is practically certain to fluctuate up and down, and if you sell during a \"\"down\"\" period you will lose money; if \"\"liquid\"\" means you want to take out money anytime without waiting for the market to move, you might want funds focussing on short-term bonds, especially government bonds, and \"\"money market\"\" funds which hold only very short bonds (usually duration under 90 days), which have much more stable prices (but lower returns over the longer term).\"",
"title": ""
}
] |
fiqa
|
f3475e18af17d4d4fa3669c9762e43e6
|
Should I invest my money in an ISA or Government bonds? (Or any other suggestion)
|
[
{
"docid": "463fa73a0da279bb43beb2b3d9493116",
"text": "\"So you are off to a really good start. Congratulations on being debt free and having a nice income. Being an IT contractor can be financially rewarding, but also have some risks to it much like investing. With your disposable income I would not shy away from investing in further training through sites like PluralSite or CodeSchool to improve weak skills. They are not terribly expensive for a person in your situation. If you were loaded down with debt and payments, the story would be different. Having an emergency fund will help you be a good IT contractor as it adds stability to your life. I would keep £10K or so in a boring savings account. Think of it not as an investment, but as insurance against life's woes. Having such a fund allows you to go after a high paying job you might fail at, or invest with impunity. I would encourage you to take an intermediary step: Moving out on your own. I would encourage renting before buying even if it is just a room in someone else's home. I would try to be out of the house in less than 3 months. Being on your own helps you mature in ways that can only be accomplished by being on your own. It will also reduce the culture shock of buying your own home or entering into an adult relationship. I would put a minimum of £300/month in growth stock mutual funds. Keeping this around 15% of your income is a good metric. If available you may want to put this in tax favored retirement accounts. (Sorry but I am woefully ignorant of UK retirement savings). This becomes your retire at 60 fund. (Starting now, you can retire well before 68.) For now stick to an index fund, and once it gets to 25K, you may want to look to diversify. For the rest of your disposable income I'd invest in something safe and secure. The amount of your disposable income will change, presumably, as you will have additional expenses for rent and food. This will become your buy a house fund. This is something that should be safe and secure. Something like a bond fund, money market, dividend producing stocks, or preferred stocks. I am currently doing something like this and have 50% in a savings account, 25% in a \"\"Blue chip index fund\"\", and 25% in a preferred stock fund. This way you have some decent stability of principle while also having some ability to grow. Once you have that built up to about 12K and you feel comfortable you can start shopping for a house. You may want to be at the high end of your area, so you should try and save at least 10%; or, you may want to be really weird and save the whole thing and buy your house for cash. If you are still single you may want to rent a room or two so your home can generate income. Here in the US there can be other ways to generate income from your property. One example is a home that has a separate area (and room) to park a boat. A boat owner will pay some decent money to have a place to park their boat and there is very little impact to the owner. Be creative and perhaps find a way where a potential property could also produce income. Good luck, check back in with progress and further questions! Edit: After some reading, ISA seem like a really good deal.\"",
"title": ""
},
{
"docid": "8d86c1fb4374ae63b11e53ce22bac604",
"text": "There are a number of UK banks that offer what passes for reasonable interest on an amount of cash held in their current accounts. I would suggest that you look into these. In the UK the first £1000 of bank or building society interest is paid tax-free for basic rate taxpayers (£500 for higher rate tax-payers) so if your interest income is below these levels then there is no point in investing in a cash ISA as the interest rate is often lower. At the moment Santander-123 bank account pays 1.5% on up to £20000 and Nationwide do 5% on up to £2500. A good source if information on the latest deals is Martin Lewis' Moneysaving Expert Website",
"title": ""
},
{
"docid": "3309463d722dd256925d15d55a2de6a7",
"text": "I recommend investing in precious metals like gold, considering the economic cycle we're in now. Government bonds are subject to possible default and government money historically tends to crumble in value, whereas gold and the metals tend to rise in value with the commodies. Stocks tend to do well, but right now most of them are a bit overvalued and they're very closely tied to overvalued currencies and unstable governments with lots of debt. I would stick to gold right now, if you're planning on investing for more than a month or so.",
"title": ""
}
] |
[
{
"docid": "32e71fb321d39a1fceb84c0481f32a5c",
"text": "Put £50 away as often as possible, and once it's built up to £500, invest in a stockmarket ETF. Repeat until you retire.",
"title": ""
},
{
"docid": "e07de22b589af6035a2298aff58498b6",
"text": "US government bonds and bonds issued by companies with a safe track record and consistently high ratings, for the past years, by credit agencies. But the time line of your investment, which is quite short, maybe a factor of choosing the right bonds. If you are not going to touch the money then CD maybe an option or an interest bearing savings account.",
"title": ""
},
{
"docid": "2234ad152a94b06edf2086f30592fe80",
"text": "I am not interested in watching stock exchange rates all day long. I just want to place it somewhere and let it grow Your intuition is spot on! To buy & hold is the sensible thing to do. There is no need to constantly monitor the stock market. To invest successfully you only need some basic pointers. People make it look like it's more complicated than it actually is for individual investors. You might find useful some wisdom pearls I wish I had learned even earlier. Stocks & Bonds are the best passive investment available. Stocks offer the best return, while bonds are reduce risk. The stock/bond allocation depends of your risk tolerance. Since you're as young as it gets, I would forget about bonds until later and go with a full stock portfolio. Banks are glorified money mausoleums; the interest you can get from them is rarely noticeable. Index investing is the best alternative. How so? Because 'you can't beat the market'. Nobody can; but people like to try and fail. So instead of trying, some fund managers simply track a market index (always successfully) while others try to beat it (consistently failing). Actively managed mutual funds have higher costs for the extra work involved. Avoid them like the plague. Look for a diversified index fund with low TER (Total Expense Ratio). These are the most important factors. Diversification will increase safety, while low costs guarantee that you get the most out of your money. Vanguard has truly good index funds, as well as Blackrock (iShares). Since you can't simply buy equity by yourself, you need a broker to buy and sell. Luckily, there are many good online brokers in Europe. What we're looking for in a broker is safety (run background checks, ask other wise individual investors that have taken time out of their schedules to read the small print) and that charges us with low fees. You probably can do this through the bank, but... well, it defeats its own purpose. US citizens have their 401(k) accounts. Very neat stuff. Check your country's law to see if you can make use of something similar to reduce the tax cost of investing. Your government will want a slice of those juicy dividends. An alternative is to buy an index fund on which dividends are not distributed, but are automatically reinvested instead. Some links for further reference: Investment 101, and why index investment rocks: However the author is based in the US, so you might find the next link useful. Investment for Europeans: Very useful to check specific information regarding European investing. Portfolio Ideas: You'll realise you don't actually need many equities, since the diversification is built-in the index funds. I hope this helps! There's not much more, but it's all condensed in a handful of blogs.",
"title": ""
},
{
"docid": "c92b620796eec1aea3d8d925390cb015",
"text": "\"Your dec ision is actually rather more complex than it first appears. The problem is that the limits on what you can pay into the HTB ISA might make it less attractive - it will all depend. Currently, you can put £15k/year into a normal ISA (Either Cash, or Stocks and Share or a combination). The HTB ISA only allows £200/month = £2,400/year. Since you can only pay into one Cash ISA in any one year you are going to lose out on the other £12,600 that you could save and grow tax free. Having said that, the 25% contribution by the govt. is extremely attractive and probably outweighs any tax saving. It is not so clear whether you can contribute to a HTB ISA (cash) and put the rest of your allowance into a Stocks and Shares ISA - if you can, you should seriously consider doing so. Yes this exposes you to a riskier investment (shares can go down as well as up etc.) but the benefits can be significant (and the gains are tax free). As said above, the rules are that money you have paid into an ISA in earlier years is separate - you can't pay any more into the \"\"old\"\" one whilst paying into a \"\"new\"\" one but you don't have to do anything with the \"\"old\"\" ISA. But you might WANT to do something since institutions are amazingly mean (underhand) in their treatment of customers. You may well find that the interest rate you get on your \"\"old\"\" ISA becomes less competitive over time. You should (Must) check every year what rate you are getting and whether you can get a better rate in a different ISA - if there is a better rate ISA and if it allows transfers IN, you should arrange to make the trasnfer - you ABSOLUTELY MUST TRANSFER between ISAs - never even think of taking the money out and then trying to pay it in to another ISA, it must be transferred directly between ISAs. So overall, yes, stop paying into the \"\"old\"\" ISA, open a new HTB ISA next year and if you can pay in the maximum do so. But if you can afford to save more, you might be able to open a Stocks and Shares ISA as well and pay into that too (max £15k into the pair in one year). And then do not \"\"forget\"\" about the \"\"old\"\" ISA(s) you will probably need to move all the money you have in the \"\"old\"\" one(s) regualrly into new ISAs to obtain a sensible rate. You might do well to read up on all this a lot more - I strongly recommend the site http://www.moneysavingexpert.com/ which gives a lot of helpful advice about everything to do with money (no I don't have any association with them).\"",
"title": ""
},
{
"docid": "3bc01e681551f89397ada2de94172c65",
"text": "\"Is he affiliated with the company charging this fee? If so, 1% is great. For him. You are correct, this is way too high. Whatever tax benefit this account provides is negated over a sufficiently long period of time. you need a different plan, and perhaps, a different friend. I see the ISA is similar to the US Roth account. Post tax money deposited, but growth and withdrawals tax free. (Someone correct, if I mis-read this). Consider - You deposit £10,000. 7.2% growth over 10 years and you'd have £20,000. Not quite, since 1% is taken each year, you have £18,250. Here's what's crazy. When you realize you lost £1750 to fees, it's really 17.5% of the £10,000 your account would have grown absent those fees. In the US, our long term capital gain rate is 15%, so the fees after 10 years more than wipe out the benefit. We are not supposed to recommend investments here, but it's safe to say there are ETFs (baskets of stocks reflecting an index, but trading like an individual stock) that have fees less than .1%. The UK tag is appreciated, but your concern regarding fees is universal. Sorry for the long lecture, but \"\"1%, bad.\"\"\"",
"title": ""
},
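The fee-drag arithmetic above is easy to verify. Below is a minimal sketch, approximating the 1% annual fee as a straight 1% haircut off each year's return, which is close enough to reproduce the passage's £18,250 figure.

```python
# Sketch: £10,000 at 7.2% for 10 years, with and without a 1% annual fee.
def grow(principal, gross_return, annual_fee, years):
    value = principal
    for _ in range(years):
        value *= 1 + gross_return - annual_fee
    return value

no_fee = grow(10_000, 0.072, 0.00, 10)    # ~£20,042
with_fee = grow(10_000, 0.072, 0.01, 10)  # ~£18,249
print(round(no_fee), round(with_fee), round(no_fee - with_fee))  # drag of roughly £1,800
```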
{
"docid": "00d2deabad09eee0b1cac143d60f0dd5",
"text": "\"Assuming you can understand and emotionally handle the volatility, a good indeed fund would be wise. These are low fee funds which perform as well as our better than most managed investments and since they don't cost as much, they typically out perform most other investment vehicles. The S&P 500 is traded as SPDR. Another option is the Dow Jones Industrial Average, which trades as DIA. Average returns over the long term are 10-12%. If you expect to need the money in the short term (5-8 years), you have a non trivial chance of needing to pull the money out when the market is down, so if that's unacceptable to you, choose something with a guarantee. If you're terrified of losing money in the short term, don't think you can handle waiting for the market to go up, especially when every news caster is crying hysterically that the End of Economic Life on Earth is here, then consider a CD at your bank. CDs return much lower rates (around 2% right now) but do not go down in value ever. However, you need to lock your money into them for months to years at a time. Some people might tell you to buy a bond fund. That's horrible advice. Bond funds get lower returns AND have no guarantee that you won't lose money on them, unlike aactual bonds. As you're new to investing, I encourage you to read \"\"The Intelligent Investor\"\" by Benjamin Gramm.\"",
"title": ""
},
{
"docid": "3ef09c35eb813a018624419292a029ba",
"text": "If you want the flexibility to make additional payments you should favour a flexible ISA. Shop around on comparators and you should be able to find a few that responds to your minimum interest rates Fixed-term ISAs are comparable to a bond: money goes in on day one and then no more deposit are allowed. The rate is fixed for the period. Even though they have a fixed you would still be able to withdraw cash but this would cost you an interest penalty. Not being able to withdraw money is asking the banks to take responsibility on your behalf... They won't do that",
"title": ""
},
{
"docid": "e07c617f1278b936ca41ad293ffd4b98",
"text": "Based on your question, I am going to assume your criterion are: Based on these, I believe you'd be interested in a different savings account, a CD, or money market account. Savings account can get you up to 1.3% and money market accounts can get up to 1.5%. CDs can get you a little more, but they're a little trickier. For example, a 5 year CD could get up to 2%. However, now you're money is locked away for the next few years, so this is not a good option if this money is your emergency fund or you want to use it soon. Also, if interest rates increase then your money market and savings accounts' interest rates will increase but your CD's interest rate misses out. Conversely, if interest rates drop, you're still locked into a higher rate.",
"title": ""
},
{
"docid": "2a802bbb4b1d55bf32ecbac3f41fdc5f",
"text": "As you are in UK, you should think in terms of Tax Free (interest and accumulated capital gains) ISA type investments for the long term AND/OR open a SIPP (Self Invested Pension Plan) account where you get back the tax you have paid on the money you deposit for your old age. Pensions are the best bet for money you do not need at present while ISAs are suitable for short term 5 years plus or longer.",
"title": ""
},
{
"docid": "30feb5a4ba881b67248e3400ceb0ad70",
"text": "\"What a lovely position to find yourself in! There's a lot of doors open to you now that may not have opened naturally for another decade. If I were in your shoes (benefiting from the hindsight of being 35 now) at 21 I'd look to do the following two things before doing anything else: 1- Put 6 months worth of living expenses in to a savings account - a rainy day fund. 2- If you have a pension, I'd be contributing enough of my salary to get the company match. Then I'd top up that figure to 15% of gross salary into Stocks & Shares ISAs - with a view to them also being retirement funds. Now for what to do with the rest... Some thoughts first... House: - If you don't want to live in it just yet, I'd think twice about buying. You wouldn't want a house to limit your career mobility. Or prove to not fit your lifestyle within 2 years, costing you money to move on. Travel: - Spending it all on travel would be excessive. Impromptu travel tends to be more interesting on a lower budget. That is, meeting people backpacking and riding trains and buses. Putting a resonable amount in an account to act as a natural budget for this might be wise. Wealth Managers: \"\"approx. 12% gain over 6 years so far\"\" equates to about 1.9% annual return. Not even beat inflation over that period - so guessing they had it in ultra-safe \"\"cash\"\" (a guaranteed way to lose money over the long term). Give them the money to 'look after' again? I'd sooner do it myself with a selection of low-cost vehicles and equal or beat their return with far lower costs. DECISIONS: A) If you decided not to use the money for big purchases for at least 4-5 years, then you could look to invest it in equities. As you mentioned, a broad basket of high-yielding shares would allow you to get an income and give opportunity for capital growth. -- The yield income could be used for your travel costs. -- Over a few years, you could fill your ISA allowance and realise any capital gains to stay under the annual exemption. Over 4 years or so, it'd all be tax-free. B) If you do want to get a property sooner, then the best bet would to seek out the best interest rates. Current accounts, fixed rate accounts, etc are offering the best interest rates at the moment. Usual places like MoneySavingExpert and SavingsChampion would help you identify them. -- There's nothing wrong with sitting on this money for a couple of years whilst you fid your way with it. It mightn't earn much but you'd likely keep pace with inflation. And you definitely wouldn't lose it or risk it unnecessarily. C) If you wanted to diversify your investment, you could look to buy-to-let (as the other post suggested). This would require a 25% deposit and likely would cost 10% of rental income to have it managed for you. There's room for the property to rise in value and the rent should cover a mortgage. But it may come with the headache of poor tenants or periods of emptiness - so it's not the buy-and-forget that many people assume. With some effort though, it may provide the best route to making the most of the money. D) Some mixture of all of the above at different stages... Your money, your choices. And a valid choice would be to sit on the cash until you learn more about your options and feel the direction your heart is pointing you. Hope that helps. I'm happy to elaborate if you wish. Chris.\"",
"title": ""
},
{
"docid": "3f665baca9e2e42ab39bf00e9fb75c8b",
"text": "Bond aren't necessarily any safer than the stock market. Ultimately, there is no such thing as a low risk mutual fund. You want something that will allow you get at your money relatively quickly. In other words, CDs (since you you can pick a definite time period for your money to be tied up), money market account or just a plain old savings account. Basically, you want to match inflation and have easy access to the money. Any other returns on top of that are gravy, but don't fret too much about it. See also: Where can I park my rainy-day / emergency fund? Savings accounts don’t generate much interest. Where should I park my rainy-day / emergency fund?",
"title": ""
},
{
"docid": "5c207d1cf5855655fedc7a150e083502",
"text": "A bond fund like VBMFX or similar I think are a good choice. Bonds are far less volatile and less risky than stocks. With your 1-2 year time frame, I say definitely stay away from stocks.",
"title": ""
},
{
"docid": "e4227383817fb1d7e34405d771bee381",
"text": "Thats a very open question, Depends on the risk you are willing to take with the money, or the length of time you are willing sit on it, or if you have a specific goal like buying a house. Some banks offer high(ish) rate savings accounts http://www.bankaccountsavings.co.uk/calculator with a switching bonus that could be a good start. (combining the nationwide flexdirect and regular saver) if you want something more long term - safe option is bonds, medium risk option is Index funds (kind of covers all 3 risks really), risky option is Stocks & shares. For these probably a S&S ISA for a tax efficient option. Also LISA or HtB ISA are worth considering if you want to buy a house in the future.",
"title": ""
},
{
"docid": "c16ecfe43336732053c526fee708fbb1",
"text": "\"You have a large number of possible choices to make, and a lot of it does depend upon what interests you when you are older. The first thing to note is the difference between ISAs and pension-contribution schemes tax wise, which is of course the taxation point. When you contribute to your pensions scheme, it is done before taxation, which is why when you draw from your pension scheme you have to pay income tax. Conversely, your ISA is something you contribute to after you have already paid income tax - so besides the 10% tax on dividends if you hold any assets which may them, it is tax free when you draw on it regardless of how much you have accrued over the years. Now, when it comes to the question \"\"what is the best way to save\"\", the answer is almost certainly going to be filling your pension to the point where you're going to retire just on the edge of the limit, and then putting the rest into ISAs. This way you will not be paying the higher rates of tax associated with breaking the lifetime limit, but also get maximum contributions into your various schemes. There is an exception to this of course, which is the return on investment. If you do not have access to a SIPP (Self Invested Personal Pension), you may be able to receive a far higher return on investment when using a Stocks & Shares ISA, in which case the fact that you have to pay taxes prior to funding it may not make a significant difference. The other issue you have, as others have mentioned is rent. While now you may be enjoying London, it is in my opinion quite likely that will change when you get older, London has a very high-cost of living, even compared to the home counties, and many of its benefits are not relevant to someone who is retired. When you retire, it is quite possible that you will see it fit to take a large sum out of your various savings, and purchase a house, which means that regardless of how much you are drawing out you will be able to have somewhere to live. Renting is fine when you are working, but when you have a certain amount of (admittedly growing) funds that have to last you indefinitely, who knows if it will last you.\"",
"title": ""
},
{
"docid": "61e08f0d238c2474a7eb648aac96c339",
"text": "\"TL;DR - go with something like Barry Ritholtz's All Century Portfolio: 20 percent total U.S stock market 5 percent U.S. REITs 5 percent U.S. small cap value 15 percent Pacific equities 15 percent European equities 10 percent U.S. TIPs 10 percent U.S. high yield corp bonds 20 percent U.S. total bond UK property market are absurdly high and will be crashing a lot very soon The price to rent ratio is certainly very high in the UK. According to this article, it takes 48 years of rent to pay for the same apartment in London. That sounds like a terrible deal to me. I have no idea about where prices will go in the future, but I wouldn't voluntarily buy in that market. I'm hesitant to invest in stocks for the fear of losing everything A stock index fund is a collection of stocks. For example the S&P 500 index fund is a collection of the largest 500 US public companies (Apple, Google, Shell, Ford, etc.). If you buy the S&P 500 index, the 500 largest US companies would have to go bankrupt for you to \"\"lose everything\"\" - there would have to be a zombie apocalypse. He's trying to get me to invest in Gold and Silver (but mostly silver), but I neither know anything about gold or silver, nor know anyone who takes this approach. This is what Jeremy Siegel said about gold in late 2013: \"\"I’m not enthusiastic about gold because I think gold is priced for either hyperinflation or the end of the world.\"\" Barry Ritholtz also speaks much wisdom about gold. In short, don't buy it and stop listening to your friend. Is buying a property now with the intention of selling it in a couple of years for profit (and repeat until I have substantial amount to invest in something big) a bad idea? If the home price does not appreciate, will this approach save you or lose you money? In other words, would it be profitable to substitute your rent payment for a mortgage payment? If not, you will be speculating, not investing. Here's an articles that discusses the difference between speculating and investing. I don't recommend speculating.\"",
"title": ""
}
] |
fiqa
|
a4b2487d04e44c160930f143f8592891
|
How useful is the PEG Ratio for large cap stocks?
|
[
{
"docid": "83ff91d25d43c5069739a553a5a028ad",
"text": "It is not so useful because you are applying it to large capital. Think about Theory of Investment Value. It says that you must find undervalued stocks with whatever ratios and metrics. Now think about the reality of a company. For example, if you are waiting KO (The Coca-Cola Company) to be undervalued for buying it, it might be a bad idea because KO is already an international well known company and KO sells its product almost everywhere...so there are not too many opportunities for growth. Even if KO ratios and metrics says it's a good time to buy because it's undervalued, people might not invest on it because KO doesn't have the same potential to grow as 10 years ago. The best chance to grow is demographics. You are better off either buying ETFs monthly for many years (10 minimum) OR find small-cap and mid-cap companies that have the potential to grow plus their ratios indicate they might be undervalued. If you want your investment to work remember this: stock price growth is nothing more than You might ask yourself. What is your investment profile? Agressive? Speculative? Income? Dividends? Capital preservation? If you want something not too risky: ETFs. And not waste too much time. If you want to get more returns, you have to take more risks: find small-cap and mid-companies that are worth. I hope I helped you!",
"title": ""
}
] |
[
{
"docid": "4331dfcd3dcdaffd04df712bb8c58514",
"text": "Well Company is a small assets company for example it has 450,000,000 shares outstanding and is currently traded at .002. Almost never has a bid price. Compare it to PI a relative company with 350 million marker cap brokers will buy your shares. This is why blue chip stock is so much better than small company because it is much more safer. You can in theory make millions with start up / small companies. You would you rather make stable medium risk investment than extremely high risk with high reward investment I only invest in medium risk mutual funds and with recent rallies I made 182,973 already in half year period.",
"title": ""
},
{
"docid": "be1b32a07b443f30339d679ae66b7750",
"text": "There are the EDHEC-risk indices based on similar hedge fund types but even then an IR would give you performance relative to the competition, which is not useful for most hf's as investors don't say I want to buy a global macro fund, vs a stat arb fund, investors say I want to pay a guy to give me more money! Most investors don't care how the OTHER funds did or where the market went, they want that NAV to go always up , which is why a modified sharpe is probably better.",
"title": ""
},
{
"docid": "e7b44d6fb01103d972318fdd1aa04c52",
"text": "\"You'll generally get a number close to market cap of a mature company if you divide profits (or more accurately its free cash flow to equity) by the cost of equity which is usually something like ~7%. The value is meant to represent the amount of cash you'd need to generate investment income off it matching the company you're looking at. Imagine it as asking \"\"How much money do I need to put into the bank so that my interest income would match the profits of the company I'm looking at\"\". Except replace the bank with the market and other forms of investments that generate higher returns of course and that value would be lower.\"",
"title": ""
},
{
"docid": "81ec14fc701de02e845c914aa6aa8ca4",
"text": "No, this is quite wrong. Almost all hedge funds (and all hedge fund investors) use Sharpe as a *primary* measure of performance. The fact that they don't consider themselves risk-free has no bearing on the issue (that's a bizarre line of reasoning - you're saying Sharpe is only relevant for assets that consider themselves risk-free?). And as AlphaPortfolio rightly points out, most funds have no explicit benchmark and they are usually paid for performance over zero. I've never seen a hedge fund use a benchmark relative information ratio - for starters, what benchmark would you measure a CB arb fund against? Or market neutral quant? Or global macro? Same for CTAs...",
"title": ""
},
{
"docid": "a8f4d0b823ec45f1f14ee70df1183374",
"text": "It sounds to me like you may not be defining fundamental investing very well, which is why it may seem like it doesn't matter. Fundamental investing means valuing a stock based on your estimate of its future profitability (and thus cash flows and dividends). One way to do this is to look at the multiples you have described. But multiples are inherently backward-looking so for firms with good growth prospects, they can be very poor estimates of future profitability. When you see a firm with ratios way out of whack with other firms, you can conclude that the market thinks that firm has a lot of future growth possibilities. That's all. It could be that the market is overestimating that growth, but you would need more information in order to conclude that. We call Warren Buffet a fundamental investor because he tends to think the market has made a mistake and overvalued many firms with crazy ratios. That may be in many cases, but it doesn't necessarily mean those investors are not using fundamental analysis to come up with their valuations. Fundamental investing is still very much relevant and is probably the primary determinant of stock prices. It's just that fundamental investing encompasses estimating things like future growth and innovation, which is a lot more than just looking at the ratios you have described.",
"title": ""
},
{
"docid": "0cc8c705118c1a33d31241664c06f9e3",
"text": "I would think there would be heavy overlap between companies that do well and market cap. You're not going to get to largest market cap without being well managed, or at least in the top percentile. After all, in a normal distribution, the badly managed firms go out of business or never get large.",
"title": ""
},
{
"docid": "4d14c004981443285c0e14072fc0a322",
"text": "The biggest benefit to having a larger portfolio is relatively reduced transaction costs. If you buy a $830 share of Google at a broker with a $10 commission, the commission is 1.2% of your buy price. If you then sell it for $860, that's another 1.1% gone to commission. Another way to look at it is, of your $30 ($860 - $830) gain you've given up $20 to transaction costs, or 66.67% of the proceeds of your trade went to transaction costs. Now assume you traded 10 shares of Google. Your buy was $8,300 and you sold for $8,600. Your gain is $300 and you spent the same $20 to transact the buy and sell. Now you've only given up 6% of your proceeds ($20 divided by your $300 gain). You could also scale this up to 100 shares or even 1,000 shares. Generally, dividend reinvestment are done with no transaction cost. So you periodically get to bolster your position without losing more to transaction costs. For retail investors transaction costs can be meaningful. When you're wielding a $5,000,000 pot of money you can make your trades on a larger scale giving up relatively less to transaction costs.",
"title": ""
},
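As a quick illustration of the commission arithmetic in the passage above (the $10 commission and the $830/$860 prices are the passage's example figures, not a real broker's fee schedule):

```python
# Commission as a fraction of the round-trip gain, per the passage's example numbers.
def round_trip(shares, buy_price, sell_price, commission=10):
    gain = shares * (sell_price - buy_price)
    costs = 2 * commission          # one commission to buy, one to sell
    return gain, costs, costs / gain

for n in (1, 10):
    gain, costs, fraction = round_trip(n, 830, 860)
    print(f"{n:>2} share(s): ${gain} gain, ${costs} commissions, "
          f"{fraction:.1%} of the gain lost")
#  1 share(s): $30 gain, $20 commissions, 66.7% of the gain lost
# 10 share(s): $300 gain, $20 commissions, 6.7% of the gain lost
```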
{
"docid": "bf6022bc93687e36f52a30b212aea8d4",
"text": "I think it's safe to say that Apple cannot grow in value in the next 20 years as fast as it did in the prior 20. It rose 100 fold to a current 730B valuation. 73 trillion dollars is nearly half the value of all wealth in the world. Unfortunately, for every Apple, there are dozens of small companies that don't survive. Long term it appears the smaller cap stocks should beat large ones over the very long term if only for the fact that large companies can't maintain that level of growth indefinitely. A non-tech example - Coke has a 174B market cap with 46B in annual sales. A small beverage company can have $10M in sales, and grow those sales 20-25%/year for 2 decades before hitting even $1B in sales. When you have zero percent of the pie, it's possible to grow your business at a fast pace those first years.",
"title": ""
},
{
"docid": "ef598db00822ea62dc1ec99fb6904b32",
"text": "Thanks. Just to clarify I am looking for a more value-neutral answer in terms of things like Sharpe ratios. I think it's an oversimplification to say that on average you lose money because of put options - even if they expire uselessly 90% of the time, they still have some expected payoff that kicks in 10% of the time, and if the price is less than the expected payoff you will earn money in the long term by investing in put options (I am sure you know this as a PhD student I just wanted to get it out there.)I guess more formally my question would be are there studies on whether options prices correspond well to the diversification benefits they offer from an MPT point of view.",
"title": ""
},
{
"docid": "ce4221079abce3405a8b34b151d4a4d5",
"text": "The Sharpe ratio is, perhaps, the method you are looking for. That said, not really sure beta is a meaningful metric, as there are plenty of safe bets to be made on volatile stocks (and, conversely, unsafe bets to be made on non-volatile ones).",
"title": ""
},
{
"docid": "c26abce4a4b994467b349f12d67579d0",
"text": "\"Below is just a little information on this topic from my small unique book \"\"The small stock trader\"\": The most significant non-company-specific factor affecting stock price is the market sentiment, while the most significant company-specific factor is the earning power of the company. Perhaps it would be safe to say that technical analysis is more related to psychology/emotions, while fundamental analysis is more related to reason – that is why it is said that fundamental analysis tells you what to trade and technical analysis tells you when to trade. Thus, many stock traders use technical analysis as a timing tool for their entry and exit points. Technical analysis is more suitable for short-term trading and works best with large caps, for stock prices of large caps are more correlated with the general market, while small caps are more affected by company-specific news and speculation…: Perhaps small stock traders should not waste a lot of time on fundamental analysis; avoid overanalyzing the financial position, market position, and management of the focus companies. It is difficult to make wise trading decisions based only on fundamental analysis (company-specific news accounts for only about 25 percent of stock price fluctuations). There are only a few important figures and ratios to look at, such as: perhaps also: Furthermore, single ratios and figures do not tell much, so it is wise to use a few ratios and figures in combination. You should look at their trends and also compare them with the company’s main competitors and the industry average. Preferably, you want to see trend improvements in these above-mentioned figures and ratios, or at least some stability when the times are tough. Despite all the exotic names found in technical analysis, simply put, it is the study of supply and demand for the stock, in order to predict and follow the trend. Many stock traders claim stock price just represents the current supply and demand for that stock and moves to the greater side of the forces of supply and demand. If you focus on a few simple small caps, perhaps you should just use the basic principles of technical analysis, such as: I have no doubt that there are different ways to make money in the stock market. Some may succeed purely on the basis of technical analysis, some purely due to fundamental analysis, and others from a combination of these two like most of the great stock traders have done (Jesse Livermore, Bernard Baruch, Gerald Loeb, Nicolas Darvas, William O’Neil, and Steven Cohen). It is just a matter of finding out what best fits your personality. I hope the above little information from my small unique book was a little helpful! Mika (author of \"\"The small stock trader\"\")\"",
"title": ""
},
{
"docid": "af7535b950b00daa65f3e587fcb3e827",
"text": "Most of the “recommendations” are just total market allocations. Within domestic stocks, the performance rotates. Sometimes large cap outperform, sometimes small cap outperform. You can see the chart here (examine year by year): https://www.google.com/finance?chdnp=1&chdd=1&chds=1&chdv=1&chvs=maximized&chdeh=0&chfdeh=0&chdet=1428692400000&chddm=99646&chls=IntervalBasedLine&cmpto=NYSEARCA:VO;NYSEARCA:VB&cmptdms=0;0&q=NYSEARCA:VV&ntsp=0&ei=_sIqVbHYB4HDrgGA-oGoDA Conventional wisdom is to buy the entire market. If large cap currently make up 80% of the market, you would allocate 80% of domestic stocks to large cap. Same case with International Stocks (Developed). If Japan and UK make up the largest market internationally, then so be it. Similar case with domestic bonds, it is usually total bond market allocation in the beginning. Then there is the question of when you want to withdraw the money. If you are withdrawing in a couple years, you do not want to expose too much to currency risks, thus you would allocate less to international markets. If you are investing for retirement, you will get the total world market. Then there is the question of risk tolerance. Bonds are somewhat negatively correlated with Stocks. When stock dips by 5% in a month, bonds might go up by 2%. Under normal circumstances they both go upward. Bond/Stock allocation ratio is by age I’m sure you knew that already. Then there is the case of Modern portfolio theory. There will be slight adjustments to the ETF weights if it is found that adjusting them would give a smaller portfolio variance, while sacrificing small gains. You can try it yourself using Excel solver. There is a strategy called Sector Rotation. Google it and you will find examples of overweighting the winners periodically. It is difficult to time the rotation, but Healthcare has somehow consistently outperformed. Nonetheless, those “recommendations” you mentioned are likely to be market allocations again. The “Robo-advisors” list out every asset allocation in detail to make you feel overwhelmed and resort to using their service. In extreme cases, they can even break down the holdings to 2/3/4 digit Standard Industrial Classification codes, or break down the bond duration etc. Some “Robo-advisors” would suggest you as many ETF as possible to increase trade commissions (if it isn’t commission free). For example, suggesting you to buy VB, VO, VV instead a VTI.",
"title": ""
},
{
"docid": "32a43dc6ba76140884e09956a9c7bee8",
"text": "There is some convergence, but the chart seems to indicate that 5 star funds end up on the upper end of average (3 stars) whereas 1 star funds end up on the lower end of average (1.9 stars) over the long term. I would have thought that the stars would be completely useless as forward looking indicators, but they seem to have been slightly useful?",
"title": ""
},
{
"docid": "99a35d8a21693b605106176989414fed",
"text": "This is Rob Bennett, the fellow who developed the Valuation-Informed Indexing strategy and the fellow who is discussed in the comment above. The facts stated in that comment are accurate -- I went to a zero stock allocation in the Summer of 1996 because of my belief in Robert Shiller's research showing that valuations affect long-term returns. The conclusion stated, that I have said that I do not myself follow the strategy, is of course silly. If I believe in it, why wouldn't I follow it? It's true that this is a long-term strategy. That's by design. I see that as a benefit, not a bad thing. It's certainly true that VII presumes that the Efficient Market Theory is invalid. If I thought that the market were efficient, I would endorse Buy-and-Hold. All of the conventional investing advice of recent decades follows logically from a belief in the Efficient Market Theory. The only problem I have with that advice is that Shiller's research discredits the Efficient Market Theory. There is no one stock allocation that everyone following a VII strategy should adopt any more than there is any one stock allocation that everyone following a Buy-and-Hold strategy should adopt. My personal circumstances have called for a zero stock allocation. But I generally recommend that the typical middle-class investor go with a 20 percent stock allocation even at times when stock prices are insanely high. You have to make adjustments for your personal financial circumstances. It is certainly fair to say that it is strange that stock prices have remained insanely high for so long. What people are missing is that we have never before had claims that Buy-and-Hold strategies are supported by academic research. Those claims caused the biggest bull market in history and it will take some time for the widespread belief in such claims to diminish. We are in the process of seeing that happen today. The good news is that, once there is a consensus that Buy-and-Hold can never work, we will likely have the greatest period of economic growth in U.S. history. The power of academic research has been used to support Buy-and-Hold for decades now because of the widespread belief that the market is efficient. Turn that around and investors will possess a stronger belief in the need to practice long-term market timing than they have ever possessed before. In that sort of environment, both bull markets and bear markets become logical impossibilities. Emotional extremes in one direction beget emotional extremes in the other direction. The stock market has been more emotional in the past 16 years than it has ever been in any earlier time (this is evidenced by the wild P/E10 numbers that have applied for that entire time-period). Now that we are seeing the losses that follow from investing in highly emotional ways, we may see rational strategies becoming exceptionally popular for an exceptionally long period of time. I certainly hope so! The comment above that this will not work for individual stocks is correct. This works only for those investing in indexes. The academic research shows that there has never yet in 140 years of data been a time when Valuation-Informed Indexing has not provided far higher long-term returns at greatly diminished risk. But VII is not a strategy designed for stock pickers. There is no reason to believe that it would work for stock pickers. 
Thanks much for giving this new investing strategy some thought and consideration and for inviting comments that help investors to understand both points of view about it. Rob",
"title": ""
},
{
"docid": "c28eb69add00010b45511f54bf8ebe0e",
"text": "\"Maria, there are a few questions I think you must consider when considering this problem. Do fundamental or technical strategies provide meaningful information? Are the signals they produce actionable? In my experience, and many quantitative traders will probably say similar things, technical analysis is unlikely to provide anything meaningful. Of course you may find phenomena when looking back on data and a particular indicator, but this is often after the fact. One cannot action-ably trade these observations. On the other hand, it does seem that fundamentals can play a crucial role in the overall (typically long run) dynamics of stock movement. Here are two examples, Technical: suppose we follow stock X and buy every time the price crosses above the 30 day moving average. There is one obvious issue with this strategy - why does this signal have significance? If the method is designed arbitrarily then the answer is that it does not have significance. Moreover, much of the research supports that stocks move close to a geometric brownian motion with jumps. This supports the implication that the system is meaningless - if the probability of up or down is always close to 50/50 then why would an average based on the price be predictive? Fundamental: Suppose we buy stocks with the best P/E ratios (defined by some cutoff). This makes sense from a logical perspective and may have some long run merit. However, there is always a chance that an internal blowup or some macro event creates a large loss. A blended approach: for sake of balance perhaps we consider fundamentals as a good long-term indication of growth (what quants might call drift). We then restrict ourselves to equities in a particular index - say the S&P500. We compare the growth of these stocks vs. their P/E ratios and possibly do some regression. A natural strategy would be to sell those which have exceeded the expected return given the P/E ratio and buy those which have underperformed. Since all equities we are considering are in the same index, they are most likely somewhat correlated (especially when traded in baskets). If we sell 10 equities that are deemed \"\"too high\"\" and buy 10 which are \"\"too low\"\" we will be taking a neutral position and betting on convergence of the spread to the market average growth. We have this constructed a hedged position using a fundamental metric (and some helpful statistics). This method can be categorized as a type of index arbitrage and is done (roughly) in a similar fashion. If you dig through some data (yahoo finance is great) over the past 5 years on just the S&P500 I'm sure you'll find plenty of signals (and perhaps profitable if you calibrate with specific numbers). Sorry for the long and rambling style but I wanted to hit a few key points and show a clever methods of using fundamentals.\"",
"title": ""
}
] |
fiqa
|
122868b781060baba5d8023a1e0e3b7b
|
Whole life insurance - capped earnings
|
[
{
"docid": "772da6197e39317935aba6165983c49b",
"text": "\"The question that I walk away with is \"\"What is the cost of the downside protection?\"\" Disclaimer - I don't sell anything. I am not a fan of insurance as an investment, with rare exceptions. (I'll stop there, all else is a tangent) There's an appeal to looking at the distribution of stock returns. It looks a bit like a bell curve, with a median at 10% or so, and a standard deviation of 15 or so. This implies that there are some number of years on average that the market will be down, and others, about 2/3, up. Now, you wish to purchase a way of avoiding that negative return, and need to ask yourself what it's worth to do so. The insurance company tells you (a) 2% off the top, i.e. no dividends and (b) we will clip the high end, over 9.5%. I then am compelled to look at the numbers. Knowing that your product can't be bought and sold every year, it's appropriate to look at 10-yr rolling returns. The annual returns I see, and the return you'd have in any period. I start with 1900-2012. I see an average 9.8% with STD of 5.3%. Remember, the 10 year rolling will do a good job pushing the STD down. The return the Insurance would give you is an average 5.4%, with STD of .01. You've bought your way out of all risk, but at what cost? From 1900-2012, my dollar grows to $30080, yours, to $406. For much of the time, treasuries were higher than your return. Much higher. It's interesting to see how often the market is over 10% for the year, clip too many of those and you really lose out. From 1900-2012, I count 31 negative years (ouch) but 64 years over 9.5%. The 31 averaged -13.5%, the 64, 25.3%. The illusion of \"\"market gains\"\" is how this product is sold. Long term, they lag safe treasuries.\"",
"title": ""
},
{
"docid": "5091949ed7952e25b0a8a025af0aa5ee",
"text": "Pretty simple: When is Cash Value Life Insurance a good or bad idea? It is never a good idea. How can life insurance possibly work as investment? It can't. Just as car, home, or health insurance is not an investment. Note for counter example providers: intent to commit insurance fraud is not an investment. Why not live your life so in 15 or 20 years you are debt free, have a nice emergency fund built and have a few 100 thousand in investments? Then you can self-insure. If you die with a paid off home, no debt, 20K in a money market, and 550,000 in retirement accounts would your spouse and children be taken care of?",
"title": ""
},
{
"docid": "1592c7926967d762c261dca26cb01931",
"text": "I need to see the policy you are referring to give a more accurate answer. However what could be happening, it’s again the way these instruments are structured; For example if the insurance premium is say 11,000 of which 1000 is toward expenses and Term insurance amount. The Balance 10,000 is invested in growth. The promise is that this will grow max of 9.5% and never below zero. IE say if we are talking only about a year, you can get anything between 10,000 to 10,950. The S & P long-term average return is in the range of 12 -15% [i don't remember correctly] So the company by capping it at 9.5% is on average basis making a profit of 2.5% to 5.5%. IE in a good year say the S & P return is around 18%, the company has pocketed close to 9% more money; On a bad year say the Index gave a -ve return of say 5% ... The Insurance company would take this loss out of the good years. If say when your policy at the S & P for that year has given poor returns, you would automatically get less returns. Typically one enters into Life Insurance on a long term horizon and hence the long term averages should be used as a better reference, and going by that, one would make more money just by investing this in an Index directly. As you whether you want to invest in such a scheme, should be your judgment, in my opinion I prefer to stay away from things that are not transparent.",
"title": ""
}
] |
[
{
"docid": "47d614e8a3ff344a63666b7022297125",
"text": "Something to consider is how broad is Yahoo! Finance taking in their data for making some comparisons. For example, did you look at the other companies in the same industry? On the Industry page, the Top Life Insurance Companies by Market Cap are mostly British companies which could make things a bit different than you'd think. Another point is how this is just for one quarter which may be an anomaly as the data could get a bit awkward if some companies are just coming back to being profitable and could have what appears to be great growth but this is because their earnings grow from $.01/share to $1/share which is a growth of $10,000 percent as this is an increase of 100 times but really this may just be from various accounting charges the company had that hit its reserves and caused its earnings to dip temporarily.",
"title": ""
},
{
"docid": "895013a6e3fcc28dde28e80f40f5ca36",
"text": "Let's look at some numbers. These are just example rates that I found online. You can substitute your own quotes and compare yourself. I'm not going to name the company, but these advertised rates are all from one nationally-known company for a 25-year old female. If you went with the whole life option, you would be paying $937.56 per year. The policy builds a cash value; the amount this grows can vary greatly, and you'll need to look at the fine print to see how it will grow, but let's pretend that after 30 years, the cash value of the policy is $50,000 (a reasonable guess, in my opinion). Let's look at what this means: You can cash out your policy, but at that point, you'll stop paying payments, and your heirs won't get your $100,000 death benefit. You can borrow against it, but you'll have to pay it back. You could use it to pay your premium, in which case you'll stop paying payments. However, keep in mind that if you do pass away, you lose the cash value you've built up; your beneficiaries only get the $100,000 death benefit. Now let's look at the term insurance option. We'll go with the 30-year term. It will only cost you $242.76 per year, and the death benefit is more than double the whole life coverage. If you were to take the difference between the two premiums ($58 per month) and invest it in a mutual fund growing at 8% per year, you would have $86,441 in your account after 30 years. This money is yours (or your heirs), whether or not you pass away before your term is up. After the 30 years is up, your insurance is over, but you are now almost all the way up to the death benefit of the whole life policy anyway. In my opinion, term life insurance is better than whole life for just about everybody. I don't want to be morbid here, but the earlier someone dies, the more benefit they get with term insurance vs whole life. If someone does have reason to believe that his life expectancy is shorter than average, term insurance makes even more sense, as he is more likely to get the death benefit for much less money in premiums than he would in whole life.",
"title": ""
},
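A small sketch of the "buy term and invest the difference" arithmetic in the passage above; the premiums, the 30-year horizon, and the 8% return are the passage's example assumptions, not a guarantee of any real fund's performance:

```python
# Future value of investing the monthly premium difference, per the passage's figures.
monthly_diff = 58        # passage's rounded difference: (937.56 - 242.76) / 12 ~= 57.90
r = 0.08 / 12            # assumed 8% annual return, compounded monthly
n = 30 * 12              # 30-year term, in months

future_value = monthly_diff * ((1 + r) ** n - 1) / r
print(f"${future_value:,.0f}")   # ~$86,441, matching the figure quoted in the passage
```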
{
"docid": "b1234f8cb5c80b9ebe1fd888e157d3d7",
"text": "\"The short answer is \"\"No\"\". There a 2 ways to get cash from a life insurance policy. If the policy has cash value greater than the surrender value, then the difference can be borrowed, but will generally increase premiums in the future. The other method, available on many term policies allows the owner to receive part of the death benefit if the insured has a physician willing to certify that he/she will probably pass away within a 12 month period. Several carriers also offer cash benefits for critical care.\"",
"title": ""
},
{
"docid": "f6fd10563d0d4551a3a1675269f6b210",
"text": "Term life insurance is just that - life insurance that pays out if you die, just like car insurance pays out if you have an accident. Like car insurance, it's easy to compare amongst term life insurance policies - you can even compare quotes online. Whole life insurance is life insurance plus an investment component. The money that you pay goes to pay for your life insurance and it also is invested by the insurance company. Insurance companies love whole life because it is not a commodity; they can come up with a large variety of variants, and that fact plus the fact that it combines insurance and investment means that is very difficult to compare policies. Not to mention that fact that none of the companies - as far as I can tell - publish their whole life insurance rates, so it is very difficult to shop around.",
"title": ""
},
{
"docid": "ad993468933429d86a8b8e460cafd7cb",
"text": "\"Short answer: google finance's market cap calculation is nonstandard (a.k.a. wrong). The standard way of computing the market capitalization of a firm is to take the price of its common stock and multiply by the number of outstanding common stock shares. If you do this using the numbers from google's site you get around $13.4B. This can be verified by going to other sites like yahoo finance and bloomberg, which have the correct market capitalization already computed. The Whole Foods acquisition appears to be very cut-and-dry. Investors will be compensated with $42 cash per share. Why are google finance's numbers wrong for market cap? Sometimes people will add other things to \"\"market capitalization,\"\" like the value of the firm's debt and other debt-like securities. My guess is that google has done something like this. Whole Foods has just over $3B in total liabilities, which is around the size of the discrepancy you have found.\"",
"title": ""
},
{
"docid": "6c045370f584af27f067804285d8c044",
"text": "It's hard to say for smaller cap firms because they are all so different. Take a look at SandP or other rating agencies at about the BB range. Then decide how much of a buffer you'd like. If all goes to hell, do you want to be able to cover all you salaries, debt etc for three months? Six? What kind of seasonal volatility does your industry face? Do you plan on any significant investment or FTE uplift any time soon? This will all play into how much retained earnings you will chose to have.",
"title": ""
},
{
"docid": "cdf68c6c26bc84f1100866f1718ccd61",
"text": "\"I would refer you to this question and answers. Here in the US we have two basic types of life insurance: term and whole life. Universal life is a marketing response to whole life being such a bad deal, and is whole life just not quite as bad. I am not familiar with the products in India, but given the acronym (ULIP), it is probably universal life, and as you describe is variable universal life. Likely Description \"\"Under the hood\"\", or in effect, you are purchasing a term life policy and investing excess premiums in a collection of stock mutual funds. This is a bad deal for a few reasons: A much better option is to buy \"\"level term insurance\"\" and invest on your own. You won't necessarily lose money, but you can make better financial decisions. It is good to invest, it is good to have life. A better decision would not to combine the two into a single product.\"",
"title": ""
},
{
"docid": "90766eb89e7b14ba266fbcc81ccffeb6",
"text": "\"There are a lot of false claims around the internet about this concept - the fact of the matter is you are giving yourself the ability to have money in a tax favored environment with consistent, steady growth as well as the ability to access it whenever you want. Compare this to a 401k plan for example....money is completely at risk, you can't touch it, and you're penalized if you don't follow the government's rules. As far as commissions to the agent - an agent will cut his commission in half by selling you an \"\"infinite banking\"\" style policy as opposed to a traditional whole life policy. @duffbeer703 clearly doesn't understand life insurance in the slightest when he says that the first three years of your premium payements will go to the agents pocket. And as usual offers no alternative except \"\"pick some high yielding dividen stocks and MLPs\"\" - Someone needs to wake up from the Dave Ramsey coma and realize that there is no such thing as a 12% mutual fund....do your research on the stock market (crestmont research). don't just listen to dave ramseys disciples who still thinking getting 12-15% year in and year out is possible. It's frustrating to listen to people who are so uneducated on the subject - remember the internet has turned everyone into \"\"experts\"\" if you want real advice talk to a legitimate expert that understands life insurance and how it actually works.\"",
"title": ""
},
{
"docid": "d138445f43465adaec68e65a19c491f3",
"text": "There's a cool calculator at Money Chimp that lets you plug in a start and end year and see what the compound annual growth rate of the S&P 500. The default date range of 1871 through 2010 gives a rate of 8.92% for example. Something you need to take into account when comparing returns to a whole life policy is what happens to the cash value in your policy when you die. Many of these policies are written so that your beneficiaries only get the face value of the policy, and the insurance company keeps the cash value.",
"title": ""
},
{
"docid": "b266013fea10adc50a12245328216415",
"text": "There are a lot of moving parts, individual premiums and annual increases have little to do with employer premiums and annual increases and vice versa. Most people think of XYZ insurer as a single company with a single pool of insured folks. This common knowledge isn't accurate. Insurers pool their business segments separately. This means that Individual, small business, mid-size business, and large business are all different operating segments from the viewpoint of the insurer. It's possible to argue that because so many people are covered by employer plans that individual plans have a hard time accumulating the required critical mass of subscribers to keep increases reasonable. Age banded rating: Individual coverage and small group coverage is age rated, meaning every year you get older. In addition to your age increase, the premium table for your plan also receives an increase. Employers with 100+ eligible employees are composite rated (in general), meaning every employee costs the same amount. The 18 year old employee costs $500 per month, the 64 year old costs $500 per month. Generally, the contributions an employee pays to participate in the plan are also common among all ages. This means that on a micro level increases can be more incremental because the employer is abstracting the gross premium. Composite rating generally benefits older folks while age rating generally benefits younger folks. Employer Morale Incentive: Generally the cost to an employee covered by an employer plan isn't directly correlated to the gross premium, and increases to the contribution(s) aren't necessarily correlated to the increases the employer receives. Employers are incentivised by employee morale. It's pretty common for employers to shoulder a disproportionate amount of an increase to keep everyone happy. Employers may offset the increase by shopping some ancillary benefit like group life insurance, or bundling the dental program with the medical carrier. Remember, employees don't pay premiums they pay contributions and some employers are more generous than others. Employers are also better at budgeting for planned increases than individuals are. Regulators: In many of the states that are making the news because of their healthcare premium increases there simply isn't a regulator scrutinizing increases. California requires all individual and small group premiums to be filed with the state and increases must be justified with some sort of math and approved by a regulator. Without this kind of oversight insurers have only the risk of subscriber flight to adjust plan provisions and press harder during provider contract negotiations. Expiring Transitional Reinsurance Fee and Funds: One of the fees introduced by healthcare reform paid by insurers and self-insured employers established a pot of money that individual plans could tap to cope with the new costs of the previously uninsurable folks. This fee and corresponding pot of money is set to expire and can no longer be taken in to account by underwriters. Increased Treatment Availability: It's important that as new facilities go online, insurer costs will increase. If a little town gets a new cancer clinic, that pool will see more cancer treatment costs simply as a result of increased treatment availability. Consider that medical care inflation is running at about 4.9% annually as of the most recent CPI table, the rest of the increases will result from the performance of that specific risk pool. 
If that risk pool had a lot of cancer diagnoses, you're looking at a big increase. If that risk pool was under priced the prior year you will see an above average increase, etc.",
"title": ""
},
{
"docid": "2294a2beb03b21030b8461a02940df7a",
"text": "This depends. Quite a few stock exchanges / country report total capitalisation in terms of free float. I.E total shares that can be traded, ignoring the promoters shares. The market cap reported by company takes all shares.",
"title": ""
},
{
"docid": "f5aa74ee3c7e92d2297cb3dbe1e37866",
"text": "\"Real target of commisions is providing \"\"risk shelter\"\". It is kind of \"\"insurance\"\", which is actually last step for external risks to delete all your money. In part it cuts some of risks which you provide, brokers track history of all your actions for you (nobody else does). When brokerage firm fails, all your money is zero. It depends from case to case if whole account goes zero, but I wouldn't count on that.\"",
"title": ""
},
{
"docid": "6241d19ae4f4a34d2000f940bf82e549",
"text": "The issue is the time frame. With a one year investment horizon the only way for a fund manager to be confident that they are not going to lose their shirt is to invest your money in ultra conservative low volatility investments. Otherwise a year like 2008 in the US stock market would break them. Note if you are willing to expand your payback time period to multiple years then you are essentially looking at an annuity and it's market loss rider. Of course those contacts are always structured such that the insurance company is extremely confident that they will be able to make more in the market than they are promising to pay back (multiple decade time horizons).",
"title": ""
},
{
"docid": "89854b2a03b341b24944bb2af2a27fbf",
"text": "\"If I read your figures correctly, then the cost difference is negligible. ($1.84 difference) The main determining factor, I'd think, would be the coverage. Do you get more, or less, coverage now than you would if you went together on the same plan? You'd both be covered, but what is the cap? Plans, and employer contributions, change all the time. How is business in both of your companies? Are you likely to get cut? Are you able to get back into a plan at each of your employers if you quit the plan for a while? These rules may be unpleasant surprises if, say, your wife cancels her plan, goes on yours, and you lose your job. She may not be able to get back into her insurance immediately, or possibly not at all. A spouse losing a job isn't a \"\"qualifying life event\"\" the way marriage, birth of a child, divorce, etc., is.\"",
"title": ""
},
{
"docid": "b08e15959e01191e6cf76c05c4b50af0",
"text": "The problem you'll have is that premium income is a vague term so you have to figure out what they mean by a) premium and b) income. Gross or net of reinsurance and acquisition costs? Written or earned basis? Combined ratios are also a pig, very commonly they are loss ratio + expense ratio --- but of course loss ratio is losses incurred / premium earned while expense ratio is expenses paid / premium written so it's a self-inconsistent measure. And then there's investment income, and then there's reserve releases...",
"title": ""
}
] |
fiqa
|
9bdcb7791fecbeed1fcc4283cf9efc34
|
How are startup shares worth more than the total investment funding?
|
[
{
"docid": "ee8a6f97c97ef7941969a41f0081da28",
"text": "\"What littleadv said is correct. His worth is based on the presumed worth of the total company value (which is much greater than all investment dollars combined because of valuation growth)*. In other words, his \"\"worth\"\" is based on the potential return for his share of ownership at a rate based on the latest valuation of the company. He is worth $17.5 billion today, but the total funding for Facebook is only $2.4 billion? I don't understand this. In private companies, valuations typically come from either speculation/analysts or from investments. Investment valuations are the better gauge, because actual money traded hands for a percentage ownership. However, just as with public companies on the stock market, there are (at least) two caveats. Just because someone else sold their shares at a given rate, doesn't mean that rate... In both cases, it's possible the value may be much lower or much higher. Some high-value purchases surprise for how high they are, such as Microsoft's acquisition of Skype for $8.5 billion. The formula for one owner's \"\"worth\"\" based on a given acquisition is: Valuation = Acquisition amount / Acquisition percent Worth = Owner's percent × Valuation According to Wikipedia Zuckerberg owns 24%. In January, Goldman Sach's invested $500 million at a $50 billion valuation. That is the latest investment and puts Zuckerberg's worth at $12 billion. However, some speculation places a Facebook IPO at a much higher valuation, such as as $100 billion. I don't know what your reference is for $17 billion, but it puts their valuation at $70.8 billion, between the January Goldman valuation and current IPO speculation. * For instance, Eduardo Saverin originally invested $10,000, which, at his estimated 5% ownership, would now be worth $3-5 billion.\"",
"title": ""
},
{
"docid": "91c1f60c9ac92a5e9629c21ba800d911",
"text": "The net worth is based on an estimate of how much he would get if he relinquished his stake. The total funding is based on how much he has relinquished thus far. Suppose I have a candy jar with 100 candies. I'm not sure how much these candies are worth, so I start off by selling 10% of the jar for $10. Now I have 90 candies and $10, a total value of $100. Then someone comes along offering $100 for another 10% (of the original jar, or 10 candies), which I accept. Now I have 80 candies and $110. Since I value each candy at $10 now, I calculate my worth as $910. Then I do another deal selling 10% for $1000. Now I have $1110 in cash and 70 candies valued at $100 each. My total worth is now $8110 (cash + remaining candies), while the candy jar has only received $1110 in funding. Replace candies with equity in The Facebook, Inc. and you get the idea.",
"title": ""
},
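The candy-jar arithmetic above can be sketched as a short loop: "worth" is priced off the most recent deal, while "total funding" is only the cash actually handed over (the deal sizes are the passage's made-up numbers):

```python
# Each deal sells 10 candies (10% of the original jar) for a progressively higher price.
candies, cash = 100, 0

for sold, price_paid in [(10, 10), (10, 100), (10, 1000)]:
    candies -= sold
    cash += price_paid
    per_candy = price_paid / sold                  # implied value from the latest deal
    worth = cash + candies * per_candy
    print(f"{candies} candies left, ${cash:,} raised, worth ${worth:,.0f}")
# 90 candies left, $10 raised, worth $100
# 80 candies left, $110 raised, worth $910
# 70 candies left, $1,110 raised, worth $8,110  <- 'worth' far exceeds total funding
```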
{
"docid": "b9bde54954b659f05d07dfa2c0a7ec94",
"text": "He is worth $17.5 billion today Note that he is worth that dollar figure, but he doesn't have that many dollars. That's the worth of his stake in the company (number of shares he owns times the assumed value per share), i.e. assuming its total value being several hundreds of billions, as pundits assume. However, it is not a publicly traded company, so we don't really know much about its financials.",
"title": ""
}
] |
[
{
"docid": "922ae0ac97a125d6aea9d7bae67c61cf",
"text": "No. Not directly. A company issues stock in order to raise capital for building its business. Once the initial shares are sold to the public, the company doesn't receive additional funds from future transactions of those shares of stock between the public. However, the company could issue more shares at the new higher price to raise more capital.",
"title": ""
},
{
"docid": "adbdd54925b565f216b4280ab7340fb6",
"text": "Selling stock means selling a portion of ownership in your company. Any time you issue stock, you give up some control, unless you're issuing non-voting stock, and even non-voting stock owns a portion of the company. Thus, issuing (voting) shares means either the current shareholders reduce their proportion of owernship, or the company reissues stock it held back from a previous offering (in which case it no longer has that stock available to issue and thus has less ability to raise funds in the future). From Investopedia, for exmaple: Secondary offerings in which new shares are underwritten and sold dilute the ownership position of stockholders who own shares that were issued in the IPO. Of course, sometimes a secondary offering is more akin to Mark Zuckerberg selling some shares of Facebook to allow him to diversify his holdings - the original owner(s) sell a portion of their holdings off. That does not dilute the ownership stake of others, but does reduce their share of course. You also give up some rights to dividends etc., even if you issue non-voting stock; of course that is factored into the price presumably (either the actual dividend or the prospect of eventually getting a dividend). And hopefully more growth leads to more dividends, though that's only true if the company can actually make good use of the incoming funds. That last part is somewhat important. A company that has a good use for new funds should raise more funds, because it will turn those $100 to $150 or $200 for everyone, including the current owners. But a company that doesn't have a particular use for more money would be wasting those funds, and probably not earning back that full value for everyone. The impact on stock price of course is also a major factor and not one to discount; even a company issuing non-voting stock has a fiduciary responsibility to act in the interest of those non-voting shareholders, and so should not excessively dilute their value.",
"title": ""
},
{
"docid": "030434531674e30800c6f5ed5d97f02c",
"text": "\"There is a legal document called a \"\"Stock Purchase Agreement\"\" and it depends on who is the other party to the buyer in the Agreement. In almost all startups the sale goes through the company, so the company keeps the money. In your example, the company would be worth $10,000 \"\"Post-Money\"\" because the $1k got 10% of the Company.\"",
"title": ""
},
{
"docid": "4218b3b9f76e1089d835b39e4b1f631a",
"text": "There are a LOT of variables at play here, so with the info you've provided we can't give you an exact answer. Generally speaking, employee options at a startup are valued by a 409a valuation (http://en.wikipedia.org/wiki/Internal_Revenue_Code_section_409A) once a year or more often. But it's entirely possible that the company split, or took a round of funding that reduced their valuation, or any other number of things. We'd need a good bit more information (which you may or may not have) to really answer the question.",
"title": ""
},
{
"docid": "59ed460e51c03b18119d4006de23a159",
"text": "Similar premise, yes. It's an investment so you're definitely hoping it grows so you can sell it for a profit/gain. Public (stock market) vs. private (shark tank) are a little different though in terms of how much money you get and the form of income. With stocks, if you buy X number of shares at a certain price, you definitely want to sell them when they are worth more. However, you don't get, say 0.001% (or whatever percentage you own, it would be trivial) of the profits. They just pay a dividend to you based on a pre-determined amount and multiply it by the number of shares you own and that would be your income. Unless you're like Warren Buffet and Berkshire who can buy significant stakes of companies through the stock market, then they can likely put the investment on the balance sheet of his company, but that's a different discussion. It would also be expensive as hell to do that. With shark tank investors, the main benefit they get is significant ownership of a company for a cheap price, however the risk can be greater too as these companies don't have a strong foundation of sales and are just beginning. Investing in Apple vs. a small business is pretty significant difference haha. These companies are so small and in such a weak financial position which is why they're seeking money to grow, so they have almost no leverage. Mark Cuban could swoop in and offer $50k for 25% and that's almost worth it relative to what $50k in Apple shares would get him. It's all about the return. Apple and other big public companies are mature and most of the growth has already happened so there is little upside. With these startups, if they ever take off then and you own 25% of the company, it can be worth billions.",
"title": ""
},
{
"docid": "764624b0e84789c70bc3f1b715a280c3",
"text": "Shares in a company represent a portion of a company. If that company takes in money and doesn't pay it out as a dividend (e.g. Apple), the company is still more valuable because it has cold hard cash as an asset. Theoretically, it's all the same whether your share of the money is inside the company or outside the company; the only immediate difference is tax treatment. Of course, for large bank accounts that means that an investment in the company is a mix of investment in the bank account and investment in the business-value of the company, which may stymie investors who aren't particularly interested in buying larve amounts of bank accounts (known for low returns) and would prefer to receive their share of the cash to invest elsewhere (or in the business portion of the company.) Companies like Apple have in fact taken criticism for this. Your company could also use that cash to invest in itself (growing the value of its profits) or buy other companies that are worth money, essentially doing the job for you. Of course, they can do the job well or they can do it poorly... A company could also be acquired by a larger company, or taken private, in exchange for cash or the stock of another company. This is another way that the company's value could be returned to its shareholders.",
"title": ""
},
{
"docid": "0b36fbeef3d2e0382204ce3a2d75bfba",
"text": "\"Hi Amy, thank you for your article. Got to say however that I tend to disagree. I've been through the venture rabbit hole a number of times. Each one was an experience I'll never forget and wouldn't trade for anything. I learned so much more about how the business world actually works (or...doesn't) than I would have at some more established company. That said, I am also quite sour on the whole VC thing and at my most recent startup we've foregone outside investors and bootstrapped things from the get go. It was probably the best decision we made because it allowed us to be flexible in our strategy and not always beholden to the \"\"quick exit\"\" that VC money always drives. However, I realize that not all businesses can be like ours. We started off as a consulting company and moved into build products as our cash reserve grew. If we had wanted to do something big, or fast, or perhaps manufacture something, we would never have had the capital to get it going. Those types of business *need* outside funding, and generally it's only VCs who are willing to take the 1 in 20 bet that startups usually entail. For that, I'm glad that VCs are there, and think they provide a very valuable service and part of our economy. I just don't ever want to have to deal with them again...\"",
"title": ""
},
{
"docid": "2c22c52e4aaebff770a0c2e1acd89cf3",
"text": "\"A share of stock is a share of the underlying business. If one believes the underlying business will grow in value, then one would expect the stock price to increase commensurately. Participants in the stock market, in theory, assign value based on some combination of factors like capital assets, cash on hand, revenue, cash flow, profits, dividends paid, and a bunch of other things, including \"\"intangibles\"\" like customer loyalty. A dividend stream may be more important to one investor than another. But, essentially, non-dividend paying companies (and, thus, their shares) are expected by their owners to become more valuable over time, at which point they may be sold for a profit. EDIT TO ADD: Let's take an extremely simple example of company valuation: book value, or the sum of assets (capital, cash, etc) and liabilities (debt, etc). Suppose our company has a book value of $1M today, and has 1 million shares outstanding, and so each share is priced at $1. Now, suppose the company, over the next year, puts another $1M in the bank through its profitable operation. Now, the book value is $2/share. Suppose further that the stock price did not go up, so the market capitalization is still $1M, but the underlying asset is worth $2M. Some extremely rational market participant should then immediately use his $1M to buy up all the shares of the company for $1M and sell the underlying assets for their $2M value, for an instant profit of 100%. But this rarely happens, because the existing shareholders are also rational, can read the balance sheet, and refuse to sell their shares unless they get something a lot closer to $2--likely even more if they expect the company to keep getting bigger. In reality, the valuation of shares is obviously much more complicated, but this is the essence of it. This is how one makes money from growth (as opposed to income) stocks. You are correct that you get no income stream while you hold the asset. But you do get money from selling, eventually.\"",
"title": ""
},
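Editor's note: as a companion to the simplified book-value illustration in the passage above, here is a small sketch (not from the original answer) that recomputes the example: a company with $1M of net assets and 1M shares outstanding, which then retains another $1M of profit while the market capitalization stays put.

```python
# Sketch of the simplified book-value example from the passage above.
# All numbers are the hypothetical ones given there.

shares_outstanding = 1_000_000
book_value = 1_000_000          # assets minus liabilities today
market_price = 1.00             # $1 per share

book_value += 1_000_000         # a year of retained profit goes into the bank

book_value_per_share = book_value / shares_outstanding
market_cap = market_price * shares_outstanding

print(f"book value per share: ${book_value_per_share:.2f}")   # $2.00
print(f"market capitalization: ${market_cap:,.0f}")           # $1,000,000
print(f"theoretical instant gain if bought whole: "
      f"{(book_value - market_cap) / market_cap:.0%}")        # 100%
```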
{
"docid": "010b125cc1d4bd32e988b62c1b1cffdd",
"text": "\"No, a jump in market capitalization does not equal the amount that has been invested. Market cap is simply the stock price times the total number of shares. This represents a theoretical value of the company. I say \"\"theoretical\"\" because the company might not be able to be sold for that at all. The quoted stock price is simply what the last buyer and seller of stock agreed upon for the price of their trade. They really only represent themselves; other investors may decide that the stock is worth more or less than that. The stock price can move on very little volume. In this case, Amazon had released a very good earnings report after the bell yesterday, and the price jumped in after hours trading. The stock price is up, but that simply means that the few shares traded overnight sold for much higher than the closing price yesterday. After the market opens today and many more shares are traded, we'll get a better idea what large numbers of investors feel about the price. But no matter what the price does, the change in market cap does not equal the amount of new money being invested in the company. Market cap is the price of the most recent trades extrapolated out across all the shares.\"",
"title": ""
},
{
"docid": "a26da9e8aaa057b993b4972726e78b83",
"text": "For each class A share (GOOGL) there's a class C share (GOOG), hence the missing half in your calculation. The almost comes from the slightly higher market price of the class A shares (due to them having voting powers) over class C (which have no voting powers). There's also class B share which is owned by the founders (Larry, Sergei, Eric and perhaps some to Stanford University and others) and differs from class A by the voting power. These are not publicly traded.",
"title": ""
},
{
"docid": "8399543fe9b611cc89a88cecf78f9c74",
"text": "It's been awhile since my last finance course, so school me here: What is the market cap of a company actually supposed to represent? I get that it's the stock price X the # of shares, but what is that actually representing? Revenues? PV of all future revenues? PV of future cash flows? In any case, good write up. Valuation of tech stocks is quite the gambit, and you've done a good job of dissecting it for a layman.",
"title": ""
},
{
"docid": "7ffa49547ede3ac0898ebc62bf9ffbc6",
"text": "Yep, a lot of startup funding these days is called equity, which makes for nice valuation, but there are often so many extra stipulations (I've even read of caps on upside; wish I could find the Matt Levine column on it now) that it really is effectively debt.",
"title": ""
},
{
"docid": "3ccaab31cbf55185b353f68bf4441bad",
"text": "Presumably you're talking about the different share class introduced in the recent stock split, which mean that there are now three Google share classes: Due to the voting rights, Class A shares should be worth more than class C, but how much only time will tell. Actually, one could very well argue that a non-voting share of a company that pays no dividends has no value at all. It's unlikely the markets will see it that way, though.",
"title": ""
},
{
"docid": "14f2999deae606e6f6c4ece90479ef58",
"text": "If the company's ownership is structured similarly to a typical start-up then an 1% employee ownership in a company which sells for 1 million will yield far less than 10k due to various liquidation preferences of the investors, different share classes, etc. It's pretty hard to get a specific number because it depends a lot on the details of earlier fundraising and stock grants. That said, unless the company is circling the drain and the sale was just to avoid BK, the share price you get should be higher unless the share class structure and acquisition deal are completely unfair.",
"title": ""
},
{
"docid": "6ea060c6609dda916ca73e499a6d44a5",
"text": "A company generally sells a portion of its ownership in an IPO, with existing investors retaining some ownership. In your example, they believe that the entire company is worth $25MM, so in order to raise $3MM it is issuing stock representing 12% of the ownership stake (3/25), which dilutes some or all of the existing stockholders' claims.",
"title": ""
}
] |
fiqa
|
8d12fd3b7170c0ae7e002f760f5b061a
|
How some mutual funds pay such high dividends
|
[
{
"docid": "a0eec544d315db7e5254c9d5f48969a7",
"text": "Look at their dividend history. The chart there is simply reporting the most recent dividend (or a recent time period, in any event). GF for example: http://www.nasdaq.com/symbol/gf/dividend-history It's had basically two significant dividends and a bunch of small dividends. Past performance is not indicative of future returns and all that. It might never have a similar dividend again. What you're basically looking at with that chart is a list of recently well-performing funds - funds who had a good year. They obviously may or may not have such a good year next year. You also have funds that are dividend-heavy (intended explicitly to return significant dividends). Those may return large dividends, but could still fall in value significantly. Look at ACP for example: it's currently trading near it's 2-year low. You got a nice dividend, but the price dropped quite a bit, so you lost a chunk of that money. (I don't know if ACP is a dividend-heavy fund, but it looks like it might be.) GF's chart is also indicative of something interesting: it fell off a cliff right after it gave its dividend (at the end of the year). Dropped $4. I think that's because this is a mutual fund priced based on the NAV of its holdings - so it dividended some of those holdings, which dropped the share price (and the NAV of the fund) by that amount. IE, $18 a share, $4 a share dividend, so after that $14 a share. (The rest of the dividends are from stock holdings which pay dividends themselves, if I understand properly). Has a similar drop in Dec 2013. They may simply be trying to keep the price of the fund in the ~$15 a share range; I suspect (but don't know) that some funds have in their charter a requirement to stay in a particular range and dividend excess value.",
"title": ""
}
] |
[
{
"docid": "aade973ed2fc9f2d0cc26bc56b1d2607",
"text": "This looks more like an aggregation problem. The Dividends and Capital Gains are on quite a few occassions not on same day and hence the way Yahoo is aggregating could be an issue. There is a seperate page with Dividends and capital gains are shown seperately, however as these funds have not given payouts every year, it seems there is some bug in aggregating this info at yahoo's end. For FBMPX http://uk.finance.yahoo.com/q/hp?s=FBMPX&b=2&a=00&c=1987&e=17&d=01&f=2014&g=v https://fundresearch.fidelity.com/mutual-funds/fees-and-prices/316390681 http://uk.finance.yahoo.com/q/pr?s=FBMPX",
"title": ""
},
{
"docid": "3aeb17bf4b73d0f13117216075ec7f99",
"text": "\"What you are describing is a very specific case of the more general principle of how dividend payments work. Broadly speaking, if you own common shares in a corporation, you are a part owner of that corporation; you have the right to a % of all of that corporation's assets. The value in having that right is ultimately because the corporation will pay you dividends while it operates, and perhaps a final dividend when it liquidates at the end of its life. This is why your shares have value - because they give you ownership of the business itself. Now, assume you own 1k shares in a company with 100M shares, worth a total of $5B. You own 0.001% of the company, and each of your shares is worth $50; the total value of all your shares is $50k. Assume further that the value of the company includes $1B in cash. If the company pays out a dividend of $1B, it will now be only worth $4B. Your shares have just gone down in value by 20%! But, you have a right to 0.001% of the dividend, which equals a $10k cash payment to you. Your personal holdings are now $40k worth of shares, plus $10k in cash. Except for taxes, financial theory states that whether a corporation pays a dividend or not should not impact the value to the individual shareholder. The difference between a regular corporation and a mutual fund, is that the mutual fund is actually a pool of various investments, and it reports a breakdown of that pool to you in a different way. If you own shares directly in a corporation, the dividends you receive are called 'dividends', even if you bought them 1 minute before the ex-dividend date. But a payment from a mutual fund can be divided between, for example, a flow through of dividends, interest, or a return of capital. If you 'looked inside' your mutual fund you when you bought it, you would see that 40% of its value comes from stock A, 20% comes from stock B, etc etc., including maybe 1% of the value coming from a pile of cash the fund owns at the time you bought your units. In theory the mutual fund could set aside the cash it holds for current owners only, but then it would need to track everyone's cash-ownership on an individual basis, and there would be thousands of different 'unit classes' based on timing. For simplicity, the mutual fund just says \"\"yes, when you bought $50k in units, we were 1/3 of the year towards paying out a $10k dividend. So of that $10k dividend, $3,333k of it is assumed to have been cash at the time you bought your shares. Instead of being an actual 'dividend', it is simply a return of capital.\"\" By doing this, the mutual fund is able to pay you your owed dividend [otherwise you would still have the same number of units but no cash, meaning you would lose overall value], without forcing you to be taxed on that payment. If the mutual fund didn't do this separate reporting, you would have paid $50k to buy $46,667k of shares and $3,333k of cash, and then you would have paid tax on that cash when it was returned to you. Note that this does not \"\"falsely exaggerate the investment return\"\", because a return of capital is not earnings; that's why it is reported separately. Note that a 'close-ended fund' is not a mutual fund, it is actually a single corporation. You own units in a mutual fund, giving you the rights to a proportion of all the fund's various investments. You own shares in a close-ended fund, just as you would own shares in any other corporation. The mutual fund passes along the interest, dividends, etc. 
from its investments on to you; the close-ended fund may pay dividends directly to its shareholders, based on its own internal dividend policy.\"",
"title": ""
},
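Editor's note: the return-of-capital split described in the passage above is a simple pro-rating of the pending distribution by how far into the distribution period the purchase happened. The sketch below is illustrative only; the $50k purchase, $10k annual distribution and one-third timing are the hypothetical figures from the passage.

```python
# Sketch of the return-of-capital proration from the passage above.
# Hypothetical figures: $50k purchase, $10k pending annual distribution,
# bought one third of the way through the distribution period.

purchase_amount = 50_000
pending_distribution = 10_000
fraction_of_period_elapsed = 1 / 3

return_of_capital = pending_distribution * fraction_of_period_elapsed
taxable_distribution = pending_distribution - return_of_capital
implied_cost_of_units = purchase_amount - return_of_capital

print(f"return of capital (not taxed):   ${return_of_capital:,.0f}")    # ~$3,333
print(f"taxable portion of distribution: ${taxable_distribution:,.0f}")  # ~$6,667
print(f"cost attributed to fund units:   ${implied_cost_of_units:,.0f}") # ~$46,667
```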
{
"docid": "85b1a08cb97369960f092c4dede5bb8d",
"text": "Dividends are a form of passive income.",
"title": ""
},
{
"docid": "38bdbd4c2225ed3344f2d36eb24aa6d8",
"text": "You can use a tool like WikiInvest the advantage being it can pull data from most brokerages and you don't have to enter them manually. I do not know how well it handles dividends though.",
"title": ""
},
{
"docid": "6fdf8698afbbce4fdfcff1a82a3e7435",
"text": "\"A growth fund is looking to invest in stocks that will appreciate in stock price over time as the companies grow revenues and market share. A dividend fund is looking to invest in stocks of companies that pay dividends per share. These may also be called \"\"income\"\" funds. In general, growth stocks tend to be younger companies and tend to have a higher volatility - larger up and down swings in stock price as compared to more established companies. So, growth stocks are a little riskier than stocks of more established/stable companies. Stocks that pay dividends are usually more established companies with a good revenue stream and well established market share who don't expect to grow the company by leaps and bounds. Having a stable balance sheet over several years and paying dividends to shareholders tends to stabilize the stock price - lower volatility, less speculation, smaller swings in stock price. So, income stocks are considered lower risk than growth stocks. Funds that invest in dividend stocks are looking for steady reliable returns - not necessarily the highest possible return. They will favor lower, more reliable returns in order to avoid the drama of high volatility and possible loss of capital. Funds that invest in growth stocks are looking for higher returns, but with that comes a greater risk of losing value. If the fund manager believes an industry sector is on a growth path, the fund may invest in several small promising companies in the hopes that one or two of them will do very well and make up for lackluster performance by the rest. As with all stock investments, there are no guarantees. Investing in funds instead of individual stocks allows you invest in multiple companies to ride the average - avoid large losses if a single company takes a sudden downturn. Dividend funds can lose value if the market in general or the industry sector that the fund focuses on takes a downturn.\"",
"title": ""
},
{
"docid": "eec00fac4023bd89d4a52ab034993c41",
"text": "If you want to go far upstream, you can get mutual fund NAV and dividend data from the Nasdaq Mutual Fund Quotation Service (MFQS). This isn't for end-users but rather is offered as a part of the regulatory framework. Not surprisingly, there is a fee for data access. From Nasdaq's MFQS specifications page: To promote market transparency, Nasdaq operates the Mutual Fund Quotation Service (MFQS). MFQS is designed to facilitate the collection and dissemination of daily price, dividends and capital distributions data for mutual funds, money market funds, unit investment trusts (UITs), annuities and structured products.",
"title": ""
},
{
"docid": "0d133fdf8af7ed7e81a929aefa9fb736",
"text": "The company gets it worth from how well it performs. For example if you buy company A for $50 a share and it beats its expected earnings, its price will raise and lets say after a year or two it can be worth around $70 or maybe more.This is where you can sell it and make more money than dividends.",
"title": ""
},
{
"docid": "93272704c3255f614b4bc281253cb3a1",
"text": "The Telegraph had an interesting article recently going back 30 years for Mutual's in the UK that had beaten the market and trackers for both IT and UT http://www.telegraph.co.uk/finance/personalfinance/investing/11489789/The-funds-that-have-returned-more-than-12pc-per-year-for-THIRTY-years.html",
"title": ""
},
{
"docid": "ce25b1830452e713b8ff2b84a9d71f11",
"text": "\"Mutual funds generally make distributions once a year in December with the exact date (and the estimated amount) usually being made public in late October or November. Generally, the estimated amounts can get updated as time goes on, but the date does not change. Some funds (money market, bond funds, GNMA funds etc) distribute dividends on the last business day of each month, and the amounts are rarely made available beforehand. Capital gains are usually distributed once a year as per the general statement above. Some funds (e.g. S&P 500 index funds) distribute dividends towards the end of each quarter or on the last business day of the quarter, and capital gains once a year as per the general statement above. Some funds make semi-annual distributions but not necessarily at six-month intervals. Vanguard's Health Care Fund has distributed dividends and capital gains in March and December for as long as I have held it. VDIGX claims to make semi-annual distributions but made distributions three times in 2014 (March, June, December) and has made/will make two distributions this year already (March is done, June is pending -- the fund has gone ex-dividend with re-investment today and payment on 22nd). You can, as Chris Rea suggests, call the fund company directly, but in my experience, they are reluctant to divulge the date of the distribution (\"\"The fund manager has not made the date public as yet\"\") let alone an estimated amount. Even getting a \"\"Yes, the fund intends to make a distribution later this month\"\" was difficult to get from my \"\"Personal Representative\"\" in early March, and he had to put me on hold to talk to someone at the fund before he was willing to say so.\"",
"title": ""
},
{
"docid": "354b30beb9a55fa25cc1a12b002fd1ca",
"text": "This is how capital shares in split capital investment trusts work they never get any dividend they just get the capital when the company is wound up",
"title": ""
},
{
"docid": "148fe3c6b836d3b733d3f1f75a6f917a",
"text": "\"In the case of a specific fund, I'd be tempted to get get an annual report that would disclose distribution data going back up to 5 years. The \"\"View prospectus and reports\"\" would be the link on the site to note and use that to get to the PDF of the report to get the data that was filed with the SEC as that is likely what matters more here. Don't forget that mutual fund distributions can be a mix of dividends, bond interest, short-term and long-term capital gains and thus aren't quite as simple as stock dividends to consider here.\"",
"title": ""
},
{
"docid": "c8e6b1e733931958f9180e8ad4a2b7d7",
"text": "No, they do not. Stock funds and bonds funds collect income dividends in different ways. Stock funds collect dividends (as well as any capital gains that are realized) from the underlying stocks and incorporates these into the funds’ net asset value, or daily share price. That’s why a stock fund’s share price drops when the fund makes a distribution – the distribution comes out of the fund’s total net assets. With bond funds, the internal accounting is different: Dividends accrue daily, and are then paid out to shareholders every month or quarter. Bond funds collect the income from the underlying bonds and keep it in a separate internal “bucket.” A bond fund calculates a daily accrual rate for the shares outstanding, and shareholders only earn income for the days they actually hold the fund. For example, if you buy a bond fund two days before the fund’s month-end distribution, you would only receive two days’ worth of income that month. On the other hand, if you sell a fund part-way through the month, you will still receive a partial distribution at the end of the month, pro-rated for the days you actually held the fund. Source Also via bogleheads: Most Vanguard bond funds accrue interest to the share holders daily. Here is a typical statement from a prospectus: Each Fund distributes to shareholders virtually all of its net income (interest less expenses) as well as any net capital gains realized from the sale of its holdings. The Fund’s income dividends accrue daily and are distributed monthly. The term accrue used in this sense means that the income dividends are credited to your account each day, just like interest in a savings account that accrues daily. Since the money set aside for your dividends is both an asset of the fund and a liability, it does not affect the calculated net asset value. When the fund distributes the income dividends at the end of the month, the net asset value does not change as both the assets and liabilities decrease by exactly the same amount. [Note that if you sell all of your bond fund shares in the middle of the month, you will receive as proceeds the value of your shares (calculated as number of shares times net asset value) plus a separate distribution of the accrued income dividends.]",
"title": ""
},
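Editor's note: the daily-accrual mechanics described in the passage above amount to pro-rating income by days held. The sketch below is illustrative only; the daily accrual rate and share count are made-up numbers, and real funds compute the daily rate from the income actually collected on their holdings.

```python
# Illustrative sketch of bond-fund dividend accrual: income accrues per day held,
# so a holder only earns it for the days they actually owned the shares.
# The daily rate and share count below are hypothetical.

daily_accrual_per_share = 0.0012   # dollars of income accrued per share per day
shares_held = 1_000

def accrued_income(days_held):
    return daily_accrual_per_share * shares_held * days_held

# Bought two days before the month-end distribution:
print(f"2 days held:  ${accrued_income(2):.2f}")
# Sold halfway through a 30-day month -> still receives a pro-rated distribution:
print(f"15 days held: ${accrued_income(15):.2f}")
# Held the whole month:
print(f"30 days held: ${accrued_income(30):.2f}")
```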
{
"docid": "d80050a6bd73daa905840127c9a38dd5",
"text": "For all stocks, expected Dividends are a part of the price it is traded for - consider that originally, the whole idea of stocks was to participate in the earnings of the company = get dividends. The day the dividend is paid, that expectation is of course removed, and thereby the stock value reduced by just the amount of dividend paid. You will see that behavior for all stocks, everywhere. The dividend in your example is just uncommonly high relative to the stock price; but that is a company decision - they can decide whatever amount they want as a dividend. In other words, the day before dividend payments, investors value the stock at ~14 $, plus an expected dividend payment of 12 $, which adds to 26 $. The day after the dividend payment, investors still value the stock at ~14 $, plus no more dividend payment = 0 $. Nothing changed really in the valuation.",
"title": ""
},
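Editor's note: the price behaviour described in the passage above is simple arithmetic: before payment the quote includes the expected payout, and once the dividend is paid that component drops out. Below is a tiny sketch using the passage's hypothetical figures (about $14 of ongoing value plus a $12 dividend), added here only as an illustration.

```python
# Sketch of the dividend arithmetic from the passage above (hypothetical figures).

ongoing_value = 14.00        # investors' valuation of the business itself
expected_dividend = 12.00    # declared payout still owed to current holders

price_before_payment = ongoing_value + expected_dividend
price_after_payment = ongoing_value   # the expectation is removed once paid

print(f"price with dividend still owed: ${price_before_payment:.2f}")  # $26.00
print(f"price after dividend is paid:   ${price_after_payment:.2f}")   # $14.00
print(f"drop equals the dividend:       ${price_before_payment - price_after_payment:.2f}")
```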
{
"docid": "215e36b5c385dc311d8f50b10a82be08",
"text": "Generally speaking, each year, mutual funds distribute to their shareholders the dividends that are earned by the stocks that they hold and also the net capital gains that they make when they sell stocks that they hold. If they did not do so, the money would be income to the fund and the fund would have to pay taxes on the amount not distributed. (On the other hand, net capital losses are held by the fund and carried forward to later years to offset future capital gains). You pay taxes on the amounts of the distributions declared by the fund. Whether the fund sold a particular stock for a loss or a gain (and if so, how much) is not the issue; what the fund declares as its distribution is. This is why it is not a good idea to buy a mutual fund just before it makes a distribution; your share price drops by the per-share amount of the distribution, and you have to pay taxes on the distribution.",
"title": ""
},
{
"docid": "0ccdc6551bab3d553a85e58f297e935e",
"text": "A share is more than something that yields dividends, it is part ownership of the company and all of its assets. If the company were to be liquidated immediately the shareholders would get (a proportion of) the net value (assets - liabilities) of the company because they own it. If a firm is doing well then its assets are increasing (i.e. more cash assets from profits) therefore the value of the underlying company has risen and the intrinsic value of the shares has also increased. The price will not reflect the current value of the firms assets and liabilities because it will also include the net present value of expected future flows. Working out the expected future flows is a science on par with palmistry and reading chicken entrails so don't expect to work out why a company is trading at a price so much higher than current assets - liabilities (or so much lower in companies that are expected to fail). This speculation is in addition to price speculation that you mention in the question.",
"title": ""
}
] |
fiqa
|
4a2e883f305abfda08ba50d74bf29ee9
|
What's an Exchange-Traded Fund (ETF)?
|
[
{
"docid": "413d3bf0ea58ed81d3f3075a50cae56d",
"text": "Wikipedia has a fairly detailed explanation of ETFs. http://en.wikipedia.org/wiki/Exchange-traded_fund",
"title": ""
},
{
"docid": "7f9f73f44252859c960d718bd477fefc",
"text": "ETFs offer the flexibility of stocks while retaining many of the benefits of mutual funds. Since an ETF is an actual fund, it has the diversification of its potentially many underlying securities. You can find ETFs with stocks at various market caps and style categories. You can have bond or mixed ETFs. You can even get ETFs with equal or fundamental weighting. In short, all the variety benefits of mutual funds. ETFs are typically much less expensive than mutual funds both in terms of management fees (expense ratio) and taxable gains. Most of them are not actively managed; instead they follow an index and therefore have a low turnover. A mutual fund may actively trade and, if not balanced with a loss, will generate capital gains that you pay taxes on. An ETF will produce gains only when shifting to keep inline with the index or you yourself sell. As a reminder: while expense ratio always matters, capital gains and dividends don't matter if the ETF or mutual fund is in a tax-advantaged account. ETFs have no load fees. Instead, because you trade it like a stock, you will pay a commission. Commissions are straight, up-front and perfectly clear. Much easier to understand than the various ways funds might charge you. There are no account minimums to entry with ETFs, but you will need to buy complete shares. Only a few places allow partial shares. It is generally harder to dollar-cost average into an ETF with regular automated investments. Also, like trading stocks, you can do those fancy things like selling short, buying on margin, options, etc. And you can pay attention to the price fluctuations throughout the day if you really want to. Things to make you pause: if you buy (no-load) mutual funds through the parent company, you'll get them at no commission. Many brokerages have No Transaction Fee (NTF) agreements with companies so that you can buy many funds for free. Still look out for that expense ratio though (which is probably paying for that NTF advantage). As sort of a middle ground: index funds can have very low expense ratios, track the same index as an ETF, can be tax-efficient or tax-managed, free to purchase, easy to dollar-cost average and easier to automate/understand. Further reading:",
"title": ""
}
] |
[
{
"docid": "c630f8c6a118525e354eb02b4005abf8",
"text": "No ETN or ETF yet. There are beta funds, that aim to track the market. What's really needed is a liquid market for cat risk trading/transfer, enabling users to buy protection, or take the other side. You can write cat swaps, so derivative forms, including ILW's or with parametric triggers. But these aren't liquid at all yet. Cat bonds are most liquid, but it dries up pretty quickly when events threaten as there's no true hedging market yet.",
"title": ""
},
{
"docid": "3214d417942a98cc97c5269f2ec52458",
"text": "ETF is essentially a stock, from accounting perspective. Treat it as just another stock in the portfolio.",
"title": ""
},
{
"docid": "13804378135ed6bfb6d6e7517aac9d40",
"text": "index ETF tracks indented index (if fund manager spend all money on Premium Pokemon Trading Cards someone must cover resulting losses) Most Index ETF are passively managed. ie a computer algorithm would do automatic trades. The role of fund manager is limited. There are controls adopted by the institution that generally do allow such wide deviations, it would quickly be flagged and reported. Most financial institutions have keyman fraud insurance. fees are not higher that specified in prospectus Most countries have regulation where fees need to be reported and cannot exceed the guideline specified. at least theoretically possible to end with ETF shares that for weeks cannot be sold Yes some ETF's can be illiquid at difficult to sell. Hence its important to invest in ETF that are very liquid.",
"title": ""
},
{
"docid": "134de673a4f035e8dc6161165f501759",
"text": "A closed-end fund is a collective investment scheme that is closed to new investment once the fund starts operating. A typical open-ended fund will allow you to buy more shares of the fund anytime you want and the fund will create those new shares for you and invest your new money to continue growing assets under management. A closed-end fund only using the initial capital invested when the fund started operating and no new shares are typically created (always exception in the financial community). Normally you buy and sell an open-end fund from the fund company directly. A closed-end fund will usually be bought and sold on the secondary market. Here is some more information from Wikipedia Some characteristics that distinguish a closed-end fund from an ordinary open-end mutual fund are that: Another distinguishing feature of a closed-end fund is the common use of leverage or gearing to enhance returns. CEFs can raise additional investment capital by issuing auction rate securities, preferred shares, long-term debt, and/or reverse-repurchase agreements. In doing so, the fund hopes to earn a higher return with this excess invested capital.",
"title": ""
},
{
"docid": "5a2597ff9b7701bb15d381e14a0bc724",
"text": "\"What does ETFs have to do with this or Amazon? Actually, investing in ETFs means you are killing actively managed Mutual Funds (managed by people, fund managers) to get an average return (and loss) of the market that a computer manage instead of a person. And the ETF will surely have Amazon stocks because they are part of the index. I only invest in actively managed mutual funds. Yes, most actively managed mutual funds can't do better than the index, but if you work a bit harder, you can find the many that do much better than the \"\"average\"\" that an index give you.\"",
"title": ""
},
{
"docid": "1ca4aa43255f1b1f575ff0e602651839",
"text": "\"Remember that in most news outlets journalists do not get to pick the titles of their articles. That's up to the editor. So even though the article was primarily about ETFs, the reporter made the mistake of including some tangential references to mutual funds. The editor then saw that the article talked about ETFs and mutual funds and -- knowing even less about the subject matter than the reporter, but recognizing that more readers' eyeballs would be attracted to a headline about mutual funds than to a headline about ETFs -- went with the \"\"shocking\"\" headline about the former. In any case, as you already pointed out, ETFs need to know their value throughout the day, as do the investors in that ETF. Even momentary outages of price sources can be disastrous. Although mutual funds do not generally make transactions throughout the day, and fund investors are not typically interested in the fund's NAV more than once per day, the fund managers don't just sit around all day doing nothing and then press a couple buttons before the market closes. They do watch their NAV very closely during the day and think very carefully about which buttons to press at the end of the day. If their source of stock price data goes offline, then they're impacted almost as severely as -- if less visibly than -- an ETF. Asking Yahoo for prices seems straightforward, but (1) you get what you pay for, and (2) these fund companies are built on massive automated infrastructures that expect to receive their data from a certain source in a certain way at a certain time. (And they pay a lot of money in order to be able to expect that.) It would be quite difficult to just feed in manual data, although in the end I suspect some of these companies did just that. Either they fell back to a secondary data supplier, or they manually constructed datasets for their programs to consume.\"",
"title": ""
},
{
"docid": "446c12b0d6ce872ec6a585017050af10",
"text": "\"Does the bolded sentence apply for ETFs and ETF companies? No, the value of an ETF is determined by an exchange and thus the value of the share is whatever the trading price is. Thus, the price of an ETF may go up or down just like other securities. Money market funds can be a bit different as the mutual fund company will typically step in to avoid \"\"Breaking the Buck\"\" that could happen as a failure for that kind of fund. To wit, must ETF companies invest a dollar in the ETF for every dollar that an investor deposited in this aforesaid ETF? No, because an ETF is traded as shares on the market, unless you are using the creation/redemption mechanism for the ETF, you are buying and selling shares like most retail investors I'd suspect. If you are using the creation/redemption system then there are baskets of other securities that are being swapped either for shares in the ETF or from shares in the ETF.\"",
"title": ""
},
{
"docid": "08c3f5e83dd7e845ab352290781bcd70",
"text": "Dividends are not paid immediately upon reception from the companies owned by an ETF. In the case of SPY, they have been paid inconsistently but now presumably quarterly.",
"title": ""
},
{
"docid": "9a2fb8987853dd7bb42da0a18d64dd5a",
"text": "The ETF price quoted on the stock exchange is in principle not referenced to NAV. The fund administrator will calculate and publish the NAV net of all fees, but the ETF price you see is determined by the market just like for any other security. Having said that, the market will not normally deviate greatly from the NAV of the fund, so you can safely assume that ETF quoted price is net of relevant fees.",
"title": ""
},
{
"docid": "9680062e8d91759cbf38b661420710a6",
"text": "\"As with ANY investment the first answer is....do not invest in any that you do not fully understand. ETF's are very versatile and can be used for many different people for many different parts of their portfolio, so I don't think there can be a blanket statement of \"\"this\"\" one is good or bad for all.\"",
"title": ""
},
{
"docid": "b8bc5ac6fc7eafb3ec03c29d82e651ec",
"text": "\"The London Stock Exchange offers a wealth of exchange traded products whose variety matches those offered in the US. Here is a link to a list of exchange traded products listed on the LSE. The link will take you to the list of Vanguard offerings. To view those offered by other managers, click on the letter choices at the top of the page. For example, to view the iShares offerings, click on \"\"I\"\". In the case of Vanguard, the LSE listed S&P500 ETF is traded under the code VUSA. Similarly, the Vanguard All World ETF trades under the code VWRL. You will need to be patient viewing iShares offerings since there are over ten pages of them, and their description is given by the abbreviation \"\"ISH name\"\". Almost all of these funds are traded in GBP. Some offer both currency hedged and currency unhedged versions. Obviously, with the unhedged version you are taking on additional currency risk, so if you wish to avoid currency risk then choose a currency hedged version. Vanguard does not appear to offer currency hedged products in London while iShares does. Here is a list of iShares currency hedged products. As you can see, the S&P500 currency hedged trades under the code IGUS while the unhedged version trades under the code IUSA. The effects of BREXIT on UK markets and currency are a matter of opinion and difficult to quantify currently. The doom and gloom warnings of some do not appear to have materialised, however the potential for near-term volatility remains so longs as the exit agreement is not formalised. In the long-term, I personally believe that BREXIT will, on balance, be a positive for the UK, but that is just my opinion.\"",
"title": ""
},
{
"docid": "48c24049376a347959f8f744d9e66517",
"text": "Bond ETFs are traded like normal stock. It just so happens to be that the underlying fund (for which you own shares) is invested in bonds. Such funds will typically own many bonds and have them laddered so that they are constantly maturing. Such funds may also trade bonds on the OTC market. Note that with bond ETFs you're able to lose money as well as gain depending on the situation with the bond market. The issuer of the bond does not need to default in order for this to happen. The value of a bond (and thus the value of the bond fund which holds the bonds) is, much like a stock, determined based on factors like supply/demand, interest rates, credit ratings, news, etc.",
"title": ""
},
{
"docid": "1d78a5b716489ff3fa60038e90e411c1",
"text": "\"Don't put money in things that you don't understand. ETFs won't kill you, ignorance will. The leveraged ultra long/short ETFs hold swaps that are essentially bets on the daily performance of the market. There is no guarantee that they will perform as designed at all, and they frequently do not. IIRC, in most cases, you shouldn't even be holding these things overnight. There aren't any hidden fees, but derivative risk can wipe out portions of the portfolio, and since the main \"\"asset\"\" in an ultra long/short ETF are swaps, you're also subject to counterparty risk -- if the investment bank the fund made its bet with cannot meet it's obligation, you're may lost alot of money. You need to read the prospectus carefully. The propectus re: strategy. The Fund seeks daily investment results, before fees and expenses, that correspond to twice the inverse (-2x) of the daily performance of the Index. The Fund does not seek to achieve its stated investment objective over a period of time greater than a single day. The prospectus re: risk. Because of daily rebalancing and the compounding of each day’s return over time, the return of the Fund for periods longer than a single day will be the result of each day’s returns compounded over the period, which will very likely differ from twice the inverse (-2x) of the return of the Index over the same period. A Fund will lose money if the Index performance is flat over time, and it is possible that the Fund will lose money over time even if the Index’s performance decreases, as a result of daily rebalancing, the Index’s volatility and the effects of compounding. See “Principal Risks” If you want to hedge your investments over a longer period of time, you should look at more traditional strategies, like options. If you don't have the money to make an option strategy work, you probably can't afford to speculate with leveraged ETFs either.\"",
"title": ""
},
{
"docid": "b13d7ac82c5befb55fd314984545bbf5",
"text": "\"I used to use etfconnect before they went paid and started concentrating on closed end funds. These days my source of information is spread out. The primary source about the instrument (ETF) itself is etfdb, backed by information from Morningstar and Yahoo Finance. For comparison charts Google Finance can't be beat. For actual solid details about a specific ETF, would check read the prospectus from the managing firm itself. One other comment, never trust a site that \"\"tells you\"\" which securities to buy. The idea is that you need sources of solid information about financial instruments to make a decision, not a site that makes the decision for you. This is due to the fact that everyone has different strategies and goals for their money and a single site saying buy X sell Y will probably lead you to lose your money.\"",
"title": ""
},
{
"docid": "fcc5c09042f1b8f94def4d09030f3687",
"text": "As keshlam said, an ETF holds various assets, but the level of diversification depends on the individual ETF. A bond ETF can focus on short term bonds, long term bonds, domestic bonds, foreign bonds, government bonds, corporate bonds, low risk, high risk, or a mixture of any of those. Vanguard Total International Bond ETF (BNDX) for instance tries to be geographically diverse.",
"title": ""
}
] |
fiqa
|
f19bf3baa4e05dd3d9f8844b88b0c928
|
Does keeping 'long-term' safety net in bonds make sense?
|
[
{
"docid": "bab6ea73a159b162acf0efe1a8be6b24",
"text": "\"The answer to your question depends very much on your definition of \"\"long-term\"\". Because let's make something clear: an investment horizon of three to six months is not long term. And you need to consider the length of time from when an \"\"emergency\"\" develops until you will need to tap into the money. Emergencies almost by definition are unplanned. When talking about investment risk, the real word that should be used is volatility. Stocks aren't inherently riskier than bonds issued by the same company. They are likely to be a more volatile instrument, however. This means that while stocks can easily gain 15-20 percent or more in a year if you are lucky (as a holder), they can also easily lose just as much (which is good if you are looking to buy, unless the loss is precipitated by significantly weaker fundamentals such as earning lookout). Most of the time stocks rebound and regain lost valuation, but this can take some time. If you have to sell during that period, then you lose money. The purpose of an emergency fund is generally to be liquid, easily accessible without penalties, stable in value, and provide a cushion against potentially large, unplanned expenses. If you live on your own, have good insurance, rent your home, don't have any major household (or other) items that might break and require immediate replacement or repair, then just looking at your emergency fund in terms of months of normal outlay makes sense. If you own your home, have dependents, lack insurance and have major possessions which you need, then you need to factor those risks into deciding how large an emergency fund you might need, and perhaps consider not just normal outlays but also some exceptional situations. What if the refrigerator and water heater breaks down at the same time that something breaks a few windows, for example? What if you also need to make an emergency trip near the same time because a relative becomes seriously ill? Notice that the purpose of the emergency fund is specifically not to generate significant interest or dividend income. Since it needs to be stable in value (not depreciate) and liquid, an emergency fund will tend towards lower-risk and thus lower-yield investments, the extreme being cash or the for many more practical option of a savings account. Account forms geared toward retirement savings tend to not be particularly liquid. Sure, you can usually swap out one investment vehicle for another, but you can't easily withdraw your money without significant penalties if at all. Bonds are generally more stable in value than stocks, which is a good thing for a longer-term portion of an emergency fund. Just make sure that you are able to withdraw the money with short notice without significant penalties, and pick bonds issued by stable companies (or a fund of investment-grade bonds). However, in the present investment climate, this means that you are looking at returns not significantly better than those of a high-yield savings account while taking on a certain amount of additional risk. Bonds today can easily have a place if you have to pick some form of investment vehicle, but if you have the option of keeping the cash in a high-yield savings account, that might actually be a better option. Any stock market investments should be seen as investments rather than a safety net. Hopefully they will grow over time, but it is perfectly possible that they will lose value. 
If what triggers your financial emergency is anything more than local, it is certainly possible to have that same trigger cause a decline in the stock market. Money that you need for regular expenses, even unplanned ones, should not be in investments. Thus, you first decide how large an emergency fund you need based on your particular situation. Then, you build up that amount of money in a savings vehicle rather than an investment vehicle. Once you have the emergency fund in savings, then by all means continue to put the same amount of money into investments instead. Just make sure to, if you tap into the emergency fund, replenish it as quickly as possible.\"",
"title": ""
},
{
"docid": "9b06e7307088dc7210864a5d44d88371",
"text": "I am understanding the OP to mean that this is for an emergency fund savings account meant to cover 3 to 6 months of living expenses, not a 3-6 month investment horizon. Assuming this is the case, I would recommend keeping these funds in a Money Market account and not in an investment-grade bond fund for three reasons:",
"title": ""
}
] |
[
{
"docid": "5b70a0767127af96e29b1b5b41b93e99",
"text": "\"I can think of a few reasons for this. First, bonds are not as correlated with the stock market so having some in your portfolio will reduce volatility by a bit. This is nice because it makes you panic less about the value changes in your portfolio when the stock market is acting up, and I'm sure that fund managers would rather you make less money consistently then more money in a more volatile way. Secondly, you never know when you might need that money, and since stock market crashes tend to be correlated with people losing their jobs, it would be really unfortunate to have to sell off stocks when they are under-priced due to market shenanigans. The bond portion of your portfolio would be more likely to be stable and easier to sell to help you get through a rough patch. I have some investment money I don't plan to touch for 20 years and I have the bond portion set to 5-10% since I might as well go for a \"\"high growth\"\" position, but if you're more conservative, and might make withdrawals, it's better to have more in bonds... I definitely will switch over more into bonds when I get ready to retire-- I'd rather have slow consistent payments for my retirement than lose a lot in an unexpected crash at a bad time!\"",
"title": ""
},
{
"docid": "9e6a893421677586f657499d3a01381b",
"text": "\"It sounds like you want a place to park some money that's reasonably safe and liquid, but can sustain light to moderate losses. Consider some bond funds or bond ETFs filled with medium-term corporate bonds. It looks like you can get 3-3.5% or so. (I'd skip the municipal bond market right now, but \"\"why\"\" is a matter for its own question). Avoid long-term bonds or CDs if you're worried about inflation; interest rates will rise and the immediate value of the bonds will fall until the final payout value matches those rates.\"",
"title": ""
},
{
"docid": "90cf653a01b6f9a034dc013a6e16605f",
"text": "\"value slip below vs \"\"equal a bank savings account’s safety\"\" There is no conflict. The first author states that money market funds may lose value, precisely due to duration risk. The second author states that money market funds is as safe as a bank account. Safety (in the sense of a bond/loan/credit) mostly about default risk. For example, people can say that \"\"a 30-year U.S. Treasury Bond is safe\"\" because the United States \"\"cannot default\"\" (as said in the Constitution/Amendments) and the S&P/Moody's credit rating is the top/special. Safety is about whether it can default, ex. experience a -100% return. Safety does not directly imply Riskiness. In the example of T-Bond, it is ultra safe, but it is also ultra risky. The volatility of 30-year T-Bond could be higher than S&P 500. Back to Money Market Funds. A Money Market Fund could hold deposits with a dozen of banks, or hold short term investment grade debt. Those instruments are safe as in there is minimal risk of default. But they do carry duration risk, because the average duration of the instrument the fund holds is not 0. A money market fund must maintain a weighted average maturity (WAM) of 60 days or less and not invest more than 5% in any one issuer, except for government securities and repurchase agreements. If you have $10,000,000, a Money Market Fund is definitely safer than a savings account. 1 Savings Account at one institution with amount exceeding CDIC/FDIC terms is less safe than a Money Market Fund (which holds instruments issued by 20 different Banks). Duration Risk Your Savings account doesn't lose money as a result of interest rate change because the rate is set by the bank daily and accumulated daily (though paid monthly). The pricing of short term bond is based on market expectation of the interest rates in the future. The most likely cause of Money Market Funds losing money is unexpected change in expectation of future interest rates. The drawdown (max loss) is usually limited in terms of percentage and time through examining historical returns. The rule of thumb is that if your hold a fund for 6 months, and that fund has a weighted average time to maturity of 6 months, you might lose money during the 6 months, but you are unlikely to lose money at the end of 6 months. This is not a definitive fact. Using GSY, MINT, and SHV as an example or short duration funds, the maximum loss in the past 3 years is 0.4%, and they always recover to the previous peak within 3 months. GSY had 1.3% per year return, somewhat similar to Savings accounts in the US.\"",
"title": ""
},
{
"docid": "e2174f138c71e1504c17ffbbe56eb991",
"text": "\"If I don't need this money for decades, meaning I can ride out periodical market crashes, why would I invest in bonds instead of funds that track broad stock market indexes? You wouldn't. But you can never be 100% sure that you really won't need the money for decades. Also, even if you don't need it for decades, you can never be 100% certain that the market will not be way down at the time, decades in the future, when you do need the money. The amount of your portfolio you allocate to bonds (relative to stocks) can be seen as a measure of your desire to guard against that uncertainty. I don't think it's accurate to say that \"\"the general consensus is that your portfolio should at least be 25% in bonds\"\". For a young investor with high risk tolerance, many would recommend less than that. For instance, this page from T. Rowe Price suggests no more than 10% bonds for those in their 20s or 30s. Basically you would put money into bonds rather than stocks to reduce the volatility of your portfolio. If you care only about maximizing return and don't care about volatility, then you don't have to invest in bonds. But you probably actually do care about volatility, even if you don't think you do. You might not care enough to put 25% in bonds, but you might care enough to put 10% in bonds.\"",
"title": ""
},
{
"docid": "bd4931e1968953260f3368e895dd5e48",
"text": "Bonds provide protections against stock market crashes, diversity and returns as the other posters have said but the primary reason to invest in bonds is to receive relatively guaranteed income. By that I mean you receive regular payments as long as the debtor doesn't go bankrupt and stop paying. Even when this happens, bondholders are the first in line to get paid from the sale of the business's assets. This also makes them less risky. Stocks don't guarantee income and shareholders are last in line to get paid. When a stock goes to zero, you lose everything, where as a bondholder will get some face value redemption to the notes issue price and still keep all the previous income payments. In addition, you can use your bond income to buy more shares of stock and increase your gains there.",
"title": ""
},
{
"docid": "db66be504a892bc3ea02c50fdb954cbc",
"text": "\"In the quoted passage, the bonds are \"\"risky\"\" because you CAN lose money. Money markets can be insured by the FDIC, and thus are without risk in many instances. In general, there are a few categories of risks that affect bonds. These include: The most obvious general risk with long-term bonds versus short-term bonds today is that rates are historically low.\"",
"title": ""
},
{
"docid": "e82749a12bb0dc7acbcdae7eb3ee76e6",
"text": "\"For most people \"\"home ownership\"\" is a long term lifestyle strategy (i.e. the intention is to own a home for several decades, regardless of how many times one particular house might be \"\"swapped\"\" for a different one. In an economic environment with steady monetary inflation, taking out a long-term loan backed by a tangible non-depreciating \"\"permanent\"\" asset (e.g. real estate) is in practice a form of investing not borrowing, because over time the monetary value of the asset will increase in line with inflation, but the size of the loan remains constant in money terms. That strategy was always at risk in the short term because of temporary falls in house prices, but long-term inflation running at say 5% per year would cancel out even a 20% fall in house prices in 4 years. Downturns in the economy were often correlated with rises in the inflation rate, which fixed the short-term problem even faster. Car and student loans are an essentially different financial proposition, because you know from the start that the asset will not retain its value (unless you are \"\"investing in a vintage car\"\" rather than \"\"buying a means of personal transportation\"\", a new car will lose most of its monetary value within say 5 years) or there is no tangible asset at all (e.g. taking out a student loan, paying for a vacation trip by credit card, etc). The \"\"scariness\"\" over home loans was the widespread realization that the rules of the game had been changed permanently, by the combination of an economic downturn plus national (or even international) financial policies designed to enforce low inflation rates - with the consequence that \"\"being underwater\"\" had been changed from a short term problem to a long-term one.\"",
"title": ""
},
{
"docid": "7dec0fda4f7e40dbdf163ab81de3f0b1",
"text": "\"Depends on your definition of \"\"secure\"\". The most \"\"secure\"\" investment from a preservation of principal point of view is a non-tradable, general obligation government bond. (Like a US or Canadian savings bond.) Why? There is no interest rate risk -- you can't lose money. The downside is that the rate is not so good. If you want returns and a reasonably high level of security, you need a diversified portfolio.\"",
"title": ""
},
{
"docid": "dca1559289fffc177eda19252b171f5f",
"text": "Your goals are mutually exclusive. You cannot both earn a return that will outpace inflation while simultaneously having zero-risk of losing money, at least not in the 2011 market. In 2008, a 5+% CD would have been a good choice. Here's a potential compromise... sacrifice some immediate liquidity for more earnings. Say you had $10,000 saved: In this scheme, you've diversified a little bit, have access to 50% of your money immediately (either through online transfer or bringing your bonds to a teller), have an implicit US government guarantee for 50% of your money and low risk for the rest, and get inflation protection for 75% of your money.",
"title": ""
},
{
"docid": "5baab23655fcb5e43bd9fbdbbb8e2704",
"text": "So an investor would get their principal back in interest payments after 13.5 years if things remained stable, not accounting for discounting future cash flows/any return for the risk they are taking. The long maturity helps insurance companies and pensions properly match the duration of their liabilities. Still doesn't seem like a good bet, but it makes sense that it happened.",
"title": ""
},
{
"docid": "1856f12fa004f6ee1b1d9889a4827b0d",
"text": "Bonds by themselves aren't recession proof. No investment is, and when a major crash (c.f. 2008) occurs, all investments will be to some extent at risk. However, bonds add a level of diversification to your investment portfolio that can make it much more stable even during downturns. Bonds do not move identically to the stock market, and so many times investing in bonds will be more profitable when the stock market is slumping. Investing some of your investment funds in bonds is safer, because that diversification allows you to have some earnings from that portion of your investment when the market is going down. It also allows you to do something called rebalancing. This is when you have target allocation proportions for your portfolio; say 60% stock 40% bond. Then, periodically look at your actual portfolio proportions. Say the market is way up - then your actual proportions might be 70% stock 30% bond. You sell 10 percentage points of stocks, and buy 10 percentage points of bonds. This over time will be a successful strategy, because it tends to buy low and sell high. In addition to the value of diversification, some bonds will tend to be more stable (but earn less), in particular blue chip corporate bonds and government bonds from stable countries. If you're willing to only earn a few percent annually on a portion of your portfolio, that part will likely not fall much during downturns - and in fact may grow as money flees to safer investments - which in turn is good for you. If you're particularly worried about your portfolio's value in the short term, such as if you're looking at retiring soon, a decent proportion should be in this kind of safer bond to ensure it doesn't lose too much value. But of course this will slow your earnings, so if you're still far from retirement, you're better off leaving things in growth stocks and accepting the risk; odds are no matter who's in charge, there will be another crash or two of some size before you retire if you're in your 30s now. But when it's not crashing, the market earns you a pretty good return, and so it's worth the risk.",
"title": ""
},
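The 60/40 rebalancing step described in the record above (sell whatever has drifted above target, buy whatever has drifted below) is mechanical enough to express in a few lines; the balances here are made up:

```python
def rebalance(balances: dict, targets: dict) -> dict:
    """Return the buy (+) / sell (-) amount per asset to restore target weights."""
    total = sum(balances.values())
    return {k: targets[k] * total - balances[k] for k in balances}

# Portfolio drifted to roughly 70% stock / 30% bond after a market run-up
balances = {"stock": 70_000, "bond": 30_000}
targets  = {"stock": 0.60,   "bond": 0.40}

trades = rebalance(balances, targets)
print(trades)   # {'stock': -10000.0, 'bond': 10000.0} -> sell high, buy low
```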
{
"docid": "14cee9078b37b49a75d3694d935e28bd",
"text": "And this is bad why? What is the total funding? What is the total return? Do you have the necessary facts to evaluate this? Basing opinions on partial evidence makes poor public policy. Most municipal bonds might actually work out for the better good of communities. Certainly the total amount of bonds listed as going bad in this story is a tiny, tiny fraction of total bonds.",
"title": ""
},
{
"docid": "ce7aaa5ab63eb5f36f70ce6968d53cd8",
"text": "It wouldn't surprise me to see a country's return to show Inflation + 2-4%, on average. The members of this board are from all over the world, but those in a low inflation country, as the US,Canada, and Australia are right now, would be used to a long term return of 8-10%, with sub 2% inflation. In your case, the 20% return is looking backwards, hindsight, and not a guarantee. Your country's 10 year bonds are just under 10%. The difference between the 10% gov bond and the 20% market return reflects the difference between a 'guaranteed' return vs a risky one. Stocks and homes have different return profiles over the decades. A home tends to cost what some hour's pay per month can afford to finance. (To explain - In the US, the median home cost will center around what the median earner can finance with about a week's pay per month. This is my own observation, and it tends to be correct in the long term. When median homes are too high or low compared to this, they must tend back toward equilibrium.) Your home will grow in value according to my thesis, but an investment home has both value that can rise or fall, as well as the monthly rent. This provides total return as a stock had growth and dividends. Regardless of country, I can't predict the future, only point out a potential flaw in your plan.",
"title": ""
},
{
"docid": "bbe5397d9417e54c85543cd31c858101",
"text": "If your money market funds are short-term savings or an emergency fund, you might consider moving them into an online saving account. You can get interest rates close to 1% (often above 1% in higher-rate climates) and your savings are completely safe and easily accessible. Online banks also frequently offer perks such as direct deposit, linking with your checking account, and discounts on other services you might need occasionally (i.e. money orders or certified checks). If your money market funds are the lowest-risk part of your diversified long-term portfolio, you should consider how low-risk it needs to be. Money market accounts are now typically FDIC insured (they didn't used to be), but you can get the same security at a higher interest rate with laddered CD's or U.S. savings bonds (if your horizon is compatible). If you want liquidity, or greater return than a CD will give you, then a bond fund or ETF may be the right choice, and it will tend to move counter to your stock investments, balancing your portfolio. It's true that interest rates will likely rise in the future, which will tend to decrease the value of bond investments. If you buy and hold a single U.S. savings bond, its interest payments and final payoff are set at purchase, so you won't actually lose money, but you might make less than you would if you invested in a higher-rate climate. Another way to deal with this, if you want to add a bond fund to your long-term investment portfolio, is to invest your money slowly over time (dollar-cost averaging) so that you don't pay a high price for a large number of shares that immediately drop in value.",
"title": ""
},
{
"docid": "0882286a3e1d74b65a3bac64fc370be1",
"text": "I think you should consult a professional with experience in 83(b) election and dealing with the problems associated with that. The cost of the mistake can be huge, and you better make sure everything is done properly. For starters, I would look at the copy of the letter you sent to verify that you didn't write the year wrong. I know you checked it twice, but check again. Tax advisers can call a dedicated IRS help line for practitioners where someone may be able to provide more information (with your power of attorney on file), and they can also request the copy of the original letter you've sent to verify it is correct. In any case, you must attach the copy of the letter you sent to your 2014 tax return (as this is a requirement for the election to be valid).",
"title": ""
}
] |
fiqa
|
07bce1f2ae698230962d0da011cc3a6a
|
Do I need to own all the funds my target-date funds owns to mimic it?
|
[
{
"docid": "96d0479db259b1d1bbc57b467acf8cf2",
"text": "\"If you read Joel Greenblatt's The Little Book That Beats the Market, he says: Owning two stocks eliminates 46% of the non market risk of owning just one stock. This risk is reduced by 72% with 4 stocks, by 81% with 8 stocks, by 93% with 16 stocks, by 96% with 32 stocks, and by 99% with 500 stocks. Conclusion: After purchasing 6-8 stocks, benefits of adding stocks to decrease risk are small. Overall market risk won't be eliminated merely by adding more stocks. And that's just specific stocks. So you're very right that allocating a 1% share to a specific type of fund is not going to offset your other funds by much. You are correct that you can emulate the lifecycle fund by simply buying all the underlying funds, but there are two caveats: Generally, these funds are supposed to be cheaper than buying the separate funds individually. Check over your math and make sure everything is in order. Call the fund manager and tell him about your findings and see what they have to say. If you are going to emulate the lifecycle fund, be sure to stay on top of rebalancing. One advantage of buying the actual fund is that the portfolio distributions are managed for you, so if you're going to buy separate ETFs, make sure you're rebalancing. As for whether you need all those funds, my answer is a definite no. Consider Mark Cuban's blog post Wall Street's new lie to Main Street - Asset Allocation. Although there are some highly questionable points in the article, one portion is indisputably clear: Let me translate this all for you. “I want you to invest 5pct in cash and the rest in 10 different funds about which you know absolutely nothing. I want you to make this investment knowing that even if there were 128 hours in a day and you had a year long vacation, you could not possibly begin to understand all of these products. In fact, I don’t understand them either, but because I know it sounds good and everyone is making the same kind of recommendations, we all can pretend we are smart and going to make a lot of money. Until we don’t\"\" Standard theory says that you want to invest in low-cost funds (like those provided by Vanguard), and you want to have enough variety to protect against risk. Although I can't give a specific allocation recommendation because I don't know your personal circumstances, you should ideally have some in US Equities, US Fixed Income, International Equities, Commodities, of varying sizes to have adequate diversification \"\"as defined by theory.\"\" You can either do your own research to establish a distribution, or speak to an investment advisor to get help on what your target allocation should be.\"",
"title": ""
},
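For the diversification figures quoted above, a standard one-factor, equal-weight model gives the flavour of the effect; it will not reproduce Greenblatt's exact percentages, and the volatility inputs here are assumptions:

```python
import math

# Textbook equal-weight model: portfolio variance = market variance
# + idiosyncratic variance / n, so non-market variance shrinks roughly as 1/n.
market_vol, idio_vol = 0.15, 0.30      # assumed annualised volatilities

def portfolio_vol(n: int) -> float:
    return math.sqrt(market_vol**2 + (idio_vol**2) / n)

for n in (1, 2, 4, 8, 16, 32, 500):
    removed = 1 - (portfolio_vol(n)**2 - market_vol**2) / idio_vol**2
    print(f"{n:>3} stocks: vol {portfolio_vol(n):.1%}, "
          f"non-market variance removed {removed:.0%}")
```

The qualitative conclusion matches the passage: most of the benefit arrives within the first handful of holdings, and market-wide risk never diversifies away.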
{
"docid": "a68a6190f8f1909ef9cf515c36ca5e0d",
"text": "\"The goal of the single-fund with a retirement date is that they do the rebalancing for you. They have some set of magic ratios (specific to each fund) that go something like this: Note: I completely made up those numbers and asset mix. When you invest in the \"\"Mutual-Fund Super Account 2025 fund\"\" you get the benefit that in 2015 (10 years until retirement) they automatically change your asset mix and when you hit 2025, they do it again. You can replace the functionality by being on top of your rebalancing. That being said, I don't think you need to exactly match the fund choices they provide, just research asset allocation strategies and remember to adjust them as you get closer to retirement.\"",
"title": ""
},
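The "magic ratios" alluded to above (the actual table did not survive in this record) behave roughly like a glide path; a sketch with entirely made-up start and end mixes:

```python
def glide_path(years_to_retirement, start_stocks=0.90, end_stocks=0.40, horizon=40):
    """Linearly glide the stock weight from start to end; all numbers are illustrative."""
    t = min(max(years_to_retirement, 0), horizon) / horizon
    return end_stocks + (start_stocks - end_stocks) * t

for yrs in (40, 30, 20, 10, 0):
    stocks = glide_path(yrs)
    print(f"{yrs:>2} years out: {stocks:.0%} stocks / {1 - stocks:.0%} bonds")
```

Replicating the fund yourself means recomputing and applying something like this mix on your own schedule instead of having the fund company do it.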
{
"docid": "52a68e315eefe0325f56476761a2d3ea",
"text": "Over time, fees are a killer. The $65k is a lot of money, of course, but I'd like to know the fees involved. Are you doubling from 1 to 2%? if so, I'd rethink this. Diversification adds value, I agree, but 2%/yr? A very low cost S&P fund will be about .10%, others may go a bit higher. There's little magic in creating the target allocation, no two companies are going to be exactly the same, just in the general ballpark. I'd encourage you to get an idea of what makes sense, and go DIY. I agree 2% slices of some sectors don't add much, don't get carried away with this.",
"title": ""
}
] |
[
{
"docid": "78324133f5ee24f7ae0dc6de65f65c25",
"text": "I strongly suggest you go to www.investor.gov as it has excellent information regarding these types of questions. A mutual fund is a company that pools money from many investors and invests the money in securities such as stocks, bonds, and short-term debt. The combined holdings of the mutual fund are known as its portfolio. Investors buy shares in mutual funds. Each share represents an investor’s part ownership in the fund and the income it generates. When you buy shares of a mutual fund you're buying it at NAV, or net asset value. The NAV is the value of the fund’s assets minus its liabilities. SEC rules require funds to calculate the NAV at least once daily. Different funds may own thousands of different stocks. In order to calculate the NAV, the fund company must value every security it owns. Since each security's valuation is changing throughout the day it's difficult to determine the valuation of the mutual fund except for when the market is closed. Once the market has closed (4pm eastern) and securities are no longer trading, the company must get accurate valuations for every security and perform the valuation calculations and distribute the results to the pricing vendors. This has to be done by 6pm eastern. This is a difficult and, more importantly, a time consuming process to get it done right once per day. Having worked for several fund companies I can tell you there are many days where companies are getting this done at the very last minute. When you place a buy or sell order for a mutual fund it doesn't matter what time you placed it as long as you entered it before 4pm ET. Cutoff times may be earlier depending on who you're placing the order with. If companies had to price their funds more frequently, they would undoubtedly raise their fees.",
"title": ""
},
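The per-share NAV arithmetic described above, as a minimal sketch with invented holdings and closing prices:

```python
# NAV per share = (assets - liabilities) / shares outstanding,
# computed once per day after the 4pm ET close (numbers are illustrative).
def nav_per_share(holdings, cash, liabilities, shares_outstanding):
    assets = cash + sum(qty * close_price for qty, close_price in holdings)
    return (assets - liabilities) / shares_outstanding

holdings = [(10_000, 52.30), (4_000, 118.75)]   # (shares held, closing price)
print(round(nav_per_share(holdings, 250_000, 80_000, 100_000), 4))   # 11.68
```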
{
"docid": "f87db8d477d31f9aafafbeeae7a91cd3",
"text": "\"One approach is to invest in \"\"allocation\"\" mutual funds that use various methods to vary their asset allocation. Some examples (these are not recommendations; just to show you what I am talking about): A good way to identify a useful allocation fund is to look at the \"\"R-squared\"\" (correlation) with indexes on Morningstar. If the allocation fund has a 90-plus R-squared with any index, it probably isn't doing a lot. If it's relatively uncorrelated, then the manager is not index-hugging, but is making decisions to give you different risks from the index. If you put 10% of your portfolio in a fund that varies allocation to stocks from 25% to 75%, then your allocation to stocks created by that 10% would be between 2.5% to 7.5% depending on the views of the fund manager. You can use that type of calculation to invest enough in allocation funds to allow your overall allocation to vary within a desired range, and then you could put the rest of your money in index funds or whatever you normally use. You can think of this as diversifying across investment discipline in addition to across asset class. Another approach is to simply rely on your already balanced portfolio and enjoy any downturns in stocks as an opportunity to rebalance and buy some stocks at a lower price. Then enjoy any run-up as an opportunity to rebalance and sell some stocks at a high price. The difficulty of course is going through with the rebalance. This is one advantage of all-in-one funds (target date, \"\"lifecycle,\"\" balanced, they have many names), they will always go through with the rebalance for you - and you can't \"\"see\"\" each bucket in order to get stressed about it. i.e. it's important to think of your portfolio as a whole, not look at the loss in the stocks portion. An all-in-one fund keeps you from seeing the stocks-by-themselves loss number, which is a good way to trick yourself into behaving sensibly. If you want to rebalance \"\"more aggressively\"\" then look at value averaging (search for \"\"value averaging\"\" on this site for example). A questionable approach is flat-out market-timing, where you try to get out and back in at the right times; a variation on this would be to buy put options at certain times; the problem is that it's just too hard. I think it makes more sense to buy an allocation fund that does this for you. If you do market time, you want to go in and out gradually, and value averaging is one way to do that.\"",
"title": ""
},
{
"docid": "830b49fbf6cee0a1daf7ab15d3d6d535",
"text": "I think we resolved this via comments above. Many finance authors are not fans of target date funds, as they have higher fees than you'd pay constructing the mix yourself, and they can't take into account your own risk tolerance. Not every 24 year old should have the same mix. That said - I suggest you give thought to the pre-tax / post tax (i.e. traditional vs Roth) mix. I recently wrote The 15% solution, which attempts to show how to minimize your lifetime taxes by using the split that's ideal for your situation.",
"title": ""
},
{
"docid": "d1015ffe029820bd6079017d96a071be",
"text": "Like an S&P 500 ETF? So you're getting in some cash inflow each day, cash outflows each day. And you have to buy and sell 500 different stocks, at the same time, in order for your total fund assets to match the S&P 500 index proportions, as much as possible. At any given time, the prices you get from the purchase/sale of stock is probably going to be somewhat different than the theoretical amounts you are supposed to get to match, so it's quite a tangle. This is my understanding of things. Some funds are simpler - a Dow 30 fund only has 30 stocks to balance out. Maybe that's easier, or maybe it's harder because one wonky trade makes a bigger difference? I'm not sure this is how it really operates. The closest I've gotten is a team that has submitted products for indexing, and attempted to develop funds from those indexes. Turns out finding the $25-50 million of initial investments isn't as easy as anyone would think.",
"title": ""
},
{
"docid": "80923207a6f183be4e8cc88ae83b06f9",
"text": "Here is a simple example of how daily leverage fails, when applied over periods longer than a day. It is specifically adjusted to be more extreme than the actual market so you can see the effects more readily. You buy a daily leveraged fund and the index is at 1000. Suddenly the market goes crazy, and goes up to 2000 - a 100% gain! Because you have a 2x ETF, you will find your return to be somewhere near 200% (if the ETF did its job). Then tomorrow it goes back to normal and falls back down to 1000. This is a fall of 50%. If your ETF did its job, you should find your loss is somewhere near twice that: 100%. You have wiped out all your money. Forever. You lose. :) The stock market does not, in practice, make jumps that huge in a single day. But it does go up and down, not just up, and if you're doing a daily leveraged ETF, your money will be gradually eroded. It doesn't matter whether it's 2x leveraged or 8x leveraged or inverse (-1x) or anything else. Do the math, get some historical data, run some simulations. You're right that it is possible to beat the market using a 2x ETF, in the short run. But the longer you hold the stock, the more ups and downs you experience along the way, and the more opportunity your money has to decay. If you really want to double your exposure to the market over the intermediate term, borrow the money yourself. This is why they invented the margin account: Your broker will essentially give you a loan using your existing portfolio as collateral. You can then invest the borrowed money, increasing your exposure even more. Alternatively, if you have existing assets like, say, a house, you can take out a mortgage on it and invest the proceeds. (This isn't necessarily a good idea, but it's not really worse than a margin account; investing with borrowed money is investing with borrowed money, and you might get a better interest rate. Actually, a lot of rich people who could pay off their mortgages don't, and invest the money instead, and keep the tax deduction for mortgage interest. But I digress.) Remember that assets shrink; liabilities (loans) never shrink. If you really want to double your return over the long term, invest twice as much money.",
"title": ""
},
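The passage's deliberately extreme 2x example, plus a tamer round-trip, can be replayed in a few lines to show the daily-reset decay it describes:

```python
# A daily-leveraged fund multiplies each day's index return by the leverage factor
# and resets; a round-trip in the index therefore erodes the fund's value.
def leveraged_path(daily_returns, leverage=2.0, start=100.0):
    value = start
    for r in daily_returns:
        value *= 1 + leverage * r
    return value

print(leveraged_path([1.00, -0.50]))          # index 1000 -> 2000 -> 1000; 2x ETF -> 0
print(leveraged_path([0.05, -0.0476] * 50))   # mild up/down chop: index flat, 2x decays
```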
{
"docid": "582d3445c75e76dd671a28f85595a0fc",
"text": "It is true that this is possible, however, it's very remote in the case of the large and reputable fund companies such as Vanguard. FDIC insurance protects against precisely this for bank accounts, but mutual funds and ETFs do not have an equivalent to FDIC insurance. One thing that does help you in the case of a mutual fund or ETF is that you indirectly (through the fund) own actual assets. In a cash account at a bank, you have a promise from the bank to pay, and then the bank can go off and use your money to make loans. You don't in any sense own the bank's loans. With a fund, the fund company cannot (legally) take your money out of the fund, except to pay the expense ratio. They have to use your money to buy stocks, bonds, or whatever the fund invests in. Those assets are then owned by the fund. Legally, a mutual fund is a special kind of company defined in the Investment Company Act of 1940, and is a separate company from the investment advisor (such as Vanguard): http://www.sec.gov/answers/mfinvco.htm Funds have their own boards, and in principle a fund board can even fire the company advising the fund, though this is not likely since boards aren't usually independent. (a quick google found this article for more, maybe someone can find a better one: http://www.marketwatch.com/story/mutual-fund-independent-board-rule-all-but-dead) If Vanguard goes under, the funds could continue to exist and get a new adviser, or could be liquidated with investors receiving whatever the assets are worth. Of course, all this legal stuff doesn't help you with outright fraud. If a fund's adviser says it bought the S&P 500, but really some guy bought himself a yacht, Madoff-style, then you have a problem. But a huge well-known ETF has auditors, tons of different employees, lots of brokerage and exchange traffic, etc. so to me at least it's tough to imagine a risk here. With a small fund company with just a few people - and there are lots of these! - then there's more risk, and you'd want to carefully look at what independent agent holds their assets, who their auditors are, and so forth. With regular mutual funds (not ETFs) there are more issues with diversifying across fund companies: With ETFs, there probably isn't much downside to diversifying since you could buy them all from one brokerage account. Maybe it even happens naturally if you pick the best ETFs you can find. Personally, I would just pick the best ETFs and not worry about advisor diversity. Update: maybe also deserving a mention are exchange-traded notes (ETNs). An ETN's legal structure is more like the bank account, minus the FDIC insurance of course. It's an IOU from the company that runs the ETN, where they promise to pay back the value of some index. There's no investment company as with a fund, and therefore you don't own a share of any actual assets. If the ETN's sponsor went bankrupt, you would indeed have a problem, much more so than if an ETF's sponsor went bankrupt.",
"title": ""
},
{
"docid": "cce033f385da61f67b0c492443451b1d",
"text": "\"It's easy for me to look at an IRA, no deposits or withdrawal in a year, and compare the return to some index. Once you start adding transactions, not so easy. Here's a method that answers your goal as closely as I can offer: SPY goes back to 1993. It's the most quoted EFT that replicates the S&P 500, and you specifically asked to compare how the investment would have gone if you were in such a fund. This is an important distinction, as I don't have to adjust for its .09% expense, as you would have been subject to it in this fund. Simply go to Yahoo, and start with the historical prices. Easy to do this on a spreadsheet. I'll assume you can find all your purchases inc dates & dollars invested. Look these up and treat those dollars as purchases of SPY. Once the list is done, go back and look up the dividends, issues quarterly, and on the dividend date, add the shares it would purchase based on that day's price. Of course, any withdrawals get accounted for the same way, take out the number of SPY shares it would have bought. Remember to include the commission on SPY, whatever your broker charges. If I've missed something, I'm sure we'll see someone point that out, I'd be happy to edit that in, to make this wiki-worthy. Edit - due to the nature of comments and the inability to edit, I'm adding this here. Perhaps I'm reading the question too pedantically, perhaps not. I'm reading it as \"\"if instead of doing whatever I did, I invested in an S&P index fund, how would I have performed?\"\" To measure one's return against a benchmark, the mechanics of the benchmarks calculation are not needed. In a comment I offer an example - if there were an ETF based on some type of black-box investing for which the investments were not disclosed at all, only day's end pricing, my answer above still applies exactly. The validity of such comparisons is a different question, but the fact that the formulation of the EFT doesn't come into play remains. In my comment below which I removed I hypothesized an ETF name, not intending it to come off as sarcastic. For the record, if one wishes to start JoesETF, I'm ok with it.\"",
"title": ""
},
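A rough sketch of the benchmarking method described above: replay your own cash flows as SPY purchases or sales and reinvest dividends on their pay dates. The dates, prices, dividends and commission below are placeholders; in practice they would come from the downloaded price history:

```python
cash_flows = [("2015-01-05", 1000.0), ("2015-07-06", 1500.0), ("2016-01-04", -500.0)]
prices     = {"2015-01-05": 201.5, "2015-03-20": 208.0, "2015-07-06": 207.0,
              "2016-01-04": 199.0, "2016-06-30": 209.5}
dividends  = [("2015-03-20", 1.033)]     # (pay date, dividend per share)
commission = 5.0                          # per trade

shares = 0.0
events = sorted([(d, "flow", a) for d, a in cash_flows] +
                [(d, "div",  a) for d, a in dividends])
for date, kind, amount in events:
    if kind == "flow":
        # commission reduces what a deposit buys and increases what a withdrawal must sell
        shares += (amount - commission) / prices[date]
    else:
        # reinvest the dividend at that day's price
        shares += shares * amount / prices[date]

print(f"Benchmark value at the end date: {shares * prices['2016-06-30']:,.2f}")
```

Comparing that final figure with your actual portfolio value answers the "what if I had just bought SPY" question without needing to know how the index itself is constructed.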
{
"docid": "6241d19ae4f4a34d2000f940bf82e549",
"text": "The issue is the time frame. With a one year investment horizon the only way for a fund manager to be confident that they are not going to lose their shirt is to invest your money in ultra conservative low volatility investments. Otherwise a year like 2008 in the US stock market would break them. Note if you are willing to expand your payback time period to multiple years then you are essentially looking at an annuity and it's market loss rider. Of course those contacts are always structured such that the insurance company is extremely confident that they will be able to make more in the market than they are promising to pay back (multiple decade time horizons).",
"title": ""
},
{
"docid": "18ba65edf360c23887d0043f4696facb",
"text": "Now, if I'm not mistaken, tracking a value-weighted index is extremely easy - just buy the shares in the exact amount they are in the index and wait. Yes in theory. In practise this is difficult. Most funds that track S&P do it on sample basis. This is to maintain the fund size. Although I don't have / know the exact number ... if one wants to replicate the 500 stocks in the same %, one would need close to billion in fund size. As funds are not this large, there are various strategies adopted, including sampling of companies [i.e. don't buy all]; select a set of companies that mimic the S&P behaviour, etc. All these strategies result in tracking errors. There are algorithms to reduce this. The only time you would need to rebalance your holdings is when there is a change in the index, i.e. a company is dropped and a new one is added, right? So essentially rebalance is done to; If so, why do passive ETFs require frequent rebalancing and generally lose to their benchmark index? lets take an Index with just 3 companies, with below price. The total Market cap is 1000 The Minimum required to mimic this index is 200 or Multiples of 200. If so you are fine. More Often, funds can't be this large. For example approx 100 funds track the S&P Index. Together they hold around 8-10% of Market Cap. Few large funds like Vangaurd, etc may hold around 2%. But most of the 100+ S&P funds hold something in 0.1 to 0.5 range. So lets say a fund only has 100. To maintain same proportion it has to buy shares in fraction. But it can only buy shares in whole numbers. This would then force the fund manager to allocate out of proportion, some may remain cash, etc. As you can see below illustrative, there is a tracking error. The fund is not truly able to mimic the index. Now lets say after 1st April, the share price moved, now this would mean more tracking error if no action is taken [block 2] ... and less tracking error if one share of company B is sold and one share of company C is purchased. Again the above is a very simplified view. Tracking error computation is involved mathematics. Now that we have the basic concepts, more often funds tracking S&P; Thus they need to rebalance.",
"title": ""
},
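The three-company illustration referred to above lost its price table in extraction, so the prices and share counts below are assumed purely to show the whole-share mechanics and the resulting cash drag:

```python
index = {"A": {"price": 100.0, "shares_out": 5},    # market cap 500 -> 50% weight
         "B": {"price": 30.0,  "shares_out": 10},   # market cap 300 -> 30% weight
         "C": {"price": 20.0,  "shares_out": 10}}   # market cap 200 -> 20% weight

def replicate(fund_size):
    cap = {k: v["price"] * v["shares_out"] for k, v in index.items()}
    total_cap = sum(cap.values())
    held, spent = {}, 0.0
    for k, v in index.items():
        target_value = fund_size * cap[k] / total_cap
        held[k] = int(target_value // v["price"])    # whole shares only
        spent += held[k] * v["price"]
    return held, round(fund_size - spent, 2)         # holdings and leftover cash

print(replicate(200))   # ({'A': 1, 'B': 2, 'C': 2}, 0.0)  exact replication, no cash drag
print(replicate(100))   # ({'A': 0, 'B': 1, 'C': 1}, 50.0) cannot afford A, half sits in cash
```

The leftover cash and the skewed weights in the second case are exactly the kind of tracking error the passage describes.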
{
"docid": "745af972c291ab920e3b2690a6d0ef9d",
"text": "Yes, it depends on the fund it's trying to mirror. The ETF for the S&P that's best known (in my opinion) is SPY and you see the breakdown of its holdings. Clearly, it's not an equal weighted index.",
"title": ""
},
{
"docid": "e710be66cacaa43a6b7e4b7df6033b02",
"text": "Yes, each of Vanguard's mutual funds looks only at its own shares when deciding to upgrade/downgrade the shares to/from Admiral status. To the best of my knowledge, if you hold a fund in an IRA as well as a separate investment, the shares are not totaled in deciding whether or not the shares are accorded Admiral shares status; each account is considered separately. Also, for many funds, the minimum investment value is not $10K but is much larger (used to be $100K a long time ago, but recently the rules have been relaxed somewhat).",
"title": ""
},
{
"docid": "aa0ef326df4465ff87ce2aea2d17493a",
"text": "What is your time horizon? Over long horizons, you absolutely want to minimise the expense ratio – a seemingly puny 2% fee p.a. can cost you a third of your savings over 35 years. Over short horizons, the cost of trading in and trading out might matter more. A mutual fund might be front-loaded, i.e. charge a fixed initial percentage when you first purchase it. ETFs, traded daily on an exchange just like a stock, don't have that. What you'll pay there is the broker commission, and the bid-ask spread (and possibly any premium/discount the ETF has vis-a-vis the underlying asset value). Another thing to keep in mind is tracking error: how closely does the fond mirror the underlying index it attempts to track? More often than not it works against you. However, not sure there is a systematic difference between ETFs and funds there. Size and age of a fund can matter, indeed - I've had new and smallish ETFs that didn't take off close down, so I had to sell and re-allocate the money. Two more minor aspects: Synthetic ETFs and lending to short sellers. 1) Some ETFs are synthetic, that is, they don't buy all the underlying shares replicating the index, actually owning the shares. Instead, they put the money in the bank and enter a swap with a counter-party, typically an investment bank, that promises to pay them the equivalent return of holding that share portfolio. In this case, you have (implicit) credit exposure to that counter-party - if the index performs well, and they don't pay up, well, tough luck. The ETF was relying on that swap, never really held the shares comprising the index, and won't necessarily cough up the difference. 2) In a similar vein, some (non-synthetic) ETFs hold the shares, but then lend them out to short sellers, earning extra money. This will increase the profit of the ETF provider, and potentially decrease your expense ratio (if they pass some of the profit on, or charge lower fees). So, that's a good thing. In case of an operational screw up, or if the short seller can't fulfil their obligations to return the shares, there is a risk of a loss. These two considerations are not really a factor in normal times (except in improving ETF expense ratios), but during the 2009 meltdown they were floated as things to consider. Mutual funds and ETFs re-invest or pay out dividends. For a given mutual fund, you might be able to choose, while ETFs typically are of one type or the other. Not sure how tax treatment differs there, though, sorry (not something I have to deal with in my jurisdiction). As a rule of thumb though, as alex vieux says, for a popular index, ETFs will be cheaper over the long term. Very low cost mutual funds, such as Vanguard, might be competitive though.",
"title": ""
},
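The "a 2% fee can cost you a third of your savings over 35 years" style of claim is easy to check under stated assumptions; the exact fraction depends on the gross return you assume (with the 6% gross return used here it comes out closer to half):

```python
# Compare final wealth with and without an annual expense-ratio drag.
def final_wealth(years, gross_return, expense_ratio, start=10_000):
    return start * (1 + gross_return - expense_ratio) ** years

gross = 0.06                                    # assumed gross annual return
for fee in (0.001, 0.01, 0.02):
    w = final_wealth(35, gross, fee)
    lost = 1 - w / final_wealth(35, gross, 0.0)
    print(f"fee {fee:.2%}: final {w:,.0f}  ({lost:.0%} of the no-fee outcome lost)")
```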
{
"docid": "5e1a32fd89b6eb8df2bf94f74df763da",
"text": "\"First of all, it's great you're now taking full advantage of your employer match – i.e. free money. Next, on the question of the use of a life cycle / target date fund as a \"\"hedge\"\": Life cycle funds were introduced for hands-off, one-stop-shopping investors who don't like a hassle or don't understand. Such funds are gaining in popularity: employers can use them as a default choice for automatic enrollment, which results in more participation in retirement savings plans than if employees had to opt-in. I think life cycle funds are a good innovation for that reason. But, the added service and convenience typically comes with higher fees. If you are going to be hands-off, make sure you're cost-conscious: Fees can devastate a portfolio's performance. In your case, it sounds like you are willing to do some work for your portfolio. If you are confident that you've chosen a good equity glide path – that is, the initial and final stock/bond allocations and the rebalancing plan to get from one to the other – then you're not going to benefit much by having a life cycle fund in your portfolio duplicating your own effort with inferior components. (I assume you are selecting great low-cost, liquid index funds for your own strategy!) Life cycle are neat, but replicating them isn't rocket science. However, I see a few cases in which life cycle funds may still be useful even if one has made a decision to be more involved in portfolio construction: Similar to your case: You have a company savings plan that you're taking advantage of because of a matching contribution. Chances are your company plan doesn't offer a wide variety of funds. Since a life cycle fund is available, it can be a good choice for that account. But make sure fees aren't out of hand. If much lower-cost equity and bond funds are available, consider them instead. Let's say you had another smaller account that you were unable to consolidate into your main account. (e.g. a Traditional IRA vs. your Roth, and you didn't necessarily want to convert it.) Even if that account had access to a wide variety of funds, it still might not be worth the added hassle or trading costs of owning and rebalancing multiple funds inside the smaller account. There, perhaps, the life cycle fund can help you out, while you use your own strategy in your main account. Finally, let's assume you had a single main account and you buy partially into the idea of a life cycle fund and you find a great one with low fees. Except: you want a bit of something else in your portfolio not provided by the life cycle fund, e.g. some more emerging markets, international, or commodity stock exposure. (Is this one reason you're doing it yourself?) In that case, where the life cycle fund doesn't quite have everything you want, you could still use it for the bulk of the portfolio (e.g. 85-95%) and then select one or two specific additional ETFs to complement it. Just make sure you factor in those additional components into the overall equity weighting and adjust your life cycle fund choice accordingly (e.g. perhaps go more conservative in the life cycle, to compensate.) I hope that helps! Additional References:\"",
"title": ""
},
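If you do combine a life-cycle fund with one or two extra ETFs as suggested above, the overall equity weighting is just a holdings-weighted average; the weights and internal equity share below are assumptions:

```python
# (portfolio weight, equity share inside the holding)
holdings = [
    (0.90, 0.70),   # life-cycle fund, assumed 70% equities internally
    (0.06, 1.00),   # emerging-markets ETF
    (0.04, 1.00),   # commodity-equity ETF
]
overall_equity = sum(w * eq for w, eq in holdings)
print(f"Overall equity exposure: {overall_equity:.0%}")   # 73% in this example
```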
{
"docid": "f50a77edeff46066dd58bbd93707a0f4",
"text": "Here are the specific Vanguard index funds and ETF's I use to mimic Ray Dalio's all weather portfolio for my taxable investment savings. I invest into this with Vanguard personal investor and brokerage accounts. Here's a summary of the performance results from 2007 to today: 2007 is when the DBC commodity fund was created, so that's why my results are only tested back that far. I've tested the broader asset class as well and the results are similar, but I suggest doing that as well for yourself. I use portfoliovisualizer.com to backtest the results of my portfolio along with various asset classes, that's been tremendously useful. My opinionated advice would be to ignore the local investment advisor recommendations. Nobody will ever care more about your money than you, and their incentives are misaligned as Tony mentions in his book. Mutual funds were chosen over ETF's for the simplicity of auto-investment. Unfortunately I have to manually buy the ETF shares each month (DBC and GLD). I'm 29 and don't use this for retirement savings. My retirement is 100% VSMAX. I'll adjust this in 20 years or so to be more conservative. However, when I get close to age 45-50 I'm planning to shift into this allocation at a market high point. When I approach retirement, this is EXACTLY where I want to be. Let's say you had $2.7M in your retirement account on Oct 31, 2007 that was invested in 100% US Stocks. In Feb of 2009 your balance would be roughly $1.35M. If you wanted to retire in 2009 you most likely couldn't. If you had invested with this approach you're account would have dropped to $2.4M in Feb of 2009. Disclaimer: I'm not a financial planner or advisor, nor do I claim to be. I'm a software engineer and I've heavily researched this approach solely for my own benefit. I have absolutely no affiliation with any of the tools, organizations, or funds mentioned here and there's no possible way for me to profit or gain from this. I'm not recommending anyone use this, I'm merely providing an overview of how I choose to invest my own money. Take or leave it, that's up to you. The loss/gain incured from this is your responsibility, and I can't be held accountable.",
"title": ""
},
{
"docid": "0b13393accc83213c5973089554b85d3",
"text": "\"A budget that you both agree on is a great goal. X% to charity, y% to savings, $z a month to a reserve for house repairs, and so on. Your SO is likely to agree with this, especially if you say it like this: I know you're concerned that I might want to give too much to charity. Why don't we go through the numbers and work out a cap on what I can give away each year? Like, x% of our gross income or y% of our disposable income? Work out x and y in advance so you say real percentages in this \"\"meeting request\"\", but be prepared to actually end up at a different x and y later. Perhaps even suggest an x and y that are a little lower than you would really wish for. If your SO thinks you earn half what you really do, then mental math if you say 5% will lead to half what you want to donate, but don't worry about that at the moment. That could even work in your favour if you've already said you want to give $5000 (or $50,000) a year and mental math with the percentage leads your SO to $2500 (or $25,000), (s)he might think \"\"yes, if we have this meeting I can rein in that crazy generosity.\"\" Make sure your budget is complete. You don't want your SO worrying that if the furnace wears out or the roof needs to be replaced, the money won't be there because you gave it away. Show how these contingencies, and your retirement, will all be taken care of. Show how much you are setting aside to spend on vacations, and so on. That will make it clear that there is room to give to those who are not as fortunate as you. If your SO's motivations are only worry that there won't be money when it's needed, you will not only get permission to donate, you'll get a happier SO. (For those who don't know how this can happen, I knew a woman just like this. The only income she believed they had was her husband's pension. He had several overseas companies and significant royalty income, but she never accounted for that when talking of what they could afford. Her mental image of their income was perhaps a quarter of what it really was, leading to more than one fight about whether they could take a trip, or give a gift, that she thought was too extravagant. For her own happiness I wish he had gone through the budget with her in detail.)\"",
"title": ""
}
] |
fiqa
|
5f9416049c8b375d68c88cd3b74423a0
|
Should I overpay to end a fixed-rate mortgage early? [duplicate]
|
[
{
"docid": "8c97110c32f226e776d5bfe11a4844d3",
"text": "I would strongly encourage you to either find specifically where in your written contract the handling of early/over payments are defined and post it for us to help you, or that you go and visit a licensed real estate attorney. Even at a ridiculously high price of 850 pounds per hour for a top UK law firm (and I suspect you can find a competent lawyer for 10-20% of that amount), it would cost you less than a year of prepayment penalty to get professional advice on what to do with your mortgage. A certified public accountant (CPA) might be able to advise you, as well, if that's any easier for you to find. I have the sneaking suspicion that the company representatives are not being entirely forthcoming with you, thus the need for outside advice. Generally speaking, loans are given an interest rate per period (such as yearly APR), and you pay a percentage (the interest) of the total amount of money you owe (the principle). So if you owe 100,000 at 5% APR, you accrue 5,000 in interest that year. If you pay only the interest each year, you'll pay 50,000 in interest over 10 years - but if you pay everything off in year 8, at a minimum you'd have paid 10,000 less in interest (assuming no prepayment penalties, which you have some of those). So paying off early does not change your APR or your principle amount paid, but it should drastically reduce the interest you pay. Amortization schedules don't change that - they just keep the payments even over the scheduled full life of the loan. Even with prepayment penalties, these are customarily billed at less than 6 months of interest (at the rate you would have payed if you kept the loan), so if you are supposedly on the hook for more than that again I highly suspect something fishy is going on - in which case you'd probably want legal representation to help you put a stop to it. In short, something is definitely and most certainly wrong if paying off a loan years in advance - even after taking into account pre-payment penalties - costs you the same or more than paying the loan off over the full term, on schedule. This is highly abnormal, and frankly even in the US I'd consider it scandalous if it were the case. So please, do look deeper into this - something isn't right!",
"title": ""
},
{
"docid": "4634cd7d88c054161302a975e8f7587c",
"text": "\"The simplest argument for overpayment is this: Let's suppose your fixed rate mortgage has an interest rate of 4.00%. Every £1 you can afford to overpay gives you a guaranteed effective return of 4.00% gross. Yes your monthly mortgage payment will stay the same; however, the proportion of it that's paying off interest every month will be less, and the amount that's actually going into acquiring the bricks and mortar of your home will be greater. So in a sense your returns are \"\"inverted\"\" i.e. because every £1 you overpay is £1 you don't need to keep paying 4% a year to continue borrowing. In your case this return will be locked away for a few more years, until you can remortgage the property. However, compared to some other things you could do with your excess £1s, this is a very generous and safe return that is well above the average rate of UK inflation for the past ten years. Let's compare that to some other options for your extra £1s: Cash savings: The most competitive rate I can currently find for instant access is 1.63% from ICICI. If you are prepared to lock your money away until March 2020, Melton Mowbray Building Society has a fixed rate bond that will pay you 2.60% gross. On these accounts you pay income tax at your marginal rate on any interest received. For a basic rate taxpayer that's 20%. If you're a higher rate taxpayer that means 40% of this interest is deducted as tax. In other words: assuming you pay income tax at one of these rates, to get an effective return of 4.00% on cash savings you'd have to find an account paying: Cash ISAs: these accounts are tax sheltered, so the income tax equation isn't an issue. However, the best rate I can find on a 4 year fixed rate cash ISA is 2.35% from Leeds Building Society. As you can see, it's a long way below the returns you can get from overpaying. To find returns such as that you would have to take a lot more risk with your money – for example: Stock market investments: For example, an index fund tracking the FTSE 100 (UK-listed blue chip companies) could have given you a total return of 3.62% over the last 3 years (past performance does not equal future returns). Over a longer time period this return should be better – historical performance suggests somewhere between 5 to 6% is the norm. But take a closer look and you'll see that over the last six months of 2015 this fund had a negative return of 6.11%, i.e. for a time you'd have been losing money. How would you feel about that kind of volatility? In conclusion: I understand your frustration at having locked in to a long term fixed rate (effectively insuring against rates going up), then seeing rates stay low for longer than most commentators thought. However, overpaying your mortgage is one way you can turn this situation into a pretty good deal for yourself – a 4% guaranteed return is one that most cash savers would envy. In response to comments, I've uploaded a spreadsheet that I hope will make the numbers clearer. I've used an example of owing £100k over 25 years at an unvarying 4% interest, and shown the scenarios with and without making a £100/month voluntary overpayment, assuming your lender allows this. Here's the sheet: https://www.scribd.com/doc/294640994/Mortgage-Amortization-Sheet-Mortgage-Overpayment-Comparison After one year you have made £1,200 in overpayments. You now owe £1,222.25 less than if you hadn't overpaid. After five years you owe £6,629 less on your mortgage, having overpaid in £6,000 so far. 
Should you remortgage at this point that £629 is your return so far, and you also have £6k more equity in the property. If you keep going: After 65 months you are paying more capital than interest out of your monthly payment. This takes until 93 months without overpayments. In total, if you keep up £100/month overpayment, you pay £15,533 less interest overall, and end your mortgage six years early. You can play with the spreadsheet inputs to see the effect of different overpayment amounts. Hope this helps.\"",
"title": ""
}
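The spreadsheet figures quoted above (£100k, 25 years, 4%, £100/month overpayment) can be re-derived with a simple amortisation loop; expect small rounding differences from the author's sheet:

```python
def amortise(principal, annual_rate, years, extra=0.0):
    """Amortise month by month, returning payment, months taken, total interest, balances."""
    r = annual_rate / 12
    payment = principal * r / (1 - (1 + r) ** (-years * 12))
    balance, interest_paid, balances = principal, 0.0, []
    while balance > 0.01:
        interest = balance * r
        interest_paid += interest
        balance = max(balance + interest - (payment + extra), 0.0)
        balances.append(balance)
    return payment, len(balances), interest_paid, balances

pay, n_plain, int_plain, bal_plain = amortise(100_000, 0.04, 25)
_,   n_extra, int_extra, bal_extra = amortise(100_000, 0.04, 25, extra=100)

print(f"Monthly payment: {pay:.2f}")                                    # ~527.84
print(f"Owed less after 1 year: {bal_plain[11] - bal_extra[11]:.2f}")   # ~1,222.25
print(f"Interest saved: {int_plain - int_extra:,.0f}, "
      f"mortgage ends {(n_plain - n_extra) / 12:.1f} years early")      # ~15,5xx and ~6 years
```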
] |
[
{
"docid": "ade1a70a1ee0761e9bad174726ff779e",
"text": "\"I've heard that the bank may agree to a \"\"one time adjustment\"\" to lower the payments on Mortgage #2 because of paying a very large payment. Is this something that really happens? It's to the banks advantage to reduce the payments in that situation. If they were willing to loan you money previously, they should still be willing. If they keep the payments the same, then you'll pay off the loan faster. Just playing with a spreadsheet, paying off a third of the mortgage amount would eliminate the back half of the payments or reduces payments by around two fifths (leaving off any escrow or insurance). If you can afford the payments, I'd lean towards leaving them at the current level and paying off the loan early. But you know your circumstances better than we do. If you are underfunded elsewhere, shore things up. Fully fund your 401k and IRA. Fill out your emergency fund. Buy that new appliance that you don't quite need yet but will soon. If you are paying PMI, you should reduce the principal down to the point where you no longer have to do so. That's usually more than 20% equity (or less than an 80% loan). There is an argument for investing the remainder in securities (stocks and bonds). If you itemize, you can deduct the interest on your mortgage. And then you can deduct other things, like local and state taxes. If you're getting a higher return from securities than you'd pay on the mortgage, it can be a good investment. Five or ten years from now, when your interest drops closer to the itemization threshold, you can cash out and pay off more of the mortgage than you could now. The problem is that this might not be the best time for that. The Buffett Indicator is currently higher than it was before the 2007-9 market crash. That suggests that stocks aren't the best place for a medium term investment right now. I'd pay down the mortgage. You know the return on that. No matter what happens with the market, it will save you on interest. I'd keep the payments where they are now unless they are straining your budget unduly. Pay off your thirty year mortgage in fifteen years.\"",
"title": ""
},
{
"docid": "4ff9a2c9f2705c9fce04ad454be5d44d",
"text": "I wouldn't pay down your mortgage faster until you have a huge emergency fund. Like two years' worth of expenses. Once you put extra money toward principal you can't get it out unless you get a HELOC, which costs money. You're in a position now to build that up in a hurry. I suggest you do so. Your mortgage is excellent. In the land of inflation it gets easier and easier to make that fixed-dollar payment: depreciating dollars. You seem like a go-getter. Once you have your huge emergency fund, why not buy a few websites and monetize the heck out of them? Or look for an investment property from someone who needs to sell desperately? Get a cushion that you can do something with.",
"title": ""
},
{
"docid": "f3e741b5c1797f90f2eff5a53ade7927",
"text": "Consider not buying the house? Consider a cheaper property? What are your actual goals? Owning vs renting? Perhaps an actual investment goal? What is your rent now vs the mortgage on the house? What is the time frame for the mortgage you are considering? Those are the real questions you need to ask yourself. It does sound like you can become overleveraged with this property, although your down payment is quite substantial, but one single thing goes wrong and your cash flow is irreparably constricted. I personally wouldn't take that risk if I had the same forecast of expenditures, but this could be altered if there were particular investment goals I had in mind.",
"title": ""
},
{
"docid": "bbb5cf86b6ab4784bc588a20e2d96659",
"text": "Regardless of how long the mortgage has left, the return you get on prepayments is identical to the mortgage rate. (What happens on your tax return is a different matter.) It's easier to get a decent financial calculator (The TI BA-35 is my favorite) than to construct spreadsheets which may or may not contain equation errors. When I duplicate John's numbers, $100K mortgage, 4% rate, I get a 60 mo remaining balance of 90,447.51 and with $50 extra, $87132.56, a diff of $3314.95. $314.95 return on the $3000. $315 over 5yrs is $63/yr, over an average $1500 put in, 63/1500 = 4.2%. Of course the simple math of just averaging the payment creates that .2% error. A 60 payment $50 returning $314.95 produces 4.000%. @Peter K - with all due respect, there's nothing for me about time value of money calculations that can be counter-intuitive. While I like playing with spreadsheets, the first thing I do is run a few scenarios and double check using the calculator. Your updated sheet is now at 3.76%? A time vaule of money calculation should not have rounding errors that larger. It's larger than my back of envelope calculation. @Kaushik - if you don't need the money, and would buy a CD at the rate of your mortgage, then pay early. Nothing wrong with that.",
"title": ""
},
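The 4.000% figure in the record above falls out of a one-line future-value check: the balance reduction from the $50/month extra equals those payments compounded at the mortgage rate:

```python
# Future value of a $50/month prepayment stream at the mortgage rate (4%/12 monthly).
r, months, extra = 0.04 / 12, 60, 50.0

fv_of_extras = extra * ((1 + r) ** months - 1) / r
total_paid_in = extra * months
print(f"Balance reduced by: {fv_of_extras:,.2f}")         # ~3,314.96
print(f"Cash paid in:       {total_paid_in:,.2f}")        # 3,000.00
print(f"Gain:               {fv_of_extras - total_paid_in:,.2f}")
```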
{
"docid": "b9eba242b203aa7e35353ed409783780",
"text": "I'd rent and put the $30K/ yr into savings. When the short sale comes off your credit, you'll have a substantial downpayment. You don't mention the balance, but the current rate you're paying is 3% too high. Even if you get the rate reduced, you have a $100K issue. I recommend reading through Will Short Sale Prevent Me From Getting VA Home Loan Later? A bit different question, but it talks more about the short sale. A comment for that question makes a key point - if you have a short sale, will the bank chase you for the balance? If not, you have a choice to make. Adding note after user11043 commented - First, run the numbers. If you were to pay the $100K off over 7 years, it's $1534/mo extra. Nearly $130K, and even then, you might not be at 80% LTV. I don't know what rents are like in your area, but do the math. First, if the rent is less than the current mortgage+property tax and maintenance, you will immediately have better cash flow each month, and over time, save towards the newer house. If you feel compelled to work this out and stay put, I'd go to the bank and tell them you'd like them to recast the loan to a new rate. They have more to lose than you do, and there's nothing wrong with a bit of a threat. You can walk away, or they can do what's reasonable, to just fix your rate. With a 4% rate, you'd easily attack the principal if you wish. As you commented above, if the bank offers no option, I'd seriously consider the short sale. There's nothing wrong with that option from a moral standpoint, in my opinion. This is not Bedford Falls, and you are not hurting your neighbors. The bank is amoral, if not immoral.",
"title": ""
},
{
"docid": "f9dce05a7255e9cf5cd86ec82fce3395",
"text": "This is more of an interesting question then it looks on first sight. In the USA there are some tax reliefs for mortgage payments, which we don’t have in the UK unless you are renting out the property with the mortgage. So firstly work out the interest rate on each loan taking into account any tax reliefs, etc. Then you need to consider the charges for paying off a loan, for example often there is a charge if you pay off a mortgage. These days in the UK, most mortgagees allow you to pay off at least 10% a year without hitting such a charge – but check your mortgage offer document. How interest is calculated when you make an early payment may be different between your loans – so check. Then you need to consider what will happen if you need another loan. Some mortgages allow you to take back any overpayments, most don’t. Re-mortgaging to increase the size of your mortgage often has high charges. Then there is the effect on your credit rating: paying more of a loan each month then you need to, often improves your credit rating. You also need to consider how interest rates may change, for example if you mortgage is a fixed rate but your car loan is not and you expect interest rates to rise, do the calculations based on what you expect interest rates to be over the length of the loans. However, normally it is best to pay off the loan with the highest interest rate first. Reasons for penalties for paying of some loans in the UK. In the UK some short term loans (normally under 3 years) add on all the interest at the start of the loan, so you don’t save any interest if you pay of the loan quicker. This is due to the banks having to cover their admin costs, and there being no admin charge to take out the loan. Fixed rate loans/mortgagees have penalties for overpayment, as otherwise when interest rates go down, people will change to other lenders, so making it a “one way bet” that the banks will always loose. (I believe in the USA, the central bank will under right such loans, so the banks don’t take the risk.)",
"title": ""
},
{
"docid": "8481a2039b2bc140fa374e80e6830c32",
"text": "If there's no prepayment penalty, and if the extra is applied to principal rather than just toward later payments, then paying extra saves you money. Paying more often, by itself, doesn't. Paying early within a single month (ie, paying off the loan at the same average rate) doesn't save enough you be worth considering",
"title": ""
},
{
"docid": "9a74ce917b8bba32d778ccb34fe977c9",
"text": "Depending on your bank you may receive an ACH discount for doing automatic withdrawals from a deposit account at that bank. Now, this depends on your bank and you need to do independent research on that topic. As far as dictating what your extra money goes towards each month (early payments, principal payments, interest payments) you need to discuss that with your bank. I'm sure it's not too difficult to find. In my experience most banks, so long as you didn't sign a contract on your mortgage where you're penalized for sending additional money, will apply extra money toward early payments, and not principal. I would suggest calling them. I know for my student loans I have to send a detailed list of my loans and in what order I want my extra payments toward each, otherwise it will be considered an early payment, or it will be spread evenly among them all.",
"title": ""
},
{
"docid": "6e0f5a5bd8fcf16434ed72e82e14daf0",
"text": "Consider that the bank of course makes money on the money in your escrow. It is nothing but a free loan you give the bank, and the official reasons why they want it are mostly BS - they want your free loan, nothing else. As a consequence, to let you out of it, they want the money they now cannot make on your money upfront, in form of a 'fee'. That explains the amount; it is right their expected loss by letting you out. Unfortunately, knowing this doesn't change your options. Either way, you will have to pay that money; either as a one-time fee, or as a continuing loss of interest. As others mentioned, you cannot calculate with 29 years, as chances are the mortgage will end earlier - by refinancing or sale. Then you are back to square one with another mandatory escrow; so paying the fee is probably not a good idea. If you are an interesting borrower for other banks, you might be able to refinance with no escrow; you can always try to negotiate this and make it a part of the contract. If they want your business, they might agree to that.",
"title": ""
},
{
"docid": "6c33bf1dbc4fda12b28dadf262162d4b",
"text": "\"Given that the 6 answers all advocate similar information, let me offer you the alternate scenario - You earn $60K and have an employer offering a 50% match on all deposits. All deposits. (Note, I recently read a Q&A here describing such an offer. If I see it again, I'll link). Let the thought of the above settle in. You think about the fact that $42K isn't a bad salary, and decide to deposit 30%, to gain the full match on your $18K deposit. Now, you budget to live your life, pay your bills, etc, but it's tight. When you accumulate $2000, and a strong want comes up (a toy, a trip, anything, no judgement) you have a tough decision. You think to yourself, \"\"after the match, I am literally saving 45% of my income. I'm on a pace to have the ability to retire in 20 years. Why do I need to save even more?\"\" Your budget has enough discretionary spending that if you have a $2000 'emergency', you charge it and pay it off over the next 6-8 months. Much larger, and you know that your super-funded 401(k) has the ability to tap a loan. Your choice to turn away from the common wisdom has the recommended $20K (about 6 months of your spending) sitting in your 401(k), pretax deposited as $26K, and matched to nearly $40K, growing long term. Note: This is a devil's advocate answer. Had I been the first to answer, it would reflect the above. In my own experience, when I got married, we built up the proper emergency fund. As interest rates fell, we looked at our mortgage balance, and agreed that paying down the loan would enable us to refinance and save enough in mortgage interest that the net effect was as if we were getting 8% on the money. At the same time as we got that new mortgage, the bank offered a HELOC, which I never needed to use. Did we somehow create high risk? Perhaps. Given that my wife and I were both still working, and had similar incomes, it seemed reasonable.\"",
"title": ""
},
{
"docid": "fc667cc46903d9bf2c8fd48ffd853d9e",
"text": "\"I'll start by focussing on the numbers. I highly recommend you get comfortable with spreadsheets to do these calculations on your own. I assume a $200K loan, the mortgage for a $250K house. Scale this up or down as appropriate. For the rate, I used the current US average for the 30 and 15 year fixed loans. You can see 2 things. First, even with that lower rate to go 15 years, the payment required is 51% higher than with the 30. I'll get back to that. Second, to pay the 30 at 15 years, you'd need an extra $73. Because now you are paying at a 15 year pace, but with a 30 year rate. This is $876/yr to keep that flexibility. These are the numbers. There are 2 camps in viewing the longer term debt. There are those who view debt as evil, the $900/mo payment would keep them up at night until it's gone, and they would prefer to have zero debt regardless of the lifestyle choices they'd need to make or the alternative uses of that money. To them, it's not your house as long as you have a mortgage. (But they're ok with the local tax assessor having a statutory lien and his hand out every quarter.) The flip side are those who will say this is the cheapest money you'll ever see, and you should have as large a mortgage as you can, for as long as you can. Treat the interest like rent, and invest your money. My own view is more in the middle. Look at your situation. I'd prioritize In my opinion, it makes little sense to focus on the mortgage unless and until the first 5 items above are in place. The extra $459 to go to 15? If it's not stealing from those other items or making your cash flow tight, go for it. Keep one subtle point in mind, risk is like matter and energy, it's not created or destroyed but just moved around. Those who offer the cliche \"\"debt creates risk\"\" are correct, but the risk is not yours, it's the lender's. Looking at your own finances, liquidity is important. You can take the 15 year mortgage, and 10 years in, lose your job. The bank still wants its payments every month. Even if you had no mortgage, the tax collector is still there. To keep your risk low, you want a safety net that will cover you between jobs, illness, new babies being born, etc. I've gone head to head with people insisting on prioritizing the mortgage payoff ahead of the matched 401(k) deposit. Funny, they'd prefer to owe $75K less, while that $75K could have been deposited pretax (so $100K, for those in the 25% bracket) and matched, to $200K. Don't make that mistake.\"",
"title": ""
},
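The passage above compares 30- and 15-year fixed-rate payments on a $200K loan using the standard amortization formula, but it only says the rates were "current US averages" without stating them. The short Python sketch below reproduces that comparison under assumed rates of 4.5% (30-year) and 3.5% (15-year); because the rates are assumptions, the dollar figures and percentage gap will not exactly match the author's.

```python
# Minimal sketch of the 30- vs 15-year payment comparison above.
# The $200K principal comes from the passage; the 4.5%/3.5% rates are
# assumptions for illustration, so results differ from the author's numbers.
def monthly_payment(principal, annual_rate, years):
    """Standard fixed-rate amortization payment: P*r / (1 - (1+r)^-n)."""
    r = annual_rate / 12            # monthly interest rate
    n = years * 12                  # total number of payments
    return principal * r / (1 - (1 + r) ** -n)

p30 = monthly_payment(200_000, 0.045, 30)   # assumed 30-year rate
p15 = monthly_payment(200_000, 0.035, 15)   # assumed 15-year rate
extra = monthly_payment(200_000, 0.045, 15) - p15   # 15-year pace at the 30-year rate

print(f"30-year payment: ${p30:,.2f}/mo")
print(f"15-year payment: ${p15:,.2f}/mo ({p15 / p30 - 1:.0%} higher)")
print(f"Extra per month to pay the 30 off in 15 years: ${extra:,.2f}")
```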
{
"docid": "359f122d6d4d0d34ad6b2e80ef5a9e87",
"text": "First, check with your lender to see if the terms of the loan allow early payoff. If you are able to payoff early without penalty, with the numbers you are posting, I would hesitate to refinance. This is simply because if you actually do pay 5k/month on this loan you will have it paid off so quickly that refinancing will probably not save you much money. Back-of-the-napkin math at 5k/month has you paying 60k pounds a year, which will payoff in about 5 years. Even if you can afford 5k/month, I would recommend not paying extra on this debt ahead of other high-interest debt or saving in a tax-advantaged retirement account. If these other things are being taken care of, and you have liquid assets (cash) for emergencies, I would recommend paying off the mortgage without refinancing.",
"title": ""
},
{
"docid": "924774d4073dfdeaf3be9c130b7d6b1d",
"text": "Fixed-rate mortgage is supposed to give you security. You are not going to get the best possible rate, but it is safe and predictable. Your argument is the same as complaining that you are paying for home insurance and your home hasn't burnt down. Switching to a variable rate mortgage right now seems a bad idea, because there is some expectations that rates are going up. If you can overpay, that is probably what you should do unless you can invest with better return after tax than your mortgage interest. It doesn't just shorten the time of your mortgage; every time you overpay £500 your mortgage principal is down by £500, and you pay interest on £500 less. And if the interest rate goes up over the next five years as you seem to hope, that just means you will pay higher interest when your mortgage needs renewing. You can't hope to always make the optimal decision. You made a decision with very low risk. As with any decision, you don't know what's in the future; a decision that is low risk if the risk could lead to fatal results is not unwise. You could have picked a variable rate mortgage and could be paying twice as much interest today.",
"title": ""
},
{
"docid": "26f799670bf8a32dc2cc09fa3609cb0e",
"text": "My advice to you? Act like responsible adults and owe up to your financial commitments. When you bought your house and took out a loan from the bank, you made an agreement to pay it back. If you breach this agreement, you deserve to have your credit score trashed. What do you think will happen to the $100K+ if you decide to stiff the bank? The bank will make up for its loss by increasing the mortgage rates for others that are taking out loans, so responsible borrowers get to subsidize those that shirk their responsibilities. If you were in a true hardship situation, I would be inclined to take a different stance. But, as you've indicated, you are perfectly able to make the payments -- you just don't feel like it. Real estate fluctuates in value, just like any other asset. If a stock I bought drops in value, does the government come and bail me out? Of course not! What I find most problematic about your plan is that not only do you wish to breach your agreement, but you are also looking for ways to conceal your breach. Please think about this. Best of luck with your decision.",
"title": ""
},
{
"docid": "205ee66f682f0c4c21792a31c0241a1e",
"text": "Varying the amount to reflect income during the quarter is entirely legitimate -- consider someone like a salesman whose income is partly driven by commissions, and who therefore can't predict the total. The payments are quarterly precisely so you can base them on actual results. Having said that, I suspect that as long as you show Good Intent they won't quibble if your estimate is off by a few percent. And they'll never complain if you overpay. So it may not be worth the effort to change the payment amount for that last quarter unless the income is very different.",
"title": ""
}
] |
fiqa
|
ca375bbe967f4a346f70e10602ed3a8e
|
Spatial Predictive Control for Agile Semi-Autonomous Ground Vehicles
|
[
{
"docid": "6cb7cded3c10f00228ac58ff3b82d45e",
"text": "This paper presents a hierarchical control framework for the obstacle avoidance of autonomous and semi-autonomous ground vehicles. The high-level planner is based on motion primitives created from a four-wheel nonlinear dynamic model. Parameterized clothoids and drifting maneuvers are used to improve vehicle agility. The low-level tracks the planned trajectory with a nonlinear Model Predictive Controller. The first part of the paper describes the proposed control architecture and methodology. The second part presents simulative and experimental results with an autonomous and semi-autonomous ground vehicle traveling at high speed on an icy surface.",
"title": ""
}
] |
[
{
"docid": "543218f4bb3516a1d588715b7ede8730",
"text": "In this paper, we present a CMOS digital image stabilization algorithm based on the characteristics of a rolling shutter camera. Due to the rolling shuttering mechanism of a CMOS sensor, a CMOS video frame shows CMOS distortions which are not observed in a CCD video frame, and previous video stabilization techniques cannot handle these distortions properly even though they can make a visually stable CMOS video sequence. In our proposed algorithm, we first suggest a CMOS distortion model. This model is based on a rolling shutter mechanism which provides a solution to solve the CMOS distortion problem. Next, we estimate the global image motion and the CMOS distortion transformation directly from the homography between CMOS frames. Using the two transformations, we remove CMOS distortions as well as jittering motions in a CMOS video. In the experimental results, we demonstrate that our proposed algorithm can handle the CMOS distortion problem more effectively as well as the jittering problem in a CMOS video compared to previous CCD-based digital image stabilization techniques.",
"title": ""
},
{
"docid": "a7ab755978c9309513ac79dbd6b09763",
"text": "In this paper, we propose a denoising method motivated by our previous analysis of the performance bounds for image denoising. Insights from that study are used here to derive a high-performance practical denoising algorithm. We propose a patch-based Wiener filter that exploits patch redundancy for image denoising. Our framework uses both geometrically and photometrically similar patches to estimate the different filter parameters. We describe how these parameters can be accurately estimated directly from the input noisy image. Our denoising approach, designed for near-optimal performance (in the mean-squared error sense), has a sound statistical foundation that is analyzed in detail. The performance of our approach is experimentally verified on a variety of images and noise levels. The results presented here demonstrate that our proposed method is on par or exceeding the current state of the art, both visually and quantitatively.",
"title": ""
},
{
"docid": "f3f4cb6e7e33f54fca58c14ce82d6b46",
"text": "In this letter, a novel slot array antenna with a substrate-integrated coaxial line (SICL) technique is proposed. The proposed antenna has radiation slots etched homolaterally along the mean line in the top metallic layer of SICL and achieves a compact transverse dimension. A prototype with 5 <inline-formula><tex-math notation=\"LaTeX\">$\\times$ </tex-math></inline-formula> 10 longitudinal slots is designed and fabricated with a multilayer liquid crystal polymer (LCP) process. A maximum gain of 15.0 dBi is measured at 35.25 GHz with sidelobe levels of <inline-formula> <tex-math notation=\"LaTeX\">$-$</tex-math></inline-formula> 28.2 dB (<italic>E</italic>-plane) and <inline-formula> <tex-math notation=\"LaTeX\">$-$</tex-math></inline-formula> 33.1 dB (<italic>H</italic>-plane). The close correspondence between experimental results and designed predictions on radiation patterns has validated the proposed excogitation in the end.",
"title": ""
},
{
"docid": "3df9bacf95281fc609ee7fd2d4724e91",
"text": "The deleterious effects of plastic debris on the marine environment were reviewed by bringing together most of the literature published so far on the topic. A large number of marine species is known to be harmed and/or killed by plastic debris, which could jeopardize their survival, especially since many are already endangered by other forms of anthropogenic activities. Marine animals are mostly affected through entanglement in and ingestion of plastic litter. Other less known threats include the use of plastic debris by \"invader\" species and the absorption of polychlorinated biphenyls from ingested plastics. Less conspicuous forms, such as plastic pellets and \"scrubbers\" are also hazardous. To address the problem of plastic debris in the oceans is a difficult task, and a variety of approaches are urgently required. Some of the ways to mitigate the problem are discussed.",
"title": ""
},
{
"docid": "d308f448dc6d951948ccf4319aef359f",
"text": "Spondylolysis is an osseous defect of the pars interarticularis, thought to be a developmental or acquired stress fracture secondary to chronic low-grade trauma. It is encountered most frequently in adolescents, most commonly involving the lower lumbar spine, with particularly high prevalence among athletes involved in certain sports or activities. Spondylolysis can be asymptomatic or can be a cause of spine instability, back pain, and radiculopathy. The biomechanics and pathophysiology of spondylolysis are complex and debated. Imaging is utilized to detect spondylolysis, distinguish acute and active lesions from chronic inactive non-union, help establish prognosis, guide treatment, and to assess bony healing. Radiography with satisfactory technical quality can often demonstrate a pars defect. Multislice CT with multiplanar reformats is the most accurate modality for detecting the bony defect and may also be used for assessment of osseous healing; however, as with radiographs, it is not sensitive for detection of the early edematous stress response without a fracture line and exposes the patient to ionizing radiation. Magnetic resonance (MR) imaging should be used as the primary investigation for adolescents with back pain and suspected stress reactions of the lumbar pars interarticularis. Several imaging pitfalls render MR imaging less sensitive than CT for directly visualizing the pars defects (regional degenerative changes and sclerosis). Nevertheless, the presence of bone marrow edema on fluid-sensitive images is an important early finding that may suggest stress response without a visible fracture line. Moreover, MR is the imaging modality of choice for identifying associated nerve root compression. Single-photon emission computed tomography (SPECT) use is limited by a high rate of false-positive and false-negative results and by considerable ionizing radiation exposure. In this article, we provide a review of the current concepts regarding spondylolysis, its epidemiology, pathogenesis, and general treatment guidelines, as well as a detailed review and discussion of the imaging principles for the diagnosis and follow-up of this condition.",
"title": ""
},
{
"docid": "61f9711b65d142b5537b7d3654bbbc3c",
"text": "Now-a-days as there is prohibitive demand for agricultural industry, effective growth and improved yield of fruit is necessary and important. For this purpose farmers need manual monitoring of fruits from harvest till its progress period. But manual monitoring will not give satisfactory result all the times and they always need satisfactory advice from expert. So it requires proposing an efficient smart farming technique which will help for better yield and growth with less human efforts. We introduce a technique which will diagnose and classify external disease within fruits. Traditional system uses thousands of words which lead to boundary of language. Whereas system that we have come up with, uses image processing techniques for implementation as image is easy way for conveying. In the proposed work, OpenCV library is applied for implementation. K-means clustering method is applied for image segmentation, the images are catalogue and mapped to their respective disease categories on basis of four feature vectors color, morphology, texture and structure of hole on the fruit. The system uses two image databases, one for implementation of query images and the other for training of already stored disease images. Artificial Neural Network (ANN) concept is used for pattern matching and classification of diseases.",
"title": ""
},
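The K-means segmentation step described in the passage above, implemented with OpenCV, can be sketched roughly as follows. This is an illustrative sketch under assumptions: the cluster count K, the file names, and the use of raw BGR pixel values are choices made here, and the paper's feature extraction and ANN classification stages are not reproduced.

```python
# Rough sketch of K-means colour segmentation with OpenCV, illustrating the
# segmentation step mentioned above. K and the file names are assumptions;
# the paper's later feature/ANN stages are not shown.
import cv2
import numpy as np

img = cv2.imread("fruit.jpg")                     # hypothetical input image
pixels = img.reshape(-1, 3).astype(np.float32)    # one row per pixel (BGR)

K = 4                                             # assumed number of clusters
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
_, labels, centers = cv2.kmeans(pixels, K, None, criteria, 10,
                                cv2.KMEANS_RANDOM_CENTERS)

# Replace each pixel by its cluster centre to obtain the segmented image.
segmented = centers[labels.flatten()].astype(np.uint8).reshape(img.shape)
cv2.imwrite("fruit_segmented.jpg", segmented)
```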
{
"docid": "a411780d406e8b720303d18cd6c9df68",
"text": "Functional organization of the lateral temporal cortex in humans is not well understood. We recorded blood oxygenation signals from the temporal lobes of normal volunteers using functional magnetic resonance imaging during stimulation with unstructured noise, frequency-modulated (FM) tones, reversed speech, pseudowords and words. For all conditions, subjects performed a material-nonspecific detection response when a train of stimuli began or ceased. Dorsal areas surrounding Heschl's gyrus bilaterally, particularly the planum temporale and dorsolateral superior temporal gyrus, were more strongly activated by FM tones than by noise, suggesting a role in processing simple temporally encoded auditory information. Distinct from these dorsolateral areas, regions centered in the superior temporal sulcus bilaterally were more activated by speech stimuli than by FM tones. Identical results were obtained in this region using words, pseudowords and reversed speech, suggesting that the speech-tones activation difference is due to acoustic rather than linguistic factors. In contrast, previous comparisons between word and nonword speech sounds showed left-lateralized activation differences in more ventral temporal and temporoparietal regions that are likely involved in processing lexical-semantic or syntactic information associated with words. The results indicate functional subdivision of the human lateral temporal cortex and provide a preliminary framework for understanding the cortical processing of speech sounds.",
"title": ""
},
{
"docid": "9a2d79d9df9e596e26f8481697833041",
"text": "Novelty search is a recent artificial evolution technique that challenges traditional evolutionary approaches. In novelty search, solutions are rewarded based on their novelty, rather than their quality with respect to a predefined objective. The lack of a predefined objective precludes premature convergence caused by a deceptive fitness function. In this paper, we apply novelty search combined with NEAT to the evolution of neural controllers for homogeneous swarms of robots. Our empirical study is conducted in simulation, and we use a common swarm robotics task—aggregation, and a more challenging task—sharing of an energy recharging station. Our results show that novelty search is unaffected by deception, is notably effective in bootstrapping evolution, can find solutions with lower complexity than fitness-based evolution, and can find a broad diversity of solutions for the same task. Even in non-deceptive setups, novelty search achieves solution qualities similar to those obtained in traditional fitness-based evolution. Our study also encompasses variants of novelty search that work in concert with fitness-based evolution to combine the exploratory character of novelty search with the exploitatory character of objective-based evolution. We show that these variants can further improve the performance of novelty search. Overall, our study shows that novelty search is a promising alternative for the evolution of controllers for robotic swarms.",
"title": ""
},
{
"docid": "15d70d12d8c410907675c528ae1bafda",
"text": "This is an extremely welcome addition to the Information Retrieval (IR) literature. Because of its technical approach it is much different from most of the available books on IR. The book consists of five sections containing eighteen chapters. The chapters are written by different authors.",
"title": ""
},
{
"docid": "100b4df0a86534cba7078f4afc247206",
"text": "Presented in this article is a review of manufacturing techniques and introduction of reconfigurable manufacturing systems; a new paradigm in manufacturing which is designed for rapid adjustment of production capacity and functionality, in response to new market conditions. A definition of reconfigurable manufacturing systems is outlined and an overview of available manufacturing techniques, their key drivers and enablers, and their impacts, achievements and limitations is presented. A historical review of manufacturing from the point-of-view of the major developments in the market, technology and sciences issues affecting manufacturing is provided. The new requirements for manufacturing are discussed and characteristics of reconfigurable manufacturing systems and their key role in future manufacturing are explained. The paper is concluded with a brief review of specific technologies and research issues related to RMSs.",
"title": ""
},
{
"docid": "fa55a893ff3c77928644f7bfdac0c643",
"text": "Evaluating the maxilla-mandibular vertical dimension is of great importance in constructing successful denture prosthesis, however it is a tedious process which may be misinterpreted leading to false readings. Hence with the aid of cephalometric analysis a cephalogram may present a graphic representation .this study aims to introduce a new mathematical method for of determination the occlusal vertical dimension (O.V.D.). : The first part was conducted to derive a clinical ratio between the O.V.D. and the ear-eye distance, as well as to derive a radiographical ratio between the same distances on a lateral cephalometric view. The second part of this study aimed to evaluate the accuracy of clinical and radiographical application of the ratios that were derived from the first part in estimating the O.V.D from the ear-eye distance measured in dentate subjects.",
"title": ""
},
{
"docid": "9006586ffd85d5c2fb7611b3b0332519",
"text": "Systematic compositionality is the ability to recombine meaningful units with regular and predictable outcomes, and it’s seen as key to the human capacity for generalization in language. Recent work (Lake and Baroni, 2018) has studied systematic compositionality in modern seq2seq models using generalization to novel navigation instructions in a grounded environment as a probing tool. Lake and Baroni’s main experiment required the models to quickly bootstrap the meaning of new words. We extend this framework here to settings where the model needs only to recombine well-trained functional words (such as “around” and “right”) in novel contexts. Our findings confirm and strengthen the earlier ones: seq2seq models can be impressively good at generalizing to novel combinations of previously-seen input, but only when they receive extensive training on the specific pattern to be generalized (e.g., generalizing from many examples of “X around right” to “jump around right”), while failing when generalization requires novel application of compositional rules (e.g., inferring the meaning of “around right” from those of “right” and “around”).",
"title": ""
},
{
"docid": "e082b7792f72d54c63ed025ae5c7fa0f",
"text": "Cloud computing's pay-per-use model greatly reduces upfront cost and also enables on-demand scalability as service demand grows or shrinks. Hybrid clouds are an attractive option in terms of cost benefit, however, without proper elastic resource management, computational resources could be over-provisioned or under-provisioned, resulting in wasting money or failing to satisfy service demand. In this paper, to accomplish accurate performance prediction and cost-optimal resource management for hybrid clouds, we introduce Workload-tailored Elastic Compute Units (WECU) as a measure of computing resources analogous to Amazon EC2's ECUs, but customized for a specific workload. We present a dynamic programming-based scheduling algorithm to select a combination of private and public resources which satisfy a desired throughput. Using a loosely-coupled benchmark, we confirmed WECUs have 24 (J% better runtime prediction ability than ECUs on average. Moreover, simulation results with a real workload distribution of web service requests show that our WECU-based algorithm reduces costs by 8-31% compared to a fixed provisioning approach.",
"title": ""
},
{
"docid": "c7de7b159579b5c8668f2a072577322c",
"text": "This paper presents a method for effectively using unlabeled sequential data in the learning of hidden Markov models (HMMs). With the conventional approach, class labels for unlabeled data are assigned deterministically by HMMs learned from labeled data. Such labeling often becomes unreliable when the number of labeled data is small. We propose an extended Baum-Welch (EBW) algorithm in which the labeling is undertaken probabilistically and iteratively so that the labeled and unlabeled data likelihoods are improved. Unlike the conventional approach, the EBW algorithm guarantees convergence to a local maximum of the likelihood. Experimental results on gesture data and speech data show that when labeled training data are scarce, by using unlabeled data, the EBW algorithm improves the classification performance of HMMs more robustly than the conventional naive labeling (NL) approach. keywords Unlabeled data, sequential data, hidden Markov models, extended Baum-Welch algorithm.",
"title": ""
},
{
"docid": "bf4f90ff70dd8b195983f55bf3752718",
"text": "In this paper, we consider cooperative spectrum sensing based on energy detection in cognitive radio networks. Soft combination of the observed energy values from different cognitive radio users is investigated. Maximal ratio combination (MRC) is theoretically proved to be nearly optimal in low signal- to-noise ratio (SNR) region, an usual scenario in the context of cognitive radio. Both MRC and equal gain combination (EGC) exhibit significant performance improvement over conventional hard combination. Encouraged by the performance gain of soft combination, we propose a new softened hard combination scheme with two-bit overhead for each user and achieve a good tradeoff between detection performance and complexity. While traditionally energy detection suffers from an SNR wall caused by noise power uncertainty, it is shown in this paper that an SNR wall reduction can be achieved by employing cooperation among independent cognitive radio users.",
"title": ""
},
{
"docid": "cc04572df87def5dab42962ab42ce1f3",
"text": "Increasing the level of transparency in rehabilitation devices has been one of the main goals in robot-aided neurorehabilitation for the past two decades. This issue is particularly important to robotic structures that mimic the human counterpart's morphology and attach directly to the limb. Problems arise for complex joints such as the human wrist, which cannot be accurately matched with a traditional mechanical joint. In such cases, mechanical differences between human and robotic joint cause hyperstaticity (i.e. overconstraint) which, coupled with kinematic misalignments, leads to uncontrolled force/torque at the joint. This paper focuses on the prono-supination (PS) degree of freedom of the forearm. The overall force and torque in the wrist PS rotation is quantified by means of a wrist robot. A practical solution to avoid hyperstaticity and reduce the level of undesired force/torque in the wrist is presented, which is shown to reduce 75% of the force and 68% of the torque.",
"title": ""
},
{
"docid": "c999bd0903b53285c053c76f9fcc668f",
"text": "In this paper, a bibliographical review on reconfigurable (active) fault-tolerant control systems (FTCS) is presented. The existing approaches to fault detection and diagnosis (FDD) and fault-tolerant control (FTC) in a general framework of active fault-tolerant control systems (AFTCS) are considered and classified according to different criteria such as design methodologies and applications. A comparison of different approaches is briefly carried out. Focuses in the field on the current research are also addressed with emphasis on the practical application of the techniques. In total, 376 references in the open literature, dating back to 1971, are compiled to provide an overall picture of historical, current, and future developments in this area. # 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "3d5165a30aac97d9548d19c907eed466",
"text": "A battery management system (BMS) based on the CAN-bus was designed for the Li-ion battery pack which consisted of many series-connected battery cells and was distributed dispersedly on the electric vehicle (EV). The BMS consisted of one master module and several sampling modules. The hardware design of the sampling circuit and the CAN expanding circuit was introduced. The strategies of the battery SOC (state of charge) estimation and the battery safety management were also presented.",
"title": ""
},
{
"docid": "3682143e9cfe7dd139138b3b533c8c25",
"text": "In brushless excitation systems, the rotating diodes can experience open- or short-circuits. For a three-phase synchronous generator under no-load, we present theoretical development of effects of diode failures on machine output voltage. Thereby, we expect the spectral response faced with each fault condition, and we propose an original algorithm for state monitoring of rotating diodes. Moreover, given experimental observations of the spectral behavior of stray flux, we propose an alternative technique. Laboratory tests have proven the effectiveness of the proposed methods for detection of fault diodes, even when the generator has been fully loaded. However, their ability to distinguish between cases of diodes interrupted and short-circuited, has been limited to the no-load condition, and certain loads of specific natures.",
"title": ""
},
{
"docid": "4a30caf967a8b8d6b4913043514ad99a",
"text": "Massive MIMO involves the use of large scale antenna arrays for high-gain adaptive beamforming and high-order spatial multiplexing. An important design challenge in Massive MIMO systems is the acquisition of channel state information at the transmit array, where accurate channel knowledge is critical for obtaining the best performance with Multi-User MIMO transmission on the downlink. In this paper, we explore the use of a product codebook feedback methodology for two-dimensional antenna arrays where the codebook feedback strategy is decomposed into two separate feedback processes, one for azimuth and one for elevation. We specifically address the case where the transmit array consists of cross-polarized antennas and show how two separate codebook feedback processes can reduce reference signal overhead and simplify the mobile complexity while providing significant gains in performance over existing LTE configurations.",
"title": ""
}
] |
scidocsrr
|
caa129e6cd0128cef40ae5345c395cb2
|
Comparative analysis of piezoelectric power harvesting circuits for rechargeable batteries
|
[
{
"docid": "108058f1814d7520003b44f1ffc99cb5",
"text": "The process of acquiring the energy surrounding a system and converting it into usable electrical energy is termed power harvesting. In the last few years, there has been a surge of research in the area of power harvesting. This increase in research has been brought on by the modern advances in wireless technology and low-power electronics such as microelectromechanical systems. The advances have allowed numerous doors to open for power harvesting systems in practical real-world applications. The use of piezoelectric materials to capitalize on the ambient vibrations surrounding a system is one method that has seen a dramatic rise in use for power harvesting. Piezoelectric materials have a crystalline structure that provides them with the ability to transform mechanical strain energy into electrical charge and, vice versa, to convert an applied electrical potential into mechanical strain. This property provides these materials with the ability to absorb mechanical energy from their surroundings, usually ambient vibration, and transform it into electrical energy that can be used to power other devices. While piezoelectric materials are the major method of harvesting energy, other methods do exist; for example, one of the conventional methods is the use of electromagnetic devices. In this paper we discuss the research that has been performed in the area of power harvesting and the future goals that must be achieved for power harvesting systems to find their way into everyday use.",
"title": ""
},
{
"docid": "48036770f56e84df8b05c198e8a89018",
"text": "Advances in low power VLSI design, along with the potentially low duty cycle of wireless sensor nodes open up the possibility of powering small wireless computing devices from scavenged ambient power. A broad review of potential power scavenging technologies and conventional energy sources is first presented. Low-level vibrations occurring in common household and office environments as a potential power source are studied in depth. The goal of this paper is not to suggest that the conversion of vibrations is the best or most versatile method to scavenge ambient power, but to study its potential as a viable power source for applications where vibrations are present. Different conversion mechanisms are investigated and evaluated leading to specific optimized designs for both capacitive MicroElectroMechancial Systems (MEMS) and piezoelectric converters. Simulations show that the potential power density from piezoelectric conversion is significantly higher. Experiments using an off-the-shelf PZT piezoelectric bimorph verify the accuracy of the models for piezoelectric converters. A power density of 70 mW/cm has been demonstrated with the PZT bimorph. Simulations show that an optimized design would be capable of 250 mW/cm from a vibration source with an acceleration amplitude of 2.5 m/s at 120 Hz. q 2002 Elsevier Science B.V.. All rights reserved.",
"title": ""
},
{
"docid": "960f5bd8b673236d3b44a77e876e10c4",
"text": "This paper describes an approach to harvesting electrical energy from a mechanically excited piezoelectric element. A vibrating piezoelectric device differs from a typical electrical power source in that it has a capacitive rather than inductive source impedance, and may be driven by mechanical vibrations of varying amplitude. An analytical expression for the optimal power flow from a rectified piezoelectric device is derived, and an “energy harvesting” circuit is proposed which can achieve this optimal power flow. The harvesting circuit consists of an ac–dc rectifier with an output capacitor, an electrochemical battery, and a switch-mode dc–dc converter that controls the energy flow into the battery. An adaptive control technique for the dc–dc converter is used to continuously implement the optimal power transfer theory and maximize the power stored by the battery. Experimental results reveal that use of the adaptive dc–dc converter increases power transfer by over 400% as compared to when the dc–dc converter is not used.",
"title": ""
}
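The adaptive control described in the passage above adjusts the dc–dc converter so that the power flowing into the battery is maximized. One common way to realize such a loop is a perturb-and-observe (hill-climbing) adjustment of the converter duty cycle; the sketch below illustrates that idea only and is not the paper's actual control law, and read_power() and set_duty_cycle() are hypothetical hardware interfaces.

```python
# Illustrative perturb-and-observe loop for maximizing harvested power.
# This is an assumption-based sketch, not the paper's adaptive algorithm;
# read_power() and set_duty_cycle() stand in for real hardware interfaces.
def maximize_harvested_power(read_power, set_duty_cycle,
                             duty=0.5, step=0.01, iterations=200):
    set_duty_cycle(duty)
    last_power = read_power()
    direction = 1                       # +1: increase duty, -1: decrease it
    for _ in range(iterations):
        duty = min(max(duty + direction * step, 0.0), 1.0)
        set_duty_cycle(duty)
        power = read_power()
        if power < last_power:          # power dropped, so reverse direction
            direction = -direction
        last_power = power
    return duty                         # duty cycle near the power maximum
```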
] |
[
{
"docid": "13ac8eddda312bd4ef3ba194c076a6ea",
"text": "With the Yahoo Flickr Creative Commons 100 Million (YFCC100m) dataset, a novel dataset was introduced to the computer vision and multimedia research community. To maximize the benefit for the research community and utilize its potential, this dataset has to be made accessible by tools allowing to search for target concepts within the dataset and mechanism to browse images and videos of the dataset. Following best practice from data collections, such as ImageNet and MS COCO, this paper presents means of accessibility for the YFCC100m dataset. This includes a global analysis of the dataset and an online browser to explore and investigate subsets of the dataset in real-time. Providing statistics of the queried images and videos will enable researchers to refine their query successively, such that the users desired subset of interest can be narrowed down quickly. The final set of image and video can be downloaded as URLs from the browser for further processing.",
"title": ""
},
{
"docid": "21472ce2bf66d84a8fce106832e0fe97",
"text": "Every time you go to one of the top 100 book/music e-commerce sites, you will come into contact with personalisation systems that attempt to judge your interests to increase sales. There are 3 methods for making these personalised recommendations: Content-based filtering, Collaborative filtering and a hybrid of the two. Understanding each of these methods will give you insight as to how your personal information is used on the Internet, and remove some of the mystery associated with the systems. This will allow you understand how these systems work and how they could be improved, so as to make an informed decision as to whether this is a good thing.",
"title": ""
},
{
"docid": "b230400ee47b40751623561e11b1944c",
"text": "Many mHealth apps have been developed to assist people in self-care management. Most of them aim to engage users and provide motivation to increase adherence. Gamification has been introduced to identify the left and right brain drives in order to engage users and motivate them. We are using Octalysis framework to map how top rated stress management apps address the right brain drives. 12 stress management mHealth are classified based on this framework. In this paper, we explore how Gamification has been used in mHealth apps, the intrinsic motivation using self-determination theory, methodology, and findings. In the discussion, we identify design principles that will better suited to enhance intrinsic motivation for people who seek self-stress management.",
"title": ""
},
{
"docid": "a0a2037d04dd0e2b0defa8fbfd3072a4",
"text": "The sequential parameter optimization (spot) package for R (R Development Core Team, 2008) is a toolbox for tuning and understanding simulation and optimization algorithms. Model-based investigations are common approaches in simulation and optimization. Sequential parameter optimization has been developed, because there is a strong need for sound statistical analysis of simulation and optimization algorithms. spot includes methods for tuning based on classical regression and analysis of variance techniques; tree-based models such as CART and random forest; Gaussian process models (Kriging) and combinations of different metamodeling approaches. This article exemplifies how spot can be used for automatic and interactive tuning.",
"title": ""
},
{
"docid": "9d1dc15130b9810f6232b4a3c77e8038",
"text": "This paper argues that we should seek the golden middle way between dynamically and statically typed languages.",
"title": ""
},
{
"docid": "8b8be2f7a34f14c24443599cb570343f",
"text": "We present Audiopad, an interface for musical performance that aims to combine the modularity of knob based controllers with the expressive character of multidimensional tracking interfaces. The performer's manipulations of physical pucks on a tabletop control a real-time synthesis process. The pucks are embedded with LC tags that the system tracks in two dimensions with a series of specially shaped antennae. The system projects graphical information on and around the pucks to give the performer sophisticated control over the synthesis process. INTRODUCTION The late nineties saw the emergence of a new musical performance paradigm. Sitting behind the glowing LCDs on their laptops, electronic musicians could play their music in front of audiences without bringing a truckload of synthesizers and patch cables. However, the transition to laptop based performance created a rift between the performer and the audience as there was almost no stage presence for an onlooker to latch on to. Furthermore, the performers lost much of the real-time expressive power of traditional analog instruments. Their on-the-fly arrangements relied on inputs from their laptop keyboards and therefore lacked nuance, finesse, and improvisational capabilities.",
"title": ""
},
{
"docid": "920c977ce3ed5f310c97b6fcd0f5bef4",
"text": "In this paper, different automatic registration schemes base d on different optimization techniques in conjunction with different similarity measures are compared in term s of accuracy and efficiency. Results from every optimizat ion procedure are quantitatively evaluated with respect to t he manual registration, which is the standard registration method used in clinical practice. The comparison has shown automatic regi st ation schemes based on CD consist of an accurate and reliable method that can be used in clinical ophthalmology, as a satisfactory alternative to the manual method. Key-Words: multimodal image registration, optimization algorithms, sim ilarity metrics, retinal images",
"title": ""
},
{
"docid": "ef068ddc1d7cd8dd26acf4fafc54254d",
"text": "In this paper, we study object detection using a large pool of unlabeled images and only a few labeled images per category, named “few-shot object detection”. The key challenge consists in generating trustworthy training samples as many as possible from the pool. Using few training examples as seeds, our method iterates between model training and high-confidence sample selection. In training, easy samples are generated first and, then the poorly initialized model undergoes improvement. As the model becomes more discriminative, challenging but reliable samples are selected. After that, another round of model improvement takes place. To further improve the precision and recall of the generated training samples, we embed multiple detection models in our framework, which has proven to outperform the single model baseline and the model ensemble method. Experiments on PASCAL VOC’07 and ILSVRC’13 indicate that by using as few as three or four samples selected for each category, our method produces very competitive results when compared to the state-of-the-art weakly-supervised approaches using a large number of image-level labels.",
"title": ""
},
{
"docid": "a8553e9f90e8766694f49dcfdeab83b7",
"text": "The need for solid-state ac-dc converters to improve power quality in terms of power factor correction, reduced total harmonic distortion at input ac mains, and precisely regulated dc output has motivated the investigation of several topologies based on classical converters such as buck, boost, and buck-boost converters. Boost converters operating in continuous-conduction mode have become particularly popular because reduced electromagnetic interference levels result from their utilization. Within this context, this paper introduces a bridgeless boost converter based on a three-state switching cell (3SSC), whose distinct advantages are reduced conduction losses with the use of magnetic elements with minimized size, weight, and volume. The approach also employs the principle of interleaved converters, as it can be extended to a generic number of legs per winding of the autotransformers and high power levels. A literature review of boost converters based on the 3SSC is initially presented so that key aspects are identified. The theoretical analysis of the proposed converter is then developed, while a comparison with a conventional boost converter is also performed. An experimental prototype rated at 1 kW is implemented to validate the proposal, as relevant issues regarding the novel converter are discussed.",
"title": ""
},
{
"docid": "2e9d6ad38bd51fbd7af165e4b9262244",
"text": "BACKGROUND\nThe assessment of blood lipids is very frequent in clinical research as it is assumed to reflect the lipid composition of peripheral tissues. Even well accepted such relationships have never been clearly established. This is particularly true in ophthalmology where the use of blood lipids has become very common following recent data linking lipid intake to ocular health and disease. In the present study, we wanted to determine in humans whether a lipidomic approach based on red blood cells could reveal associations between circulating and tissue lipid profiles. To check if the analytical sensitivity may be of importance in such analyses, we have used a double approach for lipidomics.\n\n\nMETHODOLOGY AND PRINCIPAL FINDINGS\nRed blood cells, retinas and optic nerves were collected from 9 human donors. The lipidomic analyses on tissues consisted in gas chromatography and liquid chromatography coupled to an electrospray ionization source-mass spectrometer (LC-ESI-MS). Gas chromatography did not reveal any relevant association between circulating and ocular fatty acids except for arachidonic acid whose circulating amounts were positively associated with its levels in the retina and in the optic nerve. In contrast, several significant associations emerged from LC-ESI-MS analyses. Particularly, lipid entities in red blood cells were positively or negatively associated with representative pools of retinal docosahexaenoic acid (DHA), retinal very-long chain polyunsaturated fatty acids (VLC-PUFA) or optic nerve plasmalogens.\n\n\nCONCLUSIONS AND SIGNIFICANCE\nLC-ESI-MS is more appropriate than gas chromatography for lipidomics on red blood cells, and further extrapolation to ocular lipids. The several individual lipid species we have identified are good candidates to represent circulating biomarkers of ocular lipids. However, further investigation is needed before considering them as indexes of disease risk and before using them in clinical studies on optic nerve neuropathies or retinal diseases displaying photoreceptors degeneration.",
"title": ""
},
{
"docid": "47d9f0976e6a91a30330beb142ffe84e",
"text": "Department of Defense (DoD) systems are required to be trusted and effective in a wide range of operational contexts with the ability to respond to new or changing conditions through modified tactics, appropriate reconfiguration, or replacement. As importantly, these systems are required to exhibit predictable and graceful degradation outside their designed performance envelope. For these systems to be included in the force structure, they need to be manufacturable, readily deployable, sustainable, easily modifiable, and cost-effective. Collectively, these requirements inform the definition of resilient DoD systems. This paper explores the properties and tradeoffs for engineered resilient systems in the military context. It reviews various perspectives on resilience, overlays DoD requirements on these perspectives, and presents DoD challenges in realizing and rapidly fielding resilient systems. This paper also presents promising research themes that need to be pursued by the research community to help the DoD realize the vision of affordable, adaptable, and effective systems. This paper concludes with a discussion of specific DoD systems that can potentially benefit from resilience and stresses the need for sustaining a community of interest in this important area. © 2014 The Authors. Published by Elsevier B.V. Selection and peer-review under responsibility of the University of Southern California.",
"title": ""
},
{
"docid": "9430b0f220538e878d99ef410fdc1ab2",
"text": "The prevalence of pregnancy, substance abuse, violence, and delinquency among young people is unacceptably high. Interventions for preventing problems in large numbers of youth require more than individual psychological interventions. Successful interventions include the involvement of prevention practitioners and community residents in community-level interventions. The potential of community-level interventions is illustrated by a number of successful studies. However, more inclusive reviews and multisite comparisons show that although there have been successes, many interventions did not demonstrate results. The road to greater success includes prevention science and newer community-centered models of accountability and technical assistance systems for prevention.",
"title": ""
},
{
"docid": "c0a04710b74f0bb15da13794974d1fec",
"text": "In this paper we propose a novel Conditional Random Field (CRF) formulation for the semantic scene labeling problem which is able to enforce temporal consistency between consecutive video frames and take advantage of the 3D scene geometry to improve segmentation quality. The main contribution of this work lies in the novel use of a 3D scene reconstruction as a means to temporally couple the individual image segmentations, allowing information flow from 3D geometry to the 2D image space. As our results show, the proposed framework outperforms state-of-the-art methods and opens a new perspective towards a tighter interplay of 2D and 3D information in the scene understanding problem.",
"title": ""
},
{
"docid": "b20aa2222759644b4b60b5b450424c9e",
"text": "Manufacturing has faced significant changes during the last years, namely the move from a local economy towards a global and competitive economy, with markets demanding for highly customized products of high quality at lower costs, and with short life cycles. In this environment, manufacturing enterprises, to remain competitive, must respond closely to customer demands by improving their flexibility and agility, while maintaining their productivity and quality. Dynamic response to emergence is becoming a key issue in manufacturing field because traditional manufacturing control systems are built upon rigid control architectures, which cannot respond efficiently and effectively to dynamic change. In these circumstances, the current challenge is to develop manufacturing control systems that exhibit intelligence, robustness and adaptation to the environment changes and disturbances. The introduction of multi-agent systems and holonic manufacturing systems paradigms addresses these requirements, bringing the advantages of modularity, decentralization, autonomy, scalability and reusability. This paper surveys the literature in manufacturing control systems using distributed artificial intelligence techniques, namely multi-agent systems and holonic manufacturing systems principles. The paper also discusses the reasons for the weak adoption of these approaches by industry and points out the challenges and research opportunities for the future. & 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "d45b23d061e4387f45a0dad03f237f5a",
"text": "Cultural appropriation is often mentioned but undertheorized in critical rhetorical and media studies. Defined as the use of a culture’s symbols, artifacts, genres, rituals, or technologies by members of another culture, cultural appropriation can be placed into 4 categories: exchange, dominance, exploitation, and transculturation. Although each of these types can be understood as relevant to particular contexts or eras, transculturation questions the bounded and proprietary view of culture embedded in other types of appropriation. Transculturation posits culture as a relational phenomenon constituted by acts of appropriation, not an entity that merely participates in appropriation. Tensions exist between the need to challenge essentialism and the use of essentialist notions such as ownership and degradation to criticize the exploitation of colonized cultures.",
"title": ""
},
{
"docid": "253ed173337319171a0dce877d23b7db",
"text": "This paper describes a driving control algorithm for four-wheel-drive (4WD) electric vehicles equipped with two motors at front and rear driving shafts to improve vehicle maneuverability, lateral stability, and rollover prevention. The driving control algorithm consists of the following three parts: 1) a supervisory controller that determines the control mode, the admissible control region, and the desired dynamics, such as the desired speed and yaw rate; 2) an upper level controller that computes the traction force input and the yaw moment input to track the desired dynamics; and 3) a lower level controller that determines actual actuator commands, such as the front/rear driving motor torques and independent brake torques. The supervisory controller computes the admissible control region, namely, the relationship between the vehicle speed and the maximum curvature of the vehicle considering the maximum steering angle, lateral stability, and rollover prevention. In the lower level controller, a wheel slip controller is designed to keep the slip ratio at each wheel below a limit value. In addition, an optimization-based control allocation strategy is used to map the upper level and wheel slip control inputs to actual actuator commands, taking into account the actuator constraints. Numerical simulation studies have been conducted to evaluate the proposed driving control algorithm. It has been shown from simulation studies that vehicle maneuverability, lateral stability, and rollover mitigation performance can be significantly improved by the proposed driving controller.",
"title": ""
},
{
"docid": "780019eebab5504fa0d8bc7c6d3fb0fd",
"text": "Sentiment analysis or opinion mining is the computational study of people’s opinions, appraisals, attitudes, and emotions toward entities, individuals, issues, events, topics and their attributes. The task is technically challenging and practically very useful. For example, businesses always want to find public or consumer opinions about their products and services. Potential customers also want to know the opinions of existing users before they use a service or purchase a product. With the explosive growth of social media (i.e., reviews, forum discussions, blogs and social networks) on the Web, individuals and organizations are increasingly using public opinions in these media for their decision making. However, finding and monitoring opinion sites on the Web and distilling the information contained in them remains a formidable task because of the proliferation of diverse sites. Each site typically contains a huge volume of opinionated text that is not always easily deciphered in long forum postings and blogs. The average human reader will have difficulty identifying relevant sites and accurately summarizing the information and opinions contained in them. Moreover, it is also known that human analysis of text information is subject to considerable biases, e.g., people often pay greater attention to opinions that are consistent with their own preferences. People also have difficulty, owing to their mental and physical limitations, producing consistent",
"title": ""
},
{
"docid": "a550969fc708fa6d7898ea29c0cedef8",
"text": "This paper describes the findings of a research project whose main objective is to compile a character frequency list based on a very large collection of Chinese texts collected from various online sources. As compared with several previous studies on Chinese character frequencies, this project uses a much larger corpus that not only covers more subject fields but also contains a better proportion of informative versus imaginative Modern Chinese texts. In addition, this project also computes two bigram frequency lists that can be used for compiling a list of most frequently used two-character words in Chinese.",
"title": ""
},
{
"docid": "77a09b094d4622d01d09f042f1ae3045",
"text": "Depth maps captured by consumer-level depth cameras such as Kinect are usually degraded by noise, missing values, and quantization. In this paper, we present a data-driven approach for refining degraded RAWdepth maps that are coupled with an RGB image. The key idea of our approach is to take advantage of a training set of high-quality depth data and transfer its information to the RAW depth map through multi-scale dictionary learning. Utilizing a sparse representation, our method learns a dictionary of geometric primitives which captures the correlation between high-quality mesh data, RAW depth maps and RGB images. The dictionary is learned and applied in a manner that accounts for various practical issues that arise in dictionary-based depth refinement. Compared to previous approaches that only utilize the correlation between RAW depth maps and RGB images, our method produces improved depth maps without over-smoothing. Since our approach is data driven, the refinement can be targeted to a specific class of objects by employing a corresponding training set. In our experiments, we show that this leads to additional improvements in recovering depth maps of human faces.",
"title": ""
},
{
"docid": "3fadc0d79b5ab97854f42d53919cf1a1",
"text": "The field of biology has been revolutionized by the recent advancement of an adaptive bacterial immune system as a universal genome engineering tool. Bacteria and archaea use repetitive genomic elements termed clustered regularly interspaced short palindromic repeats (CRISPR) in combination with an RNA-guided nuclease (CRISPR-associated nuclease: Cas) to target and destroy invading DNA. By choosing the appropriate sequence of the guide RNA, this two-component system can be used to efficiently modify, target, and edit genomic loci of interest in plants, insects, fungi, mammalian cells, and whole organisms. This has opened up new frontiers in genome engineering, including the potential to treat or cure human genetic disorders. Now the potential risks as well as the ethical, social, and legal implications of this powerful new technique move into the limelight.",
"title": ""
}
] |
scidocsrr
|
c4d19b13e92558c0cfab7f6748d7a35e
|
Ensemble diversity measures and their application to thinning
|
[
{
"docid": "0fb2afcd2997a1647bb4edc12d2191f9",
"text": "Many databases have grown to the point where they cannot fit into the fast memory of even large memory machines, to say nothing of current workstations. If what we want to do is to use these data bases to construct predictions of various characteristics, then since the usual methods require that all data be held in fast memory, various work-arounds have to be used. This paper studies one such class of methods which give accuracy comparable to that which could have been obtained if all data could have been held in core and which are computationally fast. The procedure takes small pieces of the data, grows a predictor on each small piece and then pastes these predictors together. A version is given that scales up to terabyte data sets. The methods are also applicable to on-line learning.",
"title": ""
}
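The "pasting small votes" idea summarized above can be sketched with scikit-learn, whose BaggingClassifier with bootstrap=False trains each predictor on a small sample drawn without replacement and aggregates them by voting; the synthetic dataset and all parameter values are illustrative assumptions, not the paper's setup.
```python
# Illustrative sketch (not the paper's original code): "pasting" trains many
# predictors on small random pieces of the data and combines them by voting.
# BaggingClassifier with bootstrap=False samples without replacement, which
# corresponds to pasting small votes.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=20000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

paster = BaggingClassifier(
    DecisionTreeClassifier(),
    n_estimators=50,
    max_samples=500,      # each predictor sees only a small "bite" of the data
    bootstrap=False,      # sample without replacement -> pasting
    random_state=0,
)
paster.fit(X_tr, y_tr)
print("pasting accuracy:", paster.score(X_te, y_te))
```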
] |
[
{
"docid": "a880d38d37862b46dc638b9a7e45b6ee",
"text": "This paper presents the modeling, simulation, and analysis of the dynamic behavior of a fictitious 2 × 320 MW variable-speed pump-turbine power plant, including a hydraulic system, electrical equipment, rotating inertias, and control systems. The modeling of the hydraulic and electrical components of the power plant is presented. The dynamic performances of a control strategy in generating mode and one in pumping mode are investigated by the simulation of the complete models in the case of change of active power set points. Then, a pseudocontinuous model of the converters feeding the rotor circuits is described. Due to this simplification, the simulation time can be reduced drastically (approximately factor 60). A first validation of the simplified model of the converters is obtained by comparison of the simulated results coming from the simplified and complete models for different modes of operation of the power plant. Experimental results performed on a 2.2-kW low-power test bench are also compared with the simulated results coming from both complete and simplified models related to this case and confirm the validity of the proposed simplified approach for the converters.",
"title": ""
},
{
"docid": "833c110e040311909aa38b05e457b2af",
"text": "The scyphozoan Aurelia aurita (Linnaeus) s. l., is a cosmopolitan species-complex which blooms seasonally in a variety of coastal and shelf sea environments around the world. We hypothesized that ephyrae of Aurelia sp.1 are released from the inner part of the Jiaozhou Bay, China when water temperature is below 15°C in late autumn and winter. The seasonal occurrence, growth, and variation of the scyphomedusa Aurelia sp.1 were investigated in Jiaozhou Bay from January 2011 to December 2011. Ephyrae occurred from May through June with a peak abundance of 2.38 ± 0.56 ind/m3 in May, while the temperature during this period ranged from 12 to 18°C. The distribution of ephyrae was mainly restricted to the coastal area of the bay, and the abundance was higher in the dock of the bay than at the other inner bay stations. Young medusae derived from ephyrae with a median diameter of 9.74 ± 1.7 mm were present from May 22. Growth was rapid from May 22 to July 2 with a maximum daily growth rate of 39%. Median diameter of the medusae was 161.80 ± 18.39 mm at the beginning of July. In August, a high proportion of deteriorated specimens was observed and the median diameter decreased. The highest average abundance is 0.62 ± 1.06 ind/km2 in Jiaozhou Bay in August. The abundance of Aurelia sp.1 medusae was low from September and then decreased to zero. It is concluded that water temperature is the main driver regulating the life cycle of Aurelia sp.1 in Jiaozhou Bay.",
"title": ""
},
{
"docid": "db4bb32f6fdc7a05da41e223afac3025",
"text": "Modern imaging techniques for probing brain function, including functional magnetic resonance imaging, intrinsic and extrinsic contrast optical imaging, and magnetoencephalography, generate large data sets with complex content. In this paper we develop appropriate techniques for analysis and visualization of such imaging data to separate the signal from the noise and characterize the signal. The techniques developed fall into the general category of multivariate time series analysis, and in particular we extensively use the multitaper framework of spectral analysis. We develop specific protocols for the analysis of fMRI, optical imaging, and MEG data, and illustrate the techniques by applications to real data sets generated by these imaging modalities. In general, the analysis protocols involve two distinct stages: \"noise\" characterization and suppression, and \"signal\" characterization and visualization. An important general conclusion of our study is the utility of a frequency-based representation, with short, moving analysis windows to account for nonstationarity in the data. Of particular note are 1) the development of a decomposition technique (space-frequency singular value decomposition) that is shown to be a useful means of characterizing the image data, and 2) the development of an algorithm, based on multitaper methods, for the removal of approximately periodic physiological artifacts arising from cardiac and respiratory sources.",
"title": ""
},
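The multitaper framework mentioned in the abstract can be illustrated with a minimal power spectral density estimate built from DPSS (Slepian) tapers; the test signal, sampling rate and taper parameters below are assumptions for illustration, not taken from the paper.
```python
# Minimal multitaper PSD sketch using DPSS (Slepian) tapers; parameter choices
# (NW, number of tapers) are illustrative assumptions.
import numpy as np
from scipy.signal.windows import dpss

fs = 250.0                      # sampling rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 12 * t) + 0.5 * np.random.randn(t.size)

NW, K = 4, 7                    # time-bandwidth product and number of tapers
tapers = dpss(x.size, NW, K)    # shape (K, N)

# Average the K tapered periodograms to obtain the multitaper estimate.
spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
psd = spectra.mean(axis=0) / fs
freqs = np.fft.rfftfreq(x.size, 1 / fs)
print("spectral peak near", freqs[psd.argmax()], "Hz")
```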
{
"docid": "7dcba854d1f138ab157a1b24176c2245",
"text": "Essential oils distilled from members of the genus Lavandula have been used both cosmetically and therapeutically for centuries with the most commonly used species being L. angustifolia, L. latifolia, L. stoechas and L. x intermedia. Although there is considerable anecdotal information about the biological activity of these oils much of this has not been substantiated by scientific or clinical evidence. Among the claims made for lavender oil are that is it antibacterial, antifungal, carminative (smooth muscle relaxing), sedative, antidepressive and effective for burns and insect bites. In this review we detail the current state of knowledge about the effect of lavender oils on psychological and physiological parameters and its use as an antimicrobial agent. Although the data are still inconclusive and often controversial, there does seem to be both scientific and clinical data that support the traditional uses of lavender. However, methodological and oil identification problems have severely hampered the evaluation of the therapeutic significance of much of the research on Lavandula spp. These issues need to be resolved before we have a true picture of the biological activities of lavender essential oil.",
"title": ""
},
{
"docid": "83b8944584693b9568f6ad3533ad297b",
"text": "BACKGROUND\nChemotherapy is the standard of care for incurable advanced gastric cancer. Whether the addition of gastrectomy to chemotherapy improves survival for patients with advanced gastric cancer with a single non-curable factor remains controversial. We aimed to investigate the superiority of gastrectomy followed by chemotherapy versus chemotherapy alone with respect to overall survival in these patients.\n\n\nMETHODS\nWe did an open-label, randomised, phase 3 trial at 44 centres or hospitals in Japan, South Korea, and Singapore. Patients aged 20-75 years with advanced gastric cancer with a single non-curable factor confined to either the liver (H1), peritoneum (P1), or para-aortic lymph nodes (16a1/b2) were randomly assigned (1:1) in each country to chemotherapy alone or gastrectomy followed by chemotherapy by a minimisation method with biased-coin assignment to balance the groups according to institution, clinical nodal status, and non-curable factor. Patients, treating physicians, and individuals who assessed outcomes and analysed data were not masked to treatment assignment. Chemotherapy consisted of oral S-1 80 mg/m(2) per day on days 1-21 and cisplatin 60 mg/m(2) on day 8 of every 5-week cycle. Gastrectomy was restricted to D1 lymphadenectomy without any resection of metastatic lesions. The primary endpoint was overall survival, analysed by intention to treat. This study is registered with UMIN-CTR, number UMIN000001012.\n\n\nFINDINGS\nBetween Feb 4, 2008, and Sept 17, 2013, 175 patients were randomly assigned to chemotherapy alone (86 patients) or gastrectomy followed by chemotherapy (89 patients). After the first interim analysis on Sept 14, 2013, the predictive probability of overall survival being significantly higher in the gastrectomy plus chemotherapy group than in the chemotherapy alone group at the final analysis was only 13·2%, so the study was closed on the basis of futility. Overall survival at 2 years for all randomly assigned patients was 31·7% (95% CI 21·7-42·2) for patients assigned to chemotherapy alone compared with 25·1% (16·2-34·9) for those assigned to gastrectomy plus chemotherapy. Median overall survival was 16·6 months (95% CI 13·7-19·8) for patients assigned to chemotherapy alone and 14·3 months (11·8-16·3) for those assigned to gastrectomy plus chemotherapy (hazard ratio 1·09, 95% CI 0·78-1·52; one-sided p=0·70). The incidence of the following grade 3 or 4 chemotherapy-associated adverse events was higher in patients assigned to gastrectomy plus chemotherapy than in those assigned to chemotherapy alone: leucopenia (14 patients [18%] vs two [3%]), anorexia (22 [29%] vs nine [12%]), nausea (11 [15%] vs four [5%]), and hyponatraemia (seven [9%] vs four [5%]). One treatment-related death occurred in a patient assigned to chemotherapy alone (sudden cardiopulmonary arrest of unknown cause during the second cycle of chemotherapy) and one occurred in a patient assigned to chemotherapy plus gastrectomy (rapid growth of peritoneal metastasis after discharge 12 days after surgery).\n\n\nINTERPRETATION\nSince gastrectomy followed by chemotherapy did not show any survival benefit compared with chemotherapy alone in advanced gastric cancer with a single non-curable factor, gastrectomy cannot be justified for treatment of patients with these tumours.\n\n\nFUNDING\nThe Ministry of Health, Labour and Welfare of Japan and the Korean Gastric Cancer Association.",
"title": ""
},
{
"docid": "ddb77ec8a722c50c28059d03919fb299",
"text": "Among the smart cities applications, optimizing lottery games is one of the urgent needs to ensure their fairness and transparency. The emerging blockchain technology shows a glimpse of solutions to fairness and transparency issues faced by lottery industries. This paper presents the design of a blockchain-based lottery system for smart cities applications. We adopt the smart contracts of blockchain technology and the cryptograph blockchain model, Hawk [8], to design the blockchain-based lottery system, FairLotto, for future smart cities applications. Fairness, transparency, and privacy of the proposed blockchain-based lottery system are discussed and ensured.",
"title": ""
},
{
"docid": "872ccba4f0a0ba6a57500d4b73384ce1",
"text": "This research demonstrates the application of association rule mining to spatio-temporal data. Association rule mining seeks to discover associations among transactions encoded in a database. An association rule takes the form A → B where A (the antecedent) and B (the consequent) are sets of predicates. A spatio-temporal association rule occurs when there is a spatio-temporal relationship in the antecedent or consequent of the rule. As a case study, association rule mining is used to explore the spatial and temporal relationships among a set of variables that characterize socioeconomic and land cover change in the Denver, Colorado, USA region from 1970–1990. Geographic Information Systems (GIS)-based data pre-processing is used to integrate diverse data sets, extract spatio-temporal relationships, classify numeric data into ordinal categories, and encode spatio-temporal relationship data in tabular format for use by conventional (non-spatio-temporal) association rule mining software. Multiple level association rule mining is supported by the development of a hierarchical classification scheme (concept hierarchy) for each variable. Further research in spatiotemporal association rule mining should address issues of data integration, data classification, the representation and calculation of spatial relationships, and strategies for finding ‘interesting’ rules.",
"title": ""
},
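A toy example of the support/confidence computation behind rules of the form A -> B; the transaction predicates below are hypothetical stand-ins for the spatio-temporal and socioeconomic predicates mined in the paper.
```python
# Toy support/confidence computation for association rules A -> B.
# The transactions are hypothetical; the paper encodes spatio-temporal
# predicates (e.g. proximity, land-cover class) in the same tabular way.
from itertools import combinations

transactions = [
    {"near_highway", "pop_increase", "urban_1990"},
    {"near_highway", "urban_1990"},
    {"pop_increase", "urban_1990"},
    {"near_highway", "pop_increase"},
]

def support(itemset):
    return sum(itemset <= t for t in transactions) / len(transactions)

min_support, min_confidence = 0.5, 0.7
items = sorted({i for t in transactions for i in t})
for a, b in combinations(items, 2):
    for antecedent, consequent in (({a}, {b}), ({b}, {a})):
        s = support(antecedent | consequent)
        if s >= min_support:
            conf = s / support(antecedent)
            if conf >= min_confidence:
                print(f"{antecedent} -> {consequent}  support={s:.2f} confidence={conf:.2f}")
```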
{
"docid": "5ec64c4a423ccd32a5c1ceb918e3e003",
"text": "The leading edge (approximately 1 microgram) of lamellipodia in Xenopus laevis keratocytes and fibroblasts was shown to have an extensively branched organization of actin filaments, which we term the dendritic brush. Pointed ends of individual filaments were located at Y-junctions, where the Arp2/3 complex was also localized, suggesting a role of the Arp2/3 complex in branch formation. Differential depolymerization experiments suggested that the Arp2/3 complex also provided protection of pointed ends from depolymerization. Actin depolymerizing factor (ADF)/cofilin was excluded from the distal 0.4 micrometer++ of the lamellipodial network of keratocytes and in fibroblasts it was located within the depolymerization-resistant zone. These results suggest that ADF/cofilin, per se, is not sufficient for actin brush depolymerization and a regulatory step is required. Our evidence supports a dendritic nucleation model (Mullins, R.D., J.A. Heuser, and T.D. Pollard. 1998. Proc. Natl. Acad. Sci. USA. 95:6181-6186) for lamellipodial protrusion, which involves treadmilling of a branched actin array instead of treadmilling of individual filaments. In this model, Arp2/3 complex and ADF/cofilin have antagonistic activities. Arp2/3 complex is responsible for integration of nascent actin filaments into the actin network at the cell front and stabilizing pointed ends from depolymerization, while ADF/cofilin promotes filament disassembly at the rear of the brush, presumably by pointed end depolymerization after dissociation of the Arp2/3 complex.",
"title": ""
},
{
"docid": "b81f30a692d57ebc2fdef7df652d0ca2",
"text": "Suppose that Alice wishes to send messages to Bob through a communication channel C1, but her transmissions also reach an eavesdropper Eve through another channel C2. This is the wiretap channel model introduced by Wyner in 1975. The goal is to design a coding scheme that makes it possible for Alice to communicate both reliably and securely. Reliability is measured in terms of Bob's probability of error in recovering the message, while security is measured in terms of the mutual information between the message and Eve's observations. Wyner showed that the situation is characterized by a single constant Cs, called the secrecy capacity, which has the following meaning: for all ε >; 0, there exist coding schemes of rate R ≥ Cs-ε that asymptotically achieve the reliability and security objectives. However, his proof of this result is based upon a random-coding argument. To date, despite consider able research effort, the only case where we know how to construct coding schemes that achieve secrecy capacity is when Eve's channel C2 is an erasure channel, or a combinatorial variation thereof. Polar codes were recently invented by Arikan; they approach the capacity of symmetric binary-input discrete memoryless channels with low encoding and decoding complexity. In this paper, we use polar codes to construct a coding scheme that achieves the secrecy capacity for a wide range of wiretap channels. Our construction works for any instantiation of the wiretap channel model, as long as both C1 and C2 are symmetric and binary-input, and C2 is degraded with respect to C1. Moreover, we show how to modify our construction in order to provide strong security, in the sense defined by Maurer, while still operating at a rate that approaches the secrecy capacity. In this case, we cannot guarantee that the reliability condition will also be satisfied unless the main channel C1 is noiseless, although we believe it can be always satisfied in practice.",
"title": ""
},
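The secrecy-capacity construction itself is involved, but the channel-polarization mechanism behind polar codes can be shown for a binary erasure channel, where the Bhattacharyya parameters obey an exact recursion; this is a standard textbook sketch, not the paper's wiretap coding scheme.
```python
# Channel polarization sketch for a BEC(eps): track the Bhattacharyya
# parameter Z of the synthesized channels. For the BEC the recursion is exact:
#   Z(W-) = 2Z - Z^2   (degraded combination)
#   Z(W+) = Z^2        (upgraded combination)
# This only illustrates polarization, not the paper's secrecy construction.
eps = 0.3          # erasure probability of the underlying channel, assumed
n = 10             # polarization steps -> 2**n synthesized channels

z = [eps]
for _ in range(n):
    z = [v for zi in z for v in (2 * zi - zi * zi, zi * zi)]

good = sum(zi < 1e-3 for zi in z)
bad = sum(zi > 1 - 1e-3 for zi in z)
print(f"{len(z)} synthesized channels: {good} nearly noiseless, {bad} nearly useless")
print(f"fraction noiseless = {good / len(z):.2f}; approaches capacity 1 - eps = {1 - eps:.2f} as n grows")
```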
{
"docid": "a2c26a8b15cafeb365ad9870f9bbf884",
"text": "Microgrids consist of multiple parallel-connected distributed generation (DG) units with coordinated control strategies, which are able to operate in both grid-connected and islanded mode. Microgrids are attracting more and more attention since they can alleviate the stress of main transmission systems, reduce feeder losses, and improve system power quality. When the islanded microgrids are concerned, it is important to maintain system stability and achieve load power sharing among the multiple parallel-connected DG units. However, the poor active and reactive power sharing problems due to the influence of impedance mismatch of the DG feeders and the different ratings of the DG units are inevitable when the conventional droop control scheme is adopted. Therefore, the adaptive/improved droop control, network-based control methods and cost-based droop schemes are compared and summarized in this paper for active power sharing. Moreover, nonlinear and unbalanced loads could further affect the reactive power sharing when regulating the active power, and it is difficult to share the reactive power accurately only by using the enhanced virtual impedance method. Therefore, the hierarchical control strategies are utilized as supplements of the conventional droop controls and virtual impedance methods. The improved hierarchical control approaches such as the algorithms based on graph theory, multi-agent system, the gain scheduling method and predictive control have been proposed to achieve proper reactive power sharing for islanded microgrids and eliminate the effect of the communication delays on hierarchical control. Finally, the future research trends on islanded microgrids are also discussed in this paper.",
"title": ""
},
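The conventional P-f and Q-V droop laws that the surveyed methods build on fit in a few lines; the gains, setpoints and load values below are illustrative assumptions rather than values from the paper.
```python
# Conventional droop laws used as the baseline in the survey:
#   f = f0 - kp * (P - P0)      (active power - frequency droop)
#   V = V0 - kq * (Q - Q0)      (reactive power - voltage droop)
# All numeric values are illustrative assumptions.
def droop(P, Q, f0=50.0, V0=230.0, P0=0.0, Q0=0.0, kp=1e-4, kq=1e-3):
    """Return (frequency, voltage) setpoints for one DG unit."""
    f = f0 - kp * (P - P0)
    V = V0 - kq * (Q - Q0)
    return f, V

# Two identical DG units sharing a 20 kW / 5 kvar load equally.
for name, P, Q in [("DG1", 10_000, 2_500), ("DG2", 10_000, 2_500)]:
    f, V = droop(P, Q)
    print(f"{name}: f = {f:.3f} Hz, V = {V:.1f} V")
```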
{
"docid": "87da90ee583f5aa1777199f67bdefc83",
"text": "The rapid development of computer networks in the past decades has created many security problems related to intrusions on computer and network systems. Intrusion Detection Systems IDSs incorporate methods that help to detect and identify intrusive and non-intrusive network packets. Most of the existing intrusion detection systems rely heavily on human analysts to analyze system logs or network traffic to differentiate between intrusive and non-intrusive network traffic. With the increase in data of network traffic, involvement of human in the detection system is a non-trivial problem. IDS’s ability to perform based on human expertise brings limitations to the system’s capability to perform autonomously over exponentially increasing data in the network. However, human expertise and their ability to analyze the system can be efficiently modeled using soft-computing techniques. Intrusion detection techniques based on machine learning and softcomputing techniques enable autonomous packet detections. They have the potential to analyze the data packets, autonomously. These techniques are heavily based on statistical analysis of data. The ability of the algorithms that handle these data-sets can use patterns found in previous data to make decisions for the new evolving data-patterns in the network traffic. In this paper, we present a rigorous survey study that envisages various soft-computing and machine learning techniques used to build autonomous IDSs. A robust IDSs system lays a foundation to build an efficient Intrusion Detection and Prevention System IDPS.",
"title": ""
},
{
"docid": "2d5a8949119d7881a97693867a009917",
"text": "Labeling a histopathology image as having cancerous regions or not is a critical task in cancer diagnosis; it is also clinically important to segment the cancer tissues and cluster them into various classes. Existing supervised approaches for image classification and segmentation require detailed manual annotations for the cancer pixels, which are time-consuming to obtain. In this paper, we propose a new learning method, multiple clustered instance learning (MCIL) (along the line of weakly supervised learning) for histopathology image segmentation. The proposed MCIL method simultaneously performs image-level classification (cancer vs. non-cancer image), medical image segmentation (cancer vs. non-cancer tissue), and patch-level clustering (different classes). We embed the clustering concept into the multiple instance learning (MIL) setting and derive a principled solution to performing the above three tasks in an integrated framework. In addition, we introduce contextual constraints as a prior for MCIL, which further reduces the ambiguity in MIL. Experimental results on histopathology colon cancer images and cytology images demonstrate the great advantage of MCIL over the competing methods.",
"title": ""
},
{
"docid": "f02b44ff478952f1958ba33d8a488b8e",
"text": "Plagiarism is an illicit act of using other’s work wholly or partially as one’s own in any field such as art, poetry literature, cinema, research and other creative forms of study. It has become a serious crime in academia and research fields and access to wide range of resources on the internet has made the situation even worse. Therefore, there is a need for automatic detection of plagiarism in text. This paper presents a survey of various plagiarism detection techniques used for different languages.",
"title": ""
},
{
"docid": "026a0651177ee631a80aaa7c63a1c32f",
"text": "This paper is an introduction to natural language interfaces to databases (Nlidbs). A brief overview of the history of Nlidbs is rst given. Some advantages and disadvantages of Nlidbs are then discussed, comparing Nlidbs to formal query languages, form-based interfaces, and graphical interfaces. An introduction to some of the linguistic problems Nlidbs have to confront follows, for the beneet of readers less familiar with computational linguistics. The discussion then moves on to Nlidb architectures, porta-bility issues, restricted natural language input systems (including menu-based Nlidbs), and Nlidbs with reasoning capabilities. Some less explored areas of Nlidb research are then presented, namely database updates, meta-knowledge questions, temporal questions, and multi-modal Nlidbs. The paper ends with reeections on the current state of the art.",
"title": ""
},
{
"docid": "02605f4044a69b70673121985f1bd913",
"text": "A novel class of low-cost, small-footprint and high-gain antenna arrays is presented for W-band applications. A 4 × 4 antenna array is proposed and demonstrated using substrate-integrated waveguide (SIW) technology for the design of its feed network and longitudinal slots in the SIW top metallic surface to drive the array antenna elements. Dielectric cubes of low-permittivity material are placed on top of each 1 × 4 antenna array to increase the gain of the circular patch antenna elements. This new design is compared to a second 4 × 4 antenna array which, instead of dielectric cubes, uses vertically stacked Yagi-like parasitic director elements to increase the gain. Measured impedance bandwidths of the two 4 × 4 antenna arrays are about 7.5 GHz (94.2-101.8 GHz) at 18 ± 1 dB gain level, with radiation patterns and gains of the two arrays remaining nearly constant over this bandwidth. While the fabrication effort of the new array involving dielectric cubes is significantly reduced, its measured radiation efficiency of 81 percent is slightly lower compared to 90 percent of the Yagi-like design.",
"title": ""
},
{
"docid": "b05f96e22157b69d7033db35ab38524a",
"text": "Novelty search has shown to be a promising approach for the evolution of controllers for swarms of robots. In existing studies, however, the experimenter had to craft a task-specific behaviour similarity measure. The reliance on hand-crafted similarity measures places an additional burden to the experimenter and introduces a bias in the evolutionary process. In this paper, we propose and compare two generic behaviour similarity measures: combined state count and sampled average state. The proposed measures are based on the values of sensors and effectors recorded for each individual robot of the swarm. The characterisation of the group-level behaviour is then obtained by combining the sensor-effector values from all the robots. We evaluate the proposed measures in an aggregation task and in a resource sharing task. We show that the generic measures match the performance of task-specific measures in terms of solution quality. Our results indicate that the proposed generic measures operate as effective behaviour similarity measures, and that it is possible to leverage the benefits of novelty search without having to craft task-specific similarity measures.",
"title": ""
},
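One plausible reading of the "sampled average state" measure is sketched below: average the sensor-effector vectors over sampled time steps and over the swarm, then compare behaviours by Euclidean distance. The exact formulation in the paper may differ, so this is an assumption-laden illustration rather than the authors' definition.
```python
# Sketch of a generic behaviour characterisation in the spirit of the
# "sampled average state" measure: average each robot's sensor-effector
# vector over sampled time steps, average across the swarm, and compare two
# behaviours with Euclidean distance. Details are assumptions.
import numpy as np

def sampled_average_state(log, every=10):
    """log: array of shape (timesteps, robots, sensor+effector values)."""
    sampled = log[::every]                  # subsample in time
    return sampled.mean(axis=(0, 1))        # average over time and robots

def behaviour_distance(log_a, log_b):
    return float(np.linalg.norm(sampled_average_state(log_a) -
                                sampled_average_state(log_b)))

rng = np.random.default_rng(0)
log_a = rng.random((1000, 5, 8))            # hypothetical swarm logs
log_b = rng.random((1000, 5, 8))
print("novelty distance:", behaviour_distance(log_a, log_b))
```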
{
"docid": "ba2710c7df05b149f6d2befa8dbc37ee",
"text": "This work proposes a method for blind equalization of possibly non-minimum phase channels using particular infinite impulse response (IIR) filters. In this context, the transfer function of the equalizer is represented by a linear combination of specific rational basis functions. This approach estimates separately the coefficients of the linear expansion and the poles of the rational basis functions by alternating iteratively between an adaptive (fixed pole) estimation of the coefficients and a pole placement method. The focus of the work is mainly on the issue of good pole placement (initialization and updating).",
"title": ""
},
{
"docid": "6b0a4a8c61fb4ceabe3aa3d5664b4b67",
"text": "Most existing approaches for text classification represent texts as vectors of words, namely ``Bag-of-Words.'' This text representation results in a very high dimensionality of feature space and frequently suffers from surface mismatching. Short texts make these issues even more serious, due to their shortness and sparsity. In this paper, we propose using ``Bag-of-Concepts'' in short text representation, aiming to avoid the surface mismatching and handle the synonym and polysemy problem. Based on ``Bag-of-Concepts,'' a novel framework is proposed for lightweight short text classification applications. By leveraging a large taxonomy knowledgebase, it learns a concept model for each category, and conceptualizes a short text to a set of relevant concepts. A concept-based similarity mechanism is presented to classify the given short text to the most similar category. One advantage of this mechanism is that it facilitates short text ranking after classification, which is needed in many applications, such as query or ad recommendation. We demonstrate the usage of our proposed framework through a real online application: Channel-based Query Recommendation. Experiments show that our framework can map queries to channels with a high degree of precision (avg. precision=90.3%), which is critical for recommendation applications.",
"title": ""
},
{
"docid": "32fb1d8492e06b1424ea61d4c28f3c6c",
"text": "Modern IT systems often produce large volumes of event logs, and event pattern discovery is an important log management task. For this purpose, data mining methods have been suggested in many previous works. In this paper, we present the LogCluster algorithm which implements data clustering and line pattern mining for textual event logs. The paper also describes an open source implementation of LogCluster.",
"title": ""
}
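A toy line-pattern miner in the spirit of LogCluster: frequent words are kept, infrequent positions become wildcards, and lines are grouped by the resulting pattern. The real algorithm's support handling and variable-length wildcards are not reproduced; this is only a sketch on made-up log lines.
```python
# Toy line-pattern mining inspired by LogCluster: words occurring at least
# `support` times are kept, everything else becomes a wildcard, and lines are
# grouped by the resulting pattern.
from collections import Counter, defaultdict

logs = [
    "sshd[101]: Accepted password for alice from 10.0.0.5",
    "sshd[212]: Accepted password for bob from 10.0.0.9",
    "sshd[318]: Failed password for root from 10.0.0.7",
    "cron[44]: session opened for user root",
]
support = 2

word_counts = Counter(w for line in logs for w in line.split())
patterns = defaultdict(list)
for line in logs:
    pattern = " ".join(w if word_counts[w] >= support else "*" for w in line.split())
    patterns[pattern].append(line)

for pattern, members in patterns.items():
    print(f"{len(members):2d}x  {pattern}")
```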
] |
scidocsrr
|
1d52c50130f737e30eae4b14fe3ffe0a
|
Pricing in Network Effect Markets
|
[
{
"docid": "1e18be7d7e121aa899c96cbcf5ea906b",
"text": "Internet-based technologies such as micropayments increasingly enable the sale and delivery of small units of information. This paper draws attention to the opposite strategy of bundling a large number of information goods, such as those increasingly available on the Internet, for a fixed price that does not depend on how many goods are actually used by the buyer. We analyze the optimal bundling strategies for a multiproduct monopolist, and we find that bundling very large numbers of unrelated information goods can be surprisingly profitable. The reason is that the law of large numbers makes it much easier to predict consumers' valuations for a bundle of goods than their valuations for the individual goods when sold separately. As a result, this \"predictive value of bundling\" makes it possible to achieve greater sales, greater economic efficiency and greater profits per good from a bundle of information goods than can be attained when the same goods are sold separately. Our results do not extend to most physical goods, as the marginal costs of production typically negate any benefits from the predictive value of bundling. While determining optimal bundling strategies for more than two goods is a notoriously difficult problem, we use statistical techniques to provide strong asymptotic results and bounds on profits for bundles of any arbitrary size. We show how our model can be used to analyze the bundling of complements and substitutes, bundling in the presence of budget constraints and bundling of goods with various types of correlations. We find that when different market segments of consumers differ systematically in their valuations for goods, simple bundling will no longer be optimal. However, by offering a menu of different bundles aimed at each market segment, a monopolist can generally earn substantially higher profits than would be possible without bundling. The predictions of our analysis appear to be consistent with empirical observations of the markets for Internet and on-line content, cable television programming, and copyrighted music. ________________________________________ We thank Timothy Bresnahan, Hung-Ken Chien, Frank Fisher, Michael Harrison, Paul Kleindorfer, Thomas Malone, Robert Pindyck, Nancy Rose, Richard Schmalensee, John Tsitsiklis, Hal Varian, Albert Wenger, Birger Wernerfelt, four anonymous reviewers and seminar participants at the University of California at Berkeley, MIT, New York University, Stanford University, University of Rochester, the Wharton School, the 1995 Workshop on Information Systems and Economics and the 1998 Workshop on Marketing Science and the Internet for many helpful suggestions. Any errors that remain are only our responsibility. BUNDLING INFORMATION GOODS Page 1",
"title": ""
}
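The law-of-large-numbers argument can be seen in a small Monte Carlo experiment: per-good valuations are dispersed, but the mean valuation of a large bundle concentrates, so a single bundle price captures most of the surplus. The uniform valuation model and the prices used here are made-up assumptions, not the paper's model.
```python
# Monte Carlo illustration of the bundling argument: individual valuations
# are dispersed, but the mean valuation of a large bundle concentrates by the
# law of large numbers, so one bundle price extracts most of the surplus.
import numpy as np

rng = np.random.default_rng(1)
n_consumers, n_goods = 100_000, 100
valuations = rng.uniform(0, 1, size=(n_consumers, n_goods))  # value per good

# Selling goods separately: the best single price p maximizes p * P(v >= p);
# for U(0,1) the optimum is p = 0.5, earning 0.25 per consumer per good.
separate_profit = 0.5 * (valuations >= 0.5).mean()

# Selling the bundle: price just below the typical per-good mean valuation.
bundle_value = valuations.mean(axis=1)            # per-good value of the bundle
bundle_price = 0.45                               # slightly below E[v] = 0.5
bundle_profit = bundle_price * (bundle_value >= bundle_price).mean()

print(f"separate sales: {separate_profit:.3f} per consumer per good")
print(f"bundled sales:  {bundle_profit:.3f} per consumer per good")
```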
] |
[
{
"docid": "dd14f9eb9a9e0e4e0d24527cf80d04f4",
"text": "The growing popularity of microblogging websites has transformed these into rich resources for sentiment mining. Even though opinion mining has more than a decade of research to boost about, it is mostly confined to the exploration of formal text patterns like online reviews, news articles etc. Exploration of the challenges offered by informal and crisp microblogging have taken roots but there is scope for a large way ahead. The proposed work aims at developing a hybrid model for sentiment classification that explores the tweet specific features and uses domain independent and domain specific lexicons to offer a domain oriented approach and hence analyze and extract the consumer sentiment towards popular smart phone brands over the past few years. The experiments have proved that the results improve by around 2 points on an average over the unigram baseline.",
"title": ""
},
{
"docid": "6f45bc16969ed9deb5da46ff8529bb8a",
"text": "In the future, mobile systems will increasingly feature more advanced organic light-emitting diode (OLED) displays. The power consumption of these displays is highly dependent on the image content. However, existing OLED power-saving techniques either change the visual experience of users or degrade the visual quality of images in exchange for a reduction in the power consumption. Some techniques attempt to enhance the image quality by employing a compound objective function. In this article, we present a win-win scheme that always enhances the image quality while simultaneously reducing the power consumption. We define metrics to assess the benefits and cost for potential image enhancement and power reduction. We then introduce algorithms that ensure the transformation of images into their quality-enhanced power-saving versions. Next, the win-win scheme is extended to process videos at a justifiable computational cost. All the proposed algorithms are shown to possess the win-win property without assuming accurate OLED power models. Finally, the proposed scheme is realized through a practical camera application and a video camcorder on mobile devices. The results of experiments conducted on a commercial tablet with a popular image database and on a smartphone with real-world videos are very encouraging and provide valuable insights for future research and practices.",
"title": ""
},
{
"docid": "d34c96bb2399e4bd3f19825eef98d6dd",
"text": "This paper proposes logic programs as a specification for robot control. These provide a formal specification of what an agent should do depending on what it senses, and its previous sensory inputs and actions. We show how to axiomatise reactive agents, events as an interface between continuous and discrete time, and persistence, as well as axiomatising integration and differentiation over time (in terms of the limit of sums and differences). This specification need not be evaluated as a Prolog program; we use can the fact that it will be evaluated in time to get a more efficient agent. We give a detailed example of a nonholonomic maze travelling robot, where we use the same language to model both the agent and the environment. One of the main motivations for this work is that there is a clean interface between the logic programs here and the model of uncertainty embedded in probabilistic Horn abduction. This is one step towards building a decisiontheoretic planning system where the output of the planner is a plan suitable for actually controlling a robot.",
"title": ""
},
{
"docid": "e578bafcfef89e66cd77f6ee41c1fd1e",
"text": "Quadruped robot is expected to serve in complex conditions such as mountain road, grassland, etc., therefore we desire a walking pattern generation that can guarantee both the speed and the stability of the quadruped robot. In order to solve this problem, this paper focuses on the stability for the tort pattern and proposes trot pattern generation for quadruped robot on the basis of ZMP stability margin. The foot trajectory is first designed based on the work space limitation. Then the ZMP and stability margin is computed to achieve the optimal trajectory of the midpoint of the hip joint of the robot. The angles of each joint are finally obtained through the inverse kinematics calculation. Finally, the effectiveness of the proposed method is demonstrated by the results from the simulation and the experiment on the quadruped robot in BIT.",
"title": ""
},
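The stability criterion rests on the zero-moment point; under the usual cart-table simplification, x_zmp = x_com - (z_c / g) * x_com_ddot, and the margin is the distance from the ZMP to the support polygon boundary. The sketch below uses that simplified model with made-up numbers, not the paper's full gait generator.
```python
# Cart-table ZMP sketch: for a point-mass model at CoM height z_c,
#   x_zmp = x_com - (z_c / g) * x_com_ddot    (same form for y).
# The stability margin is the distance from the ZMP to the edge of the
# support polygon. Numbers are illustrative, not from the BIT robot.
g = 9.81
z_c = 0.45                        # CoM height (m), assumed

def zmp(x_com, x_ddot, y_com, y_ddot):
    return (x_com - z_c / g * x_ddot,
            y_com - z_c / g * y_ddot)

def margin_1d(p, lo, hi):
    """Distance from p to the nearest edge of the interval [lo, hi]."""
    return min(p - lo, hi - p)

x_zmp, y_zmp = zmp(x_com=0.02, x_ddot=0.8, y_com=0.00, y_ddot=-0.3)
# Diagonal support area approximated by a rectangle during the trot phase.
print("ZMP:", round(x_zmp, 3), round(y_zmp, 3))
print("margin x:", round(margin_1d(x_zmp, -0.10, 0.10), 3), "m")
```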
{
"docid": "be91ec9b4f017818f32af09cafbb2a9a",
"text": "Brainard et al. 2 INTRODUCTION Object recognition is difficult because there is no simple relation between an object's properties and the retinal image. Where the object is located, how it is oriented, and how it is illuminated also affect the image. Moreover, the relation is under-determined: multiple physical configurations can give rise to the same retinal image. In the case of object color, the spectral power distribution of the light reflected from an object depends not only on the object's intrinsic surface reflectance but also factors extrinsic to the object, such as the illumination. The relation between intrinsic reflectance, extrinsic illumination, and the color signal reflected to the eye is shown schematically in Figure 1. The light incident on a surface is characterized by its spectral power distribution E(λ). A small surface element reflects a fraction of the incident illuminant to the eye. The surface reflectance function S(λ) specifies this fraction as a function of wavelength. The spectrum of the light reaching the eye is called the color signal and is given by C(λ) = E(λ)S(λ). Information about C(λ) is encoded by three classes of cone photoreceptors, the L-, M-, and Scones. The top two patches rendered in Plate 1 illustrate the large effect that a typical change in natural illumination (see Wyszecki and Stiles, 1982) can have on the color signal. This effect might lead us to expect that the color appearance of objects should vary radically, depending as much on the current conditions of illumination as on the object's surface reflectance. Yet the very fact that we can sensibly refer to objects as having a color indicates otherwise. Somehow our visual system stabilizes the color appearance of objects against changes in illumination, a perceptual effect that is referred to as color constancy. Because the illumination is the most salient object-extrinsic factor that affects the color signal, it is natural that emphasis has been placed on understanding how changing the illumination affects object color appearance. In a typical color constancy experiment, the independent variable is the illumination and the dependent variable is a measure of color appearance experiments employ different stimulus configurations and psychophysical tasks, but taken as a whole they support the view that human vision exhibits a reasonable degree of color constancy. Recall that the top two patches of Plate 1 illustrate the limiting case where a single surface reflectance is seen under multiple illuminations. Although this …",
"title": ""
},
{
"docid": "14a8adf666b115ff4a72ff600432ff07",
"text": "In all branches of medicine, there is an inevitable element of patient exposure to problems arising from human error, and this is increasingly the subject of bad publicity, often skewed towards an assumption that perfection is achievable, and that any error or discrepancy represents a wrong that must be punished. Radiology involves decision-making under conditions of uncertainty, and therefore cannot always produce infallible interpretations or reports. The interpretation of a radiologic study is not a binary process; the “answer” is not always normal or abnormal, cancer or not. The final report issued by a radiologist is influenced by many variables, not least among them the information available at the time of reporting. In some circumstances, radiologists are asked specific questions (in requests for studies) which they endeavour to answer; in many cases, no obvious specific question arises from the provided clinical details (e.g. “chest pain”, “abdominal pain”), and the reporting radiologist must strive to interpret what may be the concerns of the referring doctor. (A friend of one of the authors, while a resident in a North American radiology department, observed a staff radiologist dictate a chest x-ray reporting stating “No evidence of leprosy”. When subsequently confronted by an irate respiratory physician asking for an explanation of the seemingly-perverse report, he explained that he had no idea what the clinical concerns were, as the clinical details section of the request form had been left blank).",
"title": ""
},
{
"docid": "28d19824a598ae20039f2ed5d8885234",
"text": "Soft-tissue augmentation of the face is an increasingly popular cosmetic procedure. In recent years, the number of available filling agents has also increased dramatically, improving the range of options available to physicians and patients. Understanding the different characteristics, capabilities, risks, and limitations of the available dermal and subdermal fillers can help physicians improve patient outcomes and reduce the risk of complications. The most popular fillers are those made from cross-linked hyaluronic acid (HA). A major and unique advantage of HA fillers is that they can be quickly and easily reversed by the injection of hyaluronidase into areas in which elimination of the filler is desired, either because there is excess HA in the area or to accelerate the resolution of an adverse reaction to treatment or to the product. In general, a lower incidence of complications (especially late-occurring or long-lasting effects) has been reported with HA fillers compared with the semi-permanent and permanent fillers. The implantation of nonreversible fillers requires more and different expertise on the part of the physician than does injection of HA fillers, and may produce effects and complications that are more difficult or impossible to manage even by the use of corrective surgery. Most practitioners use HA fillers as the foundation of their filler practices because they have found that HA fillers produce excellent aesthetic outcomes with high patient satisfaction, and a low incidence and severity of complications. Only limited subsets of physicians and patients have been able to justify the higher complexity and risks associated with the use of nonreversible fillers.",
"title": ""
},
{
"docid": "597311f3187b504d91f7c788144f6b30",
"text": "Objective: Body Integrity Identity Disorder (BIID) describes a phenomenon in which physically healthy people feel the constant desire for an impairment of their body. M. First [4] suggested to classify BIID as an identity disorder. The other main disorder in this respect is Gender Dysphoria. In this paper these phenomena are compared. Method: A questionnaire survey with transsexuals (number of subjects, N=19) and BIID sufferers (N=24) measuring similarities and differences. Age and educational level of the subjects are predominantly matched. Results: No differences were found between BIID and Gender Dysphoria with respect to body image and body perception (U-test: p-value=.757), age of onset (p=.841), the imitation of the desired identity (p=.699 and p=.938), the etiology (p=.299) and intensity of desire (p=.989 and p=.224) as well as in relation to a high level of suffering and impaired quality of life (p=.066). Conclusion: There are many similarities between BIID and Gender Dysphoria, but the sample was too small to make general statements. The results, however, indicate that BIID can actually be classified as an identity disorder.",
"title": ""
},
{
"docid": "714c06da1a728663afd8dbb1cd2d472d",
"text": "This paper proposes hybrid semiMarkov conditional random fields (SCRFs) for neural sequence labeling in natural language processing. Based on conventional conditional random fields (CRFs), SCRFs have been designed for the tasks of assigning labels to segments by extracting features from and describing transitions between segments instead of words. In this paper, we improve the existing SCRF methods by employing word-level and segment-level information simultaneously. First, word-level labels are utilized to derive the segment scores in SCRFs. Second, a CRF output layer and an SCRF output layer are integrated into an unified neural network and trained jointly. Experimental results on CoNLL 2003 named entity recognition (NER) shared task show that our model achieves state-of-the-art performance when no external knowledge is used.",
"title": ""
},
{
"docid": "f4ebbcebefbcc1ba8b6f8e5bf6096645",
"text": "With advances in wireless communication technology, more and more people depend heavily on portable mobile devices for businesses, entertainments and social interactions. Although such portable mobile devices can offer various promising applications, their computing resources remain limited due to their portable size. This however can be overcome by remotely executing computation-intensive tasks on clusters of near by computers known as cloudlets. As increasing numbers of people access the Internet via mobile devices, it is reasonable to envision in the near future that cloudlet services will be available for the public through easily accessible public wireless metropolitan area networks (WMANs). However, the outdated notion of treating cloudlets as isolated data-centers-in-a-box must be discarded as there are clear benefits to connecting multiple cloudlets together to form a network. In this paper we investigate how to balance the workload between multiple cloudlets in a network to optimize mobile application performance. We first introduce a system model to capture the response times of offloaded tasks, and formulate a novel optimization problem, that is to find an optimal redirection of tasks between cloudlets such that the maximum of the average response times of tasks at cloudlets is minimized. We then propose a fast, scalable algorithm for the problem. We finally evaluate the performance of the proposed algorithm through experimental simulations. The experimental results demonstrate the significant potential of the proposed algorithm in reducing the response times of tasks.",
"title": ""
},
{
"docid": "1c8e47f700926cf0b6ab6ed7446a6e7a",
"text": "Named Entity Recognition (NER) is a key task in biomedical text mining. Accurate NER systems require task-specific, manually-annotated datasets, which are expensive to develop and thus limited in size. Since such datasets contain related but different information, an interesting question is whether it might be possible to use them together to improve NER performance. To investigate this, we develop supervised, multi-task, convolutional neural network models and apply them to a large number of varied existing biomedical named entity datasets. Additionally, we investigated the effect of dataset size on performance in both single- and multi-task settings. We present a single-task model for NER, a Multi-output multi-task model and a Dependent multi-task model. We apply the three models to 15 biomedical datasets containing multiple named entities including Anatomy, Chemical, Disease, Gene/Protein and Species. Each dataset represent a task. The results from the single-task model and the multi-task models are then compared for evidence of benefits from Multi-task Learning. With the Multi-output multi-task model we observed an average F-score improvement of 0.8% when compared to the single-task model from an average baseline of 78.4%. Although there was a significant drop in performance on one dataset, performance improves significantly for five datasets by up to 6.3%. For the Dependent multi-task model we observed an average improvement of 0.4% when compared to the single-task model. There were no significant drops in performance on any dataset, and performance improves significantly for six datasets by up to 1.1%. The dataset size experiments found that as dataset size decreased, the multi-output model’s performance increased compared to the single-task model’s. Using 50, 25 and 10% of the training data resulted in an average drop of approximately 3.4, 8 and 16.7% respectively for the single-task model but approximately 0.2, 3.0 and 9.8% for the multi-task model. Our results show that, on average, the multi-task models produced better NER results than the single-task models trained on a single NER dataset. We also found that Multi-task Learning is beneficial for small datasets. Across the various settings the improvements are significant, demonstrating the benefit of Multi-task Learning for this task.",
"title": ""
},
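The multi-output multi-task idea (one shared encoder with one output head per NER dataset) can be sketched with a toy token classifier; the layer choices, sizes and task names below are stand-ins rather than the paper's exact architecture or datasets.
```python
# Sketch of a multi-output multi-task token classifier: a shared encoder with
# one softmax head per NER dataset (task). Layer sizes and task names are
# simplifying assumptions, not the paper's model.
import tensorflow as tf

vocab_size, emb_dim, n_tags_a, n_tags_b = 20_000, 64, 5, 9

tokens = tf.keras.Input(shape=(None,), dtype="int32")
shared = tf.keras.layers.Embedding(vocab_size, emb_dim)(tokens)
shared = tf.keras.layers.Conv1D(128, 3, padding="same", activation="relu")(shared)

head_a = tf.keras.layers.Dense(n_tags_a, activation="softmax", name="task_a")(shared)
head_b = tf.keras.layers.Dense(n_tags_b, activation="softmax", name="task_b")(shared)

model = tf.keras.Model(tokens, [head_a, head_b])
model.compile(optimizer="adam",
              loss={"task_a": "sparse_categorical_crossentropy",
                    "task_b": "sparse_categorical_crossentropy"})
model.summary()
```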
{
"docid": "b238ceff7cf19621a420494ac311b2dd",
"text": "In this paper, we discuss the extension and integration of the statistical concept of Kernel Density Estimation (KDE) in a scatterplot-like visualization for dynamic data at interactive rates. We present a line kernel for representing streaming data, we discuss how the concept of KDE can be adapted to enable a continuous representation of the distribution of a dependent variable of a 2D domain. We propose to automatically adapt the kernel bandwith of KDE to the viewport settings, in an interactive visualization environment that allows zooming and panning. We also present a GPU-based realization of KDE that leads to interactive frame rates, even for comparably large datasets. Finally, we demonstrate the usefulness of our approach in the context of three application scenarios - one studying streaming ship traffic data, another one from the oil & gas domain, where process data from the operation of an oil rig is streaming in to an on-shore operational center, and a third one studying commercial air traffic in the US spanning 1987 to 2008.",
"title": ""
},
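The core of the visualization is a kernel density estimate whose bandwidth follows the viewport. A minimal CPU version is sketched below; the bandwidth-to-viewport scaling factor is an arbitrary assumption, and the line kernel for streaming data and the GPU realization are omitted.
```python
# Minimal KDE sketch: a Gaussian kernel density estimate on a 2D grid with a
# bandwidth tied to the current viewport extent (wider view -> wider kernel).
# The 0.02 scaling factor is an arbitrary assumption.
import numpy as np

def viewport_kde(points, xlim, ylim, grid=64):
    xs = np.linspace(*xlim, grid)
    ys = np.linspace(*ylim, grid)
    gx, gy = np.meshgrid(xs, ys)
    # Adapt the bandwidth to the viewport: a fixed fraction of its extent.
    h = 0.02 * max(xlim[1] - xlim[0], ylim[1] - ylim[0])
    dx = gx[..., None] - points[:, 0]
    dy = gy[..., None] - points[:, 1]
    weights = np.exp(-(dx**2 + dy**2) / (2 * h**2))
    return weights.sum(axis=-1) / (2 * np.pi * h**2 * len(points))

pts = np.random.default_rng(0).normal(size=(500, 2))
density = viewport_kde(pts, xlim=(-3, 3), ylim=(-3, 3))
print("density grid:", density.shape, "max:", round(float(density.max()), 3))
```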
{
"docid": "4c30af9dd05b773ce881a312bcad9cb9",
"text": "This review summarized various chemical recycling methods for PVC, such as pyrolysis, catalytic dechlorination and hydrothermal treatment, with a view to solving the problem of energy crisis and the impact of environmental degradation of PVC. Emphasis was paid on the recent progress on the pyrolysis of PVC, including co-pyrolysis of PVC with biomass/coal and other plastics, catalytic dechlorination of raw PVC or Cl-containing oil and hydrothermal treatment using subcritical and supercritical water. Understanding the advantage and disadvantage of these treatment methods can be beneficial for treating PVC properly. The dehydrochlorination of PVC mainly happed at low temperature of 250-320°C. The process of PVC dehydrochlorination can catalyze and accelerate the biomass pyrolysis. The intermediates from dehydrochlorination stage of PVC can increase char yield of co-pyrolysis of PVC with PP/PE/PS. For the catalytic degradation and dechlorination of PVC, metal oxides catalysts mainly acted as adsorbents for the evolved HCl or as inhibitors of HCl formation depending on their basicity, while zeolites and noble metal catalysts can produce lighter oil, depending the total number of acid sites and the number of accessible acidic sites. For hydrothermal treatment, PVC decomposed through three stages. In the first region (T<250°C), PVC went through dehydrochlorination to form polyene; in the second region (250°C<T<350°C), polyene decomposed to low-molecular weight compounds; in the third region (350°C<T), polyene further decomposed into a large amount of low-molecular weight compounds.",
"title": ""
},
{
"docid": "e6245f210bfbcf47795604b45cb927ad",
"text": "The grid-connected AC module is an alternative solution in photovoltaic (PV) generation systems. It combines a PV panel and a micro-inverter connected to grid. The use of a high step-up converter is essential for the grid-connected micro-inverter because the input voltage is about 15 V to 40 V for a single PV panel. The proposed converter employs a Zeta converter and a coupled inductor, without the extreme duty ratios and high turns ratios generally needed for the coupled inductor to achieve high step-up voltage conversion; the leakage-inductor energy of the coupled inductor is efficiently recycled to the load. These features improve the energy-conversion efficiency. The operating principles and steady-state analyses of continuous and boundary conduction modes, as well as the voltage and current stresses of the active components, are discussed in detail. A 25 V input voltage, 200 V output voltage, and 250 W output power prototype circuit of the proposed converter is implemented to verify the feasibility; the maximum efficiency is up to 97.3%, and full-load efficiency is 94.8%.",
"title": ""
},
{
"docid": "0bce954374d27d4679eb7562350674fc",
"text": "Humanoid robotics is attracting the interest of many research groups world-wide. In particular, developing humanoids requires the implementation of manipulation capabilities, which is still a most complex problem in robotics. This paper presents an overview of current activities in the development of humanoid robots, with special focus on manipulation. Then we discuss our current approach to the design and development of anthropomorphic sensorized hand and of anthropomorphic control and sensory-motor coordination schemes. Current achievements in the development of a robotic human hand prosthesis are described, together with preliminary experimental results, as well as in the implementation of biologically-inspired schemes for control and sensory-motor co-ordination in manipulation, derived from models of well-identified human brain areas.",
"title": ""
},
{
"docid": "f37d32a668751198ed8acde8ab3bdc12",
"text": "INTRODUCTION\nAlthough the critical feature of attention-deficit/hyperactivity disorder (ADHD) is a persistent pattern of inattention and/or hyperactivity/impulsivity behavior, the disorder is clinically heterogeneous, and concomitant difficulties are common. Children with ADHD are at increased risk for experiencing lifelong impairments in multiple domains of daily functioning. In the present study we aimed to build a brief ADHD impairment-related tool -ADHD concomitant difficulties scale (ADHD-CDS)- to assess the presence of some of the most important comorbidities that usually appear associated with ADHD such as emotional/motivational management, fine motor coordination, problem-solving/management of time, disruptive behavior, sleep habits, academic achievement and quality of life. The two main objectives of the study were (i) to discriminate those profiles with several and important ADHD functional difficulties and (ii) to create a brief clinical tool that fosters a comprehensive evaluation process and can be easily used by clinicians.\n\n\nMETHODS\nThe total sample included 399 parents of children with ADHD aged 6-18 years (M = 11.65; SD = 3.1; 280 males) and 297 parents of children without a diagnosis of ADHD (M = 10.91; SD = 3.2; 149 male). The scale construction followed an item improved sequential process.\n\n\nRESULTS\nFactor analysis showed a 13-item single factor model with good fit indices. Higher scores on inattention predicted higher scores on ADHD-CDS for both the clinical sample (β = 0.50; p < 0.001) and the whole sample (β = 0.85; p < 0.001). The ROC curve for the ADHD-CDS (against the ADHD diagnostic status) gave an area under the curve (AUC) of.979 (95%, CI = [0.969, 0.990]).\n\n\nDISCUSSION\nThe ADHD-CDS has shown preliminary adequate psychometric properties, with high convergent validity and good sensitivity for different ADHD profiles, which makes it a potentially appropriate and brief instrument that may be easily used by clinicians, researchers, and health professionals in dealing with ADHD.",
"title": ""
},
{
"docid": "20e19999be17bce4ba3ae6d94400ba3c",
"text": "Due to the coarse granularity of data accesses and the heavy use of latches, indices in the B-tree family are not efficient for in-memory databases, especially in the context of today's multi-core architecture. In this paper, we study the parallelizability of skip lists for the parallel and concurrent environment, and present PSL, a Parallel in-memory Skip List that lends itself naturally to the multi-core environment, particularly with non-uniform memory access. For each query, PSL traverses the index in a Breadth-First-Search (BFS) to find the list node with the matching key, and exploits SIMD processing to speed up this process. Furthermore, PSL distributes incoming queries among multiple execution threads disjointly and uniformly to eliminate the use of latches and achieve a high parallelizability. The experimental results show that PSL is comparable to a readonly index, FAST, in terms of read performance, and outperforms ART and Masstree respectively by up to 30% and 5x for a variety of workloads.",
"title": ""
},
{
"docid": "0cd2da131bf78526c890dae72514a8f0",
"text": "This paper presents a research model to explicate that the level of consumers’ participation on companies’ brand microblogs is influenced by their brand attachment process. That is, self-congruence and partner quality affect consumers’ trust and commitment toward companies’ brands, which in turn influence participation on brand microblogs. Further, we propose that gender has important moderating effects in our research model. We empirically test the research hypotheses through an online survey. The findings illustrate that self-congruence and partner quality have positive effects on trust and commitment. Trust affects commitment and participation, while participation is also influenced by commitment. More importantly, the effects of self-congruence on trust and commitment are found to be stronger for male consumers than females. In contrast, the effects of partner quality on trust and commitment are stronger for female consumers than males. Trust posits stronger effects on commitment and participation for males, while commitment has a stronger effect on participation for females. We believe that our findings contribute to the literature on consumer participation behavior and gender differences on brand microblogs. Companies can also apply our findings to strengthen their brand building and participation level of different consumers on their microblogs. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "91e9f4d67c89aea99299966492648300",
"text": "In safety critical domains, system test cases are often derived from functional requirements in natural language (NL) and traceability between requirements and their corresponding test cases is usually mandatory. The definition of test cases is therefore time-consuming and error prone, especially so given the quickly rising complexity of embedded systems in many critical domains. Though considerable research has been devoted to automatic generation of system test cases from NL requirements, most of the proposed approaches re- quire significant manual intervention or additional, complex behavioral modelling. This significantly hinders their applicability in practice. In this paper, we propose Use Case Modelling for System Tests Generation (UMTG), an approach that automatically generates executable system test cases from use case spec- ifications and a domain model, the latter including a class diagram and constraints. Our rationale and motivation are that, in many environments, including that of our industry partner in the reported case study, both use case specifica- tions and domain modelling are common and accepted prac- tice, whereas behavioural modelling is considered a difficult and expensive exercise if it is to be complete and precise. In order to extract behavioral information from use cases and enable test automation, UMTG employs Natural Language Processing (NLP), a restricted form of use case specifica- tions, and constraint solving.",
"title": ""
}
] |
scidocsrr
|
33c0170fbe936ebf10972956b27bf1d1
|
Sampling Algorithms in a Stream Operator
|
[
{
"docid": "aa2b1a8d0cf511d5862f56b47d19bc6a",
"text": "DBMSs have long suffered from SQL’s lack of power and extensibility. We have implemented ATLaS [1], a powerful database language and system that enables users to develop complete data-intensive applications in SQL—by writing new aggregates and table functions in SQL, rather than in procedural languages as in current Object-Relational systems. As a result, ATLaS’ SQL is Turing-complete [7], and is very suitable for advanced data-intensive applications, such as data mining and stream queries. The ATLaS system is now available for download along with a suite of applications [1] including various data mining functions, that have been coded in ATLaS’ SQL, and execute with a modest (20–40%) performance overhead with respect to the same applications written in C/C++. Our proposed demo will illustrate the key features and applications of ATLaS. In particular, we will demonstrate:",
"title": ""
}
] |
[
{
"docid": "ac56668cdaad25e9df31f71bc6d64995",
"text": "Hand-crafted illustrations are often more effective than photographs for conveying the shape and important features of an object, but they require expertise and time to produce. We describe an image compositing system and user interface that allow an artist to quickly and easily create technical illustrations from a set of photographs of an object taken from the same point of view under variable lighting conditions. Our system uses a novel compositing process in which images are combined using spatially-varying light mattes, enabling the final lighting in each area of the composite to be manipulated independently. We describe an interface that provides for the painting of local lighting effects (e.g. shadows, highlights, and tangential lighting to reveal texture) directly onto the composite. We survey some of the techniques used in illustration and lighting design to convey the shape and features of objects and describe how our system can be used to apply these techniques.",
"title": ""
},
{
"docid": "37dc4a306f043684042e6af01223a275",
"text": "In recent years, studies about control methods for complex machines and robots have been developed rapidly. Biped robots are often treated as inverted pendulums for its simple structure. But modeling of robot and other complex machines is a time-consuming procedure. A new method of modeling and simulation of robot based on SimMechanics is proposed in this paper. Physical modeling, parameter setting and simulation are presented in detail. The SimMechanics block model is first used in modeling and simulation of inverted pendulums. Simulation results of the SimMechanics block model and mathematical model for single inverted pendulum are compared. Furthermore, a full state feedback controller is designed to satisfy the performance requirement. It indicates that SimMechanics can be used for unstable nonlinear system and robots.",
"title": ""
},
{
"docid": "54c2914107ae5df0a825323211138eb9",
"text": "An implicit, but pervasive view in the information science community is that people are perpetual seekers after new public information, incessantly identifying and consuming new information by browsing the Web and accessing public collections. One aim of this review is to move beyond this consumer characterization, which regards information as a public resource containing novel data that we seek out, consume, and then discard. Instead, I want to focus on a very different view: where familiar information is used as a personal resource that we keep, manage, and (sometimes repeatedly) exploit. I call this information curation. I first summarize limitations of the consumer perspective. I then review research on three different information curation processes: keeping, management, and exploitation. I describe existing work detailing how each of these processes is applied to different types of personal data: documents, e-mail messages, photos, and Web pages. The research indicates people tend to keep too much information, with the exception of contacts and Web pages. When managing information, strategies that rely on piles as opposed to files provide surprising benefits. And in spite of the emergence of desktop search, exploitation currently remains reliant on manual methods such as navigation. Several new technologies have the potential to address important",
"title": ""
},
{
"docid": "1854e443a1b4b0ba9762c7364bbe5c69",
"text": "In this paper, we describe our investigation of traces of naturally occurring emotions in electrical brain signals, that can be used to build interfaces that respond to our emotional state. This study confirms a number of known affective correlates in a realistic, uncontrolled environment for the emotions of valence (or pleasure), arousal and dominance: (1) a significant decrease in frontal power in the theta range is found for increasingly positive valence, (2) a significant frontal increase in power in the alpha range is associated with increasing emotional arousal, (3) a significant right posterior power increase in the delta range correlates with increasing arousal and (4) asymmetry in power in the lower alpha bands correlates with self-reported valence. Furthermore, asymmetry in the higher alpha bands correlates with self-reported dominance. These last two effects provide a simple measure for subjective feelings of pleasure and feelings of control.",
"title": ""
},
{
"docid": "2603c07864b92c6723b40c83d3c216b9",
"text": "Background: A study was undertaken to record exacerbations and health resource use in patients with COPD during 6 months of treatment with tiotropium, salmeterol, or matching placebos. Methods: Patients with COPD were enrolled in two 6-month randomised, placebo controlled, double blind, double dummy studies of tiotropium 18 μg once daily via HandiHaler or salmeterol 50 μg twice daily via a metered dose inhaler. The two trials were combined for analysis of heath outcomes consisting of exacerbations, health resource use, dyspnoea (assessed by the transitional dyspnoea index, TDI), health related quality of life (assessed by St George’s Respiratory Questionnaire, SGRQ), and spirometry. Results: 1207 patients participated in the study (tiotropium 402, salmeterol 405, placebo 400). Compared with placebo, tiotropium but not salmeterol was associated with a significant delay in the time to onset of the first exacerbation. Fewer COPD exacerbations/patient year occurred in the tiotropium group (1.07) than in the placebo group (1.49, p<0.05); the salmeterol group (1.23 events/year) did not differ from placebo. The tiotropium group had 0.10 hospital admissions per patient year for COPD exacerbations compared with 0.17 for salmeterol and 0.15 for placebo (not statistically different). For all causes (respiratory and non-respiratory) tiotropium, but not salmeterol, was associated with fewer hospital admissions while both groups had fewer days in hospital than the placebo group. The number of days during which patients were unable to perform their usual daily activities was lowest in the tiotropium group (tiotropium 8.3 (0.8), salmeterol 11.1 (0.8), placebo 10.9 (0.8), p<0.05). SGRQ total score improved by 4.2 (0.7), 2.8 (0.7) and 1.5 (0.7) units during the 6 month trial for the tiotropium, salmeterol and placebo groups, respectively (p<0.01 tiotropium v placebo). Compared with placebo, TDI focal score improved in both the tiotropium group (1.1 (0.3) units, p<0.001) and the salmeterol group (0.7 (0.3) units, p<0.05). Evaluation of morning pre-dose FEV1, peak FEV1 and mean FEV1 (0–3 hours) showed that tiotropium was superior to salmeterol while both active drugs were more effective than placebo. Conclusions: Exacerbations of COPD and health resource usage were positively affected by daily treatment with tiotropium. With the exception of the number of hospital days associated with all causes, salmeterol twice daily resulted in no significant changes compared with placebo. Tiotropium also improved health related quality of life, dyspnoea, and lung function in patients with COPD.",
"title": ""
},
{
"docid": "338dc5d14a5c00a110823dd3ce7c2867",
"text": "Le diagnostic de l'hallux valgus est clinique. Le bilan radiographique n'intervient qu'en seconde intention pour préciser les vices architecturaux primaires ou secondaires responsables des désaxations ostéo-musculotendineuses. Ce bilan sera toujours réalisé dans des conditions physiologiques, c'est-à-dire le pied en charge. La radiographie de face en charge apprécie la formule du pied (égyptien, grec, carré), le degré de luxation des sésamoïdes (stades 1, 2 ou 3), les valeurs angulaires (ouverture du pied, varus intermétatarsien, valgus interphalangien) et linéaires, tel l'étalement de l'avant-pied. La radiographie de profil en charge évalue la formule d'un pied creux, plat ou normo axé. L'incidence de Guntz Walter reflétant l'appui métatarsien décèle les zones d'hyperappui pathologique. En post-opératoire, ce même bilan permettra d'évaluer le geste chirurgical et de reconnaître une éventuelle hyper ou hypocorrection. The diagnosis of hallux valgus is a clinical one. Radiographic examination is involved only secondarily, to define the primary or secondary structural defects responsible for bony and musculotendinous malalignement. This examination should always be made under physiologic conditions, i.e., with the foot taking weight. The frontal radiograph in weight-bearing assesses the category of the foot (Egyptian, Greek, square), the degree of luxation of the sesamoids (stages 1, 2 or 3), the angular values (opening of the foot, intermetatarsal varus, interphalangeal valgus) and the linear values such as the spreading of the forefoot. The lateral radiograph in weight-bearing categorises the foot as cavus, flat or normally oriented. The Guntz Walter view indicates the thrust on the metatarsals and reveals zones of abnormal excessive thrust. Postoperatively, the same examination makes it possible to assess the outcome of the surgical procedure and to detect any over- or under-correction.",
"title": ""
},
{
"docid": "5a392f4c9779c06f700e2ff004197de9",
"text": "Breiman's bagging and Freund and Schapire's boosting are recent methods for improving the predictive power of classiier learning systems. Both form a set of classiiers that are combined by v oting, bagging by generating replicated boot-strap samples of the data, and boosting by adjusting the weights of training instances. This paper reports results of applying both techniques to a system that learns decision trees and testing on a representative collection of datasets. While both approaches substantially improve predictive accuracy, boosting shows the greater beneet. On the other hand, boosting also produces severe degradation on some datasets. A small change to the way that boosting combines the votes of learned classiiers reduces this downside and also leads to slightly better results on most of the datasets considered.",
"title": ""
},
{
"docid": "c6878e9e106655f492a989be9e33176f",
"text": "Employees who are engaged in their work are fully connected with their work roles. They are bursting with energy, dedicated to their work, and immersed in their work activities. This article presents an overview of the concept of work engagement. I discuss the antecedents and consequences of engagement. The review shows that job and personal resources are the main predictors of engagement. These resources gain their salience in the context of high job demands. Engaged workers are more open to new information, more productive, and more willing to go the extra mile. Moreover, engaged workers proactively change their work environment in order to stay engaged. The findings of previous studies are integrated in an overall model that can be used to develop work engagement and advance job performance in today’s workplace.",
"title": ""
},
{
"docid": "bc7d0895bcbb47c8bf79d0ba7078b209",
"text": "The processing of a visual stimulus can be subdivided into a number of stages. Upon stimulus presentation there is an early phase of feedforward processing where the visual information is propagated from lower to higher visual areas for the extraction of basic and complex stimulus features. This is followed by a later phase where horizontal connections within areas and feedback connections from higher areas back to lower areas come into play. In this later phase, image elements that are behaviorally relevant are grouped by Gestalt grouping rules and are labeled in the cortex with enhanced neuronal activity (object-based attention in psychology). Recent neurophysiological studies revealed that reward-based learning influences these recurrent grouping processes, but it is not well understood how rewards train recurrent circuits for perceptual organization. This paper examines the mechanisms for reward-based learning of new grouping rules. We derive a learning rule that can explain how rewards influence the information flow through feedforward, horizontal and feedback connections. We illustrate the efficiency with two tasks that have been used to study the neuronal correlates of perceptual organization in early visual cortex. The first task is called contour-integration and demands the integration of collinear contour elements into an elongated curve. We show how reward-based learning causes an enhancement of the representation of the to-be-grouped elements at early levels of a recurrent neural network, just as is observed in the visual cortex of monkeys. The second task is curve-tracing where the aim is to determine the endpoint of an elongated curve composed of connected image elements. If trained with the new learning rule, neural networks learn to propagate enhanced activity over the curve, in accordance with neurophysiological data. We close the paper with a number of model predictions that can be tested in future neurophysiological and computational studies.",
"title": ""
},
{
"docid": "d2e56a45e0b901024776d36eaa5fa998",
"text": "In this paper, we present our results of automatic gesture recognition systems using different types of cameras in order to compare them in reference to their performances in segmentation. The acquired image segments provide the data for further analysis. The images of a single camera system are mostly used as input data in the research area of gesture recognition. In comparison to that, the analysis results of a stereo color camera and a thermal camera system are used to determine the advantages and disadvantages of these camera systems. On this basis, a real-time gesture recognition system is proposed to classify alphabets (A-Z) and numbers (0-9) with an average recognition rate of 98% using Hidden Markov Models (HMM).",
"title": ""
},
{
"docid": "271731e414285690f3de89ccd3a29ff4",
"text": "BACKGROUND\nRice bran is a nutritionally valuable by-product of paddy milling. In this study an experimental infrared (IR) stabilization system was developed to prevent rice bran rancidity. The free fatty acid content of raw and IR-stabilized rice bran samples was monitored every 15 days during 6 months of storage. In addition, energy consumption was determined.\n\n\nRESULTS\nThe free fatty acid content of rice bran stabilized at 600 W IR power for 5 min remained below 5% for 165 days. No significant change in γ-oryzanol content or fatty acid composition but a significant decrease in tocopherol content was observed in stabilized rice bran compared with raw bran. IR stabilization was found to be comparable to extrusion with regard to energy consumption.\n\n\nCONCLUSION\nIR stabilization was effective in preventing hydrolytic rancidity of rice bran. By optimizing the operational parameters of IR stabilization, this by-product has the potential for use in the food industry in various ways as a value-added commodity.",
"title": ""
},
{
"docid": "e8246712bb8c4e793697b9933ab8b4f6",
"text": "In this paper we utilize a dimensional emotion representation named Resonance-Arousal-Valence to express music emotion and inverse exponential function to represent emotion decay process. The relationship between acoustic features and their emotional impact reflection based on this representation has been well constructed. As music well expresses feelings, through the users' historical playlist in a session, we utilize the Conditional Random Fields to compute the probabilities of different emotion states, choosing the largest as the predicted user's emotion state. In order to recommend music based on the predicted user's emotion, we choose the optimized ranked music list that has the highest emotional similarities to the music invoking the predicted emotion state in the playlist for recommendation. We utilize our minimization iteration algorithm to assemble the optimized ranked recommended music list. The experiment results show that the proposed emotion-based music recommendation paradigm is effective to track the user's emotions and recommend music fitting his emotional state.",
"title": ""
},
{
"docid": "723eeeb477bb6cde7cb69ce2deeff707",
"text": "The charge stored in series-connected lithium batteries needs to be well equalized between the elements of the series. We present here an innovative lithium-battery cell-to-cell active equalizer capable of moving charge between series-connected cells using a super-capacitor as an energy tank. The system temporarily stores the charge drawn from a cell in the super-capacitor, then the charge is moved into another cell without wasting energy as it happens in passive equalization. The architecture of the system which employs a digitally-controlled switching converter is compared with the state of the art, then fully investigated, together with the methodology used in its design. The performance of the system is described by presenting and discussing the experimental results of laboratory tests. The most innovative and attractive aspect of the proposed system is its very high efficiency, which is over 90%.",
"title": ""
},
{
"docid": "6fe71d8d45fa940f1a621bfb5b4e14cd",
"text": "We present Attract-Repel, an algorithm for improving the semantic quality of word vectors by injecting constraints extracted from lexical resources. Attract-Repel facilitates the use of constraints from mono- and cross-lingual resources, yielding semantically specialized cross-lingual vector spaces. Our evaluation shows that the method can make use of existing cross-lingual lexicons to construct high-quality vector spaces for a plethora of different languages, facilitating semantic transfer from high- to lower-resource ones. The effectiveness of our approach is demonstrated with state-of-the-art results on semantic similarity datasets in six languages. We next show that Attract-Repel-specialized vectors boost performance in the downstream task of dialogue state tracking (DST) across multiple languages. Finally, we show that cross-lingual vector spaces produced by our algorithm facilitate the training of multilingual DST models, which brings further performance improvements.",
"title": ""
},
{
"docid": "ce6744b63b6ca028036e7b127c351468",
"text": "Leeches are found in fresh water as well as moist marshy tropical areas. Orifical Hirudiniasis is the presence of leech in natural human orifices. Leech have been reported in nose, oropharynx, vagina, rectum and bladder but leech per urethra is very rare. We report a case of leech in urethra causing hematuria and bleeding disorder in the form of epistaxis and impaired clotting profile after use of stream water for ablution. The case was diagnosed after a prolonged diagnostic dilemma. Asingle alive leech was recovered from the urethra after ten days with the help of forceps. The hematuria and epistaxis gradually improved over next 48 hours and the patient became asymptomatic. Natives of leech infested areas should be advised to avoid swimming in fresh water and desist from drinking and using stream water without inspection for leeches.",
"title": ""
},
{
"docid": "a991cf65cd79abf578a935e1a28a9abb",
"text": "Till now, neural abstractive summarization methods have achieved great success for single document summarization (SDS). However, due to the lack of large scale multi-document summaries, such methods can be hardly applied to multi-document summarization (MDS). In this paper, we investigate neural abstractive methods for MDS by adapting a state-of-the-art neural abstractive summarization model for SDS. We propose an approach to extend the neural abstractive model trained on large scale SDS data to the MDS task. Our approach only makes use of a small number of multi-document summaries for fine tuning. Experimental results on two benchmark DUC datasets demonstrate that our approach can outperform a variety of base-",
"title": ""
},
{
"docid": "ec9c15e543444e88cc5d636bf1f6e3b9",
"text": "Which ZSL method is more robust to GZSL? An Empirical Study and Analysis of Generalized Zero-Shot Learning for Object Recognition in the Wild Wei-Lun Chao*1, Soravit Changpinyo*1, Boqing Gong2, and Fei Sha1,3 1U. of Southern California, 2U. of Central Florida, 3U. of California, Los Angeles NSF IIS-1566511, 1065243, 1451412, 1513966, 1208500, CCF-1139148, USC Graduate Fellowship, a Google Research Award, an Alfred P. Sloan Research Fellowship and ARO# W911NF-12-1-0241 and W911NF-15-1-0484.",
"title": ""
},
{
"docid": "a2251a3cd69eacf72c078f21e9ee3a40",
"text": "This proposal investigates Selective Harmonic Elimination (SHE) to eliminate harmonics brought by Pulse Width Modulation (PWM) inverter. The selective harmonic elimination method for three phase voltage source inverter is generally based on ideas of opposite harmonic injection. In this proposed scheme, the lower order harmonics 3rd, 5th, 7th and 9th are eliminated. The dominant harmonics of same order generated in opposite phase by sine PWM inverter and by using this scheme the Total Harmonic Distortion (THD) is reduced. The analysis of Sinusoidal PWM technique (SPWM) and selective harmonic elimination is simulated using MATLAB/SIMULINK model.",
"title": ""
},
{
"docid": "61a9bc06d96eb213ed5142bfa47920b9",
"text": "This paper is concerned with the derivation of a progression of shadow-free image representations. First, we show that adopting certain assumptions about lights and cameras leads to a 1D, gray-scale image representation which is illuminant invariant at each image pixel. We show that as a consequence, images represented in this form are shadow-free. We then extend this 1D representation to an equivalent 2D, chromaticity representation. We show that in this 2D representation, it is possible to relight all the image pixels in the same way, effectively deriving a 2D image representation which is additionally shadow-free. Finally, we show how to recover a 3D, full color shadow-free image representation by first (with the help of the 2D representation) identifying shadow edges. We then remove shadow edges from the edge-map of the original image by edge in-painting and we propose a method to reintegrate this thresholded edge map, thus deriving the sought-after 3D shadow-free image.",
"title": ""
},
{
"docid": "a1d6ec19be444705fd6c339d501bce10",
"text": "The transmission properties of a guide consisting of a dielectric rod of rectangular cross-section surrounded by dielectrics of smaller refractive indices are determined. This guide is the basic component in a new technology called integrated optical circuitry. The directional coupler, a particularly useful device, made of two of those guides closely spaced is also analyzed. [The SCI indicates that this paper has been cited over 145 times since 1969.]",
"title": ""
}
] |
scidocsrr
|
4bd646da50658547d1ab74cfe5d08613
|
Metaphors We Think With: The Role of Metaphor in Reasoning
|
[
{
"docid": "45082917d218ec53559c328dcc7c02db",
"text": "How are people able to think about things they have never seen or touched? We demonstrate that abstract knowledge can be built analogically from more experience-based knowledge. People's understanding of the abstract domain of time, for example, is so intimately dependent on the more experience-based domain of space that when people make an air journey or wait in a lunch line, they also unwittingly (and dramatically) change their thinking about time. Further, our results suggest that it is not sensorimotor spatial experience per se that influences people's thinking about time, but rather people's representations of and thinking about their spatial experience.",
"title": ""
},
{
"docid": "5ebd92444b69b2dd8e728de2381f3663",
"text": "A mind is a computer.",
"title": ""
},
{
"docid": "e39cafd4de135ccb17f7cf74cbd38a97",
"text": "A central question in metaphor research is how metaphors establish mappings between concepts from different domains. The authors propose an evolutionary path based on structure-mapping theory. This hypothesis--the career of metaphor--postulates a shift in mode of mapping from comparison to categorization as metaphors are conventionalized. Moreover, as demonstrated by 3 experiments, this processing shift is reflected in the very language that people use to make figurative assertions. The career of metaphor hypothesis offers a unified theoretical framework that can resolve the debate between comparison and categorization models of metaphor. This account further suggests that whether metaphors are processed directly or indirectly, and whether they operate at the level of individual concepts or entire conceptual domains, will depend both on their degree of conventionality and on their linguistic form.",
"title": ""
},
{
"docid": "c0fc94aca86a6aded8bc14160398ddea",
"text": "THE most persistent problems of recall all concern the ways in which past experiences and past reactions are utilised when anything is remembered. From a general point of view it looks as if the simplest explanation available is to suppose that when any specific event occurs some trace, or some group of traces, is made and stored up in the organism or in the mind. Later, an immediate stimulus re-excites the trace, or group of traces, and, provided a further assumption is made to the effect that the trace somehow carries with it a temporal sign, the re-excitement appears to be equivalent to recall. There is, of course, no direct evidence for such traces, but the assumption at first sight seems to be a very simple one, and so it has commonly been made.",
"title": ""
}
] |
[
{
"docid": "242686291812095c5320c1c8cae6da27",
"text": "In the modern high-performance transceivers, mixers (both upand down-converters) are required to have large dynamic range in order to meet the system specifications. The lower end of the dynamic range is indicated by the noise floor which tells how small a signal may be processed while the high end is determined by the non-linearity which causes distortion, compression and saturation of the signal and thus limits the maximum signal amplitude input to the mixer for the undistorted output. Compared to noise, the linearity requirement is much higher in mixer design because it is generally the limiting factor to the transceiver’s linearity. Therefore, this paper will emphasize on the linearization techniques for analog multipliers and mixers, which have been a very active research area since 1960s.",
"title": ""
},
{
"docid": "9adaeac8cedd4f6394bc380cb0abba6e",
"text": "The goal of this study was to investigate how cognitive factors influence performance in a multi-talker, \"cocktail-party\" like environment in musicians and non-musicians. This was achieved by relating performance in a spatial hearing task to cognitive processing abilities assessed using measures of executive function (EF) and visual attention in musicians and non-musicians. For the spatial hearing task, a speech target was presented simultaneously with two intelligible speech maskers that were either colocated with the target (0° azimuth) or were symmetrically separated from the target in azimuth (at ±15°). EF assessment included measures of cognitive flexibility, inhibition control and auditory working memory. Selective attention was assessed in the visual domain using a multiple object tracking task (MOT). For the MOT task, the observers were required to track target dots (n = 1,2,3,4,5) in the presence of interfering distractor dots. Musicians performed significantly better than non-musicians in the spatial hearing task. For the EF measures, musicians showed better performance on measures of auditory working memory compared to non-musicians. Furthermore, across all individuals, a significant correlation was observed between performance on the spatial hearing task and measures of auditory working memory. This result suggests that individual differences in performance in a cocktail party-like environment may depend in part on cognitive factors such as auditory working memory. Performance in the MOT task did not differ between groups. However, across all individuals, a significant correlation was found between performance in the MOT and spatial hearing tasks. A stepwise multiple regression analysis revealed that musicianship and performance on the MOT task significantly predicted performance on the spatial hearing task. Overall, these findings confirm the relationship between musicianship and cognitive factors including domain-general selective attention and working memory in solving the \"cocktail party problem\".",
"title": ""
},
{
"docid": "f14daee1ddf6bbf4f3d41fe6ef5fcdb6",
"text": "A characteristic that will distinguish successful manufacturing enterprises of the next millennium is agility: the ability to respond quickly, proactively, and aggressively to unpredictable change. The use of extended virtual enterprise Supply Chains (SC) to achieve agility is becoming increasingly prevalent. A key problem in constructing effective SCs is the lack of methods and tools to support the integration of processes and systems into shared SC processes and systems. This paper describes the architecture and concept of operation of the Supply Chain Process Design Toolkit (SCPDT), an integrated software system that addresses the challenge of seamless and efficient integration. The SCPDT enables the analysis and design of Supply Chain (SC) processes. SCPDT facilitates key SC process engineering tasks including 1) AS-IS process base-lining and assessment, 2) collaborative TO-BE process requirements definition, 3) SC process integration and harmonization, 4) TO-BE process design trade-off analysis, and 5) TO-BE process planning and implementation.",
"title": ""
},
{
"docid": "3874d10936841f59647d73f750537d96",
"text": "The number of studies comparing nutritional quality of restrictive diets is limited. Data on vegan subjects are especially lacking. It was the aim of the present study to compare the quality and the contributing components of vegan, vegetarian, semi-vegetarian, pesco-vegetarian and omnivorous diets. Dietary intake was estimated using a cross-sectional online survey with a 52-items food frequency questionnaire (FFQ). Healthy Eating Index 2010 (HEI-2010) and the Mediterranean Diet Score (MDS) were calculated as indicators for diet quality. After analysis of the diet questionnaire and the FFQ, 1475 participants were classified as vegans (n = 104), vegetarians (n = 573), semi-vegetarians (n = 498), pesco-vegetarians (n = 145), and omnivores (n = 155). The most restricted diet, i.e., the vegan diet, had the lowest total energy intake, better fat intake profile, lowest protein and highest dietary fiber intake in contrast to the omnivorous diet. Calcium intake was lowest for the vegans and below national dietary recommendations. The vegan diet received the highest index values and the omnivorous the lowest for HEI-2010 and MDS. Typical aspects of a vegan diet (high fruit and vegetable intake, low sodium intake, and low intake of saturated fat) contributed substantially to the total score, independent of the indexing system used. The score for the more prudent diets (vegetarians, semi-vegetarians and pesco-vegetarians) differed as a function of the used indexing system but they were mostly better in terms of nutrient quality than the omnivores.",
"title": ""
},
{
"docid": "03a39c98401fc22f1a376b9df66988dc",
"text": "A highly efficient wireless power transfer (WPT) system is required in many applications to replace the conventional wired system. The high temperature superconducting (HTS) wires are examined in a WPT system to increase the power-transfer efficiency (PTE) as compared with the conventional copper/Litz conductor. The HTS conductors are naturally can produce higher amount of magnetic field with high induced voltage to the receiving coil. Moreover, the WPT systems are prone to misalignment, which can cause sudden variation in the induced voltage and lead to rapid damage of the resonant capacitors connected in the circuit. Hence, the protection or elimination of resonant capacitor is required to increase the longevity of WPT system, but both the adoptions will operate the system in nonresonance mode. The absence of resonance phenomena in the WPT system will drastically reduce the PTE and correspondingly the future commercialization. This paper proposes an open bifilar spiral coils based self-resonant WPT method without using resonant capacitors at both the sides. The mathematical modeling and circuit simulation of the proposed system is performed by designing the transmitter coil using HTS wire and the receiver with copper coil. The three-dimensional modeling and finite element simulation of the proposed system is performed to analyze the current density at different coupling distances between the coil. Furthermore, the experimental results show the PTE of 49.8% under critical coupling with the resonant frequency of 25 kHz.",
"title": ""
},
{
"docid": "18136fba311484e901282c31c9d206fd",
"text": "New demands, coming from the industry 4.0 concept of the near future production systems have to be fulfilled in the coming years. Seamless integration of current technologies with new ones is mandatory. The concept of Cyber-Physical Production Systems (CPPS) is the core of the new control and automation distributed systems. However, it is necessary to provide the global production system with integrated architectures that make it possible. This work analyses the requirements and proposes a model-based architecture and technologies to make the concept a reality.",
"title": ""
},
{
"docid": "7ebaee3df1c8ee4bf1c82102db70f295",
"text": "Small cells such as femtocells overlaying the macrocells can enhance the coverage and capacity of cellular wireless networks and increase the spectrum efficiency by reusing the frequency spectrum assigned to the macrocells in a universal frequency reuse fashion. However, management of both the cross-tier and co-tier interferences is one of the most critical issues for such a two-tier cellular network. Centralized solutions for interference management in a two-tier cellular network with orthogonal frequency-division multiple access (OFDMA), which yield optimal/near-optimal performance, are impractical due to the computational complexity. Distributed solutions, on the other hand, lack the superiority of centralized schemes. In this paper, we propose a semi-distributed (hierarchical) interference management scheme based on joint clustering and resource allocation for femtocells. The problem is formulated as a mixed integer non-linear program (MINLP). The solution is obtained by dividing the problem into two sub-problems, where the related tasks are shared between the femto gateway (FGW) and femtocells. The FGW is responsible for clustering, where correlation clustering is used as a method for femtocell grouping. In this context, a low-complexity approach for solving the clustering problem is used based on semi-definite programming (SDP). In addition, an algorithm is proposed to reduce the search range for the best cluster configuration. For a given cluster configuration, within each cluster, one femto access point (FAP) is elected as a cluster head (CH) that is responsible for resource allocation among the femtocells in that cluster. The CH performs sub-channel and power allocation in two steps iteratively, where a low-complexity heuristic is proposed for the sub-channel allocation phase. Numerical results show the performance gains due to clustering in comparison to other related schemes. Also, the proposed correlation clustering scheme offers performance, which is close to that of the optimal clustering, with a lower complexity.",
"title": ""
},
{
"docid": "88afb98c0406d7c711b112fbe2a6f25e",
"text": "This paper provides a new metric, knowledge management performance index (KMPI), for assessing the performance of a firm in its knowledge management (KM) at a point in time. Firms are assumed to have always been oriented toward accumulating and applying knowledge to create economic value and competitive advantage. We therefore suggest the need for a KMPI which we have defined as a logistic function having five components that can be used to determine the knowledge circulation process (KCP): knowledge creation, knowledge accumulation, knowledge sharing, knowledge utilization, and knowledge internalization. When KCP efficiency increases, KMPI will also expand, enabling firms to become knowledgeintensive. To prove KMPI’s contribution, a questionnaire survey was conducted on 101 firms listed in the KOSDAQ market in Korea. We associated KMPI with three financial measures: stock price, price earnings ratio (PER), and R&D expenditure. Statistical results show that the proposed KMPI can represent KCP efficiency, while the three financial performance measures are also useful. # 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "8ca0edf4c51b0156c279fcbcb1941d2b",
"text": "The good fossil record of trilobite exoskeletal anatomy and ontogeny, coupled with information on their nonbiomineralized tissues, permits analysis of how the trilobite body was organized and developed, and the various evolutionary modifications of such patterning within the group. In several respects trilobite development and form appears comparable with that which may have characterized the ancestor of most or all euarthropods, giving studies of trilobite body organization special relevance in the light of recent advances in the understanding of arthropod evolution and development. The Cambrian diversification of trilobites displayed modifications in the patterning of the trunk region comparable with those seen among the closest relatives of Trilobita. In contrast, the Ordovician diversification of trilobites, although contributing greatly to the overall diversity within the clade, did so within a narrower range of trunk conditions. Trilobite evolution is consistent with an increased premium on effective enrollment and protective strategies, and with an evolutionary trade-off between the flexibility to vary the number of trunk segments and the ability to regionalize portions of the trunk. 401 A nn u. R ev . E ar th P la ne t. Sc i. 20 07 .3 5: 40 143 4. D ow nl oa de d fr om a rj ou rn al s. an nu al re vi ew s. or g by U N IV E R SI T Y O F C A L IF O R N IA R IV E R SI D E L IB R A R Y o n 05 /0 2/ 07 . F or p er so na l u se o nl y. ANRV309-EA35-14 ARI 20 March 2007 15:54 Cephalon: the anteriormost or head division of the trilobite body composed of a set of conjoined segments whose identity is expressed axially Thorax: the central portion of the trilobite body containing freely articulating trunk segments Pygidium: the posterior tergite of the trilobite exoskeleton containing conjoined segments INTRODUCTION The rich record of the diversity and development of the trilobite exoskeleton (along with information on the geological occurrence, nonbiomineralized tissues, and associated trace fossils of trilobites) provides the best history of any Paleozoic arthropod group. The retention of features that may have characterized the most recent common ancestor of all living arthropods, which have been lost or obscured in most living forms, provides insights into the nature of the evolutionary radiation of the most diverse metazoan phylum alive today. Studies of phylogenetic stem-group taxa, of which Trilobita provide a prominent example, have special significance in the light of renewed interest in arthropod evolution prompted by comparative developmental genetics. Although we cannot hope to dissect the molecular controls operative within trilobites, the evolutionary developmental biology (evo-devo) approach permits a fresh perspective from which to examine the contributions that paleontology can make to evolutionary biology, which, in the context of the overall evolutionary history of Trilobita, is the subject of this review. TRILOBITES: BODY PLAN AND ONTOGENY Trilobites were a group of marine arthropods that appeared in the fossil record during the early Cambrian approximately 520 Ma and have not been reported from rocks younger than the close of the Permian, approximately 250 Ma. Roughly 15,000 species have been described to date, and although analysis of the occurrence of trilobite genera suggests that the known record is quite complete (Foote & Sepkoski 1999), many new species and genera continue to be established each year. 
The known diversity of trilobites results from their strongly biomineralized exoskeletons, made of two layers of low magnesium calcite, which was markedly more durable than the sclerites of most other arthropods. Because the exoskeleton was rich in morphological characters and was the only body structure preserved in the vast majority of specimens, skeletal form has figured prominently in the biological interpretation of trilobites.",
"title": ""
},
{
"docid": "221c59b8ea0460dac3128e81eebd6aca",
"text": "STUDY DESIGN\nA prospective self-assessment analysis and evaluation of nutritional and radiographic parameters in a consecutive series of healthy adult volunteers older than 60 years.\n\n\nOBJECTIVES\nTo ascertain the prevalence of adult scoliosis, assess radiographic parameters, and determine if there is a correlation with functional self-assessment in an aged volunteer population.\n\n\nSUMMARY OF BACKGROUND DATA\nThere exists little data studying the prevalence of scoliosis in a volunteer aged population, and correlation between deformity and self-assessment parameters.\n\n\nMETHODS\nThere were 75 subjects in the study. Inclusion criteria were: age > or =60 years, no known history of scoliosis, and no prior spine surgery. Each subject answered a RAND 36-Item Health Survey questionnaire, a full-length anteroposterior standing radiographic assessment of the spine was obtained, and nutritional parameters were analyzed from blood samples. For each subject, radiographic, laboratory, and clinical data were evaluated. The study population was divided into 3 groups based on frontal plane Cobb angulation of the spine. Comparison of the RAND 36-Item Health Surveys data among groups of the volunteer population and with United States population benchmark data (age 65-74 years) was undertaken using an unpaired t test. Any correlation between radiographic, laboratory, and self-assessment data were also investigated.\n\n\nRESULTS\nThe mean age of the patients in this study was 70.5 years (range 60-90). Mean Cobb angle was 17 degrees in the frontal plane. In the study group, 68% of subjects met the definition of scoliosis (Cobb angle >10 degrees). No significant correlation was noted among radiographic parameters and visual analog scale scores, albumin, lymphocytes, or transferrin levels in the study group as a whole. Prevalence of scoliosis was not significantly different between males and females (P > 0.03). The scoliosis prevalence rate of 68% found in this study reveals a rate significantly higher than reported in other studies. These findings most likely reflect the targeted selection of an elderly group. Although many patients with adult scoliosis have pain and dysfunction, there appears to be a large group (such as the volunteers in this study) that has no marked physical or social impairment.\n\n\nCONCLUSIONS\nPrevious reports note a prevalence of adult scoliosis up to 32%. In this study, results indicate a scoliosis rate of 68% in a healthy adult population, with an average age of 70.5 years. This study found no significant correlations between adult scoliosis and visual analog scale scores or nutritional status in healthy, elderly volunteers.",
"title": ""
},
{
"docid": "9d2a73c8eac64ed2e1af58a5883229c3",
"text": "Tetyana Sydorenko Michigan State University This study examines the effect of input modality (video, audio, and captions, i.e., onscreen text in the same language as audio) on (a) the learning of written and aural word forms, (b) overall vocabulary gains, (c) attention to input, and (d) vocabulary learning strategies of beginning L2 learners. Twenty-six second-semester learners of Russian participated in this study. Group one (N = 8) saw video with audio and captions (VAC); group two (N = 9) saw video with audio (VA); group three (N = 9) saw video with captions (VC). All participants completed written and aural vocabulary tests and a final questionnaire.",
"title": ""
},
{
"docid": "428ecd77262fc57c5d0d19924a10f02a",
"text": "In an identity based encryption scheme, each user is identified by a unique identity string. An attribute based encryption scheme (ABE), in contrast, is a scheme in which each user is identified by a set of attributes, and some function of those attributes is used to determine decryption ability for each ciphertext. Sahai and Waters introduced a single authority attribute encryption scheme and left open the question of whether a scheme could be constructed in which multiple authorities were allowed to distribute attributes [SW05]. We answer this question in",
"title": ""
},
{
"docid": "d1756aa5f0885157bdad130d96350cd3",
"text": "In this paper, we describe the winning approach for the RecSys Challenge 2015. Our key points are (1) two-stage classification, (2) massive usage of categorical features, (3) strong classifiers built by gradient boosting and (4) threshold optimization based directly on the competition score. We describe our approach and discuss how it can be used to build scalable personalization systems.",
"title": ""
},
{
"docid": "59f022a6e943f46e7b87213f651065d8",
"text": "This paper presents a procedure to design a robust switching strategy for the basic Buck-Boost DC-DC converter utilizing switched systems' theory. The converter dynamic is described in the framework of linear switched systems and then sliding-mode controller is developed to ensure the asymptotic stability of the desired equilibrium point for the switched system with constant external input. The inherent robustness of the sliding-mode switching rule leads to efficient regulation of the output voltage under load variations. Simulation results are presented to demonstrate the outperformance of the proposed method compared to a rival scheme in the literature.",
"title": ""
},
{
"docid": "d49fc093d43fa3cdf40ecfa3f670e165",
"text": "As a result of the increase in robots in various fields, the mechanical stability of specific robots has become an important subject of research. This study is concerned with the development of a two-wheeled inverted pendulum robot that can be applied to an intelligent, mobile home robot. This kind of robotic mechanism has an innately clumsy motion for stabilizing the robot’s body posture. To analyze and execute this robotic mechanism, we investigated the exact dynamics of the mechanism with the aid of 3-DOF modeling. By using the governing equations of motion, we analyzed important issues in the dynamics of a situation with an inclined surface and also the effect of the turning motion on the stability of the robot. For the experiments, the mechanical robot was constructed with various sensors. Its application to a two-dimensional floor environment was confirmed by experiments on factors such as balancing, rectilinear motion, and spinning motion.",
"title": ""
},
{
"docid": "a9fc5418c0b5789b02dd6638a1b61b5d",
"text": "As the homeostatis characteristics of nerve systems show, artificial neural networks are considered to be robust to variation of circuit components and interconnection faults. However, the tolerance of neural networks depends on many factors, such as the fault model, the network size, and the training method. In this study, we analyze the fault tolerance of fixed-point feed-forward deep neural networks for the implementation in CMOS digital VLSI. The circuit errors caused by the interconnection as well as the processing units are considered. In addition to the conventional and dropout training methods, we develop a new technique that randomly disconnects weights during the training to increase the error resiliency. Feed-forward deep neural networks for phoneme recognition are employed for the experiments.",
"title": ""
},
{
"docid": "1bdf1bfe81bf6f947df2254ae0d34227",
"text": "We investigate the problem of incorporating higher-level symbolic score-like information into Automatic Music Transcription (AMT) systems to improve their performance. We use recurrent neural networks (RNNs) and their variants as music language models (MLMs) and present a generative architecture for combining these models with predictions from a frame level acoustic classifier. We also compare different neural network architectures for acoustic modeling. The proposed model computes a distribution over possible output sequences given the acoustic input signal and we present an algorithm for performing a global search for good candidate transcriptions. The performance of the proposed model is evaluated on piano music from the MAPS dataset and we observe that the proposed model consistently outperforms existing transcription methods.",
"title": ""
},
{
"docid": "497e2ed6d39ad6c09210b17ce137c45a",
"text": "PURPOSE\nThe purpose of this study is to develop a model of Hospital Information System (HIS) user acceptance focusing on human, technological, and organizational characteristics for supporting government eHealth programs. This model was then tested to see which hospital type in Indonesia would benefit from the model to resolve problems related to HIS user acceptance.\n\n\nMETHOD\nThis study used qualitative and quantitative approaches with case studies at four privately owned hospitals and three government-owned hospitals, which are general hospitals in Indonesia. The respondents involved in this study are low-level and mid-level hospital management officers, doctors, nurses, and administrative staff who work at medical record, inpatient, outpatient, emergency, pharmacy, and information technology units. Data was processed using Structural Equation Modeling (SEM) and AMOS 21.0.\n\n\nRESULTS\nThe study concludes that non-technological factors, such as human characteristics (i.e. compatibility, information security expectancy, and self-efficacy), and organizational characteristics (i.e. management support, facilitating conditions, and user involvement) which have level of significance of p<0.05, significantly influenced users' opinions of both the ease of use and the benefits of the HIS. This study found that different factors may affect the acceptance of each user in each type of hospital regarding the use of HIS. Finally, this model is best suited for government-owned hospitals.\n\n\nCONCLUSIONS\nBased on the results of this study, hospital management and IT developers should have more understanding on the non-technological factors to better plan for HIS implementation. Support from management is critical to the sustainability of HIS implementation to ensure HIS is easy to use and provides benefits to the users as well as hospitals. Finally, this study could assist hospital management and IT developers, as well as researchers, to understand the obstacles faced by hospitals in implementing HIS.",
"title": ""
},
{
"docid": "2923e6f0760006b6a049a5afa297ca56",
"text": "Six years ago in this journal we discussed the work of Arthur T. Murray, who endeavored to explore artificial intelligence using the Forth programming language [1]. His creation, which he called MIND.FORTH, was interesting in its ability to understand English sentences in the form: subject-verb-object. It also had the capacity to learn new things and to form mental associations between recent experiences and older memories. In the intervening years, Mr. Murray has continued to develop his MIND.FORTH: he has translated it into Visual BASIC, PERL and Javascript, he has written a book [2] on the subject, and he maintains a wiki web site where anyone may suggest changes or extensions to his design [3]. MIND.FORTH is necessarily complex and opaque by virtue of its functionality; therefore it may be challenging for a newcomer to grasp. However, the more dedicated student will find much of value in this code. Murray himself has become quite a controversial figure.",
"title": ""
},
{
"docid": "369ed2ef018f9b6a031b58618f262dce",
"text": "Natural language processing has increasingly moved from modeling documents and words toward studying the people behind the language. This move to working with data at the user or community level has presented the field with different characteristics of linguistic data. In this paper, we empirically characterize various lexical distributions at different levels of analysis, showing that, while most features are decidedly sparse and non-normal at the message-level (as with traditional NLP), they follow the central limit theorem to become much more Log-normal or even Normal at the userand county-levels. Finally, we demonstrate that modeling lexical features for the correct level of analysis leads to marked improvements in common social scientific prediction tasks.",
"title": ""
}
] |
scidocsrr
|
9e5318fd5b7335338fa4f466150f0bf1
|
Image-to-image translation for cross-domain disentanglement
|
[
{
"docid": "25b417a20e9ff8798d1ec74c8dec95ea",
"text": "Many latent factors of variation interact to generate sensory data; for example, pose, morphology and expression in face images. In this work, we propose to learn manifold coordinates for the relevant factors of variation and to model their joint interaction. Many existing feature learning algorithms focus on a single task and extract features that are sensitive to the task-relevant factors and invariant to all others. However, models that just extract a single set of invariant features do not exploit the relationships among the latent factors. To address this, we propose a higher-order Boltzmann machine that incorporates multiplicative interactions among groups of hidden units that each learn to encode a distinct factor of variation. Furthermore, we propose correspondencebased training strategies that allow effective disentangling. Our model achieves state-of-the-art emotion recognition and face verification performance on the Toronto Face Database. We also demonstrate disentangled features learned on the CMU Multi-PIE dataset.",
"title": ""
},
{
"docid": "6f22283e5142035d6f6f9d5e06ab1cd2",
"text": "We present a novel technique to automatically colorize grayscale images that combines both global priors and local image features. Based on Convolutional Neural Networks, our deep network features a fusion layer that allows us to elegantly merge local information dependent on small image patches with global priors computed using the entire image. The entire framework, including the global and local priors as well as the colorization model, is trained in an end-to-end fashion. Furthermore, our architecture can process images of any resolution, unlike most existing approaches based on CNN. We leverage an existing large-scale scene classification database to train our model, exploiting the class labels of the dataset to more efficiently and discriminatively learn the global priors. We validate our approach with a user study and compare against the state of the art, where we show significant improvements. Furthermore, we demonstrate our method extensively on many different types of images, including black-and-white photography from over a hundred years ago, and show realistic colorizations.",
"title": ""
}
] |
[
{
"docid": "27e0059fb9be7ada93fd2d1e01149582",
"text": "OBJECTIVE\nTo assess the psychosocial impact of psoriatic arthritis (PsA), describe how health-related quality of life (QoL) is affected in patients with PsA, discuss measures used to evaluate the psychosocial impact of PsA, and review studies examining the effect of therapy on QoL.\n\n\nMETHODS\nA targeted review on the impact of PsA on QoL and the role of tailored psychosocial management in reducing the psychosocial burden of the disease was performed. PubMed literature searches were conducted using the terms PsA, psychosocial burden, QoL, and mood/behavioral changes. Articles were deemed relevant if they presented information regarding the psychosocial impact of PsA, methods used to evaluate these impacts, or ways to manage/improve management of PsA and its resulting comorbidities. The findings of this literature search are descriptively reviewed and the authors׳ expert opinion on their interpretation is provided.\n\n\nRESULTS\nThe psychosocial burden of PsA negatively affects QoL. Patients suffer from sleep disorders, fatigue, low-level stress, depression and mood/behavioral changes, poor body image, and reduced work productivity. Additionally, each patient responds to pain differently, depending on a variety of psychological factors including personality structure, cognition, and attention to pain. Strategies for evaluating the burdens associated with PsA and the results of properly managing patients with PsA are described.\n\n\nCONCLUSIONS\nPsA is associated with a considerable psychosocial burden and new assessment tools, specific to PsA, have been developed to help quantify this burden in patients. Future management algorithms of PsA should incorporate appropriate assessment and management of psychological and physical concerns of patients. Furthermore, patients with PsA should be managed by a multidisciplinary team that works in coordination with the patient and their family or caregivers.",
"title": ""
},
{
"docid": "1b5bb38b0a451238b2fc98a39d6766b0",
"text": "OBJECTIVES\nWe quantified concomitant medication polypharmacy, pharmacokinetic and pharmacodynamic interactions, adverse effects and adherence in Australian adults on effective antiretroviral therapy.\n\n\nDESIGN\nCross-sectional.\n\n\nMETHODS\nPatients recruited into a nationwide cohort and assessed for prevalence and type of concomitant medication (including polypharmacy, defined as ≥5 concomitant medications), pharmacokinetic or pharmacodynamic interactions, potential concomitant medication adverse effects and concomitant medication adherence. Factors associated with concomitant medication polypharmacy and with imperfect adherence were identified using multivariable logistic regression.\n\n\nRESULTS\nOf 522 participants, 392 (75%) took a concomitant medication (mostly cardiovascular, nonprescription or antidepressant). Overall, 280 participants (54%) had polypharmacy of concomitant medications and/or a drug interaction or contraindication. Polypharmacy was present in 122 (23%) and independently associated with clinical trial participation, renal impairment, major comorbidity, hospital/general practice-based HIV care (versus sexual health clinic) and benzodiazepine use. Seventeen participants (3%) took at least one concomitant medication contraindicated with their antiretroviral therapy, and 237 (45%) had at least one pharmacokinetic/pharmacodynamic interaction. Concomitant medication use was significantly associated with sleep disturbance and myalgia, and polypharmacy of concomitant medications with diarrhoea, fatigue, myalgia and peripheral neuropathy. Sixty participants (12%) reported imperfect concomitant medication adherence, independently associated with requiring financial support, foregoing necessities for financial reasons, good/very good self-reported general health and at least 1 bed day for illness in the previous 12 months.\n\n\nCONCLUSION\nIn a resource-rich setting with universal healthcare access, the majority of this sample took a concomitant medication. Over half had at least one of concomitant medication polypharmacy, pharmacokinetic or pharmacodynamic interaction. Concomitant medication use was associated with several adverse clinical outcomes.",
"title": ""
},
{
"docid": "35822f51adaef207b205910a48dd497f",
"text": "BACKGROUND\nThe adoption of healthcare technology is arduous, and it requires planning and implementation time. Healthcare organizations are vulnerable to modern trends and threats because it has not kept up with threats.\n\n\nOBJECTIVE\nThe objective of this systematic review is to identify cybersecurity trends, including ransomware, and identify possible solutions by querying academic literature.\n\n\nMETHODS\nThe reviewers conducted three separate searches through the CINAHL and PubMed (MEDLINE) and the Nursing and Allied Health Source via ProQuest databases. Using key words with Boolean operators, database filters, and hand screening, we identified 31 articles that met the objective of the review.\n\n\nRESULTS\nThe analysis of 31 articles showed the healthcare industry lags behind in security. Like other industries, healthcare should clearly define cybersecurity duties, establish clear procedures for upgrading software and handling a data breach, use VLANs and deauthentication and cloud-based computing, and to train their users not to open suspicious code.\n\n\nCONCLUSIONS\nThe healthcare industry is a prime target for medical information theft as it lags behind other leading industries in securing vital data. It is imperative that time and funding is invested in maintaining and ensuring the protection of healthcare technology and the confidentially of patient information from unauthorized access.",
"title": ""
},
{
"docid": "7192e2ae32eb79aaefdf8e54cdbba715",
"text": "Recently, ridge gap waveguides are considered as guiding structures in high-frequency applications. One of the major problems facing this guiding structure is the limited ability of using all the possible bandwidths due to the limited bandwidth of the transition to the coaxial lines. Here, a review of the different excitation techniques associated with this guiding structure is presented. Next, some modifications are proposed to improve its response in order to cover the possible actual bandwidth. The major aim of this paper is to introduce a wideband coaxial to ridge gap waveguide transition based on five sections of matching networks. The introduced transition shows excellent return loss, which is better than 15 dB over the actual possible bandwidth for double transitions.",
"title": ""
},
{
"docid": "4894c7683db13e71764eb9bf570586aa",
"text": "In previous works, Juba and Sudan [6] and Goldreich, Juba and Sudan [4] considered the idea of “semantic communication”, wherein two players, a user and a server, attempt to communicate with each other without any prior common language (or communication) protocol. They showed that if communication was goal-oriented and the user could sense progress towards the goal (or verify when it has been achieved), then meaningful communication is possible, in that the user’s goal can be achieved whenever the server is helpful. A principal criticism of their result has been that it is inefficient: in order to determine the “right” protocol to communicate with the server, the user enumerates protocols and tries them out with the server until it finds one that allows it to achieve its goal. They also show settings in which such enumeration is essentially the best possible solution. In this work we introduce definitions which allow for efficient behavior in practice. Roughly, we measure the performance of users and servers against their own “beliefs” about natural protocols. We show that if user and server are efficient with respect to their own beliefs and their beliefs are (even just slightly) compatible with each other, then they can achieve their goals very efficiently. We show that this model allows sufficiently “broad-minded” servers to talk with “exponentially” many different users in polynomial time, while dismissing the “counterexamples” in the previous work as being “narrow-minded,” or based on “incompatible beliefs.” ∗Portions of this work are presented in modified form in the first author’s Ph.D. thesis. [5, Chapter 4]. Research supported in part by NSF Awards CCF-0915155 and CCF-0939370.",
"title": ""
},
{
"docid": "80058ed2de002f05c9f4c1451c53e69c",
"text": "Purchasing decisions in many product categories are heavily influenced by the shopper's aesthetic preferences. It's insufficient to simply match a shopper with popular items from the category in question; a successful shopping experience also identifies products that match those aesthetics. The challenge of capturing shoppers' styles becomes more difficult as the size and diversity of the marketplace increases. At Etsy, an online marketplace for handmade and vintage goods with over 30 million diverse listings, the problem of capturing taste is particularly important -- users come to the site specifically to find items that match their eclectic styles.\n In this paper, we describe our methods and experiments for deploying two new style-based recommender systems on the Etsy site. We use Latent Dirichlet Allocation (LDA) to discover trending categories and styles on Etsy, which are then used to describe a user's \"interest\" profile. We also explore hashing methods to perform fast nearest neighbor search on a map-reduce framework, in order to efficiently obtain recommendations. These techniques have been implemented successfully at very large scale, substantially improving many key business metrics.",
"title": ""
},
{
"docid": "e27d949155cef2885a4ab93f4fba18b3",
"text": "Because of its richness and availability, micro-blogging has become an ideal platform for conducting psychological research. In this paper, we proposed to predict active users' personality traits through micro-blogging behaviors. 547 Chinese active users of micro-blogging participated in this study. Their personality traits were measured by the Big Five Inventory, and digital records of micro-blogging behaviors were collected via web crawlers. After extracting 839 micro-blogging behavioral features, we first trained classification models utilizing Support Vector Machine (SVM), differentiating participants with high and low scores on each dimension of the Big Five Inventory [corrected]. The classification accuracy ranged from 84% to 92%. We also built regression models utilizing PaceRegression methods, predicting participants' scores on each dimension of the Big Five Inventory. The Pearson correlation coefficients between predicted scores and actual scores ranged from 0.48 to 0.54. Results indicated that active users' personality traits could be predicted by micro-blogging behaviors.",
"title": ""
},
{
"docid": "4d147b58340571f4254f7c2190b383b9",
"text": "We propose an efficient and unified framework, namely ThiNet, to simultaneously accelerate and compress CNN models in both training and inference stages. We focus on the filter level pruning, i.e., the whole filter would be discarded if it is less important. Our method does not change the original network structure, thus it can be perfectly supported by any off-the-shelf deep learning libraries. We formally establish filter pruning as an optimization problem, and reveal that we need to prune filters based on statistics information computed from its next layer, not the current layer, which differentiates ThiNet from existing methods. Experimental results demonstrate the effectiveness of this strategy, which has advanced the state-of-the-art. We also show the performance of ThiNet on ILSVRC-12 benchmark. ThiNet achieves 3.31 x FLOPs reduction and 16.63× compression on VGG-16, with only 0.52% top-5 accuracy drop. Similar experiments with ResNet-50 reveal that even for a compact network, ThiNet can also reduce more than half of the parameters and FLOPs, at the cost of roughly 1% top-5 accuracy drop. Moreover, the original VGG-16 model can be further pruned into a very small model with only 5.05MB model size, preserving AlexNet level accuracy but showing much stronger generalization ability.",
"title": ""
},
{
"docid": "b86f9981230708c2e84dc643d9ad16ad",
"text": "The article provides an analysis and reports experimental validation of the various performance metrics of the LoRa low-power wide-area network technology. The LoRa modulation is based on chirp spread spectrum, which enables use of low-quality oscillators in the end device, and to make the synchronization faster and more reliable. Moreover, LoRa technology provides over 150 dB link budget, providing good coverage. Therefore, LoRa seems to be quite a promising option for implementing communication in many diverse Internet of Things applications. In this article, we first briefly overview the specifics of the LoRa technology and analyze the scalability of the LoRa wide-area network. Then, we introduce setups of the performance measurements. The results show that using the transmit power of 14 dBm and the highest spreading factor of 12, more than 60% of the packets are received from the distance of 30 km on water. With the same configuration, we measured the performance of LoRa communication in mobile scenarios. The presented results reveal that at around 40 km/h, the communication performance gets worse, because duration of the LoRa-modulated symbol exceeds coherence time. However, it is expected that communication link is more reliable when lower spreading factors are used.",
"title": ""
},
{
"docid": "2a894e7b94f5f0e553d9e4101ff5b60b",
"text": "It is an open question whether the motor system is involved during understanding of concrete nouns, as it is for concrete verbs. To clarify this issue, we carried out a behavioral experiment using a go-no go paradigm with an early and delayed go-signal delivery. Italian nouns referring to concrete objects (hand-related or foot-related) and abstract entities served as stimuli. Right-handed participants read the stimuli and responded when the presented word was concrete using the left or right hand. At the early go-signal, slower right-hand responses were found for hand-related nouns compared to foot-related nouns. The opposite pattern was found for the left hand. These findings demonstrate an early lateralized modulation of the motor system during noun processing, most likely crucial for noun comprehension.",
"title": ""
},
{
"docid": "9c74807f3c1a5b0928ade3f9e3c1229d",
"text": "Current perception systems of intelligent vehicles not only make use of visual sensors, but also take advantage of depth sensors. Extrinsic calibration of these heterogeneous sensors is required for fusing information obtained separately by vision sensors and light detection and ranging (LIDARs). In this paper, an optimal extrinsic calibration algorithm between a binocular stereo vision system and a 2-D LIDAR is proposed. Most extrinsic calibration methods between cameras and a LIDAR proceed by calibrating separately each camera with the LIDAR. We show that by placing a common planar chessboard with different poses in front of the multisensor system, the extrinsic calibration problem is solved by a 3-D reconstruction of the chessboard and geometric constraints between the views from the stereovision system and the LIDAR. Furthermore, our method takes sensor noise into account that it provides optimal results under Mahalanobis distance constraints. To evaluate the performance of the algorithm, experiments based on both computer simulation and real datasets are presented and analyzed. The proposed approach is also compared with a popular camera/LIDAR calibration method to show the benefits of our method.",
"title": ""
},
{
"docid": "e6cba9e178f568c402be7b25c4f0777f",
"text": "This paper is a tutorial introduction to the Viterbi Algorithm, this is reinforced by an example use of the Viterbi Algorithm in the area of error correction in communications channels. Some extensions to the basic algorithm are also discussed briefly. Some of the many application areas where the Viterbi Algorithm has been used are considered, including it's use in communications, target tracking and pattern recognition problems. A proposal for further research into the use of the Viterbi Algorithm in Signature Verification is then presented, and is the area of present research at the moment.",
"title": ""
},
{
"docid": "9975b9d094249ddcefe8a78450a6920d",
"text": "The acquisition of soccer skills is fundamental to our enjoyment of the game and is essential to the attainment of expertise. Players spend most of their time in practice with the intention of improving technical skills. However, there is a lack of scientific research relating to the effective acquisition of soccer skills, especially when compared with the extensive research base on physiological aspects of performance. Current coaching practice is therefore based on tradition, intuition and emulation rather than empirical evidence. The aim of this review is to question some of the popular beliefs that guide current practice and instruction in soccer. Empirical evidence is presented to dispel many of these beliefs as myths, thereby challenging coaches to self-reflect and critically evaluate contemporary doctrine. The review should inform sports scientists and practitioners as to the important role that those interested in skill acquisition can play in enhancing performance at all levels of the game.",
"title": ""
},
{
"docid": "87ed7ebdf8528df1491936000649761b",
"text": "Internet of Vehicles (IoV) is an important constituent of next generation smart cities that enables city wide connectivity of vehicles for traffic management applications. A secure and reliable communications is an important ingredient of safety applications in IoV. While the use of a more robust security algorithm makes communications for safety applications secure, it could reduce application QoS due to increased packet overhead and security processing delays. Particularly, in high density scenarios where vehicles receive large number of safety packets from neighborhood, timely signature verification of these packets could not be guaranteed. As a result, critical safety packets remain unverified resulting in cryptographic loss. In this paper, we propose two security mechanisms that aim to reduce cryptographic loss rate. The first mechanism is random transmitter security level section whereas the second one is adaptive scheme that iteratively selects the best possible security level at the transmitter depending on the current cryptographic loss rate. Simulation results show the effectiveness of the proposed mechanisms in comparison with the static security technique recommended by the ETSI standard.",
"title": ""
},
{
"docid": "68c7509ec0261b1ddccef7e3ad855629",
"text": "This research comprehensively illustrates the design, implementation and evaluation of a novel marker less environment tracking technology for an augmented reality based indoor navigation application, adapted to efficiently operate on a proprietary head-mounted display. Although the display device used, Google Glass, had certain pitfalls such as short battery life, slow processing speed, and lower quality visual display but the tracking technology was able to complement these limitations by rendering a very efficient, precise, and intuitive navigation experience. The performance assessments, conducted on the basis of efficiency and accuracy, substantiated the utility of the device for everyday navigation scenarios, whereas a later conducted subjective evaluation of handheld and wearable devices also corroborated the wearable as the preferred device for indoor navigation.",
"title": ""
},
{
"docid": "1f5d8df6e85e1f97411b701c0a1be0c7",
"text": "In 2013, 392 research papers and notes were published in the CHI conference (The ACM CHI Conference on Human Factors in Computing Systems) and even more papers in the domain of Human-Computer Interaction (HCI) are constantly published in various conferences and journals. It is quite arduous, if not impossible, to read all of these papers. One approach to deal with this information deluge is to focus on skimming through lots of papers in a short period of time, so that one can more wisely choose what to read before investing time in them. In order to teach such a skimming technique, I have taught a technique, called \"Quick and Dirty Review (QnDReview),\" in a graduate-level HCI course. The method has been employed in the course for five semesters, and students' responses were collected and analyzed. Results showed that students spent, on average, 4.3 minutes per paper and believed that they got the gist of each paper. However, the largest benefit I noticed is that students get the overall pictures of the fields while exposing themselves to various new ideas through this approach.",
"title": ""
},
{
"docid": "2bb4366b813728af555be714da0ee241",
"text": "A case of acute mathamphetamine (MA) poisoning death was occasionally found in autopsy by leaking into alimentary tract from package in drug traffic. A Korean man (39-year-old) was found dead in his apartment in Shenyang and 158 columned-shaped packages (390 g) of MA were found in his alimentary tract by autopsy, in which four packages were found in the esophagus, 118 in the stomach and 36 in the lower part of small intestine. The packages were wrapped with tinfoil and plastic film, from which one package in the stomach was empty and ruptured. Extreme pulmonary edema, congestion and hemorrhage as well as moderate edema, congestion and petechial hemorrhage in the other viscera were observed at autopsy and microscopically. Simultaneously AMP (amphatamine) in urine was tested positive by Trige DOA kit. Quantitative analysis was performed by gas chromatography/mass spectrometry. Extremely high concentrations of MA were found in the cardiac blood (24.8 microg/mL), the urine (191 microg/mL), the liver (116 microg/mL) and the gastric contents (1045 microg/mL), and no alcohol and other conventional drugs or poisons were detected in the same samples. The poisoning dosage is 5 microg/mL in the plasma and lethal dosage is 10-40 microg/mL in the plasma according the report. This high concentrations of MA in blood indicated that the cause of death was result from acute MA poisoning due to MA leaking into stomach. Much attention must be paid in the body packer of drugs in illegal drug traffic.",
"title": ""
},
{
"docid": "fdd59ff419b9613a1370babe64ef1c98",
"text": "The disentangling problem is to discover multiple complex factors of variations hidden in data. One recent approach is to take a dataset with grouping structure and separately estimate a factor common within a group (content) and a factor specific to each group member (transformation). Notably, this approach can learn to represent a continuous space of contents, which allows for generalization to data with unseen contents. In this study, we aim at cultivating this approach within probabilistic deep generative models. Motivated by technical complication in existing groupbased methods, we propose a simpler probabilistic method, called group-contrastive variational autoencoders. Despite its simplicity, our approach achieves reasonable disentanglement with generalizability for three grouped datasets of 3D object images. In comparison with a previous model, although conventional qualitative evaluation shows little difference, our qualitative evaluation using few-shot classification exhibits superior performances for some datasets. We analyze the content representations from different methods and discuss their transformation-dependency and potential performance impacts.",
"title": ""
},
{
"docid": "7c4104651e484e4cbff5735d62f114ef",
"text": "A pair of salient tradeoffs have driven the multiple-input multiple-output (MIMO) systems developments. More explicitly, the early era of MIMO developments was predominantly motivated by the multiplexing-diversity tradeoff between the Bell Laboratories layered space-time and space-time block coding. Later, the linear dispersion code concept was introduced to strike a flexible tradeoff. The more recent MIMO system designs were motivated by the performance-complexity tradeoff, where the spatial modulation and space-time shift keying concepts eliminate the problem of inter-antenna interference and perform well with the aid of low-complexity linear receivers without imposing a substantial performance loss on generic maximum-likelihood/max a posteriori -aided MIMO detection. Against the background of the MIMO design tradeoffs in both uncoded and coded MIMO systems, in this treatise, we offer a comprehensive survey of MIMO detectors ranging from hard decision to soft decision. The soft-decision MIMO detectors play a pivotal role in approaching to the full-performance potential promised by the MIMO capacity theorem. In the near-capacity system design, the soft-decision MIMO detection dominates the total complexity, because all the MIMO signal combinations have to be examined, when both the channel’s output signal and the a priori log-likelihood ratios gleaned from the channel decoder are taken into account. Against this background, we provide reduced-complexity design guidelines, which are conceived for a wide-range of soft-decision MIMO detectors.",
"title": ""
}
] |
scidocsrr
|
fb09a2ee30dab464632f395e45a61300
|
Anticipation and next action forecasting in video: an end-to-end model with memory
|
[
{
"docid": "6a72b09ce61635254acb0affb1d5496e",
"text": "We introduce a new large-scale video dataset designed to assess the performance of diverse visual event recognition algorithms with a focus on continuous visual event recognition (CVER) in outdoor areas with wide coverage. Previous datasets for action recognition are unrealistic for real-world surveillance because they consist of short clips showing one action by one individual [15, 8]. Datasets have been developed for movies [11] and sports [12], but, these actions and scene conditions do not apply effectively to surveillance videos. Our dataset consists of many outdoor scenes with actions occurring naturally by non-actors in continuously captured videos of the real world. The dataset includes large numbers of instances for 23 event types distributed throughout 29 hours of video. This data is accompanied by detailed annotations which include both moving object tracks and event examples, which will provide solid basis for large-scale evaluation. Additionally, we propose different types of evaluation modes for visual recognition tasks and evaluation metrics along with our preliminary experimental results. We believe that this dataset will stimulate diverse aspects of computer vision research and help us to advance the CVER tasks in the years ahead.",
"title": ""
}
] |
[
{
"docid": "9f6fb1de80f4500384097978c3712c68",
"text": "Reflection is a language feature which allows to analyze and transform the behavior of classes at the runtime. Reflection is used for software debugging and testing. Malware authors can leverage reflection to subvert the malware detection by static analyzers. Reflection initializes the class, invokes any method of class, or accesses any field of class. But, instead of utilizing usual programming language syntax, reflection passes classes/methods etc. as parameters to reflective APIs. As a consequence, these parameters can be constructed dynamically or can be encrypted by malware. These cannot be detected by state-of-the-art static tools. We propose EspyDroid, a system that combines dynamic analysis with code instrumentation for a more precise and automated detection of malware employing reflection. We evaluate EspyDroid on 28 benchmark apps employing major reflection categories. Our technique show improved results over FlowDroid via detection of additional undetected flows. These flows have potential to leak sensitive and private information of the users, through various sinks.",
"title": ""
},
{
"docid": "bb2e7ee3a447fd5bad57f2acd0f6a259",
"text": "A new cavity arrangement, namely, the generalized TM dual-mode cavity, is presented in this paper. In contrast with the previous contributions on TM dual-mode filters, the generalized TM dual-mode cavity allows the realization of both symmetric and asymmetric filtering functions, simultaneously exploiting the maximum number of finite frequency transmission zeros. The high design flexibility in terms of number and position of transmission zeros is obtained by exciting and exploiting a set of nonresonating modes. Five structure parameters are used to fully control its equivalent transversal topology. The relationship between structure parameters and filtering function realized is extensively discussed. The design of multiple cavity filters is presented along with the experimental results of a sixth-order filter having six asymmetrically located transmission zeros.",
"title": ""
},
{
"docid": "e8a69f68bc1647c69431ce88a0728777",
"text": "Contrary to popular perception, qualitative research can produce vast amounts of data. These may include verbatim notes or transcribed recordings of interviews or focus groups, jotted notes and more detailed “fieldnotes” of observational research, a diary or chronological account, and the researcher’s reflective notes made during the research. These data are not necessarily small scale: transcribing a typical single interview takes several hours and can generate 20-40 pages of single spaced text. Transcripts and notes are the raw data of the research. They provide a descriptive record of the research, but they cannot provide explanations. The researcher has to make sense of the data by sifting and interpreting them.",
"title": ""
},
{
"docid": "1f0fd314cdc4afe7b7716ca4bd681c16",
"text": "Automatic speech recognition can potentially benefit from the lip motion patterns, complementing acoustic speech to improve the overall recognition performance, particularly in noise. In this paper we propose an audio-visual fusion strategy that goes beyond simple feature concatenation and learns to automatically align the two modalities, leading to enhanced representations which increase the recognition accuracy in both clean and noisy conditions. We test our strategy on the TCD-TIMIT and LRS2 datasets, designed for large vocabulary continuous speech recognition, applying three types of noise at different power ratios. We also exploit state of the art Sequence-to-Sequence architectures, showing that our method can be easily integrated. Results show relative improvements from 7% up to 30% on TCD-TIMIT over the acoustic modality alone, depending on the acoustic noise level. We anticipate that the fusion strategy can easily generalise to many other multimodal tasks which involve correlated modalities.",
"title": ""
},
{
"docid": "ed28faf2ff89ac4da642593e1b7eef9c",
"text": "Massive MIMO, also known as very-large MIMO or large-scale antenna systems, is a new technique that potentially can offer large network capacities in multi-user scenarios. With a massive MIMO system, we consider the case where a base station equipped with a large number of antenna elements simultaneously serves multiple single-antenna users in the same time-frequency resource. So far, investigations are mostly based on theoretical channels with independent and identically distributed (i.i.d.) complex Gaussian coefficients, i.e., i.i.d. Rayleigh channels. Here, we investigate how massive MIMO performs in channels measured in real propagation environments. Channel measurements were performed at 2.6 GHz using a virtual uniform linear array (ULA), which has a physically large aperture, and a practical uniform cylindrical array (UCA), which is more compact in size, both having 128 antenna ports. Based on measurement data, we illustrate channel behavior of massive MIMO in three representative propagation conditions, and evaluate the corresponding performance. The investigation shows that the measured channels, for both array types, allow us to achieve performance close to that in i.i.d. Rayleigh channels. It is concluded that in real propagation environments we have characteristics that can allow for efficient use of massive MIMO, i.e., the theoretical advantages of this new technology can also be harvested in real channels.",
"title": ""
},
{
"docid": "3e5312f6d3c02d8df2903ea80c1bbae5",
"text": "Stroke has now become the leading cause of severe disability. Rehabilitation robots are gradually becoming popular for stroke rehabilitation to improve motor recovery, as robotic technology can assist, enhance, and further quantify rehabilitation training for stroke patients. However, most of the available rehabilitation robots are complex and involve multiple degrees-of-freedom (DOFs) causing it to be very expensive and huge in size. Rehabilitation robots should be useful but also need to be affordable and portable enabling more patients to afford and train independently at home. This paper presents a development of an affordable, portable and compact rehabilitation robot that implements different rehabilitation strategies for stroke patient to train forearm and wrist movement in an enhanced virtual reality environment with haptic feedback.",
"title": ""
},
{
"docid": "691f5f53582ceedaa51812307778b4db",
"text": "This paper looks at how a vulnerability management (VM) process could be designed & implemented within an organization. Articles and studies about VM usually focus mainly on the technology aspects of vulnerability scanning. The goal of this study is to call attention to something that is often overlooked: a basic VM process which could be easily adapted and implemented in any part of the organization. Implementing a vulnerability management process 2 Tom Palmaers",
"title": ""
},
{
"docid": "423d15bbe1c47bc6225030307fc8e379",
"text": "In a secret sharing scheme, a datumd is broken into shadows which are shared by a set of trustees. The family {P′⊆P:P′ can reconstructd} is called the access structure of the scheme. A (k, n)-threshold scheme is a secret sharing scheme having the access structure {P′⊆P: |P′|≥k}. In this paper, by observing a simple set-theoretic property of an access structure, we propose its mathematical definition. Then we verify the definition by proving that every family satisfying the definition is realized by assigning two more shadows of a threshold scheme to trustees.",
"title": ""
},
{
"docid": "84307c2dd94ebe89c46a535b31b4b51b",
"text": "Building systems that autonomously create temporal abstractions from data is a key challenge in scaling learning and planning in reinforcement learning. One popular approach for addressing this challenge is the options framework [41]. However, only recently in [1] was a policy gradient theorem derived for online learning of general purpose options in an end to end fashion. In this work, we extend previous work on this topic that only focuses on learning a two-level hierarchy including options and primitive actions to enable learning simultaneously at multiple resolutions in time. We achieve this by considering an arbitrarily deep hierarchy of options where high level temporally extended options are composed of lower level options with finer resolutions in time. We extend results from [1] and derive policy gradient theorems for a deep hierarchy of options. Our proposed hierarchical option-critic architecture is capable of learning internal policies, termination conditions, and hierarchical compositions over options without the need for any intrinsic rewards or subgoals. Our empirical results in both discrete and continuous environments demonstrate the efficiency of our framework.",
"title": ""
},
{
"docid": "9c780c4d37326ce2a5e2838481f48456",
"text": "A maximum power point tracker has been previously developed for the single high performance triple junction solar cell for hybrid and electric vehicle applications. The maximum power point tracking (MPPT) control method is based on the incremental conductance (IncCond) but removes the need for current sensors. This paper presents the hardware implementation of the maximum power point tracker. Significant efforts have been made to reduce the size to 18 mm times 21 mm (0.71 in times 0.83 in) and the cost to close to $5 US. This allows the MPPT hardware to be integrable with a single solar cell. Precision calorimetry measurements are employed to establish the converter power loss and confirm that an efficiency of 96.2% has been achieved for the 650-mW converter with 20-kHz switching frequency. Finally, both the static and the dynamic tests are conducted to evaluate the tracking performances of the MPPT hardware. The experimental results verify a tracking efficiency higher than 95% under three different insolation levels and a power loss less than 5% of the available cell power under instantaneous step changes between three insolation levels.",
"title": ""
},
{
"docid": "6abc9ea6e1d5183e589194db8520172c",
"text": "Smart decision making at the tactical level is important for Artificial Intelligence (AI) agents to perform well in the domain of real-time strategy (RTS) games. This paper presents a Bayesian model that can be used to predict the outcomes of isolated battles, as well as predict what units are needed to defeat a given army. Model parameters are learned from simulated battles, in order to minimize the dependency on player skill. We apply our model to the game of StarCraft, with the end-goal of using the predictor as a module for making high-level combat decisions, and show that the model is capable of making accurate predictions.",
"title": ""
},
{
"docid": "3255b89b7234595e7078a012d4e62fa7",
"text": "Virtual assistants such as IFTTT and Almond support complex tasks that combine open web APIs for devices and web services. In this work, we explore semantic parsing to understand natural language commands for these tasks and their compositions. We present the ThingTalk dataset, which consists of 22,362 commands, corresponding to 2,681 distinct programs in ThingTalk, a language for compound virtual assistant tasks. To improve compositionality of multiple APIs, we propose SEQ2TT, a Seq2Seq extension using a bottom-up encoding of grammar productions for programs and a maxmargin loss. On the ThingTalk dataset, SEQ2TT obtains 84% accuracy on trained programs and 67% on unseen combinations, an improvement of 12% over a basic sequence-to-sequence model with attention.",
"title": ""
},
{
"docid": "ac2e1a27ae05819d213efe7d51d1b988",
"text": "Gigantic rates of data production in the era of Big Data, Internet of Thing (IoT) / Internet of Everything (IoE), and Cyber Physical Systems (CSP) pose incessantly escalating demands for massive data processing, storage, and transmission while continuously interacting with the physical world under unpredictable, harsh, and energy-/power-constrained scenarios. Therefore, such systems need to support not only the high performance capabilities at tight power/energy envelop, but also need to be intelligent/cognitive, self-learning, and robust. As a result, a hype in the artificial intelligence research (e.g., deep learning and other machine learning techniques) has surfaced in numerous communities. This paper discusses the challenges and opportunities for building energy-efficient and adaptive architectures for machine learning. In particular, we focus on brain-inspired emerging computing paradigms, such as approximate computing; that can further reduce the energy requirements of the system. First, we guide through an approximate computing based methodology for development of energy-efficient accelerators, specifically for convolutional Deep Neural Networks (DNNs). We show that in-depth analysis of datapaths of a DNN allows better selection of Approximate Computing modules for energy-efficient accelerators. Further, we show that a multi-objective evolutionary algorithm can be used to develop an adaptive machine learning system in hardware. At the end, we summarize the challenges and the associated research roadmap that can aid in developing energy-efficient and adaptable hardware accelerators for machine learning.",
"title": ""
},
{
"docid": "6e198119c72a796bc0b56280503fec18",
"text": "Therapeutic activities of drugs are often influenced by co-administration of drugs that may cause inevitable drug-drug interactions (DDIs) and inadvertent side effects. Prediction and identification of DDIs are extremely vital for the patient safety and success of treatment modalities. A number of computational methods have been employed for the prediction of DDIs based on drugs structures and/or functions. Here, we report on a computational method for DDIs prediction based on functional similarity of drugs. The model was set based on key biological elements including carriers, transporters, enzymes and targets (CTET). The model was applied for 2189 approved drugs. For each drug, all the associated CTETs were collected, and the corresponding binary vectors were constructed to determine the DDIs. Various similarity measures were conducted to detect DDIs. Of the examined similarity methods, the inner product-based similarity measures (IPSMs) were found to provide improved prediction values. Altogether, 2,394,766 potential drug pairs interactions were studied. The model was able to predict over 250,000 unknown potential DDIs. Upon our findings, we propose the current method as a robust, yet simple and fast, universal in silico approach for identification of DDIs. We envision that this proposed method can be used as a practical technique for the detection of possible DDIs based on the functional similarities of drugs.",
"title": ""
},
{
"docid": "0cce6366df945f079dbb0b90d79b790e",
"text": "Fourier ptychographic microscopy (FPM) is a recently developed imaging modality that uses angularly varying illumination to extend a system's performance beyond the limit defined by its optical components. The FPM technique applies a novel phase-retrieval procedure to achieve resolution enhancement and complex image recovery. In this Letter, we compare FPM data to theoretical prediction and phase-shifting digital holography measurement to show that its acquired phase maps are quantitative and artifact-free. We additionally explore the relationship between the achievable spatial and optical thickness resolution offered by a reconstructed FPM phase image. We conclude by demonstrating enhanced visualization and the collection of otherwise unobservable sample information using FPM's quantitative phase.",
"title": ""
},
{
"docid": "6de3aca18d6c68f0250c8090ee042a4e",
"text": "JavaScript is widely used by web developers and the complexity of JavaScript programs has increased over the last year. Therefore, the need for program analysis for JavaScript is evident. Points-to analysis for JavaScript is to determine the set of objects to which a reference variable or an object property may point. Points-to analysis for JavaScript is a basis for further program analyses for JavaScript. It has a wide range of applications in code optimization and software engineering tools. However, points-to analysis for JavaScript has not yet been developed.\n JavaScript has dynamic features such as the runtime modification of objects through addition of properties or updating of methods. We propose a points-to analysis for JavaScript which precisely handles the dynamic features of JavaScript. Our work is the first attempt to analyze the points-to behavior of JavaScript. We evaluate the analysis on a set of JavaScript programs. We also apply the analysis to a code optimization technique to show that the analysis can be practically useful.",
"title": ""
},
{
"docid": "a3b3380940613a5fb704727e41e9907a",
"text": "Stackelberg Security Games (SSG) have been widely applied for solving real-world security problems - with a significant research emphasis on modeling attackers' behaviors to handle their bounded rationality. However, access to real-world data (used for learning an accurate behavioral model) is often limited, leading to uncertainty in attacker's behaviors while modeling. This paper therefore focuses on addressing behavioral uncertainty in SSG with the following main contributions: 1) we present a new uncertainty game model that integrates uncertainty intervals into a behavioral model to capture behavioral uncertainty, and 2) based on this game model, we propose a novel robust algorithm that approximately computes the defender's optimal strategy in the worst-case scenario of uncertainty. We show that our algorithm guarantees an additive bound on its solution quality.",
"title": ""
},
{
"docid": "5998ce035f4027c6713f20f8125ec483",
"text": "As the use of automotive radar increases, performance limitations associated with radar-to-radar interference will become more significant. In this paper, we employ tools from stochastic geometry to characterize the statistics of radar interference. Specifically, using two different models for the spatial distributions of vehicles, namely, a Poisson point process and a Bernoulli lattice process, we calculate for each case the interference statistics and obtain analytical expressions for the probability of successful range estimation. This paper shows that the regularity of the geometrical model appears to have limited effect on the interference statistics, and so it is possible to obtain tractable tight bounds for the worst case performance. A technique is proposed for designing the duty cycle for the random spectrum access, which optimizes the total performance. This analytical framework is verified using Monte Carlo simulations.",
"title": ""
},
{
"docid": "de5fd8ae40a2d078101d5bb1859f689b",
"text": "The number and variety of mobile multicast applications are growing at an unprecedented and unanticipated pace. Mobile network providers are in front of a dramatic increase in multicast traffic load, and this growth is forecasted to continue in fifth-generation (5G) networks. The major challenges come from the fact that multicast traffic not only targets groups of end-user devices; it also involves machine-type communications (MTC) for the Internet of Things (IoT). The increase in the MTC load, predicted for 5G, calls into question the effectiveness of the current multimedia broadcast multicast service (MBMS). The aim of this paper is to provide a survey of 5G challenges in the view of effective management of multicast applications, and to identify how to enhance the mobile network architecture to enable multicast applications in future 5G scenarios. By accounting for the presence of both human and machine-related traffic, strengths and weaknesses of the state-of-the-art achievements in multicasting are critically analyzed to provide guidelines for future research on 5G networks and more conscious design choices.",
"title": ""
},
{
"docid": "109838175d109002e022115d84cae0fa",
"text": "We present a probabilistic variant of the recently introduced maxout unit. The success of deep neural networks utilizing maxout can partly be attributed to favorable performance under dropout, when compared to rectified linear units. It however also depends on the fact that each maxout unit performs a pooling operation over a group of linear transformations and is thus partially invariant to changes in its input. Starting from this observation we ask the question: Can the desirable properties of maxout units be preserved while improving their invariance properties ? We argue that our probabilistic maxout (probout) units successfully achieve this balance. We quantitatively verify this claim and report classification performance matching or exceeding the current state of the art on three challenging image classification benchmarks (CIFAR-10, CIFAR-100 and SVHN).",
"title": ""
}
] |
scidocsrr
|
a3d1cd53f93a7a984ba2727e0b104340
|
Generative Model for Material Experiments Based on Prior Knowledge and Attention Mechanism
|
[
{
"docid": "ec90e30c0ae657f25600378721b82427",
"text": "We use deep max-pooling convolutional neural networks to detect mitosis in breast histology images. The networks are trained to classify each pixel in the images, using as context a patch centered on the pixel. Simple postprocessing is then applied to the network output. Our approach won the ICPR 2012 mitosis detection competition, outperforming other contestants by a significant margin.",
"title": ""
}
] |
[
{
"docid": "050679bfbeba42b30f19f1a824ec518a",
"text": "Principles of cognitive science hold the promise of helping children to study more effectively, yet they do not always make successful transitions from the laboratory to applied settings and have rarely been tested in such settings. For example, self-generation of answers to questions should help children to remember. But what if children cannot generate anything? And what if they make an error? Do these deviations from the laboratory norm of perfect generation hurt, and, if so, do they hurt enough that one should, in practice, spurn generation? Can feedback compensate, or are errors catastrophic? The studies reviewed here address three interlocking questions in an effort to better implement a computer-based study program to help children learn: (1) Does generation help? (2) Do errors hurt if they are corrected? And (3) what is the effect of feedback? The answers to these questions are: Yes, generation helps; no, surprisingly, errors that are corrected do not hurt; and, finally, feedback is beneficial in verbal learning. These answers may help put cognitive scientists in a better position to put their well-established principles in the service of children's learning.",
"title": ""
},
{
"docid": "b4721bd92f399a32799b474539a2f6e6",
"text": "Neural networks have been shown to be vulnerable to adversarial perturbations. Although adversarially crafted examples look visually similar to the unaltered original image, neural networks behave abnormally on these modified images. Image attribution methods highlight regions of input image important for the model’s prediction. We believe that the domains of adversarial generation and attribution are closely related and we support this claim by carrying out various experiments. By using the attribution of images, we train a second neural network classifier as a detector for adversarial examples. Our method of detection differs from other works in the domain of adversarial detection [10, 13, 4, 3] in the sense that we don’t use adversarial examples during our training procedure. Our detection methodology thus is independent of the adversarial attack generation methods. We have validated our detection technique on MNIST and CIFAR-10, achieving a high success rate for various adversarial attacks including FGSM, DeepFool, CW, PGD. We also show that training the detector model with attribution of adversarial examples generated even from a simple attack like FGSM further increases the detection accuracy over several different attacks.",
"title": ""
},
{
"docid": "38c5aff507ab3b626b48faadb07b3fea",
"text": "In real world applications, more and more data, for example, image/video data, are high dimensional and repre-sented by multiple views which describe different perspectives of the data. Efficiently clustering such data is a challenge. To address this problem, this paper proposes a novel multi-view clustering method called Discriminatively Embedded K-Means (DEKM), which embeds the synchronous learning of multiple discriminative subspaces into multi-view K-Means clustering to construct a unified framework, and adaptively control the intercoordinations between these subspaces simultaneously. In this framework, we firstly design a weighted multi-view Linear Discriminant Analysis (LDA), and then develop an unsupervised optimization scheme to alternatively learn the common clustering indicator, multiple discriminative subspaces and weights for heterogeneous features with convergence. Comprehensive evaluations on three benchmark datasets and comparisons with several state-of-the-art multi-view clustering algorithms demonstrate the superiority of the proposed work.",
"title": ""
},
{
"docid": "82866d253fda63fd7a1e70e9a0f4252e",
"text": "We introduce a new class of maximization-expectation (ME) algorithms where we maximize over hidden variables but marginalize over random parameters. This reverses the roles of expectation and maximization in the classical expectation-maximization algorithm. In the context of clustering, we argue that these hard assignments open the door to very fast implementations based on data structures such as kd-trees and conga lines. The marginalization over parameters ensures that we retain the ability to infer model structure (i.e., number of clusters). As an important example, we discuss a top-down Bayesian k-means algorithm and a bottom-up agglomerative clustering algorithm. In experiments, we compare these algorithms against a number of alternative algorithms that have recently appeared in the literature.",
"title": ""
},
{
"docid": "fff85feeef18f7fa99819711e47e2d39",
"text": "This paper presents a robotic vehicle that can be operated by the voice commands given from the user. Here, we use the speech recognition system for giving &processing voice commands. The speech recognition system use an I.C called HM2007, which can store and recognize up to 20 voice commands. The R.F transmitter and receiver are used here, for the wireless transmission purpose. The micro controller used is AT89S52, to give the instructions to the robot for its operation. This robotic car can be able to avoid vehicle collision , obstacle collision and it is very secure and more accurate. Physically disabled persons can use these robotic cars and they can be used in many industries and for many applications Keywords—SpeechRecognitionSystem,AT89S52 micro controller, R. F. Transmitter and Receiver.",
"title": ""
},
{
"docid": "a97d6be18e2cc9272318b7f3c48345e6",
"text": "Recently, we are witnessing the progressive increase in the occurrence of largescale disasters, characterized by an overwhelming scale and number of causalities. After 72 hours from the disaster occurrence, the damaged area is interested by assessment, reconstruction and recovery actions from several heterogeneous organizations, which need to collaborate and being orchestrated by a centralized authority. This situation requires an effective data sharing by means of a proper middleware platform able to let such organizations to interoperate despite of their differences. Although international organizations have defined collaboration frameworks at the higher level, there is no ICT supporting platform at operational level able to realize the data sharing demanded by such collaborative frameworks. This work proposes a layered architecture and a preliminary implementation of such a middleware for messaging, data and knowledge management. We also illustrate a demonstration of the usability of such an implementation, so as to show the achievable interoperability.",
"title": ""
},
{
"docid": "d72cc46845f546e6b4d7ef42a14a0ea3",
"text": "It is well known that parsing accuracies drop significantly on out-of-domain data. What is less known is that some parsers suffer more from domain shifts than others. We show that dependency parsers have more difficulty parsing questions than constituency parsers. In particular, deterministic shift-reduce dependency parsers, which are of highest interest for practical applications because of their linear running time, drop to 60% labeled accuracy on a question test set. We propose an uptraining procedure in which a deterministic parser is trained on the output of a more accurate, but slower, latent variable constituency parser (converted to dependencies). Uptraining with 100K unlabeled questions achieves results comparable to having 2K labeled questions for training. With 100K unlabeled and 2K labeled questions, uptraining is able to improve parsing accuracy to 84%, closing the gap between in-domain and out-of-domain performance.",
"title": ""
},
{
"docid": "fcf1d5d56f52d814f0df3b02643ef71b",
"text": "The research work deals with an approach to perform texture and morphological based retrieval on a corpus of food grain images. The work has been carried out using Image Warping and Image analysis approach. The method has been employed to normalize food grain images and hence eliminating the effects of orientation using image warping technique with proper scaling. The images have been properly enhanced to reduce noise and blurring in image. Finally image has segmented applying proper segmentation methods so that edges may be detected effectively and thus rectification of the image has been done. The approach has been tested on sufficient number of food grain images of rice based on intensity, position and orientation. A digital image analysis algorithm based on color, morphological and textural features was developed to identify the six varieties rice seeds which are widely planted in Chhattisgarh region. Nine color and nine morphological and textural features were used for discriminant analysis. A back propagation neural network-based classifier was developed to identify the unknown grain types. The color and textural features were presented to the neural network for training purposes. The trained network was then used to identify the unknown grain types.",
"title": ""
},
{
"docid": "496bdd85a0aebb64d2f2b36c2050eb3a",
"text": "This research derives, implements, tunes and compares selected path tracking methods for controlling a car-like robot along a predetermined path. The scope includes commonly used m ethods found in practice as well as some theoretical methods found in various literature from other areas of rese arch. This work reviews literature and identifies important path tracking models and control algorithms from the vast back ground and resources. This paper augments the literature with a comprehensive collection of important path tracking idea s, a guide to their implementations and, most importantly, an independent and realistic comparison of the perfor mance of these various approaches. This document does not catalog all of the work in vehicle modeling and control; only a selection that is perceived to be important ideas when considering practical system identification, ease of implementation/tuning and computational efficiency. There are several other methods that meet this criteria, ho wever they are deemed similar to one or more of the approaches presented and are not included. The performance r esults, analysis and comparison of tracking methods ultimately reveal that none of the approaches work well in all applications a nd that they have some complementary characteristics. These complementary characteristics lead to an idea that a combination of methods may be useful for more general applications. Additionally, applications for which the methods in this paper do not provide adequate solutions are identified.",
"title": ""
},
{
"docid": "83e16c6a186d04b4de71ce8cec872b05",
"text": "In this paper, we propose a unified framework to analyze the performance of dense small cell networks (SCNs) in terms of the coverage probability and the area spectral efficiency (ASE). In our analysis, we consider a practical path loss model that accounts for both non-line-of-sight (NLOS) and line-of-sight (LOS) transmissions. Furthermore, we adopt a generalized fading model, in which Rayleigh fading, Rician fading and Nakagami-m fading can be treated in a unified framework. The analytical results of the coverage probability and the ASE are derived, using a generalized stochastic geometry analysis. Different from existing work that does not differentiate NLOS and LOS transmissions, our results show that NLOS and LOS transmissions have a significant impact on the coverage probability and the ASE performance, particularly when the SCNs grow dense. Furthermore, our results establish for the first time that the performance of the SCNs can be divided into four regimes, according to the intensity (aka density) of BSs, where in each regime the performance is dominated by different factors.",
"title": ""
},
{
"docid": "8fd269218b8bafbe2912c46726dd8533",
"text": "\"!# #$ % $ & % ' (*) % +-,. $ &/ 0 1 2 3% 41 0 + 5 % 1 &/ !# #%#67$/ 18!# #% % #% ' \"% 9,: $ &/ %<;=,> '? \"( % $@ \"!\" 1A B% \" 1 0 %C + ,: AD8& ,. \"%#6< E+F$ 1 +/& !# \"%31 & $ &/ % ) % + 1 -G &E H.,> JI/(*1 0 (K / \" L ,:!# M *G 1N O% $@ #!#,>PE!# ,:1 %#QORS' \" ,: ( $ & T #%4 \"U/!# # +V%CI/%C # 2! $E !\",: JI86WH. # !\"IV;=,:H:HX+ \" ,.1 Q Y E+/ \" = ' #% !#1 E+/,: ,:1 %#6E ' %CI %C \" Z;=,:H:H[% ' + H:1N +\\6E ' & %=+/ \"( +/,. ] ' O %C;O \" 6 ,: 41 + \" ^ 1],: M$ 15LN W ' _1 ) % \" LN + H. # !\"I 1 0 ' \"% & H> %#Q ` ' ,:% $E $@ < \"U M,: #% M #! ' ,.D8& 0 1 +/I/ E M,:! H:H>I ,: % \" ,: E+< # M15L ,: = 1 $ 1 $@ \" 1 %[,: 1X ' aD8& I<$ H. 4 %^ D8& ,> + )8Ib ' 4!#& \" H:1 +\\QMR? 9 \"U M,: 4 K;a1 KI/$@ #% 1 0 1 $ %#c< ' P %C d+/ 1 $ %X 0 ! ,.1 1 0 ' d & $ H: #%a,. E+/1 0 % ' ,:1 e6 E+ ' % #!\"1 E+f+/ 1 $ %g & $ H: #%9) % +A1 A ' h,: M$@1 !# 31 0 ' #,> !\"1 8 # 8 Q[RV O + + #% %W ' X$ 1 ) H: # M%71 0 + \" \" M,: ,. ];=' # 9H:1N + % ' + +/,: ,:%i # +/ +\\6 ;=' \" =,: 9 ' =D8& I4$ H. 9 1 ,: % \" _ 1 $ %#6 E+b' 1 ;j g& ! ' 1 0 ' TH:1N +?% ' 1 & H.+-)@ 4% ' +' $@1 ,: <,. ' k$ H. \\Q-R? k$ #% # 8 g A H. 1 ,> ' 0 1 M !\"!#1 M$ H.,:% ' ,. ' ,:% E+9 \"U/$@ \" ,: M # 8 H #L ,.+ # !# X 'E 5 a,> i! M !\" XD8& ,:! l H:Ig +9! )/ ,: g ' <%CI/%C # m)E ! lk,: 8 1g ' <& % 0 & He1 $@ \" ,: 9 Q 1. INTRODUCTION n \";o $ $ H:,.!# ,:1 %4 'E 5 T g& %C T+/ H_;=,> 'VL %C T /& g)@ \" % 1 0 ,: ( $ & i%C M%i d)@ #!#1 M,: M1 X!\"1 M M1 \\Q ` ' #% ],: !#H:&E+ d $ ( $ H:,.!# ,:1 %T ' T$ 18!# \"% %4+ 0 1 % k H:H_ # g)@ #+ + #+b% \" % 1 %#6 $ $ H.,:! 5 ,:1 %[ ' ^ g& %C e!#1 #H. aP E !#,. H + 0 # #+ %#6 E+ $ ( $ H:,.!# ,:1 %^ 'E 5 [ g& %C \\ k E _,: $ &/ 0 1] p XLN \" I Hq 5 i /& g)@ \" 1 0 #1 (J$@1 % ,> ,.1 ,: 4+ \"L/,:!# \"%#QW F \";r!#H. % %i1 0 + 5 < k \" M # 8 %CI/%C # s,:%X # M \" ,: 4,: k #% $@1 % 1T ' #% < $ $ H.,:! 5 ,:1 %#Q ` ' #% %CI/%C # M%]$ 15L ,.+ ' T% M l ,: E+ 1 0 ,: 0 %C & ! & 13%C 9( ) % +M $ $ H.,:! 5 ,:1 %i 'E a+ 5 ) % = k \" M # 8 i%CI/%C # M%W' 5L $/ 15L/,.+/ + 0 1 h+ b$ 18!# \"% % ,. V $ $ H:,:! 5 ,.1 %#Qr m%C t+ k E \" -& % \"%b $ $ H:,.!# ,:1 /(JH: #L #Hg% # k 8 ,.!\"% 1u k l ?,: 8 #H:H.,>( # 8 + \"!#,:% ,.1 % )@1 &/ < #% 1 &/ !# 9 H:H.18!# ,:1 \\Q ` I/$ ,:! H:H>I86v ' \"( % 1 & !# \"%3,: wD8& #%C ,:1 F,: !#H:&E+/ %C 1 6]$ 18!# \"% % 1 3!\"I/!#H: #%#6] E+ ) +/;=,.+/ '\\Q x &/ X+/ #% ,: %X' LN ])@ \" # 3,: /yE& \" !# +M' L/,:H>IM)8IM% #L \" H@% $@ #!\",:P ! $ $ H:,:! ,:1 %#QTzK b$E 5 ,:!#& H. 6v;a g'E LN T%C &E+/,: +b $ $ H:,:! ,:1 'E 5 $@ \" 0 1 M%< # M1 4 ,q M15LN \" )E 5 H. PE #H.+{ ,:LN # { \" 1 K;a # 8 KI3) ,:1 (*% # % 1 %d # g)@ #+ + #+3,: ! ' % 1 Hq+/,: \" | % & , 0 1 QiRV 'E LN a H:% 1];a1 lN #+ ;=,> '4 4 $ $ H:,.!# ,:1 g ' ^!#1 H:H: #!\" %7 #!#1 ,:%C( % !# d+ 5 0 1 s M +/L !# #+g ,> $ Hq d )@1 & W ' a$@1 % ,> ,:1 %i1 0 # # 4I & ,> % E+ ' <,:%<!\"1 !\" \" + ;=,> 'b ' T,. 8 #H:H:,: # 8 +/,:%C( % # M,: E 5 ,:1 k1 0 ' ,:%O,: 0 1 k 5 ,.1 M 1g % \" ,: #%X1 0 1 & +M%C ,:1 % ! ' ;=,> ' +/,:}v # 8 d #D & ,> # M # 8 %#QWRV T H.% 1k)@ # ,: ,: k \"U/$@ \" ,: M # 8 HX }v1 M 1? k E P % 'f #% $ ,> 1 Ir+ ? %h ,: E+/,.!# 1 ]1 0 ' <$/ #% # !\" 1 0 1 U/,. %X,: 4 #% \" L 1 ,> Q ]H:H71 0 ' \"% 9 $ $ H:,:! ,:1 % 4! ' !\" \" ,:~# +-) I hH. 9 & 9( )@ \" W1 0 $ & % ' (*)E % + + ]% 1 & !\" #%7,: 4;=' ,:! '4 ' O+ 5 < ,:L H8 ! 9)@ X' ,: '9 E+4& $/ + ,:!\" ) H: Q[i ! 'M1 0 ' #% = $ $ H:,:! 5 ,.1 %_,:% #% $@1 % ,:) H: 0 1 d M1 ,> 1 ,. 4 ' ,.%O+ T 19+ #!\" X!\" ,> ,:! He% ,> &E 5( ,:1 %#Qi & ,: ' #% #LN # 8 %#68 ' <+ #%X!# ,: !\" % 6E E+ ,> <,:%< 4& ! 
' M1 ,: M$@1 d 'E 5 #H: #L 8 + 5 k \" +/ #H:,.L \" + ,: B ,: M #H>I 0 % ' ,:1 eQ zK { ' M ]o%CI/%C # 67 V \"U/$ \"% % ,.1 B1 0 ' h #H. ,:LN ,: M$@1 !# 31 0 1 & $ & 9 \"LN # 8 %9,:%k! $/ & +f %k G 18 g% $@ \"!#,>PE! 5 ,.1 eQ ` ' d%CI/%C # j 4& %C i H>;X #I/%W Ig 1 k U/,: M,:~# ' k 1 H=+ #H:,:LN \" #+rG 1N vQ7 & ,: ,: M #%g1 0 %C #% %#6a ' h,: $ & \"%M! A \"U/!# # #+A ' 3%CI/%C # ! $ !#,> KI8QAzJ B ' #% 3! % #%#6i ' 1 H>IM;X #IM 141 $@ \" ];=,: ' ,: h ' <G 1N k)@1 & E+/%a,:%O 1T% ' +k% 1 M 1 0 ' 4H:1N +\\Q] I/ E M,.!# H:H>I ! ' 181 % ,. h;=' \" 1h)@ \"%C <% ' #+ H.1 + + ' 1 ; 4& ! ' H:1N +b 1k% ' #+ ,.% M! ' H:H: # ,. 3$/ 1 ) H. \" Q ` ' ,:% $E $@ \" 9 U $ H:1 #%4% ! H. ) H: H:1N +A% ' #+ + ,: #! ' ,.D8& #% 0 1 9H. $ 18!\" #% % ,: M K;O1 l %#Q RV g)@ #H:,. \"LN g 'E 5 4G 18 ,:%T% $@ \"!#,>PE +-% #$E 5 #H>I 0 1 T ! '? $ $ H:,>( ! 5 ,.1 eQkz* T+ #% !\" ,:)@ #% ' 4 #H. ,:1 % ' ,:$V)@ \" J;O # # {L 5 ,:1 & % ! 'E 5 ( ! \" ,:%C ,.!\"%g1 0 B %C;O 9 E+{ ' 9& % 0 & H: \"% %3 , Q Q:67& ,:H:,> KI <1 0 'E 5 i %C;a \" Q ` '/& %#6N;O = M1 +/ #HEG 1N g %a <% \" _1 0v0 & !\" ,:1 %W 'E #H. < 4$E 5 M \" X1 0 ' 1 & $ & a 14,> %O& ,:H:,> KI8Q_ 1 X \"U M$ H. 6 ,: F k 8Ir $ $ H:,:! ,:1 %#6 %C;a \" %3 b1 H:Ir& % 0 & H], 0 ' \"Ir ,: M #H>I8Q ` ' \" 0 1 6 ' X&/ ,.H:,> KIg1 0 k %C;a \" O! M)@ d 0 & !\" ,:1 1 0 ' =Hq 5 # ! Ig,: LN1 H:LN #+g,: 9,> %i!\" # ,:1 \\Q_ ]H:% 1 68 ' X& ,:H:,: JIg1 0 %C;O \" ! b)@ T 0 & !\" ,:1 1 0 ' 1 &/ $ & ]L H:& Q] 1 M L H:& #% M1 <,: 8 \" #%C ,: 9 ' 1 ' \" %#Q ,:LN # b%C ,:%C ,:!#% )@1 & ] ' T!#1 %C 1 0 ! 'b$ 18!# \"% % ,. %C #$E+ ,> %9 % % 18!#,. +A% #H: #!\" ,:L ,> KI867,> 9,:%T$@1 % % ,:) H: k 1b!#1 M$ & M L H:& 0 1 ' U $@ \"!\" +FL H:& 0 1 1 H G 18 r;=' # F ' b%CI/%C #
,:% 1 $@ ,: f)@ #H:1 ;,> % ! $E !\",: JI8Q x L \" H:1N +S,:% +/ \" #!\" #+S;=' # ' =1 ) % \" LN #+3G 1N 9+/ 1 $ %a% ,: ,>PE! 8 H:I9)@ #H:15;r ' ,:%aL H:& QWe1 + % ' #+ + ,: T,:%O,: LN1 lN +k %X ;X #I9 14+/ ,:LN = ' d%CI/%C # o) ! lg 14 !\"!# #$ ) H: 4G 18 @Q zJ O% ' 1 & H.+3)@ < 1 +M 'E 5 X;=' ,:H: +/ 1 $ $ ,: & $ H: #%O;=,:H.H\\!# \" ,: H>I",
"title": ""
},
{
"docid": "63de507f7bbf289c3e53e2c73660d3e5",
"text": "Stylistic dialogue response generation, with valuable applications in personality-based conversational agents, is a challenging task because the response needs to be fluent, contextually-relevant, as well as paralinguistically accurate. Moreover, parallel datasets for regular-to-stylistic pairs are usually unavailable. We present three weakly-supervised models that can generate diverse, polite (or rude) dialogue responses without parallel data. Our late fusion model (Fusion) merges the decoder of an encoder-attention-decoder dialogue model with a language model trained on stand-alone polite utterances. Our label-finetuning (LFT) model prepends to each source sequence a politeness-score scaled label (predicted by our state-of-the-art politeness classifier) during training, and at test time is able to generate polite, neutral, and rude responses by simply scaling the label embedding by the corresponding score. Our reinforcement learning model (Polite-RL) encourages politeness generation by assigning rewards proportional to the politeness classifier score of the sampled response. We also present two retrievalbased, polite dialogue model baselines. Human evaluation validates that while the Fusion and the retrieval-based models achieve politeness with poorer context-relevance, the LFT and Polite-RL models can produce significantly more polite responses without sacrificing dialogue quality.",
"title": ""
},
{
"docid": "2122697f764fbffc588f9a407105c5ba",
"text": "Very rare cases of human T cell acute lymphoblastic leukemia (T-ALL) harbor chromosomal translocations that involve NOTCH1, a gene encoding a transmembrane receptor that regulates normal T cell development. Here, we report that more than 50% of human T-ALLs, including tumors from all major molecular oncogenic subtypes, have activating mutations that involve the extracellular heterodimerization domain and/or the C-terminal PEST domain of NOTCH1. These findings greatly expand the role of activated NOTCH1 in the molecular pathogenesis of human T-ALL and provide a strong rationale for targeted therapies that interfere with NOTCH signaling.",
"title": ""
},
{
"docid": "479fe61e0b738cb0a0284da1bda7c36d",
"text": "In urban areas, congestion creates a substantial variation in travel speeds during peak morning and evening hours. This research presents a new solution approach, an iterative route construction and improvement algorithm (IRCI), for the time dependent vehicle routing problem (TDVRP) with hard or soft time windows. Improvements are obtained at a route level; hence the proposed approach does not rely on any type of local improvement procedure. Further, the solution algorithms can tackle constant speed or time-dependent speed problems without any alteration in their structure. A new formulation for the TDVRP with soft and hard time windows is presented. Leveraging on the well known Solomon instances, new test problems that capture the typical speed variations of congested urban settings are proposed. Results in terms of solution quality as well as computational time are presented and discussed. The computational complexity of the IRCI is analyzed and experimental results indicate that average computational time increases proportionally to the square of the number of customers.",
"title": ""
},
{
"docid": "617189999dd72a73f5097f87d9874ae5",
"text": "In this study, we present a novel ranking model based on learning the nearest neighbor relationships embedded in the index space. Given a query point, a conventional nearest neighbor search approach calculates the distances to the cluster centroids, before ranking the clusters from near to far based on the distances. The data indexed in the top-ranked clusters are retrieved and treated as the nearest neighbor candidates for the query. However, the loss of quantization between the data and cluster centroids will inevitably harm the search accuracy. To address this problem, the proposed model ranks clusters based on their nearest neighbor probabilities rather than the query-centroid distances to the query. The nearest neighbor probabilities are estimated by employing neural networks to characterize the neighborhood relationships as a nonlinear function, i.e., the density distribution of nearest neighbors with respect to the query. The proposed probability-based ranking model can replace the conventional distance-based ranking model as a coarse filter for candidate clusters, and the nearest neighbor probability can be used to determine the data quantity to be retrieved from the candidate cluster. Our experimental results demonstrated that implementation of the proposed ranking model for two state-of-the-art nearest neighbor quantization and search methods could boost the search performance effectively in billion-scale datasets.",
"title": ""
},
{
"docid": "93cec060a420f2ffc3e67eb532186f8e",
"text": "This paper presents an efficient approach to identify tabular structures within either electronic or paper documents. The resulting T—Recs system takes word bounding box information as input, and outputs the corresponding logical text block units (e.g. the cells within a table environment). Starting with an arbitrary word as block seed the algorithm recursively expands this block to all words that interleave with their vertical (north and south) neighbors. Since even smallest gaps of table columns prevent their words from mutual interleaving, this initial segmentation is able to identify and isolate such columns. In order to deal with some inherent segmentation errors caused by isolated lines (e.g. headers), overhanging words, or cells spawning more than one column, a series of postprocessing steps is added. These steps benefit from a very simple distinction between type 1 and type 2 blocks: type 1 blocks are those of at most one word per line, all others are of type 2. This distinction allows the selective application of heuristics to each group of blocks. The conjoint decomposition of column blocks into subsets of table cells leads to the final block segmentation of a homogeneous abstraction level. These segments serve the final layout analysis which identifies table environments and cells that are stretching over several rows and/or columns.",
"title": ""
},
{
"docid": "e67a7ba82594e024f96fc1deb4ff7498",
"text": "The software industry is more than ever facing the challenge of delivering WYGIWYW software (what you get is what you want). A well-structured document specifying adequate, complete, consistent, precise, and measurable requirements is a critical prerequisite for such software. Goals have been recognized to be among the driving forces for requirements elicitation, elaboration, organization, analysis, negotiation, documentation, and evolution. Growing experience with goal-oriented requirements engineering suggests synergistic links between research in this area and good practice. We discuss one journey along this road from influencing ideas and research results to tool developments to good practice in industrial projects. On the way, we discuss some lessons learnt, obstacles to technology transfer, and challenges for better requirements engineering research and practice.",
"title": ""
},
{
"docid": "3137bb7ba1b33d873acaa8b4079f6e30",
"text": "Accurate estimation of spatial gait characteristics is critical to assess motor impairments resulting from neurological or musculoskeletal disease. Currently, however, methodological constraints limit clinical applicability of state-of-the-art double integration approaches to gait patterns with a clear zero-velocity phase. We describe a novel approach to stride length estimation that uses deep convolutional neural networks to map stride-specific inertial sensor data to the resulting stride length. The model is trained on a publicly available and clinically relevant benchmark dataset consisting of 1220 strides from 101 geriatric patients. Evaluation is done in a 10-fold cross validation and for three different stride definitions. Even though best results are achieved with strides defined from mid-stance to mid-stance with average accuracy and precision of 0.01 ± 5.37 cm, performance does not strongly depend on stride definition. The achieved precision outperforms state-of-the-art methods evaluated on this benchmark dataset by 3.0 cm (36%). Due to the independence of stride definition, the proposed method is not subject to the methodological constrains that limit applicability of state-of-the-art double integration methods. Furthermore, precision on the benchmark dataset could be improved. With more precise mobile stride length estimation, new insights to the progression of neurological disease or early indications might be gained. Due to the independence of stride definition, previously uncharted diseases in terms of mobile gait analysis can now be investigated by re-training and applying the proposed method.",
"title": ""
},
{
"docid": "70710daefe747da7d341577947b6b8ff",
"text": "This paper describes an automated lane centering/changing control algorithm that was developed at General Motors Research and Development. Over the past few decades, there have been numerous studies in the autonomous vehicle motion control. These studies typically focused on improving the control accuracy of the autonomous driving vehicles. In addition to the control accuracy, driver/passenger comfort is also an important performance measure of the system. As an extension of authors' prior study, this paper further considers vehicle motion control to provide driver/passenger comfort based on the adjustment of the lane change maneuvering time in various traffic situations. While defining the driver/passenger comfort level is a human factor study topic, this paper proposes a framework to integrate the motion smoothness into the existing lane centering/changing control problem. The proposed algorithm is capable of providing smooth and aggressive lane change maneuvers according to traffic situation and driver preference. Several simulation results as well as on-road vehicle test results confirm the effectiveness of the proposed algorithm.",
"title": ""
},
{
"docid": "0c479abc72634e6d76b787f130a8ea1f",
"text": "While intelligent transportation systems come in many shapes and sizes, arguably the most transformational realization will be the autonomous vehicle. As such vehicles become commercially available in the coming years, first on dedicated roads and under specific conditions, and later on all public roads at all times, a phase transition will occur. Once a sufficient number of autonomous vehicles is deployed, the opportunity for explicit coordination appears. This article treats this challenging network control problem, which lies at the intersection of control theory, signal processing, and wireless communication. We provide an overview of the state of the art, while at the same time highlighting key research directions for the coming decades.",
"title": ""
}
] |
scidocsrr
|
3d0d06dd7f672dd75ea1f28a8515c757
|
Fast and Accurate Annotation of Short Texts with Wikipedia Pages
|
[
{
"docid": "0b59b6f7e24a4c647ae656a0dc8cc3ab",
"text": "Wikipedia is a goldmine of information; not just for its many readers, but also for the growing community of researchers who recognize it as a resource of exceptional scale and utility. It represents a vast investment of manual effort and judgment: a huge, constantly evolving tapestry of concepts and relations that is being applied to a host of tasks. This article provides a comprehensive description of this work. It focuses on research that extracts and makes use of the concepts, relations, facts and descriptions found in Wikipedia, and organizes the work into four broad categories: applying Wikipedia to natural language processing; using it to facilitate information retrieval and information extraction; and as a resource for ontology building. The article addresses how Wikipedia is being used as is, how it is being improved and adapted, and how it is being combined with other structures to create entirely new resources. We identify the research groups and individuals involved, and how their work has developed in the last few years. We provide a comprehensive list of the open-source software they have produced. r 2009 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "ede0e47ee50f11096ce457adea6b4600",
"text": "Recent advances in hardware, software, and communication technologies are enabling the design and implementation of a whole range of different types of networks that are being deployed in various environments. One such network that has received a lot of interest in the last couple of S. Zeadally ( ) Network Systems Laboratory, Department of Computer Science and Information Technology, University of the District of Columbia, 4200, Connecticut Avenue, N.W., Washington, DC 20008, USA e-mail: [email protected] R. Hunt Department of Computer Science and Software Engineering, College of Engineering, University of Canterbury, Private Bag 4800, Christchurch, New Zealand e-mail: [email protected] Y.-S. Chen Department of Computer Science and Information Engineering, National Taipei University, 151, University Rd., San Shia, Taipei County, Taiwan e-mail: [email protected] Y.-S. Chen e-mail: [email protected] Y.-S. Chen e-mail: [email protected] A. Irwin School of Computer and Information Science, University of South Australia, Room F2-22a, Mawson Lakes, South Australia 5095, Australia e-mail: [email protected] A. Hassan School of Information Science, Computer and Electrical Engineering, Halmstad University, Kristian IV:s väg 3, 301 18 Halmstad, Sweden e-mail: [email protected] years is the Vehicular Ad-Hoc Network (VANET). VANET has become an active area of research, standardization, and development because it has tremendous potential to improve vehicle and road safety, traffic efficiency, and convenience as well as comfort to both drivers and passengers. Recent research efforts have placed a strong emphasis on novel VANET design architectures and implementations. A lot of VANET research work have focused on specific areas including routing, broadcasting, Quality of Service (QoS), and security. We survey some of the recent research results in these areas. We present a review of wireless access standards for VANETs, and describe some of the recent VANET trials and deployments in the US, Japan, and the European Union. In addition, we also briefly present some of the simulators currently available to VANET researchers for VANET simulations and we assess their benefits and limitations. Finally, we outline some of the VANET research challenges that still need to be addressed to enable the ubiquitous deployment and widespead adoption of scalable, reliable, robust, and secure VANET architectures, protocols, technologies, and services.",
"title": ""
},
{
"docid": "9002cefa8b062c49858439d54c460472",
"text": "In heterogeneous or shared clusters, distributed learning processes are slowed down by straggling workers. In this work, we propose LB-BSP, a new synchronization scheme that eliminates stragglers by adapting each worker's training load (batch size) to its processing capability. For training in shared production clusters, a prerequisite for deciding the workers' batch sizes is to know their processing speeds before each iteration starts. To this end, we adopt NARX, an extended recurrent neural network that accounts for both the historical speeds and the driving factors such as CPU and memory in prediction.",
"title": ""
},
{
"docid": "01ccb35abf3eed71191dc8638e58f257",
"text": "In this paper we describe several fault attacks on the Advanced Encryption Standard (AES). First, using optical fault induction attacks as recently publicly presented by Skorobogatov and Anderson [SA], we present an implementation independent fault attack on AES. This attack is able to determine the complete 128-bit secret key of a sealed tamper-proof smartcard by generating 128 faulty cipher texts. Second, we present several implementationdependent fault attacks on AES. These attacks rely on the observation that due to the AES's known timing analysis vulnerability (as pointed out by Koeune and Quisquater [KQ]), any implementation of the AES must ensure a data independent timing behavior for the so called AES's xtime operation. We present fault attacks on AES based on various timing analysis resistant implementations of the xtime-operation. Our strongest attack in this direction uses a very liberal fault model and requires only 256 faulty encryptions to determine a 128-bit key.",
"title": ""
},
{
"docid": "d57072f4ffa05618ebf055824e7ae058",
"text": "Online social networks such as Friendster, MySpace, or the Facebook have experienced exponential growth in membership in recent years. These networks offer attractive means for interaction and communication, but also raise privacy and security concerns. In this study we survey a representative sample of the members of the Facebook (a social network for colleges and high schools) at a US academic institution, and compare the survey data to information retrieved from the network itself. We look for underlying demographic or behavioral differences between the communities of the network’s members and non-members; we analyze the impact of privacy concerns on members’ behavior; we compare members’ stated attitudes with actual behavior; and we document the changes in behavior subsequent to privacy-related information exposure. We find that an individual’s privacy concerns are only a weak predictor of his membership to the network. Also privacy concerned individuals join the network and reveal great amounts of personal information. Some manage their privacy concerns by trusting their ability to control the information they provide and the external access to it. However, we also find evidence of members’ misconceptions about the online community’s actual size and composition, and about the visibility of members’ profiles.",
"title": ""
},
{
"docid": "f3a89c01dbbd40663811817ef7ba4be3",
"text": "In order to address the mental health disparities that exist for Latino adolescents in the United States, psychologists must understand specific factors that contribute to the high risk of mental health problems in Latino youth. Given the significant percentage of Latino youth who are immigrants or the children of immigrants, acculturation is a key factor in understanding mental health among this population. However, limitations in the conceptualization and measurement of acculturation have led to conflicting findings in the literature. Thus, the goal of the current review is to examine and critique research linking acculturation and mental health outcomes for Latino youth, as well as to integrate individual, environmental, and family influences of this relationship. An integrated theoretical model is presented and implications for clinical practice and future directions are discussed.",
"title": ""
},
{
"docid": "936048690fb043434c3ee0060c5bf7a5",
"text": "This paper asks whether case-based reasoning is an artificial intelligence (AI) technology like rule-based reasoning, neural networks or genetic algorithms or whether it is better described as a methodology for problem solving, that may use any appropriate technology. By describing four applications of case-based reasoning (CBR), that variously use: nearest neighbour, induction, fuzzy logic and SQL, the author shows that CBR is a methodology and not a technology. The implications of this are discussed. q 1999 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "9090999f7fdaad88943f4dc4dca414d6",
"text": "Collaborative reasoning for understanding each image-question pair is very critical but underexplored for an interpretable visual question answering system. Although very recent works also attempted to use explicit compositional processes to assemble multiple subtasks embedded in the questions, their models heavily rely on annotations or handcrafted rules to obtain valid reasoning processes, leading to either heavy workloads or poor performance on composition reasoning. In this paper, to better align image and language domains in diverse and unrestricted cases, we propose a novel neural network model that performs global reasoning on a dependency tree parsed from the question, and we thus phrase our model as parse-tree-guided reasoning network (PTGRN). This network consists of three collaborative modules: i) an attention module to exploit the local visual evidence for each word parsed from the question, ii) a gated residual composition module to compose the previously mined evidence, and iii) a parse-tree-guided propagation module to pass the mined evidence along the parse tree. Our PTGRN is thus capable of building an interpretable VQA system that gradually derives the image cues following a question-driven parse-tree reasoning route. Experiments on relational datasets demonstrate the superiority of our PTGRN over current state-of-the-art VQA methods, and the visualization results highlight the explainable capability of our reasoning system.",
"title": ""
},
{
"docid": "5e95d54ef979a11ad18ec774210eb175",
"text": "Recently, neural network based sentence modeling methods have achieved great progress. Among these methods, the recursive neural networks (RecNNs) can effectively model the combination of the words in sentence. However, RecNNs need a given external topological structure, like syntactic tree. In this paper, we propose a gated recursive neural network (GRNN) to model sentences, which employs a full binary tree (FBT) structure to control the combinations in recursive structure. By introducing two kinds of gates, our model can better model the complicated combinations of features. Experiments on three text classification datasets show the effectiveness of our model.",
"title": ""
},
{
"docid": "f388ad2a0ee9bcd5126b1cea7f527541",
"text": "Our team provided a security analysis of the edX platform. At MIT, the edX platform is used by a wide variety of classes through MITx, and is starting to be used by many other organizations, making it of great interest to us. In our security analysis, we first provide an overview of the modules of edX, as well as how the different users are intended to interact with these modules. We then outline the vulnerabilities we found in the platform and how users may exploit them. We conclude with possible changes to their system to protect against the given attacks, and where we believe there may exist other vulnerabilities worth future investigation.",
"title": ""
},
{
"docid": "e6300989e5925d38d09446b3e43092e5",
"text": "Cloud computing provides resources as services in pay-as-you-go mode to customers by using virtualization technology. As virtual machine (VM) is hosted on physical server, great energy is consumed by maintaining the servers in data center. More physical servers means more energy consumption and more money cost. Therefore, the VM placement (VMP) problem is significant in cloud computing. This paper proposes an approach based on ant colony optimization (ACO) to solve the VMP problem, named as ACO-VMP, so as to effectively use the physical resources and to reduce the number of running physical servers. The number of physical servers is the same as the number of the VMs at the beginning. Then the ACO approach tries to reduce the physical server one by one. We evaluate the performance of the proposed ACO-VMP approach in solving VMP with the number of VMs being up to 600. Experimental results compared with the ones obtained by the first-fit decreasing (FFD) algorithm show that ACO-VMP can solve VMP more efficiently to reduce the number of physical servers significantly, especially when the number of VMs is large.",
"title": ""
},
{
"docid": "c207f2c0dfc1ecee332df70ec5810459",
"text": "Hierarchical organization-the recursive composition of sub-modules-is ubiquitous in biological networks, including neural, metabolic, ecological, and genetic regulatory networks, and in human-made systems, such as large organizations and the Internet. To date, most research on hierarchy in networks has been limited to quantifying this property. However, an open, important question in evolutionary biology is why hierarchical organization evolves in the first place. It has recently been shown that modularity evolves because of the presence of a cost for network connections. Here we investigate whether such connection costs also tend to cause a hierarchical organization of such modules. In computational simulations, we find that networks without a connection cost do not evolve to be hierarchical, even when the task has a hierarchical structure. However, with a connection cost, networks evolve to be both modular and hierarchical, and these networks exhibit higher overall performance and evolvability (i.e. faster adaptation to new environments). Additional analyses confirm that hierarchy independently improves adaptability after controlling for modularity. Overall, our results suggest that the same force-the cost of connections-promotes the evolution of both hierarchy and modularity, and that these properties are important drivers of network performance and adaptability. In addition to shedding light on the emergence of hierarchy across the many domains in which it appears, these findings will also accelerate future research into evolving more complex, intelligent computational brains in the fields of artificial intelligence and robotics.",
"title": ""
},
{
"docid": "b9bf838263410114ec85c783d26d92aa",
"text": "We give a denotational framework (a “meta model”) within which certain properties of models of computation can be compared. It describes concurrent processes in general terms as sets of possible behaviors. A process is determinate if, given the constraints imposed by the inputs, there are exactly one or exactly zero behaviors. Compositions of processes are processes with behaviors in the intersection of the behaviors of the component processes. The interaction between processes is through signals, which are collections of events. Each event is a value-tag pair, where the tags can come from a partially ordered or totally ordered set. Timed models are where the set of tags is totally ordered. Synchronous events share the same tag, and synchronous signals contain events with the same set of tags. Synchronous processes have only synchronous signals as behaviors. Strict causality (in timed tag systems) and continuity (in untimed tag systems) ensure determinacy under certain technical conditions. The framework is used to compare certain essential features of various models of computation, including Kahn process networks, dataflow, sequential processes, concurrent sequential processes with rendezvous, Petri nets, and discrete-event systems.",
"title": ""
},
{
"docid": "a3b680c8c9eb00b6cc66ec24aeadaa66",
"text": "With the application of Internet of Things and services to manufacturing, the fourth stage of industrialization, referred to as Industrie 4.0, is believed to be approaching. For Industrie 4.0 to come true, it is essential to implement the horizontal integration of inter-corporation value network, the end-to-end integration of engineering value chain, and the vertical integration of factory inside. In this paper, we focus on the vertical integration to implement flexible and reconfigurable smart factory. We first propose a brief framework that incorporates industrial wireless networks, cloud, and fixed or mobile terminals with smart artifacts such as machines, products, and conveyors.Then,we elaborate the operationalmechanism from the perspective of control engineering, that is, the smart artifacts form a self-organized systemwhich is assistedwith the feedback and coordination blocks that are implemented on the cloud and based on the big data analytics. In addition, we outline the main technical features and beneficial outcomes and present a detailed design scheme. We conclude that the smart factory of Industrie 4.0 is achievable by extensively applying the existing enabling technologies while actively coping with the technical challenges.",
"title": ""
},
{
"docid": "5a8f926b76eb4ad9cb7eb6c21196097d",
"text": "This paper presents a model based on Deep Learning algorithms of LSTM and GRU for facilitating an anomaly detection in Large Hadron Collider superconducting magnets. We used high resolution data available in Post Mortem database to train a set of models and chose the best possible set of their hyper-parameters. Using Deep Learning approach allowed to examine a vast body of data and extract the fragments which require further experts examination and are regarded as anomalies. The presented method does not require tedious manual threshold setting and operator attention at the stage of the system setup. Instead, the automatic approach is proposed, which achieves according to our experiments accuracy of 99 %. This is reached for the largest dataset of 302 MB and the following architecture of the network: single layer LSTM, 128 cells, 20 epochs of training, look_back=16, look_ahead=128, grid=100 and optimizer Adam. All the experiments were run on GPU Nvidia Tesla K80.",
"title": ""
},
{
"docid": "b0c5c8e88e9988b6548acb1c8ebb5edd",
"text": "We present a bottom-up aggregation approach to image segmentation. Beginning with an image, we execute a sequence of steps in which pixels are gradually merged to produce larger and larger regions. In each step, we consider pairs of adjacent regions and provide a probability measure to assess whether or not they should be included in the same segment. Our probabilistic formulation takes into account intensity and texture distributions in a local area around each region. It further incorporates priors based on the geometry of the regions. Finally, posteriors based on intensity and texture cues are combined using “ a mixture of experts” formulation. This probabilistic approach is integrated into a graph coarsening scheme, providing a complete hierarchical segmentation of the image. The algorithm complexity is linear in the number of the image pixels and it requires almost no user-tuned parameters. In addition, we provide a novel evaluation scheme for image segmentation algorithms, attempting to avoid human semantic considerations that are out of scope for segmentation algorithms. Using this novel evaluation scheme, we test our method and provide a comparison to several existing segmentation algorithms.",
"title": ""
},
{
"docid": "21b9b7995cabde4656c73e9e278b2bf5",
"text": "Topic modeling techniques have been recently applied to analyze and model source code. Such techniques exploit the textual content of source code to provide automated support for several basic software engineering activities. Despite these advances, applications of topic modeling in software engineering are frequently suboptimal. This can be attributed to the fact that current state-of-the-art topic modeling techniques tend to be data intensive. However, the textual content of source code, embedded in its identifiers, comments, and string literals, tends to be sparse in nature. This prevents classical topic modeling techniques, typically used to model natural language texts, to generate proper models when applied to source code. Furthermore, the operational complexity and multi-parameter calibration often associated with conventional topic modeling techniques raise important concerns about their feasibility as data analysis models in software engineering. Motivated by these observations, in this paper we propose a novel approach for topic modeling designed for source code. The proposed approach exploits the basic assumptions of the cluster hypothesis and information theory to discover semantically coherent topics in software systems. Ten software systems from different application domains are used to empirically calibrate and configure the proposed approach. The usefulness of generated topics is empirically validated using human judgment. Furthermore, a case study that demonstrates thet operation of the proposed approach in analyzing code evolution is reported. The results show that our approach produces stable, more interpretable, and more expressive topics than classical topic modeling techniques without the necessity for extensive parameter calibration.",
"title": ""
},
{
"docid": "02da733cc5d5c2070e00820afc20e285",
"text": "Service-oriented computing has brought special attention to service description, especially in connection with semantic technologies. The expected proliferation of publicly accessible services can benefit greatly from tool support and automation, both ofwhich are the focus of SemanticWebService (SWS) frameworks that especially address service discovery, composition and execution. As the first SWS standard, in 2007 the World Wide Web Consortium produced a lightweight bottom-up specification called SAWSDL for adding semantic annotations to WSDL service descriptions. Building on SAWSDL, this article presents WSMO-Lite, a lightweight ontology of Web service semantics that distinguishes four semantic aspects of services: function, behavior, information model, and nonfunctional properties, which together form a basis for semantic automation. With the WSMO-Lite ontology, SAWSDL descriptions enable semantic automation beyond simple input/output matchmaking that is supported by SAWSDL itself. Further, to broaden the reach of WSMO-Lite and SAWSDL tools to the increasingly common RESTful services, the article adds hRESTS and MicroWSMO, two HTML microformats that mirror WSDL and SAWSDL in the documentation of RESTful services, enabling combiningRESTful serviceswithWSDL-based ones in a single semantic framework. To demonstrate the feasibility and versatility of this approach, the article presents common algorithms for Web service discovery and composition adapted to WSMO-Lite. © 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "f53d13eeccff0048fc96e532a52a2154",
"text": "The physical principles underlying some current biomedical applications of magnetic nanoparticles are reviewed. Starting from well-known basic concepts, and drawing on examples from biology and biomedicine, the relevant physics of magnetic materials and their responses to applied magnetic fields are surveyed. The way these properties are controlled and used is illustrated with reference to (i) magnetic separation of labelled cells and other biological entities; (ii) therapeutic drug, gene and radionuclide delivery; (iii) radio frequency methods for the catabolism of tumours via hyperthermia; and (iv) contrast enhancement agents for magnetic resonance imaging applications. Future prospects are also discussed.",
"title": ""
},
{
"docid": "92d04ad5a9fa32c2ad91003213b1b86d",
"text": "You're being asked to quantify usability improvements with statistics. But even with a background in statistics, you are hesitant to statistically analyze the data, as you may be unsure about which statistical tests to...",
"title": ""
},
{
"docid": "1deb1d0705685ddab6d7009da397532f",
"text": "It is unclear whether disseminated tumour cells detected in bone marrow in early stages of solid cancers indicate a subclinical systemic disease component determining the patient's fate or simply represent mainly irrelevant shed cells. Moreover, characteristics differentiating high and low metastatic potential of disseminated tumour cells are not defined. We performed repeated serial bone marrow biopsies during follow–up in operated gastric cancer patients. Most patients with later tumour relapse revealed either an increase or a constantly high number of tumour cells. In contrast, in patients without recurrence, either clearance of tumour cells or negative or low cell counts were seen. Urokinase plasminogen activator (uPA)–receptor expression on disseminated tumour cells was significantly correlated with increasing tumour cell counts and clinical prognosis. These results demonstrate a systemic component in early solid cancer, indicated by early systemically disseminated tumour cells, which may predict individual disease development.",
"title": ""
}
] |
scidocsrr
|
d3889f249c96ad7e734031ae8ddd16f5
|
Factors mediating disclosure in social network sites
|
[
{
"docid": "7eed84f959268599e1b724b0752f6aa5",
"text": "Using the information systems lifecycle as a unifying framework, we review online communities research and propose a sequence for incorporating success conditions during initiation and development to increase their chances of becoming a successful community, one in which members participate actively and develop lasting relationships. Online communities evolve following distinctive lifecycle stages and recommendations for success are more or less relevant depending on the developmental stage of the online community. In addition, the goal of the online community under study determines the components to include in the development of a successful online community. Online community builders and researchers will benefit from this review of the conditions that help online communities succeed.",
"title": ""
}
] |
[
{
"docid": "a6b4ee8a6da7ba240b7365cf1a70669d",
"text": "Received: 2013-04-15 Accepted: 2013-05-13 Accepted after one revision by Prof. Dr. Sinz. Published online: 2013-06-14 This article is also available in German in print and via http://www. wirtschaftsinformatik.de: Blohm I, Leimeister JM (2013) Gamification. Gestaltung IT-basierter Zusatzdienstleistungen zur Motivationsunterstützung und Verhaltensänderung. WIRTSCHAFTSINFORMATIK. doi: 10.1007/s11576-013-0368-0.",
"title": ""
},
{
"docid": "752e6d6f34ffc638e9a0d984a62db184",
"text": "Defect prediction models are classifiers that are trained to identify defect-prone software modules. Such classifiers have configurable parameters that control their characteristics (e.g., the number of trees in a random forest classifier). Recent studies show that these classifiers may underperform due to the use of suboptimal default parameter settings. However, it is impractical to assess all of the possible settings in the parameter spaces. In this paper, we investigate the performance of defect prediction models where Caret --- an automated parameter optimization technique --- has been applied. Through a case study of 18 datasets from systems that span both proprietary and open source domains, we find that (1) Caret improves the AUC performance of defect prediction models by as much as 40 percentage points; (2) Caret-optimized classifiers are at least as stable as (with 35% of them being more stable than) classifiers that are trained using the default settings; and (3) Caret increases the likelihood of producing a top-performing classifier by as much as 83%. Hence, we conclude that parameter settings can indeed have a large impact on the performance of defect prediction models, suggesting that researchers should experiment with the parameters of the classification techniques. Since automated parameter optimization techniques like Caret yield substantially benefits in terms of performance improvement and stability, while incurring a manageable additional computational cost, they should be included in future defect prediction studies.",
"title": ""
},
{
"docid": "beec3b6b4e5ecaa05d6436426a6d93b7",
"text": "This paper introduces a 6LoWPAN simulation model for OMNeT++. Providing a 6LoWPAN model is an important step to advance OMNeT++-based Internet of Things simulations. We integrated Contiki’s 6LoWPAN implementation into OMNeT++ in order to avoid problems of non-standard compliant, non-interoperable, or highly abstracted and thus unreliable simulation models. The paper covers the model’s structure as well as its integration and the generic interaction between OMNeT++ / INET and Contiki.",
"title": ""
},
{
"docid": "41d546266db9b3e9ec5071e4926abb8d",
"text": "Estimating the shape of transparent and refractive objects is one of the few open problems in 3D reconstruction. Under the assumption that the rays refract only twice when traveling through the object, we present the first approach to simultaneously reconstructing the 3D positions and normals of the object's surface at both refraction locations. Our acquisition setup requires only two cameras and one monitor, which serves as the light source. After acquiring the ray-ray correspondences between each camera and the monitor, we solve an optimization function which enforces a new position-normal consistency constraint. That is, the 3D positions of surface points shall agree with the normals required to refract the rays under Snell's law. Experimental results using both synthetic and real data demonstrate the robustness and accuracy of the proposed approach.",
"title": ""
},
{
"docid": "cf41591ea323c2dd2aa4f594c61315d9",
"text": "Natural language descriptions of videos provide a potentially rich and vast source of supervision. However, the highly-varied nature of language presents a major barrier to its effective use. What is needed are models that can reason over uncertainty over both videos and text. In this paper, we tackle the core task of person naming: assigning names of people in the cast to human tracks in TV videos. Screenplay scripts accompanying the video provide some crude supervision about who’s in the video. However, even the basic problem of knowing who is mentioned in the script is often difficult, since language often refers to people using pronouns (e.g., “he”) and nominals (e.g., “man”) rather than actual names (e.g., “Susan”). Resolving the identity of these mentions is the task of coreference resolution, which is an active area of research in natural language processing. We develop a joint model for person naming and coreference resolution, and in the process, infer a latent alignment between tracks and mentions. We evaluate our model on both vision and NLP tasks on a new dataset of 19 TV episodes. On both tasks, we significantly outperform the independent baselines.",
"title": ""
},
{
"docid": "13cdf06acdcf3f6e0c7085662cb99315",
"text": "Terrestrial ecosystems play a significant role in the global carbon cycle and offset a large fraction of anthropogenic CO2 emissions. The terrestrial carbon sink is increasing, yet the mechanisms responsible for its enhancement, and implications for the growth rate of atmospheric CO2, remain unclear. Here using global carbon budget estimates, ground, atmospheric and satellite observations, and multiple global vegetation models, we report a recent pause in the growth rate of atmospheric CO2, and a decline in the fraction of anthropogenic emissions that remain in the atmosphere, despite increasing anthropogenic emissions. We attribute the observed decline to increases in the terrestrial sink during the past decade, associated with the effects of rising atmospheric CO2 on vegetation and the slowdown in the rate of warming on global respiration. The pause in the atmospheric CO2 growth rate provides further evidence of the roles of CO2 fertilization and warming-induced respiration, and highlights the need to protect both existing carbon stocks and regions, where the sink is growing rapidly.",
"title": ""
},
{
"docid": "b1ffdb1e3f069b78458a2b464293d97a",
"text": "We consider the detection of activities from non-cooperating individuals with features obtained on the radio frequency channel. Since environmental changes impact the transmission channel between devices, the detection of this alteration can be used to classify environmental situations. We identify relevant features to detect activities of non-actively transmitting subjects. In particular, we distinguish with high accuracy an empty environment or a walking, lying, crawling or standing person, in case-studies of an active, device-free activity recognition system with software defined radios. We distinguish between two cases in which the transmitter is either under the control of the system or ambient. For activity detection the application of one-stage and two-stage classifiers is considered. Apart from the discrimination of the above activities, we can show that a detected activity can also be localized simultaneously within an area of less than 1 meter radius.",
"title": ""
},
{
"docid": "22241857a42ffcad817356900f52df66",
"text": "Most of the intensive care units (ICU) are equipped with commercial pulse oximeters for monitoring arterial blood oxygen saturation (SpO2) and pulse rate (PR). Photoplethysmographic (PPG) data recorded from pulse oximeters usually corrupted by motion artifacts (MA), resulting in unreliable and inaccurate estimated measures of SpO2. In this paper, a simple and efficient MA reduction method based on Ensemble Empirical Mode Decomposition (E2MD) is proposed for the estimation of SpO2 from processed PPGs. Performance analysis of the proposed E2MD is evaluated by computing the statistical and quality measures indicating the signal reconstruction like SNR and NRMSE. Intentionally created MAs (Horizontal MA, Vertical MA and Bending MA) in the recorded PPGs are effectively reduced by the proposed one and proved to be the best suitable method for reliable and accurate SpO2 estimation from the processed PPGs.",
"title": ""
},
{
"docid": "2702eb18e03af90e4061badd87bae7f7",
"text": "Two linear time (and hence asymptotically optimal) algorithms for computing the Euclidean distance transform of a two-dimensional binary image are presented. The algorithms are based on the construction and regular sampling of the Voronoi diagram whose sites consist of the unit (feature) pixels in the image. The rst algorithm, which is of primarily theoretical interest, constructs the complete Voronoi diagram. The second, more practical, algorithm constructs the Voronoi diagram where it intersects the horizontal lines passing through the image pixel centres. Extensions to higher dimensional images and to other distance functions are also discussed.",
"title": ""
},
{
"docid": "897962874a43ee19e3f50f431d4c449e",
"text": "According to Dennett, the same system may be described using a ‘physical’ (mechanical) explanatory stance, or using an ‘intentional’ (beliefand goalbased) explanatory stance. Humans tend to find the physical stance more helpful for certain systems, such as planets orbiting a star, and the intentional stance for others, such as living animals. We define a formal counterpart of physical and intentional stances within computational theory: a description of a system as either a device, or an agent, with the key difference being that ‘devices’ are directly described in terms of an input-output mapping, while ‘agents’ are described in terms of the function they optimise. Bayes’ rule can then be applied to calculate the subjective probability of a system being a device or an agent, based only on its behaviour. We illustrate this using the trajectories of an object in a toy grid-world domain.",
"title": ""
},
{
"docid": "36e99c1f3be629e3d556e5dc48243e0a",
"text": "Recent studies have shown that synaptic unreliability is a robust and sufficient mechanism for inducing the stochasticity observed in cortex. Here, we introduce Synaptic Sampling Machines (S2Ms), a class of neural network models that uses synaptic stochasticity as a means to Monte Carlo sampling and unsupervised learning. Similar to the original formulation of Boltzmann machines, these models can be viewed as a stochastic counterpart of Hopfield networks, but where stochasticity is induced by a random mask over the connections. Synaptic stochasticity plays the dual role of an efficient mechanism for sampling, and a regularizer during learning akin to DropConnect. A local synaptic plasticity rule implementing an event-driven form of contrastive divergence enables the learning of generative models in an on-line fashion. S2Ms perform equally well using discrete-timed artificial units (as in Hopfield networks) or continuous-timed leaky integrate and fire neurons. The learned representations are remarkably sparse and robust to reductions in bit precision and synapse pruning: removal of more than 75% of the weakest connections followed by cursory re-learning causes a negligible performance loss on benchmark classification tasks. The spiking neuron-based S2Ms outperform existing spike-based unsupervised learners, while potentially offering substantial advantages in terms of power and complexity, and are thus promising models for on-line learning in brain-inspired hardware.",
"title": ""
},
{
"docid": "83238b7ede9cc85090e44028e79375af",
"text": "Purpose – This paper aims to represent a capability model for industrial robot as they pertain to assembly tasks. Design/methodology/approach – The architecture of a real kit building application is provided to demonstrate how robot capabilities can be used to fully automate the planning of assembly tasks. Discussion on the planning infrastructure is done with the Planning Domain Definition Language (PDDL) for heterogeneous multi robot systems. Findings – The paper describes PDDL domain and problem files that are used by a planner to generate a plan for kitting. Discussion on the plan shows that the best robot is selected to carry out assembly actions. Originality/value – The author presents a robot capability model that is intended to be used for helping manufacturers to characterize the different capabilities their robots contribute to help the end user to select the appropriate robots for the appropriate tasks, selecting backup robots during robot’s failures to limit the deterioration of the system’s productivity and the products’ quality and limiting robots’ failures and increasing productivity by providing a tool to manufacturers that outputs a process plan that assigns the best robot to each task needed to accomplish the assembly.",
"title": ""
},
{
"docid": "88f6a0f18d32d9cf6da82ff730b22298",
"text": "In this letter, we propose an energy efficient power control scheme for resource sharing between cellular and device-to-device (D2D) users in cellular network assisted D2D communication. We take into account the circuit power consumption of the device-to-device user (DU) and aim at maximizing the DU's energy efficiency while guaranteeing the required throughputs of both the DU and the cellular user. Specifically, we define three different regions for the circuit power consumption of the DU and derive the optimal power control scheme for each region. Moreover, a distributed algorithm is proposed for implementation of the optimal power control scheme.",
"title": ""
},
{
"docid": "d5e3b7d29389990154b50087f5c13c88",
"text": "This paper presents two sets of features, shape representation and kinematic structure, for human activity recognition using a sequence of RGB-D images. The shape features are extracted using the depth information in the frequency domain via spherical harmonics representation. The other features include the motion of the 3D joint positions (i.e. the end points of the distal limb segments) in the human body. Both sets of features are fused using the Multiple Kernel Learning (MKL) technique at the kernel level for human activity recognition. Our experiments on three publicly available datasets demonstrate that the proposed features are robust for human activity recognition and particularly when there are similarities",
"title": ""
},
{
"docid": "815e0ad06fdc450aa9ba3f56ab19ab05",
"text": "A member of the Liliaceae family, garlic ( Allium sativum) is highly regarded throughout the world for both its medicinal and culinary value. Early men of medicine such as Hippocrates, Pliny and Aristotle encouraged a number of therapeutic uses for this botanical. Today, it is commonly used in many cultures as a seasoning or spice. Garlic also stands as the second most utilized supplement. With its sulfur containing compounds, high trace mineral content, and enzymes, garlic has shown anti-viral, anti-bacterial, anti-fungal and antioxidant abilities. Diseases that may be helped or prevented by garlic’s medicinal actions include Alzheimer’s Disease, cancer, cardiovascular disease (including atherosclerosis, strokes, hypertension, thrombosis and hyperlipidemias) children’s conditions, dermatologic applications, stress, and infections. Some research points to possible benefits in diabetes, drug toxicity, and osteoporosis.",
"title": ""
},
{
"docid": "ad53198bab3ad3002b965914f92ce3c9",
"text": "Adaptive Learning Algorithms for Transferable Visual Recognition by Judith Ho↵man Doctor of Philosophy in Engineering – Electrical Engineering and Computer Sciences University of California, Berkeley Professor Trevor Darrell, Chair Understanding visual scenes is a crucial piece in many artificial intelligence applications ranging from autonomous vehicles and household robotic navigation to automatic image captioning for the blind. Reliably extracting high-level semantic information from the visual world in real-time is key to solving these critical tasks safely and correctly. Existing approaches based on specialized recognition models are prohibitively expensive or intractable due to limitations in dataset collection and annotation. By facilitating learned information sharing between recognition models these applications can be solved; multiple tasks can regularize one another, redundant information can be reused, and the learning of novel tasks is both faster and easier. In this thesis, I present algorithms for transferring learned information between visual data sources and across visual tasks all with limited human supervision. I will both formally and empirically analyze the adaptation of visual models within the classical domain adaptation setting and extend the use of adaptive algorithms to facilitate information transfer between visual tasks and across image modalities. Most visual recognition systems learn concepts directly from a large collection of manually annotated images/videos. A model which detects pedestrians requires a human to manually go through thousands or millions of images and indicate all instances of pedestrians. However, this model is susceptible to biases in the labeled data and often fails to generalize to new scenarios a detector trained in Palo Alto may have degraded performance in Rome, or a detector trained in sunny weather may fail in the snow. Rather than require human supervision for each new task or scenario, this work draws on deep learning, transformation learning, and convex-concave optimization to produce novel optimization frameworks which transfer information from the large curated databases to real world scenarios.",
"title": ""
},
{
"docid": "79a3631f3ada452ad3193924071211dd",
"text": "The encoder-decoder model is widely used in natural language generation tasks. However, the model sometimes suffers from repeated redundant generation, misses important phrases, and includes irrelevant entities. Toward solving these problems we propose a novel source-side token prediction module. Our method jointly estimates the probability distributions over source and target vocabularies to capture a correspondence between source and target tokens. The experiments show that the proposed model outperforms the current state-of-the-art method in the headline generation task. Additionally, we show that our method has an ability to learn a reasonable token-wise correspondence without knowing any true alignments.",
"title": ""
},
{
"docid": "77b9d8a71d5bdd0afdf93cd525950496",
"text": "One of the main tasks of a dialog system is to assign intents to user utterances, which is a form of text classification. Since intent labels are application-specific, bootstrapping a new dialog system requires collecting and annotating in-domain data. To minimize the need for a long and expensive data collection process, we explore ways to improve the performance of dialog systems with very small amounts of training data. In recent years, word embeddings have been shown to provide valuable features for many different language tasks. We investigate the use of word embeddings in a text classification task with little training data. We find that count and vector features complement each other and their combination yields better results than either type of feature alone. We propose a simple alternative, vector extrema, to replace the usual averaging of a sentence’s vectors. We show how taking vector extrema is well suited for text classification and compare it against standard vector baselines in three different applications.",
"title": ""
},
{
"docid": "420fa81c2dbe77622108c978d5c6c019",
"text": "Reasoning about a scene's thermal signature, in addition to its visual appearance and spatial configuration, would facilitate significant advances in perceptual systems. Applications involving the segmentation and tracking of persons, vehicles, and other heat-emitting objects, for example, could benefit tremendously from even coarsely accurate relative temperatures. With the increasing affordability of commercially available thermal cameras, as well as the imminent introduction of new, mobile form factors, such data will be readily and widely accessible. However, in order for thermal processing to complement existing methods in RGBD, there must be an effective procedure for calibrating RGBD and thermal cameras to create RGBDT (red, green, blue, depth, and thermal) data. In this paper, we present an automatic method for the synchronization and calibration of RGBD and thermal cameras in arbitrary environments. While traditional calibration methods fail in our multimodal setting, we leverage invariant features visible by both camera types. We first synchronize the streams with a simple optimization procedure that aligns their motion statistic time series. We then find the relative poses of the cameras by minimizing an objective that measures the alignment between edge maps from the two streams. In contrast to existing methods that use special calibration targets with key points visible to both cameras, our method requires nothing more than some edges visible to both cameras, such as those arising from humans. We evaluate our method and demonstrate that it consistently converges to the correct transform and that it results in high-quality RGBDT data.",
"title": ""
},
{
"docid": "19863150313643b977f72452bb5a8a69",
"text": "Important research effort has been devoted to the topic of optimal planning of distribution systems. However, in general it has been mostly referred to the design of the primary network, with very modest considerations to the effect of the secondary network in the planning and future operation of the complete grid. Relatively little attention has been paid to the optimization of the secondary grid and to its effect on the optimality of the design of the complete electrical system, although the investment and operation costs of the secondary grid represent an important portion of the total costs. Appropriate design procedures have been proposed separately for both the primary and the secondary grid; however, in general, both planning problems have been presented and treated as different-almost isolated-problems, setting aside with this approximation some important factors that couple both problems, such as the fact that they may share the right of way, use the same poles, etc., among other factors that strongly affect the calculation of the investment costs. The main purpose of this work is the development and initial testing of a model for the optimal planning of a distribution system that includes both the primary and the secondary grids, so that a single optimization problem is stated for the design of the integral primary-secondary distribution system that overcomes these simplifications. The mathematical model incorporates the variables that define both the primary as well as the secondary planning problems and consists of a mixed integer-linear programming problem that may be solved by means of any suitable algorithm. Results are presented of the application of the proposed integral design procedure using conventional mixed integer-linear programming techniques to a real case of a residential primary-secondary distribution system consisting of 75 electrical nodes.",
"title": ""
}
] |
scidocsrr
|
22ad268ada2c230126faa965804de169
|
Phase distribution of software development effort
|
[
{
"docid": "2e9b2eccefe56b9cbf8d5793cc3f1cbb",
"text": "This paper summarizes several classes of software cost estimation models and techniques: parametric models, expertise-based techniques, learning-oriented techniques, dynamics-based models, regression-based models, and composite-Bayesian techniques for integrating expertisebased and regression-based models. Experience to date indicates that neural-net and dynamics-based techniques are less mature than the other classes of techniques, but that all classes of techniques are challenged by the rapid pace of change in software technology. The primary conclusion is that no single technique is best for all situations, and that a careful comparison of the results of several approaches is most likely to produce realistic estimates.",
"title": ""
}
] |
[
{
"docid": "56a7243414824a2e4ab3993dc3a90fbe",
"text": "The primary objectives of periodontal therapy are to maintain and to obtain health and integrity of the insertion apparatus and to re-establish esthetics by means of the quantitative and qualitative restoration of the gingival margin. Esthetics can be considered essential to the success of any dental procedure. However, in cleft lip and palate patients gingival esthetics do not play a relevant role, since most patients present little gingiva exposure (Mikami, 1990). The treatment protocol for cleft palate patients is complex and often requires a myriad of surgical and rehabilitative procedures that last until adulthood. In order to rehabilitate these patients and provide them with adequate physical and psychological conditions for a good quality of life, plastic surgery has been taking place since the 19th century, with the development of new techniques. By the age of six months the patients have undergone lip repair procedures (Bill, 1956; Jolleys, 1954), followed by palatoplasty at the age of 1218 months. As a consequence of these surgical interventions, the formation of innumerous scars and fibrous tissue in the anterior region may cause some sequels, such as orofacial growth alterations (Quarta and Koch, 1989; Ozawa, 2001), a shallow vestibule with lack of attached gingiva and gingival margin mobility (Falcone, 1966). A shallow vestibule in the cleft lip and palate patient is associated with the contraction of the upper lip during healing (Iino et al, 2001), which causes deleterious effects on growth, facial expression, speech, orthodontic and prosthetic treatment problems, diminished keratinized gingiva, bone graft resorption and changes in the upper lip muscle pattern. The surgical protocol at the Hospital for Rehabilitation of Craniofacial Anomalies (HRCA) in Bauru consists of carrying out primary surgeries (cheiloplasty and palatoplasty) during the first months of Periodontal Health Re-Establishment in Cleft Lip and Palate Patients through Vestibuloplasty Associated with Free Gingival Graft",
"title": ""
},
{
"docid": "a947380864130c898d15d7d34280825f",
"text": "Automatic oil tank detection plays a very important role for remote sensing image processing. To accomplish the task, a hierarchical oil tank detector with deep surrounding features is proposed in this paper. The surrounding features extracted by the deep learning model aim at making the oil tanks more easily to recognize, since the appearance of oil tanks is a circle and this information is not enough to separate targets from the complex background. The proposed method is divided into three modules: 1) candidate selection; 2) feature extraction; and 3) classification. First, a modified ellipse and line segment detector (ELSD) based on gradient orientation is used to select candidates in the image. Afterward, the feature combing local and surrounding information together is extracted to represent the target. Histogram of oriented gradients (HOG) which can reliably capture the shape information is extracted to characterize the local patch. For the surrounding area, the convolutional neural network (CNN) trained in ImageNet Large Scale Visual Recognition Challenge 2012 (ILSVRC2012) contest is applied as a blackbox feature extractor to extract rich surrounding feature. Then, the linear support vector machine (SVM) is utilized as the classifier to give the final output. Experimental results indicate that the proposed method is robust under different complex backgrounds and has high detection rate with low false alarm.",
"title": ""
},
{
"docid": "c0ec2818c7f34359b089acc1df5478c6",
"text": "Methods We searched Medline from Jan 1, 2009, to Nov 19, 2013, limiting searches to phase 3, randomised trials of patients with atrial fi brillation who were randomised to receive new oral anticoagulants or warfarin, and trials in which both effi cacy and safety outcomes were reported. We did a prespecifi ed meta-analysis of all 71 683 participants included in the RE-LY, ROCKET AF, ARISTOTLE, and ENGAGE AF–TIMI 48 trials. The main outcomes were stroke and systemic embolic events, ischaemic stroke, haemorrhagic stroke, all-cause mortality, myocardial infarction, major bleeding, intracranial haemorrhage, and gastrointestinal bleeding. We calculated relative risks (RRs) and 95% CIs for each outcome. We did subgroup analyses to assess whether diff erences in patient and trial characteristics aff ected outcomes. We used a random-eff ects model to compare pooled outcomes and tested for heterogeneity.",
"title": ""
},
{
"docid": "5ae974ffec58910ea3087aefabf343f8",
"text": "With the ever-increasing use of multimedia contents through electronic commerce and on-line services, the problems associated with the protection of intellectual property, management of large database and indexation of content are becoming more prominent. Watermarking has been considered as efficient means to these problems. Although watermarking is a powerful tool, there are some issues with the use of it, such as the modification of the content and its security. With respect to this, identifying content itself based on its own features rather than watermarking can be an alternative solution to these problems. The aim of fingerprinting is to provide fast and reliable methods for content identification. In this paper, we present a new approach for image fingerprinting using the Radon transform to make the fingerprint robust against affine transformations. Since it is quite easy with modern computers to apply affine transformations to audio, image and video, there is an obvious necessity for affine transformation resilient fingerprinting. Experimental results show that the proposed fingerprints are highly robust against most signal processing transformations. Besides robustness, we also address other issues such as pairwise independence, database search efficiency and key dependence of the proposed method. r 2003 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "2f3046369c717cc3dc15632fc163a429",
"text": "We propose FaceVR, a novel image-based method that enables video teleconferencing in VR based on self-reenactment. State-of-the-art face tracking methods in the VR context are focused on the animation of rigged 3D avatars (Li et al. 2015; Olszewski et al. 2016). Although they achieve good tracking performance, the results look cartoonish and not real. In contrast to these model-based approaches, FaceVR enables VR teleconferencing using an image-based technique that results in nearly photo-realistic outputs. The key component of FaceVR is a robust algorithm to perform real-time facial motion capture of an actor who is wearing a head-mounted display (HMD), as well as a new data-driven approach for eye tracking from monocular videos. Based on reenactment of a prerecorded stereo video of the person without the HMD, FaceVR incorporates photo-realistic re-rendering in real time, thus allowing artificial modifications of face and eye appearances. For instance, we can alter facial expressions or change gaze directions in the prerecorded target video. In a live setup, we apply these newly introduced algorithmic components.",
"title": ""
},
{
"docid": "de630d018f3ff24fad06976e8dc390fa",
"text": "A critical first step in navigation of unmanned aerial vehicles is the detection of the horizon line. This information can be used for adjusting flight parameters, attitude estimation as well as obstacle detection and avoidance. In this paper, a fast and robust technique for precise detection of the horizon is presented. Our approach is to apply convolutional neural networks to the task, training them to detect the sky and ground regions as well as the horizon line in flight videos. Thorough experiments using large datasets illustrate the significance and accuracy of this technique for various types of terrain as well as seasonal conditions.",
"title": ""
},
{
"docid": "ab231cbc45541b5bdbd0da82571b44ca",
"text": "ABSTRACT Evidence of Sedona magnetic anomaly and brainwave EEG synchronization can be demonstrated with portable equipment on site in the field, during sudden magnetic events. Previously, we have demonstrated magnetic anomaly charts recorded in both known and unrecognized Sedona vortex activity locations. We have also shown a correlation or amplification of vortex phenomena with Schumann Resonance. Adding the third measurable parameter of brain wave activity, we demonstrate resonance and amplification among them. We suggest tiny magnetic crystals, biogenic magnetite, make human beings highly sensitive to ELF field fluctuations. Biological Magnetite could act as a transducer of both low frequency magnetic fields and RF fields.",
"title": ""
},
{
"docid": "53e74115eceda124c28975cdaa8e4088",
"text": "Current state-of-the-art motion planners rely on samplingbased planning to explore the problem space for a solution. However, sampling valid configurations in narrow or cluttered workspaces remains a challenge. If a valid path for the robot correlates to a path in the workspace, then the planning process can employ a representation of the workspace that captures its salient topological features. Prior approaches have investigated exploiting geometric decompositions of the workspace to bias sampling; while beneficial in some environments, complex narrow passages remain challenging to navigate. In this work, we present Dynamic Region-biased RRT, a novel samplingbased planner that guides the exploration of a Rapidly-exploring Random Tree (RRT) by moving sampling regions along an embedded graph that captures the workspace topology. These sampling regions are dynamically created, manipulated, and destroyed to greedily bias sampling through unexplored passages that lead to the goal. We show that our approach reduces online planning time compared with related methods on a set of maze-like problems.",
"title": ""
},
{
"docid": "0d7ce42011c48232189c791e71c289f5",
"text": "RECENT WORK in virtue ethics, particularly sustained reflection on specific virtues, makes it possible to argue that the classical list of cardinal virtues (prudence, justice, temperance, and fortitude) is inadequate, and that we need to articulate the cardinal virtues more correctly. With that end in view, the first section of this article describes the challenges of espousing cardinal virtues today, the second considers the inadequacy of the classical listing of cardinal virtues, and the third makes a proposal. Since virtues, no matter how general, should always relate to concrete living, the article is framed by a case.",
"title": ""
},
{
"docid": "84e71d32b1f40eb59d63a0ec6324d79b",
"text": "Typically a classifier trained on a given dataset (source domain) does not performs well if it is tested on data acquired in a different setting (target domain). This is the problem that domain adaptation (DA) tries to overcome and, while it is a well explored topic in computer vision, it is largely ignored in robotic vision where usually visual classification methods are trained and tested in the same domain. Robots should be able to deal with unknown environments, recognize objects and use them in the correct way, so it is important to explore the domain adaptation scenario also in this context. The goal of the project is to define a benchmark and a protocol for multimodal domain adaptation that is valuable for the robot vision community. With this purpose some of the state-of-the-art DA methods are selected: Deep Adaptation Network (DAN), Domain Adversarial Training of Neural Network (DANN), Automatic Domain Alignment Layers (AutoDIAL) and Adversarial Discriminative Domain Adaptation (ADDA). Evaluations have been done using different data types: RGB only, depth only and RGB-D over the following datasets, designed for the robotic community: RGB-D Object Dataset (ROD), Web Object Dataset (WOD), Autonomous Robot Indoor Dataset (ARID), Big Berkeley Instance Recognition Dataset (BigBIRD) and Active Vision Dataset. Although progresses have been made on the formulation of effective adaptation algorithms and more realistic object datasets are available, the results obtained show that, training a sufficiently good object classifier, especially in the domain adaptation scenario, is still an unsolved problem. Also the best way to combine depth with RGB informations to improve the performance is a point that needs to be investigated more.",
"title": ""
},
{
"docid": "73e1b088461da774889ec2bd7ee2f524",
"text": "In this paper, we propose a method for obtaining sentence-level embeddings. While the problem of securing word-level embeddings is very well studied, we propose a novel method for obtaining sentence-level embeddings. This is obtained by a simple method in the context of solving the paraphrase generation task. If we use a sequential encoder-decoder model for generating paraphrase, we would like the generated paraphrase to be semantically close to the original sentence. One way to ensure this is by adding constraints for true paraphrase embeddings to be close and unrelated paraphrase candidate sentence embeddings to be far. This is ensured by using a sequential pair-wise discriminator that shares weights with the encoder that is trained with a suitable loss function. Our loss function penalizes paraphrase sentence embedding distances from being too large. This loss is used in combination with a sequential encoder-decoder network. We also validated our method by evaluating the obtained embeddings for a sentiment analysis task. The proposed method results in semantic embeddings and outperforms the state-of-the-art on the paraphrase generation and sentiment analysis task on standard datasets. These results are also shown to be statistically significant.",
"title": ""
},
{
"docid": "8c03df6650b3e400bc5447916d01820a",
"text": "People called night owls habitually have late bedtimes and late times of arising, sometimes suffering a heritable circadian disturbance called delayed sleep phase syndrome (DSPS). Those with DSPS, those with more severe progressively-late non-24-hour sleep-wake cycles, and those with bipolar disorder may share genetic tendencies for slowed or delayed circadian cycles. We searched for polymorphisms associated with DSPS in a case-control study of DSPS research participants and a separate study of Sleep Center patients undergoing polysomnography. In 45 participants, we resequenced portions of 15 circadian genes to identify unknown polymorphisms that might be associated with DSPS, non-24-hour rhythms, or bipolar comorbidities. We then genotyped single nucleotide polymorphisms (SNPs) in both larger samples, using Illumina Golden Gate assays. Associations of SNPs with the DSPS phenotype and with the morningness-eveningness parametric phenotype were computed for both samples, then combined for meta-analyses. Delayed sleep and \"eveningness\" were inversely associated with loci in circadian genes NFIL3 (rs2482705) and RORC (rs3828057). A group of haplotypes overlapping BHLHE40 was associated with non-24-hour sleep-wake cycles, and less robustly, with delayed sleep and bipolar disorder (e.g., rs34883305, rs34870629, rs74439275, and rs3750275 were associated with n=37, p=4.58E-09, Bonferroni p=2.95E-06). Bright light and melatonin can palliate circadian disorders, and genetics may clarify the underlying circadian photoperiodic mechanisms. After further replication and identification of the causal polymorphisms, these findings may point to future treatments for DSPS, non-24-hour rhythms, and possibly bipolar disorder or depression.",
"title": ""
},
{
"docid": "da7fc676542ccc6f98c36334d42645ae",
"text": "Extracting the defects of the road pavement in images is difficult and, most of the time, one image is used alone. The difficulties of this task are: illumination changes, objects on the road, artefacts due to the dynamic acquisition. In this work, we try to solve some of these problems by using acquisitions from different points of view. In consequence, we present a new methodology based on these steps : the detection of defects in each image, the matching of the images and the merging of the different extractions. We show the increase in performances and more particularly how the false detections are reduced.",
"title": ""
},
{
"docid": "347e7b80b2b0b5cd5f0736d62fa022ae",
"text": "This article presents the results of an interview study on how people perceive and play social network games on Facebook. During recent years, social games have become the biggest genre of games if measured by the number of registered users. These games are designed to cater for large audiences in their design principles and values, a free-to-play revenue model and social network integration that make them easily approachable and playable with friends. Although these games have made the headlines and have been seen to revolutionize the game industry, we still lack an understanding of how people perceive and play them. For this article, we interviewed 18 Finnish Facebook users from a larger questionnaire respondent pool of 134 people. This study focuses on a user-centric approach, highlighting the emergent experiences and the meaning-making of social games players. Our findings reveal that social games are usually regarded as single player games with a social twist, and as suffering partly from their design characteristics, while still providing a wide spectrum of playful experiences for different needs. The free-to-play revenue model provides an easy access to social games, but people disagreed with paying for additional content for several reasons.",
"title": ""
},
{
"docid": "6f0faf1a90d9f9b19fb2e122a26a0f77",
"text": "Social media shatters the barrier to communicate anytime anywhere for people of all walks of life. The publicly available, virtually free information in social media poses a new challenge to consumers who have to discern whether a piece of information published in social media is reliable. For example, it can be difficult to understand the motivations behind a statement passed from one user to another, without knowing the person who originated the message. Additionally, false information can be propagated through social media, resulting in embarrassment or irreversible damages. Provenance data associated with a social media statement can help dispel rumors, clarify opinions, and confirm facts. However, provenance data about social media statements is not readily available to users today. Currently, providing this data to users requires changing the social media infrastructure or offering subscription services. Taking advantage of social media features, research in this nascent field spearheads the search for a way to provide provenance data to social media users, thus leveraging social media itself by mining it for the provenance data. Searching for provenance data reveals an interesting problem space requiring the development and application of new metrics in order to provide meaningful provenance data to social media users. This lecture reviews the current research on information provenance, explores exciting research opportunities to address pressing needs, and shows how data mining can enable a social media user to make informed judgements about statements published in social media.",
"title": ""
},
{
"docid": "cf21fd00999dff7d974f39b99e71bb13",
"text": "Taking r > 0, let π2r(x) denote the number of prime pairs (p, p+ 2r) with p ≤ x. The prime-pair conjecture of Hardy and Littlewood (1923) asserts that π2r(x) ∼ 2C2r li2(x) with an explicit constant C2r > 0. There seems to be no good conjecture for the remainders ω2r(x) = π2r(x)−2C2r li2(x) that corresponds to Riemann’s formula for π(x)−li(x). However, there is a heuristic approximate formula for averages of the remainders ω2r(x) which is supported by numerical results.",
"title": ""
},
{
"docid": "ed66f39bda7ccd5c76f64543b5e3abd6",
"text": "BACKGROUND\nLoeys-Dietz syndrome is a recently recognized multisystemic disorder caused by mutations in the genes encoding the transforming growth factor-beta receptor. It is characterized by aggressive aneurysm formation and vascular tortuosity. We report the musculoskeletal demographic, clinical, and imaging findings of this syndrome to aid in its diagnosis and treatment.\n\n\nMETHODS\nWe retrospectively analyzed the demographic, clinical, and imaging data of sixty-five patients with Loeys-Dietz syndrome seen at one institution from May 2007 through December 2008.\n\n\nRESULTS\nThe patients had a mean age of twenty-one years, and thirty-six of the sixty-five patients were less than eighteen years old. Previous diagnoses for these patients included Marfan syndrome (sixteen patients) and Ehlers-Danlos syndrome (two patients). Spinal and foot abnormalities were the most clinically important skeletal findings. Eleven patients had talipes equinovarus, and nineteen patients had cervical anomalies and instability. Thirty patients had scoliosis (mean Cobb angle [and standard deviation], 30 degrees +/- 18 degrees ). Two patients had spondylolisthesis, and twenty-two of thirty-three who had computed tomography scans had dural ectasia. Thirty-five patients had pectus excavatum, and eight had pectus carinatum. Combined thumb and wrist signs were present in approximately one-fourth of the patients. Acetabular protrusion was present in approximately one-third of the patients and was usually mild. Fourteen patients had previous orthopaedic procedures, including scoliosis surgery, cervical stabilization, clubfoot correction, and hip arthroplasty. Features of Loeys-Dietz syndrome that are important clues to aid in making this diagnosis include bifid broad uvulas, hypertelorism, substantial joint laxity, and translucent skin.\n\n\nCONCLUSIONS\nPatients with Loeys-Dietz syndrome commonly present to the orthopaedic surgeon with cervical malformations, spinal and foot deformities, and findings in the craniofacial and cutaneous systems.\n\n\nLEVEL OF EVIDENCE\nTherapeutic Level IV. See Instructions to Authors for a complete description of levels of evidence.",
"title": ""
},
{
"docid": "20705a14783c89ac38693b2202363c1f",
"text": "This paper analyzes the effect of employee recognition, pay, and benefits on job satisfaction. In this cross-sectional study, survey responses from university students in the U.S. (n = 457), Malaysia (n = 347) and Vietnam (n = 391) were analyzed. Employee recognition, pay, and benefits were found to have a significant impact on job satisfaction, regardless of home country income level (high, middle or low income) and culture (collectivist or individualist). However, the effect of benefits on job satisfaction was significantly more important for U.S. respondents than for respondents from Malaysia and Vietnam. The authors conclude that both financial and nonfinancial rewards have a role in influencing job satisfaction, which ultimately impacts employee performance. Theoretical and practical implications for developing effective recruitment and retention policies for employees are also discussed.",
"title": ""
},
{
"docid": "d3e8dce306eb20a31ac6b686364d0415",
"text": "Lung diseases are the deadliest disease in the world. The computer aided detection system in lung diseases needed accurate lung segmentation to preplan the pulmonary treatment and surgeries. The researchers undergone the lung segmentation need a deep study and understanding of the traditional and recent papers developed in the lung segmentation field so that they can continue their research journey in an efficient way with successful outcomes. The need of reviewing the research papers is now a most wanted one for researches so this paper makes a survey on recent trends of pulmonary lung segmentation. Seven recent papers are carried out to analyze the performance characterization of themselves. The working methods, purpose for development, name of algorithm and drawbacks of the method are taken into consideration for the survey work. The tables and charts are drawn based on the reviewed papers. The study of lung segmentation research is more helpful to new and fresh researchers who are committed their research in lung segmentation.",
"title": ""
},
{
"docid": "43f9e6edee92ddd0b9dfff885b69f64d",
"text": "In this paper, we present a scalable and exact solution for probabilistic linear discriminant analysis (PLDA). PLDA is a probabilistic model that has been shown to provide state-of-the-art performance for both face and speaker recognition. However, it has one major drawback: At training time estimating the latent variables requires the inversion and storage of a matrix whose size grows quadratically with the number of samples for the identity (class). To date, two approaches have been taken to deal with this problem, to 1) use an exact solution that calculates this large matrix and is obviously not scalable with the number of samples or 2) derive a variational approximation to the problem. We present a scalable derivation which is theoretically equivalent to the previous nonscalable solution and thus obviates the need for a variational approximation. Experimentally, we demonstrate the efficacy of our approach in two ways. First, on labeled faces in the wild, we illustrate the equivalence of our scalable implementation with previously published work. Second, on the large Multi-PIE database, we illustrate the gain in performance when using more training samples per identity (class), which is made possible by the proposed scalable formulation of PLDA.",
"title": ""
}
] |
scidocsrr
|
288953db1746430bc33b3c5c3df583c9
|
Denoising image sequences does not require motion estimation
|
[
{
"docid": "69b631f179ea3c521f1dde75be537279",
"text": "A conceptually simple but effective noise smoothing algorithm is described. This filter is motivated by the sigma probability of the Gaussian distribution, and it smooths the image noise by averaging only those neighborhood pixels which have the intensities within a fixed sigma range of the center pixel. Consequently, image edges are preserved, and subtle details and thin tines such as roads are retained. The characteristics of this smoothing algorithm are analyzed and compared with several other known filtering algorithms by their ability to retain subtle details, preserving edge shapes, sharpening ramp edges, etc. The comparison also indicates that the sigma filter is the most computationally efficient filter among those evaluated. The filter can be easily extended into several forms which can be used in contrast enhancement, image segmentation, and smoothing signal-dependent noisy images. Several test images 128 X 128 and 256 X 256 pixels in size are used to substantiate its characteristics. The algorithm can be easily extended to 3-D image smoothing.",
"title": ""
},
{
"docid": "b5453d9e4385d5a5ff77997ad7e3f4f0",
"text": "We propose a new measure, the method noise, to evaluate and compare the performance of digital image denoising methods. We first compute and analyze this method noise for a wide class of denoising algorithms, namely the local smoothing filters. Second, we propose a new algorithm, the nonlocal means (NL-means), based on a nonlocal averaging of all pixels in the image. Finally, we present some experiments comparing the NL-means algorithm and the local smoothing filters.",
"title": ""
}
] |
[
{
"docid": "24167db00908c65558e8034d94dfb8da",
"text": "Due to the wide variety of devices used in computer network systems, cybersecurity plays a major role in securing and improving the performance of the network or system. Although cybersecurity has received a large amount of global interest in recent years, it remains an open research space. Current security solutions in network-based cyberspace provide an open door to attackers by communicating first before authentication, thereby leaving a black hole for an attacker to enter the system before authentication. This article provides an overview of cyberthreats, traditional security solutions, and the advanced security model to overcome current security drawbacks.",
"title": ""
},
{
"docid": "c0ee14083f779e3f4115f8b5fd822f67",
"text": "The booming popularity of smartphones is partly a result of application markets where users can easily download wide range of third-party applications. However, due to the open nature of markets, especially on Android, there have been several privacy and security concerns with these applications. On Google Play, as with most other markets, users have direct access to natural-language descriptions of those applications, which give an intuitive idea of the functionality including the security-related information of those applications. Google Play also provides the permissions requested by applications to access security and privacy-sensitive APIs on the devices. Users may use such a list to evaluate the risks of using these applications. To best assist the end users, the descriptions should reflect the need for permissions, which we term description-to-permission fidelity. In this paper, we present a system AutoCog to automatically assess description-to-permission fidelity of applications. AutoCog employs state-of-the-art techniques in natural language processing and our own learning-based algorithm to relate description with permissions. In our evaluation, AutoCog outperforms other related work on both performance of detection and ability of generalization over various permissions by a large extent. On an evaluation of eleven permissions, we achieve an average precision of 92.6% and an average recall of 92.0%. Our large-scale measurements over 45,811 applications demonstrate the severity of the problem of low description-to-permission fidelity. AutoCog helps bridge the long-lasting usability gap between security techniques and average users.",
"title": ""
},
{
"docid": "85353c8d4d414875cc08a0757d1d6430",
"text": "Contacts : [email protected], [email protected], [email protected], [email protected], [email protected], [email protected], [email protected] Summary Our long term goal is to promote mutual trust based on the expectations of both a pedestrian and an autonomous vehicle (AV). Our first step to accomplishing this is to propose a user study that leverages virtual reality (VR) to examine the effects of an autonomous vehicle’s driving behavior and situational characteristics on a pedestrian’s trust.",
"title": ""
},
{
"docid": "50f5e103666426113af680a0c2494921",
"text": "The security of information systems is the most important principle that can also be said that the most difficult, because security must be maintained throughout the system. At the beginning of this article we are going to introduce basic principles of security. The securities of distributed systems are divided into two parts: A portion of the communication between users and processes is concerned with examining issues such as authentication, message integrity and encryption will be discussed. In the next section, we will examine the guaranteed access permissions to resources in distributed systems. In addition to traditional access solutions, access control in mobile codes will be examined.",
"title": ""
},
{
"docid": "9b10757ca3ca84784033c20f064078b7",
"text": "Snafu, or Snake Functions, is a modular system to host, execute and manage language-level functions offered as stateless (micro-)services to diverse external triggers. The system interfaces resemble those of commercial FaaS providers but its implementation provides distinct features which make it overall useful to research on FaaS and prototyping of FaaSbased applications. This paper argues about the system motivation in the presence of already existing alternatives, its design and architecture, the open source implementation and collected metrics which characterise the system.",
"title": ""
},
{
"docid": "de38d178143e555170ecbc99f0495739",
"text": "In this article, we present a real-time 3D hybrid beamforming approach for 5G wireless networks. One of the key concepts in 5G cellular systems is the small cell network, which settles the high mobile traffic demand and provides uniform user-experienced data rates. The overall capacity of the small cell network can be enhanced with the enabling technology of 3D hybrid beamforming. This study validates the feasibility of 3D hybrid beamforming, mostly for link-level performance, through the implementation of a real-time testbed using a SDR platform and fabricated antenna array. Based on the measured data, we also investigate system-level performance to verify the gain of the proposed smart small cell system over LTE systems by performing system-level simulations based on a 3D ray-tracing tool.",
"title": ""
},
{
"docid": "fd721261c29395867ce3966bdaeeaa7a",
"text": "Cutaneous saltation provides interesting possibilities for applications. An illusion of vibrotactile mediolateral movement was elicited to a left dorsal forearm to investigate emotional (i.e., pleasantness) and cognitive (i.e., continuity) experiences to vibrotactile stimulation. Twelve participants were presented with nine saltatory stimuli delivered to a linearly aligned row of three vibrotactile actuators separated by 70 mm in distance. The stimuli were composed of three temporal parameters of 12, 24 and 48 ms for both burst duration and inter-burst interval to form all nine possible uniform pairs. First, the stimuli were ranked by the participants using a special three-step procedure. Second, the participants rated the stimuli using two nine-point bipolar scales measuring the pleasantness and continuity of each stimulus, separately. The results showed especially the interval between two successive bursts was a significant factor for saltation. Moreover, the temporal parameters seemed to affect more the experienced continuity of the stimuli compared to pleasantness. These findings encourage us to continue to further study the saltation and the effect of different parameters for subjective experience.",
"title": ""
},
{
"docid": "269e1c0d737beafd10560360049c6ee3",
"text": "There is no doubt that Social media has gained wider acceptability and usability and is also becoming probably the most important communication tools among students especially at the higher level of educational pursuit. As much as social media is viewed as having bridged the gap in communication that existed. Within the social media Facebook, Twitter and others are now gaining more and more patronage. These websites and social forums are way of communicating directly with other people socially. Social media has the potentials of influencing decision-making in a very short time regardless of the distance. On the bases of its influence, benefits and demerits this study is carried out in order to highlight the potentials of social media in the academic setting by collaborative learning and improve the students' academic performance. The results show that collaborative learning positively and significantly with interactive with peers, interactive with teachers and engagement which impact the students’ academic performance.",
"title": ""
},
{
"docid": "5c8242eabf1df5fb6c61f490dd2e3e5d",
"text": "In recent years, the capabilities and roles of Unmanned Aerial Vehicles (UAVs) have rapidly evolved, and their usage in military and civilian areas is extremely popular as a result of the advances in technology of robotic systems such as processors, sensors, communications, and networking technologies. While this technology is progressing, development and maintenance costs of UAVs are decreasing relatively. The focus is changing from use of one large UAV to use of multiple UAVs, which are integrated into teams that can coordinate to achieve high-level goals. This level of coordination requires new networking models that can be set up on highly mobile nodes such as UAVs in the fleet. Such networking models allow any two nodes to communicate directly if they are in the communication range, or indirectly through a number of relay nodes such as UAVs. Setting up an ad-hoc network between flying UAVs is a challenging issue, and requirements can differ from traditional networks, Mobile Ad-hoc Networks (MANETs) and Vehicular Ad-hoc Networks (VANETs) in terms of node mobility, connectivity, message routing, service quality, application areas, etc. This paper O. K. Sahingoz (B) Computer Engineering Department, Turkish Air Force Academy, Yesilyurt, Istanbul, 34149, Turkey e-mail: [email protected] identifies the challenges with using UAVs as relay nodes in an ad-hoc manner, introduces network models of UAVs, and depicts open research issues with analyzing opportunities and future work.",
"title": ""
},
{
"docid": "9e9be149fc44552b6ac9eb2d90d4a4ba",
"text": "In this work, a level set energy for segmenting the lungs from digital Posterior-Anterior (PA) chest x-ray images is presented. The primary challenge in using active contours for lung segmentation is local minima due to shading effects and presence of strong edges due to the rib cage and clavicle. We have used the availability of good contrast at the lung boundaries to extract a multi-scale set of edge/corner feature points and drive our active contour model using these features. We found these features when supplemented with a simple region based data term and a shape term based on the average lung shape, able to handle the above local minima issues. The algorithm was tested on 1130 clinical images, giving promising results.",
"title": ""
},
{
"docid": "ff4c2f1467a141894dbe76491bc06d3b",
"text": "Railways is the major means of transport in most of the countries. Rails are the backbone of the track structure and should be protected from defects. Surface defects are irregularities in the rails caused due to the shear stresses between the rails and wheels of the trains. This type of defects should be detected to avoid rail fractures. The objective of this paper is to propose an innovative technique to detect the surface defect on rail heads. In order to identify the defects, it is essential to extract the rails from the background and further enhance the image for thresholding. The proposed method uses Binary Image Based Rail Extraction (BIBRE) algorithm to extract the rails from the background. The extracted rails are enhanced to achieve uniform background with the help of direct enhancement method. The direct enhancement method enhance the image by enhancing the brightness difference between objects and their backgrounds. The enhanced rail image uses Gabor filters to identify the defects from the rails. The Gabor filters maximizes the energy difference between defect and defect less surface. Thresholding is done based on the energy of the defects. From the thresholded image the defects are identified and a message box is generated when there is a presence of defects.",
"title": ""
},
{
"docid": "404acd9265ae921e7454d4348ae45bda",
"text": "Wepresent a bitmap printingmethod and digital workflow usingmulti-material high resolution Additive Manufacturing (AM). Material composition is defined based on voxel resolution and used to fabricate a design object with locally varying material stiffness, aiming to satisfy the design objective. In this workflowvoxel resolution is set by theprinter’s native resolution, eliminating theneed for slicing andpath planning. Controlling geometry and material property variation at the resolution of the printer provides significantly greater control over structure–property–function relationships. To demonstrate the utility of the bitmap printing approach we apply it to the design of a customized prosthetic socket. Pressuresensing elements are concurrently fabricated with the socket, providing possibilities for evaluation of the socket’s fit. The level of control demonstrated in this study cannot be achieved using traditional CAD tools and volume-based AM workflows, implying that new CAD workflows must be developed in order to enable designers to harvest the capabilities of AM. © 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "b5a8577b02f7f44e9fc5abd706e096d4",
"text": "Automotive Safety Integrity Level (ASIL) decomposition is a technique presented in the ISO 26262: Road Vehicles Functional Safety standard. Its purpose is to satisfy safety-critical requirements by decomposing them into less critical ones. This procedure requires a system-level validation, and the elements of the architecture to which the decomposed requirements are allocated must be analyzed in terms of Common-Cause Faults (CCF). In this work, we present a generic method for a bottomup ASIL decomposition, which can be used during the development of a new product. The system architecture is described in a three-layer model, from which fault trees are generated, formed by the application, resource, and physical layers and their mappings. A CCF analysis is performed on the fault trees to verify the absence of possible common faults between the redundant elements and to validate the ASIL decomposition.",
"title": ""
},
{
"docid": "2be043b09e6dd631b5fe6f9eed44e2ec",
"text": "This article aims to contribute to a critical research agenda for investigating the democratic implications of citizen journalism and social news. The article calls for a broad conception of ‘citizen journalism’ which is (1) not an exclusively online phenomenon, (2) not confined to explicitly ‘alternative’ news sources, and (3) includes ‘metajournalism’ as well as the practices of journalism itself. A case is made for seeing democratic implications not simply in the horizontal or ‘peer-to-peer’ public sphere of citizen journalism networks, but also in the possibility of a more ‘reflexive’ culture of news consumption through citizen participation. The article calls for a research agenda that investigates new forms of gatekeeping and agendasetting power within social news and citizen journalism networks and, drawing on the example of three sites, highlights the importance of both formal and informal status differentials and of the software ‘code’ structuring these new modes of news",
"title": ""
},
{
"docid": "4551c05bbf8969d310d548d5a773f584",
"text": "Optical testing of advanced CMOS circuits successfully exploits the near-infrared photon emission by hot-carriers in transistor channels (see EMMI (Ng et al., 1999) and PICA (Kash and Tsang, 1997) (Song et al., 2005) techniques). However, due to the continuous scaling of features size and supply voltage, spontaneous emission is becoming fainter and optical circuit diagnostics becomes more challenging. Here we present the experimental characterization of hot-carrier luminescence emitted by transistors in four CMOS technologies from two different manufacturers. Aim of the research is to gain a better perspective on emission trends and dependences on technological parameters. In particular, we identify luminescence changes due to short-channel effects (SCE) and we ascertain that, for each technology node, there are two operating regions, for short- and long-channels. We highlight the emission reduction of p-FETs compared to n-FETs, due to a \"red-shift\" (lower energy) of the hot-carrier distribution. Eventually, we give perspectives about emission trends in actual and future technology nodes, showing that luminescence dramatically decreases with voltage, but it recovers strength when moving from older to more advanced technology generations. Such results extend the applicability of optical testing techniques, based on present single-photon detectors, to future low-voltage chips",
"title": ""
},
{
"docid": "612c5bfb0878588e5362d17a4ebc47ef",
"text": "There are two key issues in successfully solving the image restoration problem: 1) estimation of the regularization parameter that balances data fidelity with the regularity of the solution and 2) development of efficient numerical techniques for computing the solution. In this paper, we derive a fast algorithm that simultaneously estimates the regularization parameter and restores the image. The new approach is based on the total-variation (TV) regularized strategy and Morozov's discrepancy principle. The TV norm is represented by the dual formulation that changes the minimization problem into a minimax problem. A proximal point method is developed to compute the saddle point of the minimax problem. By adjusting the regularization parameter adaptively in each iteration, the solution is guaranteed to satisfy the discrepancy principle. We will give the convergence proof of our algorithm and numerically show that it is better than some state-of-the-art methods in terms of both speed and accuracy.",
"title": ""
},
{
"docid": "562cf2d0bc59f0fde4d7377f1d5058a2",
"text": "The fields of neuroscience and artificial intelligence (AI) have a long and intertwined history. In more recent times, however, communication and collaboration between the two fields has become less commonplace. In this article, we argue that better understanding biological brains could play a vital role in building intelligent machines. We survey historical interactions between the AI and neuroscience fields and emphasize current advances in AI that have been inspired by the study of neural computation in humans and other animals. We conclude by highlighting shared themes that may be key for advancing future research in both fields.",
"title": ""
},
{
"docid": "0e4917e7a9e1abe867811f8454cbcdc0",
"text": "Long videos can be played much faster than real-time by recording only one frame per second or by dropping all but one frame each second, i.e., by creating a timelapse. Unstable hand-held moving videos can be stabilized with a number of recently described methods. Unfortunately, creating a stabilized timelapse, or hyperlapse, cannot be achieved through a simple combination of these two methods. Two hyperlapse methods have been previously demonstrated: one with high computational complexity and one requiring special sensors. We present an algorithm for creating hyperlapse videos that can handle significant high-frequency camera motion and runs in real-time on HD video. Our approach does not require sensor data, thus can be run on videos captured on any camera. We optimally select frames from the input video that best match a desired target speed-up while also resulting in the smoothest possible camera motion. We evaluate our approach using several input videos from a range of cameras and compare these results to existing methods.",
"title": ""
},
{
"docid": "5a77a8a9e0a1ec5284d07140fff06f66",
"text": "Among the many challenges facing modern space physics today is the need for a visualisation and analysis package which can examine the results from the diversity of numerical and empirical computer models as well as observational data. Magnetohydrodynamic (MHD) models represent the latest numerical models of the complex Earth’s space environment and have the unique ability to span the enormous distances present in the magnetosphere from several hundred kilometres to several thousand kilometres above the Earth surface. This feature enables scientist to study complex structures of processes where otherwise only point measurements from satellites or ground-based instruments are available. Only by combining these observational data and the MHD simulations it is possible to enlarge the scope of the point-to-point observations and to fill the gaps left by measurements in order to get a full 3-D representation of the processes in our geospace environment. In this paper we introduce the VisAn MHD toolbox for Matlab as a tool for the visualisation and analysis of observational data and MHD simulations. We have created an easy to use tool which is capable of highly sophisticated visualisations and data analysis of the results from a diverse set of MHD models in combination with in situ measurements from satellites and groundbased instruments. The toolbox is being released under an open-source licensing agreement to facilitate and encourage community use and contribution.",
"title": ""
},
{
"docid": "95b93b1c67349e98fe803ee120193329",
"text": "This paper tackles the problem of segmenting things that could move from 3D laser scans of urban scenes. In particular, we wish to detect instances of classes of interest in autonomous driving applications - cars, pedestrians and bicyclists - amongst significant background clutter. Our aim is to provide the layout of an end-to-end pipeline which, when fed by a raw stream of 3D data, produces distinct groups of points which can be fed to downstream classifiers for categorisation. We postulate that, for the specific classes considered in this work, solving a binary classification task (i.e. separating the data into foreground and background first) outperforms approaches that tackle the multi-class problem directly. This is confirmed using custom and third-party datasets gathered of urban street scenes. While our system is agnostic to the specific clustering algorithm deployed we explore the use of a Euclidean Minimum Spanning Tree for an end-to-end segmentation pipeline and devise a RANSAC-based edge selection criterion.",
"title": ""
}
] |
scidocsrr
|
a3d87a32a073d061dd5a28f606b7006e
|
An empirical study of PHP feature usage: a static analysis perspective
|
[
{
"docid": "8c7ac806217e1ff497f7f76a5769bf7e",
"text": "Transforming text into executable code with a function such as JavaScript’s eval endows programmers with the ability to extend applications, at any time, and in almost any way they choose. But, this expressive power comes at a price: reasoning about the dynamic behavior of programs that use this feature becomes challenging. Any ahead-of-time analysis, to remain sound, is forced to make pessimistic assumptions about the impact of dynamically created code. This pessimism affects the optimizations that can be applied to programs and significantly limits the kinds of errors that can be caught statically and the security guarantees that can be enforced. A better understanding of how eval is used could lead to increased performance and security. This paper presents a large-scale study of the use of eval in JavaScript-based web applications. We have recorded the behavior of 337 MB of strings given as arguments to 550,358 calls to the eval function exercised in over 10,000 web sites. We provide statistics on the nature and content of strings used in eval expressions, as well as their provenance and data obtained by observing their dynamic behavior. eval is evil. Avoid it. eval has aliases. Don’t use them. —Douglas Crockford",
"title": ""
}
] |
[
{
"docid": "8ffd290907609be99ca25acee4fb2a87",
"text": "This paper introduces zero-shot dialog generation (ZSDG), as a step towards neural dialog systems that can instantly generalize to new situations with minimal data. ZSDG enables an end-to-end generative dialog system to generalize to a new domain for which only a domain description is provided and no training dialogs are available. Then a novel learning framework, Action Matching, is proposed. This algorithm can learn a cross-domain embedding space that models the semantics of dialog responses which, in turn, lets a neural dialog generation model generalize to new domains. We evaluate our methods on a new synthetic dialog dataset, and an existing human-human dialog dataset. Results show that our method has superior performance in learning dialog models that rapidly adapt their behavior to new domains and suggests promising future research.1",
"title": ""
},
{
"docid": "b5e811e4ae761c185c6e545729df5743",
"text": "Sleep assessment is of great importance in the diagnosis and treatment of sleep disorders. In clinical practice this is typically performed based on polysomnography recordings and manual sleep staging by experts. This procedure has the disadvantages that the measurements are cumbersome, may have a negative influence on the sleep, and the clinical assessment is labor intensive. Addressing the latter, there has recently been encouraging progress in the field of automatic sleep staging [1]. Furthermore, a minimally obtrusive method for recording EEG from electrodes in the ear (ear-EEG) has recently been proposed [2]. The objective of this study was to investigate the feasibility of automatic sleep stage classification based on ear-EEG. This paper presents a preliminary study based on recordings from a total of 18 subjects. Sleep scoring was performed by a clinical expert based on frontal, central and occipital region EEG, as well as EOG and EMG. 5 subjects were excluded from the study because of alpha wave contamination. In one subject the standard polysomnography was supplemented by ear-EEG. A single EEG channel sleep stage classifier was implemented using the same features and the same classifier as proposed in [1]. The performance of the single channel sleep classifier based on the scalp recordings showed an 85.7 % agreement with the manual expert scoring through 10-fold inter-subject cross validation, while the performance of the ear-EEG recordings was based on a 10-fold intra-subject cross validation and showed an 82 % agreement with the manual scoring. These results suggest that automatic sleep stage classification based on ear-EEG recordings may provide similar performance as compared to single channel scalp EEG sleep stage classification. Thereby ear-EEG may be a feasible technology for future minimal intrusive sleep stage classification.",
"title": ""
},
{
"docid": "b3f4473d11801d862a052a2ec91c71ab",
"text": "Plastics from waste electrical and electronic equipment (WEEE) have been an important environmental problem because these plastics commonly contain toxic halogenated flame retardants which may cause serious environmental pollution, especially the formation of carcinogenic substances polybrominated dibenzo dioxins/furans (PBDD/Fs), during treat process of these plastics. Pyrolysis has been proposed as a viable processing route for recycling the organic compounds in WEEE plastics into fuels and chemical feedstock. However, dehalogenation procedures are also necessary during treat process, because the oils collected in single pyrolysis process may contain numerous halogenated organic compounds, which would detrimentally impact the reuse of these pyrolysis oils. Currently, dehalogenation has become a significant topic in recycling of WEEE plastics by pyrolysis. In order to fulfill the better resource utilization of the WEEE plastics, the compositions, characteristics and dehalogenation methods during the pyrolysis recycling process of WEEE plastics were reviewed in this paper. Dehalogenation and the decomposition or pyrolysis of WEEE plastics can be carried out simultaneously or successively. It could be 'dehalogenating prior to pyrolysing plastics', 'performing dehalogenation and pyrolysis at the same time' or 'pyrolysing plastics first then upgrading pyrolysis oils'. The first strategy essentially is the two-stage pyrolysis with the release of halogen hydrides at low pyrolysis temperature region which is separate from the decomposition of polymer matrixes, thus obtaining halogenated free oil products. The second strategy is the most common method. Zeolite or other type of catalyst can be used in the pyrolysis process for removing organohalogens. The third strategy separate pyrolysis and dehalogenation of WEEE plastics, which can, to some degree, avoid the problem of oil value decline due to the use of catalyst, but obviously, this strategy may increase the cost of whole recycling process.",
"title": ""
},
{
"docid": "b3450073ad3d6f2271d6a56fccdc110a",
"text": "OBJECTIVE\nMindfulness-based therapies (MBTs) have been shown to be efficacious in treating internally focused psychological disorders (e.g., depression); however, it is still unclear whether MBTs provide improved functioning and symptom relief for individuals with externalizing disorders, including ADHD. To clarify the literature on the effectiveness of MBTs in treating ADHD and to guide future research, an effect-size analysis was conducted.\n\n\nMETHOD\nA systematic review of studies published in PsycINFO, PubMed, and Google Scholar was completed from the earliest available date until December 2014.\n\n\nRESULTS\nA total of 10 studies were included in the analysis of inattention and the overall effect size was d = -.66. A total of nine studies were included in the analysis of hyperactivity/impulsivity and the overall effect was calculated at d = -.53.\n\n\nCONCLUSION\nResults of this study highlight the possible benefits of MBTs in reducing symptoms of ADHD.",
"title": ""
},
{
"docid": "e76fc05d9fd195d39c382652ecb750f6",
"text": "A compact ultrawideband (UWB) multiple-input multiple-output (MIMO) antenna, with high isolation, is proposed for portable UWB MIMO systems. Two coplanar stripline-fed staircase-shaped radiating elements are connected back-to-back. The prototype is designed on a substrate of dielectric constant 4.4 with an overall dimension of 25 mm × 30 mm × 1.6 mm. This antenna configuration with an isolating metal strip placed in between the two radiating elements ensures high isolation in the entire UWB band. The proposed antenna exhibits a good 2:1 VSWR impedance bandwidth covering the entire UWB band (3.1-10.6 GHz) with a high isolation better than 20 dB, peak gain of 5.2 dBi, peak efficiency of 90%, and guaranteed value of envelope correlation coefficient (ECC) ≤0.1641.",
"title": ""
},
{
"docid": "9a05c95de1484df50a5540b31df1a010",
"text": "Resumen. Este trabajo trata sobre un sistema de monitoreo remoto a través de una pantalla inteligente para sensores de temperatura y corriente utilizando una red híbrida CAN−ZIGBEE. El CAN bus es usado como medio de transmisión de datos a corta distancia mientras que Zigbee es empleado para que cada nodo de la red pueda interactuar de manera inalámbrica con el nodo principal. De esta manera la red híbrida combina las ventajas de cada protocolo de comunicación para intercambiar datos. El sistema cuenta con cuatro nodos, dos son CAN y reciben la información de los sensores y el resto son Zigbee. Estos nodos están a cargo de transmitir la información de un nodo CAN de manera inalámbrica y desplegarla en una pantalla inteligente.",
"title": ""
},
{
"docid": "45974f33d79bf4d3af349877ef119508",
"text": "Generation of graspable three-dimensional objects applied for surgical planning, prosthetics and related applications using 3D printing or rapid prototyping is summarized and evaluated. Graspable 3D objects overcome the limitations of 3D visualizations which can only be displayed on flat screens. 3D objects can be produced based on CT or MRI volumetric medical images. Using dedicated post-processing algorithms, a spatial model can be extracted from image data sets and exported to machine-readable data. That spatial model data is utilized by special printers for generating the final rapid prototype model. Patient–clinician interaction, surgical training, medical research and education may require graspable 3D objects. The limitations of rapid prototyping include cost and complexity, as well as the need for specialized equipment and consumables such as photoresist resins. Medical application of rapid prototyping is feasible for specialized surgical planning and prosthetics applications and has significant potential for development of new medical applications.",
"title": ""
},
{
"docid": "7c0586335facd8388814f863e19e3d06",
"text": "OBJECTIVE\nWe reviewed randomized controlled trials of complementary and alternative medicine (CAM) treatments for depression, anxiety, and sleep disturbance in nondemented older adults.\n\n\nDATA SOURCES\nWe searched PubMed (1966-September 2006) and PsycINFO (1984-September 2006) databases using combinations of terms including depression, anxiety, and sleep; older adult/elderly; randomized controlled trial; and a list of 56 terms related to CAM.\n\n\nSTUDY SELECTION\nOf the 855 studies identified by database searches, 29 met our inclusion criteria: sample size >or= 30, treatment duration >or= 2 weeks, and publication in English. Four additional articles from manual bibliography searches met inclusion criteria, totaling 33 studies.\n\n\nDATA EXTRACTION\nWe reviewed identified articles for methodological quality using a modified Scale for Assessing Scientific Quality of Investigations (SASQI). We categorized a study as positive if the CAM therapy proved significantly more effective than an inactive control (or as effective as active control) on at least 1 primary psychological outcome. Positive and negative studies were compared on the following characteristics: CAM treatment category, symptom(s) assessed, country where the study was conducted, sample size, treatment duration, and mean sample age.\n\n\nDATA SYNTHESIS\n67% of the 33 studies reviewed were positive. Positive studies had lower SASQI scores for methodology than negative studies. Mind-body and body-based therapies had somewhat higher rates of positive results than energy- or biologically-based therapies.\n\n\nCONCLUSIONS\nMost studies had substantial methodological limitations. A few well-conducted studies suggested therapeutic potential for certain CAM interventions in older adults (e.g., mind-body interventions for sleep disturbances and acupressure for sleep and anxiety). More rigorous research is needed, and suggestions for future research are summarized.",
"title": ""
},
{
"docid": "4d57b0dbc36c2eb058285b4a5b6c102c",
"text": "OBJECTIVE\nThis study was planned to investigate the efficacy of neuromuscular rehabilitation and Johnstone Pressure Splints in the patients who had ataxic multiple sclerosis.\n\n\nMETHODS\nTwenty-six outpatients with multiple sclerosis were the subjects of the study. The control group (n = 13) was given neuromuscular rehabilitation, whereas the study group (n = 13) was treated with Johnstone Pressure Splints in addition.\n\n\nRESULTS\nIn pre- and posttreatment data, significant differences were found in sensation, anterior balance, gait parameters, and Expanded Disability Status Scale (p < 0.05). An important difference was observed in walking-on-two-lines data within the groups (p < 0.05). There also was a statistically significant difference in pendular movements and dysdiadakokinesia (p < 0.05). When the posttreatment values were compared, there was no significant difference between sensation, anterior balance, gait parameters, equilibrium and nonequilibrium coordination tests, Expanded Disability Status Scale, cortical onset latency, and central conduction time of somatosensory evoked potentials and motor evoked potentials (p > 0.05). Comparison of values revealed an important difference in cortical onset-P37 peak amplitude of somatosensory evoked potentials (right limbs) in favor of the study group (p < 0.05).\n\n\nCONCLUSIONS\nAccording to our study, it was determined that physiotherapy approaches were effective to decrease the ataxia. We conclude that the combination of suitable physiotherapy techniques is effective multiple sclerosis rehabilitation.",
"title": ""
},
{
"docid": "b4763eece86468bc7718fc98bac856dd",
"text": "The inception network has been shown to provide good performance on image classification problems, but there are not much evidences that it is also effective for the image restoration or pixel-wise labeling problems. For image restoration problems, the pooling is generally not used because the decimated features are not helpful for the reconstruction of an image as the output. Moreover, most deep learning architectures for the restoration problems do not use dense prediction that need lots of training parameters. From these observations, for enjoying the performance of inception-like structure on the image based problems we propose a new convolutional network-in-network structure. The proposed network can be considered a modification of inception structure where pool projection and pooling layer are removed for maintaining the entire feature map size, and a larger kernel filter is added instead. Proposed network greatly reduces the number of parameters on account of removed dense prediction and pooling, which is an advantage, but may also reduce the receptive field in each layer. Hence, we add a larger kernel than the original inception structure for not increasing the depth of layers. The proposed structure is applied to typical image-to-image learning problems, i.e., the problems where the size of input and output are same such as skin detection, semantic segmentation, and compression artifacts reduction. Extensive experiments show that the proposed network brings comparable or better results than the state-of-the-art convolutional neural networks for these problems.",
"title": ""
},
{
"docid": "dbd3234f12aff3ee0e01db8a16b13cad",
"text": "Information visualization has traditionally limited itself to 2D representations, primarily due to the prevalence of 2D displays and report formats. However, there has been a recent surge in popularity of consumer grade 3D displays and immersive head-mounted displays (HMDs). The ubiquity of such displays enables the possibility of immersive, stereoscopic visualization environments. While techniques that utilize such immersive environments have been explored extensively for spatial and scientific visualizations, contrastingly very little has been explored for information visualization. In this paper, we present our considerations of layout, rendering, and interaction methods for visualizing graphs in an immersive environment. We conducted a user study to evaluate our techniques compared to traditional 2D graph visualization. The results show that participants answered significantly faster with a fewer number of interactions using our techniques, especially for more difficult tasks. While the overall correctness rates are not significantly different, we found that participants gave significantly more correct answers using our techniques for larger graphs.",
"title": ""
},
{
"docid": "b6a0fcd9ee49b3dbfccdfa88fd0f07a0",
"text": "Generating images from natural language is one of the primary applications of recent conditional generative models. Besides testing our ability to model conditional, highly dimensional distributions, text to image synthesis has many exciting and practical applications such as photo editing or computer-aided content creation. Recent progress has been made using Generative Adversarial Networks (GANs). This material starts with a gentle introduction to these topics and discusses the existent state of the art models. Moreover, I propose Wasserstein GAN-CLS, a new model for conditional image generation based on the Wasserstein distance which offers guarantees of stability. Then, I show how the novel loss function of Wasserstein GAN-CLS can be used in a Conditional Progressive Growing GAN. In combination with the proposed loss, the model boosts by 7.07% the best Inception Score (on the Caltech birds dataset) of the models which use only the sentence-level visual semantics. The only model which performs better than the Conditional Wasserstein Progressive growing GAN is the recently proposed AttnGAN which uses word-level visual semantics as well.",
"title": ""
},
{
"docid": "31d66211511ae35d71c7055a2abf2801",
"text": "BACKGROUND\nPrevious evidence points to a causal link between playing action video games and enhanced cognition and perception. However, benefits of playing other video games are under-investigated. We examined whether playing non-action games also improves cognition. Hence, we compared transfer effects of an action and other non-action types that required different cognitive demands.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nWe instructed 5 groups of non-gamer participants to play one game each on a mobile device (iPhone/iPod Touch) for one hour a day/five days a week over four weeks (20 hours). Games included action, spatial memory, match-3, hidden- object, and an agent-based life simulation. Participants performed four behavioral tasks before and after video game training to assess for transfer effects. Tasks included an attentional blink task, a spatial memory and visual search dual task, a visual filter memory task to assess for multiple object tracking and cognitive control, as well as a complex verbal span task. Action game playing eliminated attentional blink and improved cognitive control and multiple-object tracking. Match-3, spatial memory and hidden object games improved visual search performance while the latter two also improved spatial working memory. Complex verbal span improved after match-3 and action game training.\n\n\nCONCLUSION/SIGNIFICANCE\nCognitive improvements were not limited to action game training alone and different games enhanced different aspects of cognition. We conclude that training specific cognitive abilities frequently in a video game improves performance in tasks that share common underlying demands. Overall, these results suggest that many video game-related cognitive improvements may not be due to training of general broad cognitive systems such as executive attentional control, but instead due to frequent utilization of specific cognitive processes during game play. Thus, many video game training related improvements to cognition may be attributed to near-transfer effects.",
"title": ""
},
{
"docid": "99bd8339f260784fff3d0a94eb04f6f4",
"text": "Reinforcement learning algorithms discover policies that maximize reward, but do not necessarily guarantee safety during learning or execution phases. We introduce a new approach to learn optimal policies while enforcing properties expressed in temporal logic. To this end, given the temporal logic specification that is to be obeyed by the learning system, we propose to synthesize a reactive system called a shield. The shield monitors the actions from the learner and corrects them only if the chosen action causes a violation of the specification. We discuss which requirements a shield must meet to preserve the convergence guarantees of the learner. Finally, we demonstrate the versatility of our approach on several challenging reinforcement learning scenarios.",
"title": ""
},
{
"docid": "2e2a21ca1be2da2d30b1b2a92cd49628",
"text": "A new form of cloud computing, serverless computing, is drawing attention as a new way to design micro-services architectures. In a serverless computing environment, services are developed as service functional units. The function development environment of all serverless computing framework at present is CPU based. In this paper, we propose a GPU-supported serverless computing framework that can deploy services faster than existing serverless computing framework using CPU. Our core approach is to integrate the open source serverless computing framework with NVIDIA-Docker and deploy services based on the GPU support container. We have developed an API that connects the open source framework to the NVIDIA-Docker and commands that enable GPU programming. In our experiments, we measured the performance of the framework in various environments. As a result, developers who want to develop services through the framework can deploy high-performance micro services and developers who want to run deep learning programs without a GPU environment can run code on remote GPUs with little performance degradation.",
"title": ""
},
{
"docid": "e91dd3f9e832de48a27048a0efa1b67a",
"text": "Smart Home technology is the future of residential related technology which is designed to deliver and distribute number of services inside and outside the house via networked devices in which all the different applications & the intelligence behind them are integrated and interconnected. These smart devices have the potential to share information with each other given the permanent availability to access the broadband internet connection. Hence, Smart Home Technology has become part of IoT (Internet of Things). In this work, a home model is analyzed to demonstrate an energy efficient IoT based smart home. Several Multiphysics simulations were carried out focusing on the kitchen of the home model. A motion sensor with a surveillance camera was used as part of the home security system. Coupled with the home light and HVAC control systems, the smart system can remotely control the lighting and heating or cooling when an occupant enters or leaves the kitchen.",
"title": ""
},
{
"docid": "bf6c93ac774f8ae691d0de32e9cd3057",
"text": "We address deafness and directional hidden terminal problem that occur when MAC protocols are designed for directional antenna based wireless multi-hop networks. Deafness occurs when the transmitter fails to communicate to its intended receiver, because the receiver's antenna is oriented in a different direction. The directional hidden terminal problem occurs when the transmitter fails to hear a prior RTS/CTS exchange between another pair of nodes and cause collision by initiating a transmission to the receiver of the ongoing communication. Though directional antennas offer better spatial reuse, these problems can have a serious impact on network performance. In this paper, we study various scenarios in which these problems can occur and design a MAC protocol that solves them comprehensively using only a single channel and single radio interface. Current solutions in literature either do not address these issues comprehensively or use more than one radio/channel to solve them. We evaluate our protocol using detailed simulation studies. Simulation results indicate that our protocol can effectively address deafness and directional hidden terminal problem and increase network performance.",
"title": ""
},
{
"docid": "a78149e30a677c320cab3540d55adc4f",
"text": "We develop Markov topic models (MTMs), a novel family of generative probabilistic models that can learn topics simultaneously from multiple corpora, such as papers from different conferences. We apply Gaussian (Markov) random fields to model the correlations of different corpora. MTMs capture both the internal topic structure within each corpus and the relationships between topics across the corpora. We derive an efficient estimation procedure with variational expectation-maximization. We study the performance of our models on a corpus of abstracts from six different computer science conferences. Our analysis reveals qualitative discoveries that are not possible with traditional topic models, and improved quantitative performance over the state of the art.",
"title": ""
},
{
"docid": "2fd42b61615dce7e9604b482f16dfa73",
"text": "Wildlife species such as tigers and elephants are under the threat of poaching. To combat poaching, conservation agencies (“defenders”) need to (1) anticipate where the poachers are likely to poach and (2) plan effective patrols. We propose an anti-poaching tool CAPTURE (Comprehensive Anti-Poaching tool with Temporal and observation Uncertainty REasoning), which helps the defenders achieve both goals. CAPTURE builds a novel hierarchical model for poacher-patroller interaction. It considers the patroller’s imperfect detection of signs of poaching, the complex temporal dependencies in the poacher's behaviors and the defender’s lack of knowledge of the number of poachers. Further, CAPTURE uses a new game-theoretic algorithm to compute the optimal patrolling strategies and plan effective patrols. This paper investigates the computational challenges that CAPTURE faces. First, we present a detailed analysis of parameter separation and target abstraction, two novel approaches used by CAPTURE to efficiently learn the parameters in the hierarchical model. Second, we propose two heuristics – piece-wise linear approximation and greedy planning – to speed up the computation of the optimal patrolling strategies. We discuss in this paper the lessons learned from using CAPTURE to analyze real-world poaching data collected over 12 years in Queen Elizabeth National Park in Uganda. Introduction Wildlife poaching presents a significant threat to large-bodied animal species. It is one major driver of the population declines of key wildlife species such as tigers, elephants, and rhinos, which are crucial to the functioning of natural ecosystems as well as local and national economies [1, 2]. Poachers illegally catch wildlife by placing snares or hunting. To combat poaching, both government and non-government agencies send well-trained patrollers to wildlife conservation areas. In this work, we focus on snare poaching. The patrollers conduct patrols with the aim of preventing poachers from poaching animals either by catching the poachers or by removing animal traps set by the poachers. Signs of poaching are collected and recorded during the patrols, including snares, traps and other signs such as poacher tracks, which can be used together with other domain features such as animal density or slope of the terrain to analyze and predict the poachers' behavior [3, 4]. It is critical to learn the poachers' behavior, anticipate where the poachers would go for poaching, and further use such information to guide future patrols and make them more effective. Poachers’ behavior is adaptive to patrols as evidenced by multiple studies [5, 6, 7]. Instead of falling into a static pattern, the distribution of poaching activities can be affected by ranger patrols as the poachers will take the patrol locations into account when making decisions. As a result, the rangers should also consider such dynamics when planning the patrols. Such strategic interaction between the conservation agencies and the poachers make game theory an appropriate framework for the problem. Stackelberg Security Games (SSGs) in computational game theory have been successfully applied to various infrastructure security problems in which the defender",
"title": ""
}
] |
scidocsrr
|
981afbb61e0060c0e3f5ea4f84abbd63
|
Holographic localization of passive UHF RFID transponders
|
[
{
"docid": "22c3eb9aa0127e687f6ebb6994fc8d1d",
"text": "In this paper, the novel inverse synthetic aperture secondary radar wireless positioning technique is introduced. The proposed concept allows for a precise spatial localization of a backscatter transponder even in dense multipath environments. A novel secondary radar signal evaluation concept compensates for the unknown modulation phase of the returned signal and thus leads to radar signals comparable to common primary radar. With use of this concept, inverse synthetic aperture radar algorithms can be applied to the signals of backscatter transponder systems. In simulations and first experiments, we used a broadband holographic reconstruction principle to realize the inverse synthetic aperture approach. The movement of the transponder along a short arbitrary aperture path is determined with assisting relative sensors (dead reckoning or inertia sensors). A set of signals measured along the aperture is adaptively focused to the transponder position. By this focusing technique, multipath reflections can be suppressed impressively and a precise indoor positioning becomes feasible. With our technique, completely new and powerful options for integrated navigation and sensor fusion in RF identification systems and wireless local positioning systems are now possible.",
"title": ""
}
] |
[
{
"docid": "54d223a2a00cbda71ddf3f1b29f1ebed",
"text": "Much of the data of scientific interest, particularly when independence of data is not assumed, can be represented in the form of information networks where data nodes are joined together to form edges corresponding to some kind of associations or relationships. Such information networks abound, like protein interactions in biology, web page hyperlink connections in information retrieval on the Web, cellphone call graphs in telecommunication, co-authorships in bibliometrics, crime event connections in criminology, etc. All these networks, also known as social networks, share a common property, the formation of connected groups of information nodes, called community structures. These groups are densely connected nodes with sparse connections outside the group. Finding these communities is an important task for the discovery of underlying structures in social networks, and has recently attracted much attention in data mining research. In this paper, we present Top Leaders, a new community mining approach that, simply put, regards a community as a set of followers congregating around a potential leader. Our algorithm starts by identifying promising leaders in a given network then iteratively assembles followers to their closest leaders to form communities, and subsequently finds new leaders in each group around which to gather followers again until convergence. Our intuitions are based on proven observations in social networks and the results are very promising. Experimental results on benchmark networks verify the feasibility and effectiveness of our new community mining approach.",
"title": ""
},
{
"docid": "a2e8ece304e6300d399a4ef38d282623",
"text": "So, they had to be simple, apply to a broad range of systems, and yet exhibit good resolution. Obviously a simple task, but first let’s look at what it means to be autonomous. he recently released DoD Unmanned Aerial Vehicles map [9] discusses advancements in UAV autonomy in of autonomous control levels (ACL). The ACL concept pioneered by researchers in the Air Force Research ratory’s Air Vehicles Directorate who are charged with loping autonomous air vehicles. In the process of loping intelligent autonomous agents for UAV control ms we were constantly challenged to “tell us how omous a UAV is, and how do you think it can be ured...” Usually we hand-waved away the argument and d the questioner will go away since this is a very subjective, complicated, subject, but within the last year we’ve been ted to develop national intelligent autonomous UAV control cs an IQ test for the flyborgs, if you will. The ACL chart result. We’ve done this via intense discussions with other rnment labs and industry, and this paper covers the agreed cs (an extension of the OODA observe, orient, decide, and loop) as well as the precursors, “dead-ends”, and out-andlops investigated to get there. 2. Quick Difference Between Autonomous and Automatic (our definition) Many people don’t realize that there is a significant difference between the words autonomous and automatic. Many news and trade articles use these words interchangeably. Automatic means that a system will do exactly as programmed, it has no choice. Autonomous means that a system has a choice to make free of outside influence, i.e., an autonomous system has free will. For instance, let’s compare functions of an automatic system (autopilot) and an autonomous guidance system: • Autopilot: Stay on course chosen. • Autonomous Guidance: Decide which course to take, then stay on it. eywords: autonomy metrics, machine intelligence ics, UAV, autonomous control Example: a cruise missile is not autonomous, but automatic since all choices have been made prior to launch. Background p levels of the US Department of Defense an effort been initiated to coordinate researchers across the ices and industry in meeting national goals in fixedvehicle development. The Fixed-Wing Vehicle ative (FWV) has broad goals across numerous vehicle nologies. One of those areas is mission management AVs. Our broad goal is to develop the technology ing UAVs to replace human piloted aircraft for any eivable mission. This implies that we have to give s some level of autonomy to accomplish the ions. One of the cornerstones of the FWV process is stablishment of metrics so one know that a goal is hed, but what metrics were available for measuring autonomy? Our research, in conjunction with stry, determined that there was not any sort of metric e desired. Thus we set out to define our own [Note 3. We Need To Measure Autonomy, Not Intelligence For some reason people tend to equate autonomy to intelligence. Looking through the proceedings of the last NIST Intelligent Systems Workshop there are several papers which do this, and in fact, the entire conference sets the tone that “intelligence is autonomy” [3]. They are not the same. Many stupid things are quite autonomous (bacteria) and many very smart things are not (my 3 year old daughter seemingly most of the time). Intelligence (one of a myriad of definitions) is the capability of discovering knowledge and using it to do something. Autonomy is: • the ability to generate one’s own purposes without any instruction from outside (L. 
Fogel) what characteristics should these metrics have? We ded that they needed to be: • having free will (B. Clough) What we want to know is how well a UAV will do a task, or better yet, develop tasks to reach goals, when we’re not around to do it for the UAV. We really don’t care how intelligent it is, just that it does the job assigned. Therefore, intelligence measures tell us little. So, although we could talk about the Turing Test [1] and other intelligence metrics, that is not what we wanted. Easily visualized such that upper management could grasp the concepts in a couple of briefing slides. Broad enough to measure past, present and future autonomous system development. Have enough resolution to easily track impact of technological program investments. Report Documentation Page Form Approved",
"title": ""
},
{
"docid": "444472f7c11a35a747b50bc9ffc7fea7",
"text": "Deep Neural Networks (DNNs) are very popular these days, and are the subject of a very intense investigation. A DNN is made by layers of internal units (or neurons), each of which computes an affine combination of the output of the units in the previous layer, applies a nonlinear operator, and outputs the corresponding value (also known as activation). A commonly-used nonlinear operator is the so-called rectified linear unit (ReLU), whose output is just the maximum between its input value and zero. In this (and other similar cases like max pooling, where the max operation involves more than one input value), one can model the DNN as a 0-1 Mixed Integer Linear Program (0-1 MILP) where the continuous variables correspond to the output values of each unit, and a binary variable is associated with each ReLU to model its yes/no nature. In this paper we discuss the peculiarity of this kind of 0-1 MILP models, and describe an effective bound-tightening technique intended to ease its solution. We also present possible applications of the 0-1 MILP model arising in feature visualization and in the construction of adversarial examples. Preliminary computational results are reported, aimed at investigating (on small DNNs) the computational performance of a state-of-the-art MILP solver when applied to a known test case, namely, hand-written digit recognition.",
"title": ""
},
{
"docid": "29e5afc2780455b398e7b7451c08e39f",
"text": "The recent special report, “Indiana’s Vision of Response to Intervention” issued by the Center for Evaluation & Education Policy (CEEP) was the first of a three-part series aimed to build a fundamental understanding of a Response-to-Intervention (RTI) framework in Indiana’s schools to aid in the prevention and intervention of both academic and behavioral problems for all students. The report also discussed the impetus for implementation of RTI, as well as what the state of Indiana is currently doing to respond to and guide schools through this new initiative. Specifically, Indiana’s Department of Education (IDOE) has developed a framework of RTI that addresses six core components on which to focus: (1) evidence-based curriculum, instruction, intervention and extension; (2) assessment and progress monitoring; (3) data-based decision making; (4) leadership; (5) family, school, and community partnerships; and (6) cultural responsivity.",
"title": ""
},
{
"docid": "881325bbeb485fc405c2cb77f9a12dfb",
"text": "Drawing on social capital theory, this study examined whether college students’ self-disclosure on a social networking site was directly associated with social capital, or related indirectly through the degree of positive feedback students got from Internet friends. Structural equation models applied to anonymous, self-report survey data from 264 first-year students at 3 universities in Beijing, China, indicated direct effects on bridging social capital and indirect effects on bonding social capital. Effects remained significant, though modest in magnitude, after controlling for social skills level. Findings suggest ways in which social networking sites can foster social adjustment as an adolescent transition to residential col-",
"title": ""
},
{
"docid": "eeff8964179ebd51745fece9b2fd50f3",
"text": "In this paper, we present a novel structure-preserving image completion approach equipped with dynamic patches. We formulate the image completion problem into an energy minimization framework that accounts for coherence within the hole and global coherence simultaneously. The completion of the hole is achieved through iterative optimizations combined with a multi-scale solution. In order to avoid abnormal structure and disordered texture, we utilize a dynamic patch system to achieve efficient structure restoration. Our dynamic patch system functions in both horizontal and vertical directions of the image pyramid. In the horizontal direction, we conduct a parallel search for multi-size patches in each pyramid level and design a competitive mechanism to select the most suitable patch. In the vertical direction, we use large patches in higher pyramid level to maximize the structure restoration and use small patches in lower pyramid level to reduce computational workload. We test our approach on massive images with complex structure and texture. The results are visually pleasing and preserve nice structure. Apart from effective structure preservation, our approach outperforms previous state-of-the-art methods in time consumption.",
"title": ""
},
{
"docid": "853f361ca9e99c120a0462d957da12df",
"text": "This study examined Japanese speakers’ learning of American English during their first years of immersion in the United States (U.S.). Native Japanesespeaking (NJ) children (n=16) and adults (n=16) were tested on two occasions, averaging 0.5 (T1) and 1.6 years (T2) after arrival in the U.S. Age-matched groups of native English-speaking children (n=16) and adults (n=16) also participated. The NJ adults’ scores for segmental perception and production were higher than the NJ children’s at T1. The NJ children’s foreign accent scores and pronunciation of English fricatives improved significantly between T1 and T2, whereas the NJ adults’ scores did not. These findings suggest that adults may have an advantage over children initially but that children may improve in production more quickly than adults when immersed in an L2-speaking environment.",
"title": ""
},
{
"docid": "051f9518ffd3d5b3073a70cb2d4fe683",
"text": "This paper describes a novel Byzantine fault tolerant protocol that allows replicas to join and exit dynamically. With the astonishing success of cryptocurrencies, people attach great importance in “blockchain” and robust Byzantine fault tolerant (BFT) protocols for consensus. Among the conventional wisdom, the Practical Byzantine Fault Tolerance (PBFT), proposed by Miguel and Liskov in 1999, occupies an important position. Although PBFT has many advantages, it has fatal disadvantages. Firstly, it works in a completely enclosed environment, where users who want to add or take out any node must stop the whole system. Secondly, although PBFT guarantees liveness and safety if at most $\\left\\lfloor {\\frac{{{\\rm{n}} - 1}}{3}} \\right\\rfloor$ out of a total n replicas are faulty, it takes no measure to deal with these ineffective or malicious replicas, which is harmful to the system and will cause system crash finally. These drawbacks are unbearable in practice. In order to solve them, we present an alternative, Dynamic PBFT.",
"title": ""
},
{
"docid": "d2f7f7a355f133a8e5f40c67ca42a076",
"text": "In present times, giving a computer to carry out any task requires a set of specific instructions or the implementation of an algorithm that defines the rules that need to be followed. The present day computer system has no ability to learn from past experiences and hence cannot readily improve on the basis of past mistakes. So, giving a computer or instructing a computer controlled programme to perform a task requires one to define a complete and correct algorithm for task and then programme the algorithm into the computer. Such activities involve tedious and time consuming effort by specially trained teacher or person. Jaime et al (Jaime G. Carbonell, 1983) also explained that the present day computer systems cannot truly learn to perform a task through examples or through previous solved task and they cannot improve on the basis of past mistakes or acquire new abilities by observing and imitating experts. Machine Learning research endeavours to open the possibility of instruction the computer in such a new way and thereby promise to ease the burden of hand writing programmes and growing problems of complex information that get complicated in the computer. When approaching a task-oriented acquisition task, one must be aware that the resultant computer system must interact with human and therefore should closely match human abilities. So, learning machine or programme on the other hand will have to interact with computer users who make use of them and consequently the concept and skills they acquireif not necessarily their internal mechanism must be understandable to humans. Also Alpaydin (Alpaydin, 2004) stated that with advances in computer technology, we currently have the ability to store and process large amount of data, as well as access it from physically distant locations over computer network. Most data acquisition devices are digital now and record reliable data. For example, a supermarket chain that has hundreds of stores all over the country selling thousands of goods to millions of customers. The point of sale terminals record the details of each transaction: date, customer identification code, goods bought and their amount, total money spent and so forth, This typically amounts to gigabytes of data every day. This store data becomes useful only when it is analysed and tuned into information that can be used or be predicted. We do not know exactly which people are likely to buy a particular product or which author to suggest to people who enjoy reading Hemingway. If we knew, we would not need any analysis of the data; we would just go ahead and write down code. But because we do not, we can only collect data and hope to extract the answers to these and similar question from 1",
"title": ""
},
{
"docid": "f3e83402c548ea1b5bcdaab4eb123ace",
"text": "This paper advocates a novel multiscale, structure-sensitive saliency detection method, which can distinguish multilevel, reliable saliency from various natural pictures in a robust and versatile way. One key challenge for saliency detection is to guarantee the entire salient object being characterized differently from nonsalient background. To tackle this, our strategy is to design a structure-aware descriptor based on the intrinsic biharmonic distance metric. One benefit of introducing this descriptor is its ability to simultaneously integrate local and global structure information, which is extremely valuable for separating the salient object from nonsalient background in a multiscale sense. Upon devising such powerful shape descriptor, the remaining challenge is to capture the saliency to make sure that salient subparts actually stand out among all possible candidates. Toward this goal, we conduct multilevel low-rank and sparse analysis in the intrinsic feature space spanned by the shape descriptors defined on over-segmented super-pixels. Since the low-rank property emphasizes much more on stronger similarities among super-pixels, we naturally obtain a scale space along the rank dimension in this way. Multiscale saliency can be obtained by simply computing differences among the low-rank components across the rank scale. We conduct extensive experiments on some public benchmarks, and make comprehensive, quantitative evaluation between our method and existing state-of-the-art techniques. All the results demonstrate the superiority of our method in accuracy, reliability, robustness, and versatility.",
"title": ""
},
{
"docid": "8942b664429435fbd66c765215bec284",
"text": "In this paper, we present a technique for generating animation from a variety of user-defined constraints. We pose constraint-based motion synthesis as a maximum a posterior (MAP) problem and develop an optimization framework that generates natural motion satisfying user constraints. The system automatically learns a statistical dynamic model from motion capture data and then enforces it as a motion prior. This motion prior, together with user-defined constraints, comprises a trajectory optimization problem. Solving this problem in the low-dimensional space yields optimal natural motion that achieves the goals specified by the user. We demonstrate the effectiveness of this approach by generating whole-body and facial motion from a variety of spatial-temporal constraints.",
"title": ""
},
{
"docid": "b0d2e7d1c9f347adc58cb88aead3fd07",
"text": "The promise of learning to learn for robotics rests on the hope that by extracting some information about the learning process itself we can speed up subsequent similar learning tasks. Here, we introduce a computationally efficient online meta-learning algorithm that builds and optimizes a memory model of the optimal learning rate landscape from previously observed gradient behaviors. While performing task specific optimization, this memory of learning rates predicts how to scale currently observed gradients. After applying the gradient scaling our meta-learner updates its internal memory based on the observed effect its prediction had. Our meta-learner can be combined with any gradient-based optimizer, learns on the fly and can be transferred to new optimization tasks. In our evaluations we show that our meta-learning algorithm speeds up learning of MNIST classification and a variety of learning control tasks, either in batch or online learning settings.",
"title": ""
},
{
"docid": "3c24165f70675be40ad42f8d8ce09c33",
"text": "This paper describes the design of a smart, motorized, voice controlled wheelchair using embedded system. Proposed design supports voice activation system for physically differently abled persons incorporating manual operation. This paper represents the “Voice-controlled Wheel chair” for the physically differently abled person where the voice command controls the movements of the wheelchair. The voice command is given through a cellular device having Bluetooth and the command is transferred and converted to string by the BT Voice Control for Arduino and is transferred to the Bluetooth Module SR-04connected to the Arduino board for the control of the Wheelchair. For example, when the user says „Go‟ then chair will move in forward direction and when he says „Back‟ then the chair will move in backward direction and similarly „Left‟, „Right‟ for rotating it in left and right directions respectively and „Stop‟ for making it stop. This system was designed and developed to save cost, time and energy of the patient. Ultrasonic sensor is also made a part of the design and it helps to detect obstacles lying ahead in the way of the wheelchair that can hinder the passage of the wheelchair.",
"title": ""
},
{
"docid": "2a41af8ad6000163951b9e7399ce7444",
"text": "Accurate location of the endpoints of an isolated word is important for reliable and robust word recognition. The endpoint detection problem is nontrivial for nonstationary backgrounds where artifacts (i.e., nonspeech events) may be introduced by the speaker, the recording environment, and the transmission system. Several techniques for the detection of the endpoints of isolated words recorded over a dialed-up telephone line were studied. The techniques were broadly classified as either explicit, implicit, or hybrid in concept. The explicit techniques for endpoint detection locate the endpoints prior to and independent of the recognition and decision stages of the system. For the implicit methods, the endpoints are determined solely by the recognition and decision stages Of the system, i.e., there is no separate stage for endpoint detection. The hybrid techniques incorporate aspects from both the explicit and implicit methods. Investigations showed that the hybrid techniques consistently provided the best estimates for both of the word endpoints and, correspondingly, the highest recognition accuracy of the three classes studied. A hybrid endpoint detector is proposed which gives a rejection rate of less than 0.5 percent, while providing recognition accuracy close to that obtained from hand-edited endpoints.",
"title": ""
},
{
"docid": "6d55978aa80f177f6a859a55380ffed8",
"text": "This paper investigates the effect of lowering the supply and threshold voltages on the energy efficiency of CMOS circuits. Using a first-order model of the energy and delay of a CMOS circuit, we show that lowering the supply and threshold voltage is generally advantageous, especially when the transistors are velocity saturated and the nodes have a high activity factor. In fact, for modern submicron technologies, this simple analysis suggests optimal energy efficiency at supply voltages under 0.5 V. Other process and circuit parameters have almost no effect on this optimal operating point. If there is some uncertainty in the value of the threshold or supply voltage, however, the power advantage of this very low voltage operation diminishes. Therefore, unless active feedback is used to control the uncertainty, in the future the supply and threshold voltage will not decrease drastically, but rather will continue to scale down to maintain constant electric fields.",
"title": ""
},
{
"docid": "0e32cef3d4f4e6bd23a3004a44b138a6",
"text": "There have been some works that learn a lexicon together with the corpus to improve the word embeddings. However, they either model the lexicon separately but update the neural networks for both the corpus and the lexicon by the same likelihood, or minimize the distance between all of the synonym pairs in the lexicon. Such methods do not consider the relatedness and difference of the corpus and the lexicon, and may not be the best optimized. In this paper, we propose a novel method that considers the relatedness and difference of the corpus and the lexicon. It trains word embeddings by learning the corpus to predicate a word and its corresponding synonym under the context at the same time. For polysemous words, we use a word sense disambiguation filter to eliminate the synonyms that have different meanings for the context. To evaluate the proposed method, we compare the performance of the word embeddings trained by our proposed model, the control groups without the filter or the lexicon, and the prior works in the word similarity tasks and text classification task. The experimental results show that the proposed model provides better embeddings for polysemous words and improves the performance for text classification.",
"title": ""
},
{
"docid": "27c0c6c43012139fc3e4ee64ae043c0b",
"text": "This paper presents a method for measuring signal backscattering from RFID tags, and for calculating a tag's radar cross section (RCS). We derive a theoretical formula for the RCS of an RFID tag with a minimum-scattering antenna. We describe an experimental measurement technique, which involves using a network analyzer connected to an anechoic chamber with and without the tag. The return loss measured in this way allows us to calculate the backscattered power and to find the tag's RCS. Measurements were performed using an RFID tag operating in the UHF band. To determine whether the tag was turned on, we used an RFID tag tester. The tag's RCS was also calculated theoretically, using electromagnetic simulation software. The theoretical results were found to be in good agreement with experimental data",
"title": ""
},
{
"docid": "c49ae120bca82ef0d9e94115ad7107f2",
"text": "An evaluation and comparison of seven of the world’s major building codes and standards is conducted in this study, with specific discussion of their estimations of the alongwind, acrosswind, and torsional response, where applicable, for a given building. The codes and standards highlighted by this study are those of the United States, Japan, Australia, the United Kingdom, Canada, China, and Europe. In addition, the response predicted by using the measured power spectra of the alongwind, acrosswind, and torsional responses for several building shapes tested in a wind tunnel are presented, and a comparison between the response predicted by wind tunnel data and that estimated by some of the standards is conducted. This study serves not only as a comparison of the response estimates by international codes and standards, but also introduces a new set of wind tunnel data for validation of wind tunnel-based empirical expressions. 1.0 Introduction Under the influence of dynamic wind loads, typical high-rise buildings oscillate in the alongwind, acrosswind, and torsional directions. The alongwind motion primarily results from pressure fluctuations on the windward and leeward faces, which generally follows the fluctuations in the approach flow, at least in the low frequency range. Therefore, alongwind aerodynamic loads may be quantified analytically utilizing quasi-steady and strip theories, with dynamic effects customarily represented by a random-vibrationbased “Gust Factor Approach” (Davenport 1967, Vellozzi & Cohen 1968, Vickery 1970, Simiu 1976, Solari 1982, ESDU 1989, Gurley & Kareem 1993). However, the acrosswind motion is introduced by pressure fluctuations on the side faces which are influenced by fluctuations in the separated shear layers and wake dynamics (Kareem 1982). This renders the applicability of strip and quasi-steady theories rather doubtful. Similarly, the wind-induced torsional effects result from an imbalance in the instantaneous pressure distribution on the building surface. These load effects are further amplified in asymmetric buildings as a result of inertial coupling (Kareem 1985). Due to the complexity of the acrosswind and torsional responses, physical modeling of fluid-structure interactions remains the only viable means of obtaining information on wind loads, though recently, research in the area of computational fluid dynam1. Graduate Student & Corresponding Author, NatHaz Modeling Laboratory, Department of Civil Engineering and Geological Sciences, University of Notre Dame, Notre Dame, IN, 46556. e-mail: [email protected] 2. Professor, NatHaz Modeling Laboratory, Department of Civil Engineering and Geological Sciences, University of Notre Dame, Notre Dame, IN, 46556",
"title": ""
},
{
"docid": "d9df79b3724820a12d5517f2dcdc33ff",
"text": "The tremendous growth of machine-tomachine (M2M) applications has been a great attractor to cellular network operators to provide machine-type communication services. One of the important challenges for cellular systems supporting M2M terminals is coverage, because terminals can be located in spaces in buildings and structures suffering from significant penetration losses. Since these terminals are also often stationary, they are permanently without cellular coverage. To address this critical issue, the third generation partnership project (3GPP), and in particular its radio access network technical specification group, commenced work on coverage enhancement (CE) for long-term evolution (LTE) systems in June 2013. This article reviews the CE objectives defined for LTE machine-type communication and presents CE methods for LTE downlink and uplink channels discussed in this group. The presented methods achieve CE in a spectrally efficient manner and without notably affecting performance for legacy (non- M2M) devices.",
"title": ""
},
{
"docid": "e103d3a7be2ce1933eac191d2324e85b",
"text": "With recent progress in the medical signals processing, the EEG allows to study the Brain functioning with a high temporal and spatial resolution. This approach is possible by combining the standard processing algorithms of cortical brain waves with characterization and interpolation methods. First, a new vector of characteristics for each EEG channel was introduced using the Extended Kalman filter (EKF). Next, the spherical spline interpolation technique was applied in order to rebuild other vectors corresponding to virtual electrodes. The temporal variation of these vectors was restored by applying the EKF. Finally, the accuracy of the method has been estimated by calculating the error between the actual and interpolated signal after passing by the characterization method with the Root Mean Square Error algorithm (RMSE).",
"title": ""
}
] |
scidocsrr
|
e0a8bda10c5595a4a07a428ce6dd2a29
|
A new image filtering method: Nonlocal image guided averaging
|
[
{
"docid": "b5453d9e4385d5a5ff77997ad7e3f4f0",
"text": "We propose a new measure, the method noise, to evaluate and compare the performance of digital image denoising methods. We first compute and analyze this method noise for a wide class of denoising algorithms, namely the local smoothing filters. Second, we propose a new algorithm, the nonlocal means (NL-means), based on a nonlocal averaging of all pixels in the image. Finally, we present some experiments comparing the NL-means algorithm and the local smoothing filters.",
"title": ""
}
] |
[
{
"docid": "a9612aacde205be2d753c5119b9d95d3",
"text": "We propose a multi-object multi-camera framework for tracking large numbers of tightly-spaced objects that rapidly move in three dimensions. We formulate the problem of finding correspondences across multiple views as a multidimensional assignment problem and use a greedy randomized adaptive search procedure to solve this NP-hard problem efficiently. To account for occlusions, we relax the one-to-one constraint that one measurement corresponds to one object and iteratively solve the relaxed assignment problem. After correspondences are established, object trajectories are estimated by stereoscopic reconstruction using an epipolar-neighborhood search. We embedded our method into a tracker-to-tracker multi-view fusion system that not only obtains the three-dimensional trajectories of closely-moving objects but also accurately settles track uncertainties that could not be resolved from single views due to occlusion. We conducted experiments to validate our greedy assignment procedure and our technique to recover from occlusions. We successfully track hundreds of flying bats and provide an analysis of their group behavior based on 150 reconstructed 3D trajectories.",
"title": ""
},
{
"docid": "e70c6ccc129f602bd18a49d816ee02a9",
"text": "This purpose of this paper is to show how prevalent features of successful human tutoring interactions can be integrated into a pedagogical agent, AutoTutor. AutoTutor is a fully automated computer tutor that responds to learner input by simulating the dialog moves of effective, normal human tutors. AutoTutor’s delivery of dialog moves is organized within a 5step framework that is unique to normal human tutoring interactions. We assessed AutoTutor’s performance as an effective tutor and conversational partner during tutoring sessions with virtual students of varying ability levels. Results from three evaluation cycles indicate the following: (1) AutoTutor is capable of delivering pedagogically effective dialog moves that mimic the dialog move choices of human tutors, and (2) AutoTutor is a reasonably effective conversational partner. INTRODUCTION AND BACKGROUND Over the last decade a number of researchers have attempted to uncover the mechanisms of human tutoring that are responsible for student learning gains. Many of the informative findings have been reported in studies that have systematically analyzed the collaborative discourse that occurs between tutors and students (Fox, 1993; Graesser & Person, 1994; Graesser, Person, & Magliano, 1995; Hume, Michael, Rovick, & Evens, 1996; McArthur, Stasz, & Zmuidzinas, 1990; Merrill, Reiser, Ranney, & Trafton, 1992; Moore, 1995; Person & Graesser, 1999; Person, Graesser, Magliano, & Kreuz, 1994; Person, Kreuz, Zwaan, & Graesser, 1995; Putnam, 1987). For example, we have learned that the tutorial session is predominately controlled by the tutor. That is, tutors, not students, typically determine when and what topics will be covered in the session. Further, we know that human tutors rarely employ sophisticated or “ideal” tutoring models that are often incorporated into intelligent tutoring systems. Instead, human tutors are more likely to rely on localized strategies that are embedded within conversational turns. Although many findings such as these have illuminated the tutoring process, they present formidable challenges for designers of intelligent tutoring systems. After all, building a knowledgeable conversational partner is no small feat. However, if designers of future tutoring systems wish to capitalize on the knowledge gained from human tutoring studies, the next generation of tutoring systems will incorporate pedagogical agents that engage in learning dialogs with students. The purpose of this paper is twofold. First, we will describe how prevalent features of successful human tutoring interactions can be incorporated into a pedagogical agent, AutoTutor. Second, we will provide data from several preliminary performance evaluations in which AutoTutor interacts with virtual students of varying ability levels. Person, Graesser, Kreuz, Pomeroy, and the Tutoring Research Group AutoTutor is a fully automated computer tutor that is currently being developed by the Tutoring Research Group (TRG). AutoTutor is a working system that attempts to comprehend students’ natural language contributions and then respond to the student input by simulating the dialogue moves of human tutors. AutoTutor differs from other natural language tutors in several ways. 
First, AutoTutor does not restrict the natural language input of the student like other systems (e.g., Adele (Shaw, Johnson, & Ganeshan, 1999); the Ymir agents (Cassell & Thórisson, 1999); Cirscim-Tutor (Hume, Michael, Rovick, & Evens, 1996; Zhou et al., 1999); Atlas (Freedman, 1999); and Basic Electricity and Electronics (Moore, 1995; Rose, Di Eugenio, & Moore, 1999)). These systems tend to limit student input to a small subset of judiciously worded speech acts. Second, AutoTutor does not allow the user to substitute natural language contributions with GUI menu options like those in the Atlas and Adele systems. The third difference involves the open-world nature of AutoTutor’s content domain (i.e., computer literacy). The previously mentioned tutoring systems are relatively more closed-world in nature, and therefore, constrain the scope of student contributions. The current version of AutoTutor simulates the tutorial dialog moves of normal, untrained tutors; however, plans for subsequent versions include the integration of more sophisticated ideal tutoring strategies. AutoTutor is currently designed to assist college students learn about topics covered in an introductory computer literacy course. In a typical tutoring session with AutoTutor, students will learn the fundamentals of computer hardware, the operating system, and the Internet. A Brief Sketch of AutoTutor AutoTutor is an animated pedagogical agent that serves as a conversational partner with the student. AutoTutor’s interface is comprised of four features: a two-dimensional, talking head, a text box for typed student input, a text box that displays the problem/question being discussed, and a graphics box that displays pictures and animations that are related to the topic at hand. AutoTutor begins the session by introducing himself and then presents the student with a question or problem that is selected from a curriculum script. The question/problem remains in a text box at the top of the screen until AutoTutor moves on to the next topic. For some questions and problems, there are graphical displays and animations that appear in a specially designated box on the screen. Once AutoTutor has presented the student with a problem or question, a multi-turn tutorial dialog occurs between AutoTutor and the learner. All student contributions are typed into the keyboard and appear in a text box at the bottom of the screen. AutoTutor responds to each student contribution with one or a combination of pedagogically appropriate dialog moves. These dialog moves are conveyed via synthesized speech, appropriate intonation, facial expressions, and gestures and do not appear in text form on the screen. In the future, we hope to have AutoTutor handle speech recognition, so students can speak their contributions. However, current speech recognition packages require time-consuming training that is not optimal for systems that interact with multiple users. The various modules that enable AutoTutor to interact with the learner will be described in subsequent sections of the paper. For now, however, it is important to note that our initial goals for building AutoTutor have been achieved. That is, we have designed a computer tutor that participates in a conversation with the learner while simulating the dialog moves of normal human tutors. WHY SIMULATE NORMAL HUMAN TUTORS? It has been well documented that normal, untrained human tutors are effective. 
Effect sizes ranging between .5 and 2.3 have been reported in studies where student learning gains were measured (Bloom, 1984; Cohen, Kulik, & Kulik, 1982). For quite a while, these rather large effect sizes were somewhat puzzling. That is, normal tutors typically do not have expert domain knowledge nor do they have knowledge about sophisticated tutoring strategies. In order to gain a better understanding of the primary mechanisms that are responsible for student learning Simulating Human Tutor Dialog Moves in AutoTutor gains, a handful of researchers have systematically analyzed the dialogue that occurs between normal, untrained tutors and students (Graesser & Person, 1994; Graesser et al., 1995; Person & Graesser, 1999; Person et al., 1994; Person et al., 1995). Graesser, Person, and colleagues analyzed over 100 hours of tutoring interactions and identified two prominent features of human tutoring dialogs: (1) a five-step dialog frame that is unique to tutoring interactions, and (2) a set of tutor-initiated dialog moves that serve specific pedagogical functions. We believe these two features are responsible for the positive learning outcomes that occur in typical tutoring settings, and further, these features can be implemented in a tutoring system more easily than the sophisticated methods and strategies that have been advocated by other educational researchers and ITS developers. Five-step Dialog Frame The structure of human tutorial dialogs differs from learning dialogs that often occur in classrooms. Mehan (1979) and others have reported a 3-step pattern that is prevalent in classroom interactions. This pattern is often referred to as IRE, which stands for Initiation (a question or claim articulated by the teacher), Response (an answer or comment provided by the student), and Evaluation (teacher evaluates the student contribution). In tutoring, however, the dialog is managed by a 5-step dialog frame (Graesser & Person, 1994; Graesser et al., 1995). The five steps in this frame are presented below. Step 1: Tutor asks question (or presents problem). Step 2: Learner answers question (or begins to solve problem). Step 3: Tutor gives short immediate feedback on the quality of the answer (or solution). Step 4: Tutor and learner collaboratively improve the quality of the answer. Step 5: Tutor assesses learner’s understanding of the answer. This 5-step dialog frame in tutoring is a significant augmentation over the 3-step dialog frame in classrooms. We believe that the advantage of tutoring over classroom settings lies primarily in Step 4. Typically, Step 4 is a lengthy multi-turn dialog in which the tutor and student collaboratively contribute to the explanation that answers the question or solves the problem. At a macro-level, the dialog that occurs between AutoTutor and the learner conforms to Steps 1 through 4 of the 5-step frame. For example, at the beginning of each new topic, AutoTutor presents the learner with a problem or asks the learner a question (Step 1). The learner then attempts to solve the problem or answer the question (Step 2). Next, AutoTutor provides some type of short, evaluative feedback (Step 3). During Step 4, AutoTutor employs a variety of dialog moves (see next section) that encourage learner participation. Thus, ins",
"title": ""
},
{
"docid": "40495cc96353f56481ed30f7f5709756",
"text": "This paper reported the construction of partial discharge measurement system under influence of cylindrical metal particle in transformer oil. The partial discharge of free cylindrical metal particle in the uniform electric field under AC applied voltage was studied in this paper. The partial discharge inception voltage (PDIV) for the single particle was measure to be 11kV. The typical waveform of positive PD and negative PD was also obtained. The result shows that the magnitude of negative PD is higher compared to positive PD. The observation on cylindrical metal particle movement revealed that there were a few stages of motion process involved.",
"title": ""
},
{
"docid": "2753e0a54d1a58993fcdd79ee40f0aac",
"text": "This study investigated the effectiveness of the WAIS-R Block Design subtest to predict everyday spatial ability for 65 university undergraduates (15 men, 50 women) who were administered Block Design, the Standardized Road Map Test of Direction Sense, and the Everyday Spatial Activities Test. In addition, the verbally loaded National Adult Reading Test was administered to assess whether the more visuospatial Block Design subtest was a better predictor of spatial ability. Moderate support was found. When age and sex were accounted for, Block Design accounted for 36% of the variance in performance (r = -.62) on the Road Map Test and 19% of the variance on the performance of the Everyday Spatial Activities Test (r = .42). In contrast, the scores on the National Adult Reading Test did not predict performance on the Road Map Test or Everyday Spatial Abilities Test. This suggests that, with appropriate caution, Block Design could be used as a measure of everyday spatial abilities.",
"title": ""
},
{
"docid": "c70d8ae9aeb8a36d1f68ba0067c74696",
"text": "Representing entities and relations in an embedding space is a well-studied approach for machine learning on relational data. Existing approaches, however, primarily focus on simple link structure between a finite set of entities, ignoring the variety of data types that are often used in knowledge bases, such as text, images, and numerical values. In this paper, we propose multimodal knowledge base embeddings (MKBE) that use different neural encoders for this variety of observed data, and combine them with existing relational models to learn embeddings of the entities and multimodal data. Further, using these learned embedings and different neural decoders, we introduce a novel multimodal imputation model to generate missing multimodal values, like text and images, from information in the knowledge base. We enrich existing relational datasets to create two novel benchmarks that contain additional information such as textual descriptions and images of the original entities. We demonstrate that our models utilize this additional information effectively to provide more accurate link prediction, achieving state-of-the-art results with a considerable gap of 5-7% over existing methods. Further, we evaluate the quality of our generated multimodal values via a user study. We have release the datasets and the opensource implementation of our models at https: //github.com/pouyapez/mkbe.",
"title": ""
},
{
"docid": "d3b2283ce3815576a084f98c34f37358",
"text": "We present a system for the detection of the stance of headlines with regard to their corresponding article bodies. The approach can be applied in fake news, especially clickbait detection scenarios. The component is part of a larger platform for the curation of digital content; we consider veracity and relevancy an increasingly important part of curating online information. We want to contribute to the debate on how to deal with fake news and related online phenomena with technological means, by providing means to separate related from unrelated headlines and further classifying the related headlines. On a publicly available data set annotated for the stance of headlines with regard to their corresponding article bodies, we achieve a (weighted) accuracy score of 89.59.",
"title": ""
},
{
"docid": "3b9491f337ab93d65831a0dfe687a639",
"text": "—The increase in the number of large data sets and the complexity of current probabilistic sequence evolution models necessitates fast and reliable phylogeny reconstruction methods. We describe a new approach, based on the maximumlikelihood principle, which clearly satisfies these requirements. The core of this method is a simple hill-climbing algorithm that adjusts tree topology and branch lengths simultaneously. This algorithm starts from an initial tree built by a fast distance-based method and modifies this tree to improve its likelihood at each iteration. Due to this simultaneous adjustment of the topology and branch lengths, only a few iterations are sufficient to reach an optimum. We used extensive and realistic computer simulations to show that the topological accuracy of this new method is at least as high as that of the existing maximum-likelihood programs and much higher than the performance of distance-based and parsimony approaches. The reduction of computing time is dramatic in comparison with other maximum-likelihood packages, while the likelihood maximization ability tends to be higher. For example, only 12 min were required on a standard personal computer to analyze a data set consisting of 500 rbcL sequences with 1,428 base pairs from plant plastids, thus reaching a speed of the same order as some popular distance-based and parsimony algorithms. This new method is implemented in the PHYML program, which is freely available on our web page: http://www.lirmm.fr/w3ifa/MAAS/. [Algorithm; computer simulations; maximum likelihood; phylogeny; rbcL; RDPII project.] The size of homologous sequence data sets has increased dramatically in recent years, and many of these data sets now involve several hundreds of taxa. Moreover, current probabilistic sequence evolution models (Swofford et al., 1996; Page and Holmes, 1998), notably those including rate variation among sites (Uzzell and Corbin, 1971; Jin and Nei, 1990; Yang, 1996), require an increasing number of calculations. Therefore, the speed of phylogeny reconstruction methods is becoming a significant requirement and good compromises between speed and accuracy must be found. The maximum likelihood (ML) approach is especially accurate for building molecular phylogenies. Felsenstein (1981) brought this framework to nucleotide-based phylogenetic inference, and it was later also applied to amino acid sequences (Kishino et al., 1990). Several variants were proposed, most notably the Bayesian methods (Rannala and Yang 1996; and see below), and the discrete Fourier analysis of Hendy et al. (1994), for example. Numerous computer studies (Huelsenbeck and Hillis, 1993; Kuhner and Felsenstein, 1994; Huelsenbeck, 1995; Rosenberg and Kumar, 2001; Ranwez and Gascuel, 2002) have shown that ML programs can recover the correct tree from simulated data sets more frequently than other methods can. Another important advantage of the ML approach is the ability to compare different trees and evolutionary models within a statistical framework (see Whelan et al., 2001, for a review). However, like all optimality criterion–based phylogenetic reconstruction approaches, ML is hampered by computational difficulties, making it impossible to obtain the optimal tree with certainty from even moderate data sets (Swofford et al., 1996). Therefore, all practical methods rely on heuristics that obtain near-optimal trees in reasonable computing time. 
Moreover, the computation problem is especially difficult with ML, because the tree likelihood not only depends on the tree topology but also on numerical parameters, including branch lengths. Even computing the optimal values of these parameters on a single tree is not an easy task, particularly because of possible local optima (Chor et al., 2000). The usual heuristic method, implemented in the popular PHYLIP (Felsenstein, 1993 ) and PAUP∗ (Swofford, 1999 ) packages, is based on hill climbing. It combines stepwise insertion of taxa in a growing tree and topological rearrangement. For each possible insertion position and rearrangement, the branch lengths of the resulting tree are optimized and the tree likelihood is computed. When the rearrangement improves the current tree or when the position insertion is the best among all possible positions, the corresponding tree becomes the new current tree. Simple rearrangements are used during tree growing, namely “nearest neighbor interchanges” (see below), while more intense rearrangements can be used once all taxa have been inserted. The procedure stops when no rearrangement improves the current best tree. Despite significant decreases in computing times, notably in fastDNAml (Olsen et al., 1994 ), this heuristic becomes impracticable with several hundreds of taxa. This is mainly due to the two-level strategy, which separates branch lengths and tree topology optimization. Indeed, most calculations are done to optimize the branch lengths and evaluate the likelihood of trees that are finally rejected. New methods have thus been proposed. Strimmer and von Haeseler (1996) and others have assembled fourtaxon (quartet) trees inferred by ML, in order to reconstruct a complete tree. However, the results of this approach have not been very satisfactory to date (Ranwez and Gascuel, 2001 ). Ota and Li (2000, 2001) described",
"title": ""
},
{
"docid": "32ec9f1c0dbc7caaf6ece7ba105eace1",
"text": "A major problem worldwide is the potential loss of fisheries, forests, and water resources. Understanding of the processes that lead to improvements in or deterioration of natural resources is limited, because scientific disciplines use different concepts and languages to describe and explain complex social-ecological systems (SESs). Without a common framework to organize findings, isolated knowledge does not cumulate. Until recently, accepted theory has assumed that resource users will never self-organize to maintain their resources and that governments must impose solutions. Research in multiple disciplines, however, has found that some government policies accelerate resource destruction, whereas some resource users have invested their time and energy to achieve sustainability. A general framework is used to identify 10 subsystem variables that affect the likelihood of self-organization in efforts to achieve a sustainable SES.",
"title": ""
},
{
"docid": "b9a2a41e12e259fbb646ff92956e148e",
"text": "The paper presents a concept where pairs of ordinary RFID tags are exploited for use as remotely read moisture sensors. The pair of tags is incorporated into one label where one of the tags is embedded in a moisture absorbent material and the other is left open. In a humid environment the moisture concentration is higher in the absorbent material than the surrounding environment which causes degradation to the embedded tag's antenna in terms of dielectric losses and change of input impedance. The level of relative humidity or the amount of water in the absorbent material is determined for a passive RFID system by comparing the difference in RFID reader output power required to power up respectively the open and embedded tag. It is similarly shown how the backscattered signal strength of a semi-active RFID system is proportional to the relative humidity and amount of water in the absorbent material. Typical applications include moisture detection in buildings, especially from leaking water pipe connections hidden beyond walls. Presented solution has a cost comparable to ordinary RFID tags, and the passive system also has infinite life time since no internal power supply is needed. The concept is characterized for two commercial RFID systems, one passive operating at 868 MHz and one semi-active operating at 2.45 GHz.",
"title": ""
},
{
"docid": "8c3e545f12c621e0ffe1460b9db959e7",
"text": "A unique behavior of humans is modifying one’s unobservable behavior based on the reaction of others for cooperation. We used a card game called Hanabi as an evaluation task of imitating human reflective intelligence with artificial intelligence. Hanabi is a cooperative card game with incomplete information. A player cooperates with an opponent in building several card sets constructed with the same color and ordered numbers. However, like a blind man's bluff, each player sees the cards of all other players except his/her own. Also, communication between players is restricted to information about the same numbers and colors, and the player is required to read his/his opponent's intention with the opponent's hand, estimate his/her cards with incomplete information, and play one of them for building a set. We compared human play with several simulated strategies. The results indicate that the strategy with feedbacks from simulated opponent's viewpoints achieves more score than other strategies. Introduction of Cooperative Game Social Intelligence estimating an opponent's thoughts from his/her behavior – is a unique function of humans. The solving process of this social intelligence is one of the interesting challenges for both artificial intelligence (AI) and cognitive science. Bryne et al. hypothesized that the human brain increases mainly due to this type of social requirement as a evolutionary pressure (Byrne & Whiten 1989). One of the most difficult tasks for using social intelligence is estimating one’s own unobservable information from the behavior of others and to modify one's own information. This type of reflective behavior – using other behavior as a looking glass – is both a biological and psychological task. For example, the human voice is informed by others via sound waves through the air, but informed by him/herself through bone conduction (Chen et al. 2007). In this scenario, a person cannot observe his/her own voice directly. For improving social Copyright © 2015, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. influence from one's voice, one needs to observe others' reactions and modify his/her voice. Joseph et al. also defined such unobservable information from oneself as a \"blind spot\" from a psychological viewpoint (Luft & Ingham 1961). In this study, we solved such reflective estimation tasks using a cooperative game involving incomplete information. We used a card game called Hanabi as a challenge task. Hanabi is a cooperative card game. It has three unique features for contributing to AI and multi-agent system (MAS) studies compared with other card games that have been used in AI studies. First, it is a cooperative card game and not a battle card game. Every player is required to cooperate and build a set of five different colored fireworks (Hanabi in Japanese) before the cards run out. This requires the AI program to handle cooperation of multiple agents. Second, every player can observe all other players' cards except his/her own. This does not require a coordinative leader and requires pure coordination between multiple agents. Finally, communication between players is prohibited except for restricted informing actions for a color or a number of opponent's cards. This allows the AI program to avoid handling natural language processing matters directly. Hanabi won the top German game award due to these unique features (Jahres 2013). 
We created an AI program to play Hanabi with multiple strategies including simulation of opponents' viewpoints with opponents' behavior, and evaluated how this type of reflective simulation contributes to earning a high score in this game. The paper is organized as follows. Section 2 gives background on incomplete information games involving AI and what challenges there are with Hanabi. Section 3 explains the rules of Hanabi and models. We focused on a two-player game in this paper. Section 4 explains several strategies for playing Hanabi. Section 5 evaluates these strategies and the results are discussed in Section 6. Section 7 explains the contribution of our research, limitations, and future work, and Section 8 concludes our paper. 37 Computer Poker and Imperfect Information: Papers from the 2015 AAAI Workshop",
"title": ""
},
{
"docid": "62cf2ae97e48e6b57139f305d616ec1b",
"text": "Many analytics applications generate mixed workloads, i.e., workloads comprised of analytical tasks with different processing characteristics including data pre-processing, SQL, and iterative machine learning algorithms. Examples of such mixed workloads can be found in web data analysis, social media analysis, and graph analytics, where they are executed repetitively on large input datasets (e.g., Find the average user time spent on the top 10 most popular web pages on the UK domain web graph.). Scale-out processing engines satisfy the needs of these applications by distributing the data and the processing task efficiently among multiple workers that are first reserved and then used to execute the task in parallel on a cluster of machines. Finding the resource allocation that can complete the workload execution within a given time constraint, and optimizing cluster resource allocations among multiple analytical workloads motivates the need for estimating the runtime of the workload before its actual execution. Predicting runtime of analytical workloads is a challenging problem as runtime depends on a large number of factors that are hard to model a priori execution. These factors can be summarized as workload characteristics (data statistics and processing costs) , the execution configuration (deployment, resource allocation, and software settings), and the cost model that captures the interplay among all of the above parameters. While conventional cost models proposed in the context of query optimization can assess the relative order among alternative SQL query plans, they are not aimed to estimate absolute runtime. Additionally, conventional models are ill-equipped to estimate the runtime of iterative analytics that are executed repetitively until convergence and that of user defined data pre-processing operators which are not “owned” by the underlying data management system. This thesis demonstrates that runtime for data analytics can be predicted accurately by breaking the analytical tasks into multiple processing phases, collecting key input features during a reference execution on a sample of the dataset, and then using the features to build per-phase cost models. We develop prediction models for three categories of data analytics produced by social media applications: iterative machine learning, data pre-processing, and reporting SQL. The prediction framework for iterative analytics, PREDIcT, addresses the challenging problem of estimating the number of iterations, and per-iteration runtime for a class of iterative machine learning algorithms that are run repetitively until convergence. The hybrid prediction models we develop for data pre-processing tasks and for reporting SQL combine the benefits of analytical modeling with that of machine learning-based models. Through a",
"title": ""
},
{
"docid": "cf54533bc317b960fc80f22baa26d7b1",
"text": "The state-of-the-art named entity recognition (NER) systems are statistical machine learning models that have strong generalization capability (i.e., can recognize unseen entities that do not appear in training data) based on lexical and contextual information. However, such a model could still make mistakes if its features favor a wrong entity type. In this paper, we utilize Wikipedia as an open knowledge base to improve multilingual NER systems. Central to our approach is the construction of high-accuracy, highcoverage multilingual Wikipedia entity type mappings. These mappings are built from weakly annotated data and can be extended to new languages with no human annotation or language-dependent knowledge involved. Based on these mappings, we develop several approaches to improve an NER system. We evaluate the performance of the approaches via experiments on NER systems trained for 6 languages. Experimental results show that the proposed approaches are effective in improving the accuracy of such systems on unseen entities, especially when a system is applied to a new domain or it is trained with little training data (up to 18.3 F1 score improvement).",
"title": ""
},
{
"docid": "79b0f13bec3201bf2ca770b268085306",
"text": "In this paper, we introduce a new 3D hand gesture recognition approach based on a deep learning model. We propose a new Convolutional Neural Network (CNN) where sequences of hand-skeletal joints' positions are processed by parallel convolutions; we then investigate the performance of this model on hand gesture sequence classification tasks. Our model only uses hand-skeletal data and no depth image. Experimental results show that our approach achieves a state-of-the-art performance on a challenging dataset (DHG dataset from the SHREC 2017 3D Shape Retrieval Contest), when compared to other published approaches. Our model achieves a 91.28% classification accuracy for the 14 gesture classes case and an 84.35% classification accuracy for the 28 gesture classes case.",
"title": ""
},
{
"docid": "7ca62c2da424c826744bca7196f07def",
"text": "Accurately answering a question about a given image requires combining observations with general knowledge. While this is effortless for humans, reasoning with general knowledge remains an algorithmic challenge. To advance research in this direction a novel ‘fact-based’ visual question answering (FVQA) task has been introduced recently along with a large set of curated facts which link two entities, i.e., two possible answers, via a relation. Given a question-image pair, deep network techniques have been employed to successively reduce the large set of facts until one of the two entities of the final remaining fact is predicted as the answer. We observe that a successive process which considers one fact at a time to form a local decision is sub-optimal. Instead, we develop an entity graph and use a graph convolutional network to ‘reason’ about the correct answer by jointly considering all entities. We show on the challenging FVQA dataset that this leads to an improvement in accuracy of around 7% compared to the state of the art.",
"title": ""
},
{
"docid": "9bcf4fcb795ab4cfe4e9d2a447179feb",
"text": "In a previous experiment, we determined how various changes in three structural elements of the software inspection process (team size and the number and sequencing of sessions) altered effectiveness and interval. Our results showed that such changes did not significantly influence the defect detection rate, but that certain combinations of changes dramatically increased the inspection interval. We also observed a large amount of unexplained variance in the data, indicating that other factors must be affecting inspection performance. The nature and extent of these other factors now have to be determined to ensure that they had not biased our earlier results. Also, identifying these other factors might suggest additional ways to improve the efficiency of inspections. Acting on the hypothesis that the “inputs” into the inspection process (reviewers, authors, and code units) were significant sources of variation, we modeled their effects on inspection performance. We found that they were responsible for much more variation in detect detection than was process structure. This leads us to conclude that better defect detection techniques, not better process structures, are the key to improving inspection effectiveness. The combined effects of process inputs and process structure on the inspection interval accounted for only a small percentage of the variance in inspection interval. Therefore, there must be other factors which need to be identified.",
"title": ""
},
{
"docid": "3c5bb0b08b365029a3fc1a7ef73e3aa7",
"text": "This paper proposes an estimation method to identify the electrical model parameters of photovoltaic (PV) modules and makes a comparison with other methods already popular in the technical literature. Based on the full single-diode model, the mathematical description of the I-V characteristic of modules is generally represented by a coupled nonlinear equation with five unknown parameters, which is difficult to solve by an analytical approach. The aim of the proposed method is to find the five unknown parameters that guarantee the minimum absolute error between the P-V curves generated by the electrical model and the P-V curves provided by the manufacturers' datasheets for different external conditions such as temperature and irradiance. The first advantage of the proposed method is that the parameters are estimated using the P-V curves instead of I-V curves, since most of the applications that use the electrical model want to accurately estimate the extracted power. The second advantage is that the value ranges of each unknown parameter respect their physical meaning. In order to prove the effectiveness of the proposition, a comparison among methods is carried out using both types of P-V and I-V curves: those obtained by manufacturers' datasheets and those extracted experimentally in the laboratory.",
"title": ""
},
{
"docid": "f5405c8fb7ad62d4277837bd7036b0d3",
"text": "Context awareness is one of the important fields in ubiquitous computing. Smart Home, a specific instance of ubiquitous computing, provides every family with opportunities to enjoy the power of hi-tech home living. Discovering that relationship among user, activity and context data in home environment is semantic, therefore, we apply ontology to model these relationships and then reason them as the semantic information. In this paper, we present the realization of smart home’s context-aware system based on ontology. We discuss the current challenges in realizing the ontology context base. These challenges can be listed as collecting context information from heterogeneous sources, such as devices, agents, sensors into ontology, ontology management, ontology querying, and the issue related to environment database explosion.",
"title": ""
},
{
"docid": "36e8ecc13c1f92ca3b056359e2d803f0",
"text": "We propose a novel module, the reviewer module, to improve the encoder-decoder learning framework. The reviewer module is generic, and can be plugged into an existing encoder-decoder model. The reviewer module performs a number of review steps with attention mechanism on the encoder hidden states, and outputs a fact vector after each review step; the fact vectors are used as the input of the attention mechanism in the decoder. We show that the conventional encoderdecoders are a special case of our framework. Empirically, we show that our framework can improve over state-of-the-art encoder-decoder systems on the tasks of image captioning and source code captioning.",
"title": ""
},
{
"docid": "59f29d3795e747bb9cee8fcbf87cb86f",
"text": "This paper introduces the development of a semi-active friction based variable physical damping actuator (VPDA) unit. The realization of this unit aims to facilitate the control of compliant robotic joints by providing physical variable damping on demand assisting on the regulation of the oscillations induced by the introduction of compliance. The mechatronics details and the dynamic model of the damper are introduced. The proposed variable damper mechanism is evaluated on a simple 1-DOF compliant joint linked to the ground through a torsion spring. This flexible connection emulates a compliant joint, generating oscillations when the link is perturbed. Preliminary results are presented to show that the unit and the proposed control scheme are capable of replicating simulated relative damping values with good fidelity.",
"title": ""
},
{
"docid": "3b31d07c6a5f7522e2060d5032ca5177",
"text": "In the past few years detection of repeatable and distinctive keypoints on 3D surfaces has been the focus of intense research activity, due on the one hand to the increasing diffusion of low-cost 3D sensors, on the other to the growing importance of applications such as 3D shape retrieval and 3D object recognition. This work aims at contributing to the maturity of this field by a thorough evaluation of several recent 3D keypoint detectors. A categorization of existing methods in two classes, that allows for highlighting their common traits, is proposed, so as to abstract all algorithms to two general structures. Moreover, a comprehensive experimental evaluation is carried out in terms of repeatability, distinctiveness and computational efficiency, based on a vast data corpus characterized by nuisances such as noise, clutter, occlusions and viewpoint changes.",
"title": ""
}
] |
scidocsrr
|
d9f71acd36247ac5f2ce09592a3fc642
|
A Survey of Communication Sub-systems for Intersatellite Linked Systems and CubeSat Missions
|
[
{
"docid": "60f6e3345aae1f91acb187ba698f073b",
"text": "A Cube-Satellite (CubeSat) is a small satellite weighing no more than one kilogram. CubeSats are used for space research, but their low-rate communication capability limits functionality. As greater payload and instrumentation functions are sought, increased data rate is needed. Since most CubeSats currently transmit at a 437 MHz frequency, several directional antenna types were studied for a 2.45 GHz, larger bandwidth transmission. This higher frequency provides the bandwidth needed for increasing the data rate. A deployable antenna mechanism maybe needed because most directional antennas are bigger than the CubeSat size constraints. From the study, a deployable hemispherical helical antenna prototype was built. Transmission between two prototype antenna equipped transceivers at varying distances tested the helical performance. When comparing the prototype antenna's maximum transmission distance to the other commercial antennas, the prototype outperformed all commercial antennas, except the patch antenna. The root cause was due to the helical antenna's narrow beam width. Future work can be done in attaining a more accurate alignment with the satellite's directional antenna to downlink with a terrestrial ground station.",
"title": ""
}
] |
[
{
"docid": "a22bc61f0fa5733a1835f61056810422",
"text": "Humans are able to accelerate their learning by selecting training materials that are the most informative and at the appropriate level of difficulty. We propose a framework for distributing deep learning in which one set of workers search for the most informative examples in parallel while a single worker updates the model on examples selected by importance sampling. This leads the model to update using an unbiased estimate of the gradient which also has minimum variance when the sampling proposal is proportional to the L2-norm of the gradient. We show experimentally that this method reduces gradient variance even in a context where the cost of synchronization across machines cannot be ignored, and where the factors for importance sampling are not updated instantly across the training set.",
"title": ""
},
{
"docid": "7120cc5882438207ae432eb556d65e72",
"text": "A radar system with an ultra-wide FMCW ramp bandwidth of 25.6 GHz (≈32%) around a center frequency of 80 GHz is presented. The system is based on a monostatic fully integrated SiGe transceiver chip, which is stabilized using conventional fractional-N PLL chips at a reference frequency of 100 MHz. The achieved in-loop phase noise is ≈ -88 dBc/Hz (10 kHz offset frequency) for the center frequency and below ≈-80 dBc/Hz in the wide frequency band of 25.6 GHz for all offset frequencies >;1 kHz. The ultra-wide PLL-stabilization was achieved using a reverse frequency position mixer in the PLL (offset-PLL) resulting in a compensation of the variation of the oscillators tuning sensitivity with the variation of the N-divider in the PLL. The output power of the transceiver chip, as well as of the mm-wave module (containing a waveguide transition), is sufficiently flat versus the output frequency (variation <;3 dB). In radar measurements using the full bandwidth an ultra-high spatial resolution of 7.12 mm was achieved. The standard deviation between repeated measurements of the same target is 0.36 μm.",
"title": ""
},
{
"docid": "704cad33eed2b81125f856c4efbff4fa",
"text": "In order to realize missile real-time change flight trajectory, three-loop autopilot is setting up. The structure characteristics, autopilot model, and control parameters design method were researched. Firstly, this paper introduced the 11th order three-loop autopilot model. With the principle of systems reduce model order, the 5th order model was deduced. On that basis, open-loop frequency characteristic and closed-loop frequency characteristic were analyzed. The variables of velocity ratio, dynamic pressure ratio and elevator efficiency ratio were leading to correct system nonlinear. And then autopilot gains design method were induced. System flight simulations were done, and result shows that autopilot gains played a good job in the flight trajectory, autopilot satisfied the flight index.",
"title": ""
},
{
"docid": "8583f3735314a7d38bcb82f6acf781ce",
"text": "Safety critical systems involve the tight coupling between potentially conflicting control objectives and safety constraints. As a means of creating a formal framework for controlling systems of this form, and with a view toward automotive applications, this paper develops a methodology that allows safety conditions—expressed as control barrier functions— to be unified with performance objectives—expressed as control Lyapunov functions—in the context of real-time optimizationbased controllers. Safety conditions are specified in terms of forward invariance of a set, and are verified via two novel generalizations of barrier functions; in each case, the existence of a barrier function satisfying Lyapunov-like conditions implies forward invariance of the set, and the relationship between these two classes of barrier functions is characterized. In addition, each of these formulations yields a notion of control barrier function (CBF), providing inequality constraints in the control input that, when satisfied, again imply forward invariance of the set. Through these constructions, CBFs can naturally be unified with control Lyapunov functions (CLFs) in the context of a quadratic program (QP); this allows for the achievement of control objectives (represented by CLFs) subject to conditions on the admissible states of the system (represented by CBFs). The mediation of safety and performance through a QP is demonstrated on adaptive cruise control and lane keeping, two automotive control problems that present both safety and performance considerations coupled with actuator bounds.",
"title": ""
},
{
"docid": "07cd406cead1a086f61f363269de1aac",
"text": "Tolerating and recovering from link and switch failures are fundamental requirements of most networks, including Software-Defined Networks (SDNs). However, instead of traditional behaviors such as network-wide routing re-convergence, failure recovery in an SDN is determined by the specific software logic running at the controller. While this admits more freedom to respond to a failure event, it ultimately means that each controller application must include its own recovery logic, which makes the code more difficult to write and potentially more error-prone.\n In this paper, we propose a runtime system that automates failure recovery and enables network developers to write simpler, failure-agnostic code. To this end, upon detecting a failure, our approach first spawns a new controller instance that runs in an emulated environment consisting of the network topology excluding the failed elements. Then, it quickly replays inputs observed by the controller before the failure occurred, leading the emulated network into the forwarding state that accounts for the failed elements. Finally, it recovers the network by installing the difference ruleset between emulated and current forwarding states.",
"title": ""
},
{
"docid": "41611aef9542367f80d8898b1f71bead",
"text": "The economy-wide implications of sea level rise in 2050 are estimated using a static computable general equilibrium model. Overall, general equilibrium effects increase the costs of sea level rise, but not necessarily in every sector or region. In the absence of coastal protection, economies that rely most on agriculture are hit hardest. Although energy is substituted for land, overall energy consumption falls with the shrinking economy, hurting energy exporters. With full coastal protection, GDP increases, particularly in regions that do a lot of dike building, but utility falls, least in regions that build a lot of dikes and export energy. Energy prices rise and energy consumption falls. The costs of full protection exceed the costs of losing land.",
"title": ""
},
{
"docid": "816b2ed7d4b8ce3a8fc54e020bc2f712",
"text": "As a standardized communication protocol, OPC UA is the main focal point with regard to information exchange in the ongoing initiative Industrie 4.0. But there are also considerations to use it within the Internet of Things. The fact that currently no open reference implementation can be used in research for free represents a major problem in this context. The authors have the opinion that open source software can stabilize the ongoing theoretical work. Recent efforts to develop an open implementation for OPC UA were not able to meet the requirements of practical and industrial automation technology. This issue is addressed by the open62541 project which is presented in this article including an overview of its application fields and main research issues.",
"title": ""
},
{
"docid": "6f9be23e33910d44551b5befa219e557",
"text": "The Lecture Notes are used for the a short course on the theory and applications of the lattice Boltzmann methods for computational uid dynamics taugh by the author at Institut f ur Computeranwendungen im Bauingenieurwesen (CAB), Technischen Universitat Braunschweig, during August 7 { 12, 2003. The lectures cover the basic theory of the lattice Boltzmann equation and its applications to hydrodynamics. Lecture One brie y reviews the history of the lattice gas automata and the lattice Boltzmann equation and their connections. Lecture Two provides an a priori derivation of the lattice Boltzmann equation, which connects the lattice Boltzmann equation to the continuous Boltzmann equation and demonstrates that the lattice Boltzmann equation is indeed a special nite di erence form of the Boltzmann equation. Lecture Two also includes the derivation of the lattice Boltzmann model for nonideal gases from the Enskog equation for dense gases. Lecture Three studies the generalized lattice Boltzmann equation with multiple relaxation times. A summary is provided at the end of each Lecture. Lecture Four discusses the uid-solid boundary conditions in the lattice Boltzmann methods. Applications of the lattice Boltzmann mehod to particulate suspensions, turbulence ows, and other ows are also shown. An Epilogue on the rationale of the lattice Boltzmann method is given. Some key references in the literature is also provided.",
"title": ""
},
{
"docid": "0851caf6599f97bbeaf68b57e49b4da5",
"text": "Improving the quality of end-of-life care for hospitalized patients is a priority for healthcare organizations. Studies have shown that physicians tend to over-estimate prognoses, which in combination with treatment inertia results in a mismatch between patients wishes and actual care at the end of life. We describe a method to address this problem using Deep Learning and Electronic Health Record (EHR) data, which is currently being piloted, with Institutional Review Board approval, at an academic medical center. The EHR data of admitted patients are automatically evaluated by an algorithm, which brings patients who are likely to benefit from palliative care services to the attention of the Palliative Care team. The algorithm is a Deep Neural Network trained on the EHR data from previous years, to predict all-cause 3–12 month mortality of patients as a proxy for patients that could benefit from palliative care. Our predictions enable the Palliative Care team to take a proactive approach in reaching out to such patients, rather than relying on referrals from treating physicians, or conduct time consuming chart reviews of all patients. We also present a novel interpretation technique which we use to provide explanations of the model's predictions.",
"title": ""
},
{
"docid": "f8b201105e3b92ed4ef2a884cb626c0d",
"text": "Several years of academic and industrial research efforts have converged to a common understanding on fundamental security building blocks for the upcoming vehicular communication (VC) systems. There is a growing consensus toward deploying a special-purpose identity and credential management infrastructure, i.e., a vehicular public-key infrastructure (VPKI), enabling pseudonymous authentication, with standardization efforts toward that direction. In spite of the progress made by standardization bodies (IEEE 1609.2 and ETSI) and harmonization efforts [Car2Car Communication Consortium (C2C-CC)], significant questions remain unanswered toward deploying a VPKI. Deep understanding of the VPKI, a central building block of secure and privacy-preserving VC systems, is still lacking. This paper contributes to the closing of this gap. We present SECMACE, a VPKI system, which is compatible with the IEEE 1609.2 and ETSI standards specifications. We provide a detailed description of our state-of-the-art VPKI that improves upon existing proposals in terms of security and privacy protection, and efficiency. SECMACE facilitates multi-domain operations in the VC systems and enhances user privacy, notably preventing linking pseudonyms based on timing information and offering increased protection even against honest-but-curious VPKI entities. We propose multiple policies for the vehicle–VPKI interactions and two large-scale mobility trace data sets, based on which we evaluate the full-blown implementation of SECMACE. With very little attention on the VPKI performance thus far, our results reveal that modest computing resources can support a large area of vehicles with very few delays and the most promising policy in terms of privacy protection can be supported with moderate overhead.",
"title": ""
},
{
"docid": "9dc9b5bad3422a6f1c7f33ccb25fdead",
"text": "We present a named entity recognition (NER) system for extracting product attributes and values from listing titles. Information extraction from short listing titles present a unique challenge, with the lack of informative context and grammatical structure. In this work, we combine supervised NER with bootstrapping to expand the seed list, and output normalized results. Focusing on listings from eBay’s clothing and shoes categories, our bootstrapped NER system is able to identify new brands corresponding to spelling variants and typographical errors of the known brands, as well as identifying novel brands. Among the top 300 new brands predicted, our system achieves 90.33% precision. To output normalized attribute values, we explore several string comparison algorithms and found n-gram substring matching to work well in practice.",
"title": ""
},
{
"docid": "5c9ea5fcfef7bac1513a79fd918d3194",
"text": "Elderly suffers from injuries or disabilities through falls every year. With a high likelihood of falls causing serious injury or death, falling can be extremely dangerous, especially when the victim is home-alone and is unable to seek timely medical assistance. Our fall detection systems aims to solve this problem by automatically detecting falls and notify healthcare services or the victim’s caregivers so as to provide help. In this paper, development of a fall detection system based on Kinect sensor is introduced. Current fall detection algorithms were surveyed and we developed a novel posture recognition algorithm to improve the specificity of the system. Data obtained through trial testing with human subjects showed a 26.5% increase in fall detection compared to control algorithms. With our novel detection algorithm, the system conducted in a simulated ward scenario can achieve up to 90% fall detection rate.",
"title": ""
},
{
"docid": "47398ca11079b699e050f10e292855ac",
"text": "It is well known that 3DIC integration is the next generation semiconductor technology with the advantages of small form factor, high performance and low power consumption. However the device TSV process and design rules are not mature. Assembly the chips on top of the Si interposer is the current most desirable method to achieve the requirement of good performance. In this study, a new packaging concept, the Embedded Interposer Carrier (EIC) technology was developed. It aims to solve some of the problems facing current interposer assemble issues. It eliminates the joining process of silicon interposer to the laminate carrier substrate. The concept of EIC is to embed one or multiple interposer chips into the build-up dielectric layers in the laminated substrate. The process development of EIC structure is investigated in this paper. EIC technology not only can shrink an electronic package and system size but also provide a better electronic performance for high-bandwidth applications. EIC technology can be one of the potential solutions for 3D System-in-Package.",
"title": ""
},
{
"docid": "1c1a677e4e95ee6a7656db9683a19c9b",
"text": "With the rapid development of the Intelligent Transportation System (ITS), vehicular communication networks have been widely studied in recent years. Dedicated Short Range Communication (DSRC) can provide efficient real-time information exchange among vehicles without the need of pervasive roadside communication infrastructure. Although mobile cellular networks are capable of providing wide coverage for vehicular users, the requirements of services that require stringent real-time safety cannot always be guaranteed by cellular networks. Therefore, the Heterogeneous Vehicular NETwork (HetVNET), which integrates cellular networks with DSRC, is a potential solution for meeting the communication requirements of the ITS. Although there are a plethora of reported studies on either DSRC or cellular networks, joint research of these two areas is still at its infancy. This paper provides a comprehensive survey on recent wireless networks techniques applied to HetVNETs. Firstly, the requirements and use cases of safety and non-safety services are summarized and compared. Consequently, a HetVNET framework that utilizes a variety of wireless networking techniques is presented, followed by the descriptions of various applications for some typical scenarios. Building such HetVNETs requires a deep understanding of heterogeneity and its associated challenges. Thus, major challenges and solutions that are related to both the Medium Access Control (MAC) and network layers in HetVNETs are studied and discussed in detail. Finally, we outline open issues that help to identify new research directions in HetVNETs.",
"title": ""
},
{
"docid": "29fc090c5d1e325fd28e6bbcb690fb8d",
"text": "Many forensic computing practitioners work in a high workload and low resource environment. With the move by the discipline to seek ISO 17025 laboratory accreditation, practitioners are finding it difficult to meet the demands of validation and verification of their tools and still meet the demands of the accreditation framework. Many agencies are ill-equipped to reproduce tests conducted by organizations such as NIST since they cannot verify the results with their equipment and in many cases rely solely on an independent validation study of other peoples' equipment. This creates the issue of tools in reality never being tested. Studies have shown that independent validation and verification of complex forensic tools is expensive and time consuming, and many practitioners also use tools that were not originally designed for forensic purposes. This paper explores the issues of validation and verification in the accreditation environment and proposes a paradigm that will reduce the time and expense required to validate and verify forensic software tools",
"title": ""
},
{
"docid": "d537214f407128585d6a4e6bab55a45b",
"text": "It is well known that how to extract dynamical features is a key issue for video based face analysis. In this paper, we present a novel approach of facial action units (AU) and expression recognition based on coded dynamical features. In order to capture the dynamical characteristics of facial events, we design the dynamical haar-like features to represent the temporal variations of facial events. Inspired by the binary pattern coding, we further encode the dynamic haar-like features into binary pattern features, which are useful to construct weak classifiers for boosting learning. Finally the Adaboost is performed to learn a set of discriminating coded dynamic features for facial active units and expression recognition. Experiments on the CMU expression database and our own facial AU database show its encouraging performance.",
"title": ""
},
{
"docid": "8f0d90a605829209c7b6d777c11b299d",
"text": "Researchers and educators have designed curricula and resources for introductory programming environments such as Scratch, App Inventor, and Kodu to foster computational thinking in K-12. This paper is an empirical study of the effectiveness and usefulness of tiles and flashcards developed for Microsoft Kodu Game Lab to support students in learning how to program and develop games. In particular, we investigated the impact of physical manipulatives on 3rd -- 5th grade students' ability to understand, recognize, construct, and use game programming design patterns. We found that the students who used physical manipulatives performed well in rule construction, whereas the students who engaged more with the rule editor of the programming environment had better mental simulation of the rules and understanding of the concepts.",
"title": ""
},
{
"docid": "a0589d0c1df89328685bdabd94a1a8a2",
"text": "We present a translation of §§160–166 of Dedekind’s Supplement XI to Dirichlet’s Vorlesungen über Zahlentheorie, which contain an investigation of the subfields of C. In particular, Dedekind explores the lattice structure of these subfields, by studying isomorphisms between them. He also indicates how his ideas apply to Galois theory. After a brief introduction, we summarize the translated excerpt, emphasizing its Galois-theoretic highlights. We then take issue with Kiernan’s characterization of Dedekind’s work in his extensive survey article on the history of Galois theory; Dedekind has a nearly complete realization of the modern “fundamental theorem of Galois theory” (for subfields of C), in stark contrast to the picture presented by Kiernan at points. We intend a sequel to this article of an historical and philosophical nature. With that in mind, we have sought to make Dedekind’s text accessible to as wide an audience as possible. Thus we include a fair amount of background and exposition.",
"title": ""
},
{
"docid": "8b0a09cbac4b1cbf027579ece3dea9ef",
"text": "Knowing the sequence specificities of DNA- and RNA-binding proteins is essential for developing models of the regulatory processes in biological systems and for identifying causal disease variants. Here we show that sequence specificities can be ascertained from experimental data with 'deep learning' techniques, which offer a scalable, flexible and unified computational approach for pattern discovery. Using a diverse array of experimental data and evaluation metrics, we find that deep learning outperforms other state-of-the-art methods, even when training on in vitro data and testing on in vivo data. We call this approach DeepBind and have built a stand-alone software tool that is fully automatic and handles millions of sequences per experiment. Specificities determined by DeepBind are readily visualized as a weighted ensemble of position weight matrices or as a 'mutation map' that indicates how variations affect binding within a specific sequence.",
"title": ""
},
{
"docid": "b38939ec3c6f8e10553f934ceab401ff",
"text": "According to recent work in the new field of lexical pragmatics, the meanings of words are frequently pragmatically adjusted and fine-tuned in context, so that their contribution to the proposition expressed is different from their lexically encoded sense. Well-known examples include lexical narrowing (e.g. ‘drink’ used to mean ALCOHOLIC DRINK), approximation (or loosening) (e.g. ‘flat’ used to mean RELATIVELY FLAT) and metaphorical extension (e.g. ‘bulldozer’ used to mean FORCEFUL PERSON). These three phenomena are often studied in isolation from each other and given quite distinct kinds of explanation. In this chapter, we will propose a more unified account. We will try to show that narrowing, loosening and metaphorical extension are simply different outcomes of a single interpretive process which creates an ad hoc concept, or occasion-specific sense, based on interaction among encoded concepts, contextual information and pragmatic expectations or principles. We will outline an inferential account of the lexical adjustment process using the framework of relevance theory, and compare it with some alternative accounts. * This work is part of an AHRC-funded project ‘A Unified Theory of Lexical Pragmatics’ (AR16356). We are grateful to our research assistants, Patricia Kolaiti, Tim Wharton and, in particular, Rosa Vega Moreno, whose PhD work on metaphor we draw on in this paper, and to Vladimir Žegarac, François Recanati, Nausicaa Pouscoulous, Paula Rubio Fernandez and Hanna Stoever, for helpful discussions. We would also like to thank Dan Sperber for sharing with us many valuable insights on metaphor and on lexical pragmatics more generally.",
"title": ""
}
] |
scidocsrr
|
a7a0e5af984ac414db1a85f2fe606907
|
T-drive: driving directions based on taxi trajectories
|
[
{
"docid": "cbaf7cd4e17c420b7546d132959b3283",
"text": "User mobility has given rise to a variety of Web applications, in which the global positioning system (GPS) plays many important roles in bridging between these applications and end users. As a kind of human behavior, transportation modes, such as walking and driving, can provide pervasive computing systems with more contextual information and enrich a user's mobility with informative knowledge. In this article, we report on an approach based on supervised learning to automatically infer users' transportation modes, including driving, walking, taking a bus and riding a bike, from raw GPS logs. Our approach consists of three parts: a change point-based segmentation method, an inference model and a graph-based post-processing algorithm. First, we propose a change point-based segmentation method to partition each GPS trajectory into separate segments of different transportation modes. Second, from each segment, we identify a set of sophisticated features, which are not affected by differing traffic conditions (e.g., a person's direction when in a car is constrained more by the road than any change in traffic conditions). Later, these features are fed to a generative inference model to classify the segments of different modes. Third, we conduct graph-based postprocessing to further improve the inference performance. This postprocessing algorithm considers both the commonsense constraints of the real world and typical user behaviors based on locations in a probabilistic manner. The advantages of our method over the related works include three aspects. (1) Our approach can effectively segment trajectories containing multiple transportation modes. (2) Our work mined the location constraints from user-generated GPS logs, while being independent of additional sensor data and map information like road networks and bus stops. (3) The model learned from the dataset of some users can be applied to infer GPS data from others. Using the GPS logs collected by 65 people over a period of 10 months, we evaluated our approach via a set of experiments. As a result, based on the change-point-based segmentation method and Decision Tree-based inference model, we achieved prediction accuracy greater than 71 percent. Further, using the graph-based post-processing algorithm, the performance attained a 4-percent enhancement.",
"title": ""
}
] |
[
{
"docid": "66ab561342d6f0c80a0eb8d4c2b19a97",
"text": "Impedance spectroscopy of biological cells has been used to monitor cell status, e.g. cell proliferation, viability, etc. It is also a fundamental method for the study of the electrical properties of cells which has been utilised for cell identification in investigations of cell behaviour in the presence of an applied electric field, e.g. electroporation. There are two standard methods for impedance measurement on cells. The use of microelectrodes for single cell impedance measurement is one method to realise the measurement, but the variations between individual cells introduce significant measurement errors. Another method to measure electrical properties is by the measurement of cell suspensions, i.e. a group of cells within a culture medium or buffer. This paper presents an investigation of the impedance of normal and cancerous breast cells in suspension using the Maxwell-Wagner mixture theory to analyse the results and extract the electrical parameters of a single cell. The results show that normal and different stages of cancer breast cells can be distinguished by the conductivity presented by each cell.",
"title": ""
},
{
"docid": "1d9f683409c3d6f19b9b6738a1a76c4a",
"text": "The empirical fact that classifiers, trained on given data collections, perform poorly when tested on data acquired in different settings is theoretically explained in domain adaptation through a shift among distributions of the source and target domains. Alleviating the domain shift problem, especially in the challenging setting where no labeled data are available for the target domain, is paramount for having visual recognition systems working in the wild. As the problem stems from a shift among distributions, intuitively one should try to align them. In the literature, this has resulted in a stream of works attempting to align the feature representations learned from the source and target domains. Here we take a different route. Rather than introducing regularization terms aiming to promote the alignment of the two representations, we act at the distribution level through the introduction of DomaIn Alignment Layers (DIAL), able to match the observed source and target data distributions to a reference one. Thorough experiments on three different public benchmarks we confirm the power of our approach. ∗This work was partially supported by the ERC grant 637076 RoboExNovo (B.C.), and the CHIST-ERA project ALOOF (B.C, F. M. C.).",
"title": ""
},
{
"docid": "83187228617d62fb37f99cf107c7602a",
"text": "A very important class of spatial queries consists of nearestneighbor (NN) query and its variations. Many studies in the past decade utilize R-trees as their underlying index structures to address NN queries efficiently. The general approach is to use R-tree in two phases. First, R-tree’s hierarchical structure is used to quickly arrive to the neighborhood of the result set. Second, the R-tree nodes intersecting with the local neighborhood (Search Region) of an initial answer are investigated to find all the members of the result set. While R-trees are very efficient for the first phase, they usually result in the unnecessary investigation of many nodes that none or only a small subset of their including points belongs to the actual result set. On the other hand, several recent studies showed that the Voronoi diagrams are extremely efficient in exploring an NN search region, while due to lack of an efficient access method, their arrival to this region is slow. In this paper, we propose a new index structure, termed VoR-Tree that incorporates Voronoi diagrams into R-tree, benefiting from the best of both worlds. The coarse granule rectangle nodes of R-tree enable us to get to the search region in logarithmic time while the fine granule polygons of Voronoi diagram allow us to efficiently tile or cover the region and find the result. Utilizing VoR-Tree, we propose efficient algorithms for various Nearest Neighbor queries, and show that our algorithms have better I/O complexity than their best competitors.",
"title": ""
},
{
"docid": "e86a7a9b16ab38954a4c71d33f717106",
"text": "OBJECTIVE\nTo evaluate the usefulness of septal batten grafts to correct cartilaginous septal deformities in endonasal septoplasty.\n\n\nDESIGN\nRetrospective study.\n\n\nSETTING\nUniversity medical center.\n\n\nPATIENTS\nOf 430 patients who underwent endonasal septoplasties from January 2006 to January 2011, 30 received septal batten grafts and were enrolled in the study. Twenty-eight patients were male and 2 were female.\n\n\nMAIN OUTCOME MEASURES\nThirty consecutive patients received septal batten grafts and were followed up for more than 6 months. Patterns of septal deformity, materials used for batten graft, surgical results, symptom improvement, findings of acoustic rhinometry, and surgical complications were investigated.\n\n\nRESULTS\nAmong the 30 patients, 5 were revision cases. Most of the deformities were characterized as moderate to severe degrees of curved or angulated deviations of the cartilaginous septum. The batten graft was performed with either septal cartilage (n = 21) or bony septum (n = 9). A straight septum was achieved in 90% of all procedures. Subjective symptoms of nasal obstruction were improved in all patients, as evaluated by the Nasal Obstruction Symptom Evaluation scale. Acoustic rhinometry revealed that after surgery the mean minimal cross-sectional area changed from 0.33 cm² to 0.42 cm² (P = .02) and the nasal volume from 4.71 mL to 6.28 mL (P = .02). There were no major complications, eg, septal perforation or saddle nose, and no revision surgery was needed.\n\n\nCONCLUSION\nEndonasal septal batten graft is a safe, useful, and effective technique to straighten moderate to severe septal cartilage deformities that are otherwise not correctable via conventional septoplasty techniques.",
"title": ""
},
{
"docid": "bda1e2a1f27673dceed36adddfdc3e36",
"text": "IEEE 802.11 WLANs are a very important technology to provide high speed wireless Internet access. Especially at airports, university campuses or in city centers, WLAN coverage is becoming ubiquitous leading to a deployment of hundreds or thousands of Access Points (AP). Managing and configuring such large WLAN deployments is a challenge. Current WLAN management protocols such as CAPWAP are hard to extend with new functionality. In this paper, we present CloudMAC, a novel architecture for enterprise or carrier grade WLAN systems. By partially offloading the MAC layer processing to virtual machines provided by cloud services and by integrating our architecture with OpenFlow, a software defined networking approach, we achieve a new level of flexibility and reconfigurability. In Cloud-MAC APs just forward MAC frames between virtual APs and IEEE 802.11 stations. The processing of MAC layer frames as well as the creation of management frames is handled at the virtual APs while the binding between the virtual APs and the physical APs is managed using OpenFlow. The testbed evaluation shows that CloudMAC achieves similar performance as normal WLANs, but allows novel services to be implemented easily in high level programming languages. The paper presents a case study which shows that dynamically switching off APs to save energy can be performed seamlessly with CloudMAC, while a traditional WLAN architecture causes large interruptions for users.",
"title": ""
},
{
"docid": "12e2d86add1918393291ea55f99a44a0",
"text": "Supervised classification algorithms aim at producing a learning model from a labeled training set. Various successful techniques have been proposed to solve the problem in the binary classification case. The multiclass classification case is more delicate, as many of the algorithms were introduced basically to solve binary classification problems. In this short survey we investigate the various techniques for solving the multiclass classification problem.",
"title": ""
},
{
"docid": "b0cba371bb9628ac96a9ae2bb228f5a9",
"text": "Graph-based recommendation approaches can model associations between users and items alongside additional contextual information. Recent studies demonstrated that representing features extracted from social media (SM) auxiliary data, like friendships, jointly with traditional users/items ratings in the graph, contribute to recommendation accuracy. In this work, we take a step further and propose an extended graph representation that includes socio-demographic and personal traits extracted from the content posted by the user on SM. Empirical results demonstrate that processing unstructured textual information collected from Twitter and representing it in structured form in the graph improves recommendation performance, especially in cold start conditions.",
"title": ""
},
{
"docid": "192f8528ca2416f9a49ce152def2fbe6",
"text": "We study the extent to which we can infer users’ geographical locations from social media. Location inference from social media can benet many applications, such as disaster management, targeted advertising, and news content tailoring. In recent years, a number of algorithms have been proposed for identifying user locations on social media platforms such as Twier and Facebook from message contents, friend networks, and interactions between users. In this paper, we propose a novel probabilistic model based on factor graphs for location inference that oers several unique advantages for this task. First, the model generalizes previous methods by incorporating content, network, and deep features learned from social context. e model is also exible enough to support both supervised learning and semi-supervised learning. Second, we explore several learning algorithms for the proposed model, and present a Two-chain Metropolis-Hastings (MH+) algorithm, which improves the inference accuracy. ird, we validate the proposed model on three dierent genres of data – Twier, Weibo, and Facebook – and demonstrate that the proposed model can substantially improve the inference accuracy (+3.3-18.5% by F1-score) over that of several state-of-the-art methods.",
"title": ""
},
{
"docid": "c86c10428bfca028611a5e989ca31d3f",
"text": "In the study, we discussed the ARCH/GARCH family models and enhanced them with artificial neural networks to evaluate the volatility of daily returns for 23.10.1987–22.02.2008 period in Istanbul Stock Exchange. We proposed ANN-APGARCH model to increase the forecasting performance of APGARCH model. The ANN-extended versions of the obtained GARCH models improved forecast results. It is noteworthy that daily returns in the ISE show strong volatility clustering, asymmetry and nonlinearity characteristics. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "2d67465fbc2799f815237a05905b8d7a",
"text": "This paper presents a novel way to perform multi-modal face recognition. We use Partial Least Squares (PLS) to linearly map images in different modalities to a common linear subspace in which they are highly correlated. PLS has been previously used effectively for feature selection in face recognition. We show both theoretically and experimentally that PLS can be used effectively across modalities. We also formulate a generic intermediate subspace comparison framework for multi-modal recognition. Surprisingly, we achieve high performance using only pixel intensities as features. We experimentally demonstrate the highest published recognition rates on the pose variations in the PIE data set, and also show that PLS can be used to compare sketches to photos, and to compare images taken at different resolutions.",
"title": ""
},
{
"docid": "850f29a1d3c5bc96bb36787aba428331",
"text": "In this paper, we introduce a novel framework for WEakly supervised Learning of Deep cOnvolutional neural Networks (WELDON). Our method is dedicated to automatically selecting relevant image regions from weak annotations, e.g. global image labels, and encompasses the following contributions. Firstly, WELDON leverages recent improvements on the Multiple Instance Learning paradigm, i.e. negative evidence scoring and top instance selection. Secondly, the deep CNN is trained to optimize Average Precision, and fine-tuned on the target dataset with efficient computations due to convolutional feature sharing. A thorough experimental validation shows that WELDON outperforms state-of-the-art results on six different datasets.",
"title": ""
},
{
"docid": "0f72c9034647612097c2096d1f31c980",
"text": "We tackle a fundamental problem to detect and estimate just noticeable blur (JNB) caused by defocus that spans a small number of pixels in images. This type of blur is common during photo taking. Although it is not strong, the slight edge blurriness contains informative clues related to depth. We found existing blur descriptors based on local information cannot distinguish this type of small blur reliably from unblurred structures. We propose a simple yet effective blur feature via sparse representation and image decomposition. It directly establishes correspondence between sparse edge representation and blur strength estimation. Extensive experiments manifest the generality and robustness of this feature.",
"title": ""
},
{
"docid": "160d0ba08cfade25b512c8fd46363451",
"text": "We present structured data fusion (SDF) as a framework for the rapid prototyping of knowledge discovery in one or more possibly incomplete data sets. In SDF, each data set-stored as a dense, sparse, or incomplete tensor-is factorized with a matrix or tensor decomposition. Factorizations can be coupled, or fused, with each other by indicating which factors should be shared between data sets. At the same time, factors may be imposed to have any type of structure that can be constructed as an explicit function of some underlying variables. With the right choice of decomposition type and factor structure, even well-known matrix factorizations such as the eigenvalue decomposition, singular value decomposition and QR factorization can be computed with SDF. A domain specific language (DSL) for SDF is implemented as part of the software package Tensorlab, with which we offer a library of tensor decompositions and factor structures to choose from. The versatility of the SDF framework is demonstrated by means of four diverse applications, which are all solved entirely within Tensorlab's DSL.",
"title": ""
},
{
"docid": "4227d667cac37fe0e2ecf5a8c199d885",
"text": "Plant diseases problem can cause significant reduction in both quality and quantity of agricultural products. Automatic detection of plant leaf diseases is an essential research topic as it may prove benefits in monitoring large fields of crops, and thus automatically detect the symptoms of diseases as soon as they appear on plant leaves. The proposed system is a software solution for automatic detection and computation of plant leaf diseases. The developed processing scheme consists of five main steps, first a color transformation structure for the input RGB image is created, then the noise i.e. unnecessary part is removed using specific threshold value, then the image is segmented with connected component labeling and the useful segments are extracted, finally the ANN classification is computed by giving different features i.e. size, color, proximity and average centroid distance. Experimental results on a database of 4 different diseases confirms the robustness of the proposed approach. KeywordsANN, Color, Plant Leaf Diseases, RGB ————————————————————",
"title": ""
},
{
"docid": "53d1ddf4809ab735aa61f4059a1a38b1",
"text": "In this paper we present a wearable Haptic Feedback Device to convey intuitive motion direction to the user through haptic feedback based on vibrotactile illusions. Vibrotactile illusions occur on the skin when two or more vibrotactile actuators in proximity are actuated in coordinated sequence, causing the user to feel combined sensations, instead of separate ones. By combining these illusions we can produce various sensation patterns that are discernible by the user, thus allowing to convey different information with each pattern. A method to provide information about direction through vibrotactile illusions is introduced on this paper. This method uses a grid of vibrotactile actuators around the arm actuated in coordination. The sensation felt on the skin is consistent with the desired direction of motion, so the desired motion can be intuitively understood. We show that the users can recognize the conveyed direction, and implemented a proof of concept of the proposed method to guide users' elbow flexion/extension motion.",
"title": ""
},
{
"docid": "ec5abeb42b63ed1976cd47d3078c35c9",
"text": "In semistructured data, the information that is normally associated with a schema is contained within the data, which is sometimes called “self-describing”. In some forms of semistructured data there is no separate schema, in others it exists but only places loose constraints on the data. Semistructured data has recently emerged as an important topic of study for a variety of reasons. First, there are data sources such as the Web, which we would like to treat as databases but which cannot be constrained by a schema. Second, it may be desirable to have an extremely flexible format for data exchange between disparate databases. Third, even when dealing with structured data, it may be helpful to view it. as semistructured for the purposes of browsing. This tutorial will cover a number of issues surrounding such data: finding a concise formulation, building a sufficiently expressive language for querying and transformation, and optimizat,ion problems.",
"title": ""
},
{
"docid": "d597b9229a3f9a9c680d25180a4b6308",
"text": "Mental health problems are highly prevalent and increasing in frequency and severity among the college student population. The upsurge in mobile and wearable wireless technologies capable of intense, longitudinal tracking of individuals, provide enormously valuable opportunities in mental health research to examine temporal patterns and dynamic interactions of key variables. In this paper, we present an integrative framework for social anxiety and depression (SAD) monitoring, two of the most common disorders in the college student population. We have developed a smartphone application and the supporting infrastructure to collect both passive sensor data and active event-driven data. This supports intense, longitudinal, dynamic tracking of anxious and depressed college students to evaluate how their emotions and social behaviors change in the college campus environment. The data will provide critical information about how student mental health problems are maintained and, ultimately, how student patterns on campus shift following treatment.",
"title": ""
},
{
"docid": "23832f031f7c700f741843e54ff81b4e",
"text": "Data Mining in medicine is an emerging field of great importance to provide a prognosis and deeper understanding of disease classification, specifically in Mental Health areas. The main objective of this paper is to present a review of the existing research works in the literature, referring to the techniques and algorithms of Data Mining in Mental Health, specifically in the most prevalent diseases such as: Dementia, Alzheimer, Schizophrenia and Depression. Academic databases that were used to perform the searches are Google Scholar, IEEE Xplore, PubMed, Science Direct, Scopus and Web of Science, taking into account as date of publication the last 10 years, from 2008 to the present. Several search criteria were established such as ‘techniques’ AND ‘Data Mining’ AND ‘Mental Health’, ‘algorithms’ AND ‘Data Mining’ AND ‘dementia’ AND ‘schizophrenia’ AND ‘depression’, etc. selecting the papers of greatest interest. A total of 211 articles were found related to techniques and algorithms of Data Mining applied to the main Mental Health diseases. 72 articles have been identified as relevant works of which 32% are Alzheimer’s, 22% dementia, 24% depression, 14% schizophrenia and 8% bipolar disorders. Many of the papers show the prediction of risk factors in these diseases. From the review of the research articles analyzed, it can be said that use of Data Mining techniques applied to diseases such as dementia, schizophrenia, depression, etc. can be of great help to the clinical decision, diagnosis prediction and improve the patient’s quality of life.",
"title": ""
},
{
"docid": "678a4872dfe753bac26bff2b29ac26b0",
"text": "Cyber-physical systems (CPS), such as automotive systems, are starting to include sophisticated machine learning (ML) components. Their correctness, therefore, depends on properties of the inner ML modules. While learning algorithms aim to generalize from examples, they are only as good as the examples provided, and recent efforts have shown that they can produce inconsistent output under small adversarial perturbations. This raises the question: can the output from learning components can lead to a failure of the entire CPS? In this work, we address this question by formulating it as a problem of falsifying signal temporal logic (STL) specifications for CPS with ML components. We propose a compositional falsification framework where a temporal logic falsifier and a machine learning analyzer cooperate with the aim of finding falsifying executions of the considered model. The efficacy of the proposed technique is shown on an automatic emergency braking system model with a perception component based on deep neural networks.",
"title": ""
}
] |
scidocsrr
|
0f1525313cf095d9a5cd350e1f6197c7
|
Semantic Web in data mining and knowledge discovery: A comprehensive survey
|
[
{
"docid": "cb08df0c8ff08eecba5d7fed70c14f1e",
"text": "In this article, we propose a family of efficient kernels for l a ge graphs with discrete node labels. Key to our method is a rapid feature extraction scheme b as d on the Weisfeiler-Lehman test of isomorphism on graphs. It maps the original graph to a sequ ence of graphs, whose node attributes capture topological and label information. A fami ly of kernels can be defined based on this Weisfeiler-Lehman sequence of graphs, including a highly e ffici nt kernel comparing subtree-like patterns. Its runtime scales only linearly in the number of e dges of the graphs and the length of the Weisfeiler-Lehman graph sequence. In our experimental evaluation, our kernels outperform state-of-the-art graph kernels on several graph classifica tion benchmark data sets in terms of accuracy and runtime. Our kernels open the door to large-scale ap plic tions of graph kernels in various disciplines such as computational biology and social netwo rk analysis.",
"title": ""
},
{
"docid": "ec58ee349217d316f87ff684dba5ac2b",
"text": "This paper is a survey of inductive rule learning algorithms that use a separate-and-conquer strategy. This strategy can be traced back to the AQ learning system and still enjoys popularity as can be seen from its frequent use in inductive logic programming systems. We will put this wide variety of algorithms into a single framework and analyze them along three different dimensions, namely their search, language and overfitting avoidance biases.",
"title": ""
}
] |
[
{
"docid": "c5759678a84864a843c20c5f4a23f29f",
"text": "We propose a novel framework called transient imaging for image formation and scene understanding through impulse illumination and time images. Using time-of-flight cameras and multi-path analysis of global light transport, we pioneer new algorithms and systems for scene understanding through time images. We demonstrate that our proposed transient imaging framework allows us to accomplish tasks that are well beyond the reach of existing imaging technology. For example, one can infer the geometry of not only the visible but also the hidden parts of a scene, enabling us to look around corners. Traditional cameras estimate intensity per pixel I(x,y). Our transient imaging camera captures a 3D time-image I(x,y,t) for each pixel and uses an ultra-short pulse laser for illumination. Emerging technologies are supporting cameras with a temporal-profile per pixel at picosecond resolution, allowing us to capture an ultra-high speed time-image. This time-image contains the time profile of irradiance incident at a sensor pixel. We experimentally corroborated our theory with free space hardware experiments using a femtosecond laser and a picosecond accurate sensing device. The ability to infer the structure of hidden scene elements, unobservable by both the camera and illumination source, will create a range of new computer vision opportunities.",
"title": ""
},
{
"docid": "4d2b0b01fae0ff2402fc2feaa5657574",
"text": "In this paper, we give an algorithm for the analysis and correction of the distorted QR barcode (QR-code) image. The introduced algorithm is based on the code area finding by four corners detection for 2D barcode. We combine Canny edge detection with contours finding algorithms to erase noises and reduce computation and utilize two tangents to approximate the right-bottom point. Then, we give a detail description on how to use inverse perspective transformation in rebuilding a QR-code image from a distorted one. We test our algorithm on images taken by mobile phones. The experiment shows that our algorithm is effective.",
"title": ""
},
{
"docid": "66a8e7c076ad2cfb7bbe42836607a039",
"text": "The Spider system at the Oak Ridge National Laboratory’s Leadership Computing Facility (OLCF) is the world’s largest scale Lustre parallel file system. Envisioned as a shared parallel file system capable of delivering both the bandwidth and capacity requirements of the OLCF’s diverse computational environment, the project had a number of ambitious goals. To support the workloads of the OLCF’s diverse computational platforms, the aggregate performance and storage capacity of Spider exceed that of our previously deployed systems by a factor of 6x 240 GB/sec, and 17x 10 Petabytes, respectively. Furthermore, Spider supports over 26,000 clients concurrently accessing the file system, which exceeds our previously deployed systems by nearly 4x. In addition to these scalability challenges, moving to a center-wide shared file system required dramatically improved resiliency and fault-tolerance mechanisms. This paper details our efforts in designing, deploying, and operating Spider. Through a phased approach of research and development, prototyping, deployment, and transition to operations, this work has resulted in a number of insights into large-scale parallel file system architectures, from both the design and the operational perspectives. We present in this paper our solutions to issues such as network congestion, performance baselining and evaluation, file system journaling overheads, and high availability in a system with tens of thousands of components. We also discuss areas of continued challenges, such as stressed metadata performance and the need for file system quality of service alongside with our efforts to address them. Finally, operational aspects of managing a system of this scale are discussed along with real-world data and observations.",
"title": ""
},
{
"docid": "70ec2398526863c05b41866593214d0a",
"text": "Matrix factorization (MF) is one of the most popular techniques for product recommendation, but is known to suffer from serious cold-start problems. Item cold-start problems are particularly acute in settings such as Tweet recommendation where new items arrive continuously. In this paper, we present a meta-learning strategy to address item cold-start when new items arrive continuously. We propose two deep neural network architectures that implement our meta-learning strategy. The first architecture learns a linear classifier whose weights are determined by the item history while the second architecture learns a neural network whose biases are instead adjusted. We evaluate our techniques on the real-world problem of Tweet recommendation. On production data at Twitter, we demonstrate that our proposed techniques significantly beat the MF baseline and also outperform production models for Tweet recommendation.",
"title": ""
},
{
"docid": "933f8ba333e8cbef574b56348872b313",
"text": "Automatic image annotation has been an important research topic in facilitating large scale image management and retrieval. Existing methods focus on learning image-tag correlation or correlation between tags to improve annotation accuracy. However, most of these methods evaluate their performance using top-k retrieval performance, where k is fixed. Although such setting gives convenience for comparing different methods, it is not the natural way that humans annotate images. The number of annotated tags should depend on image contents. Inspired by the recent progress in machine translation and image captioning, we propose a novel Recurrent Image Annotator (RIA) model that forms image annotation task as a sequence generation problem so that RIA can natively predict the proper length of tags according to image contents. We evaluate the proposed model on various image annotation datasets. In addition to comparing our model with existing methods using the conventional top-k evaluation measures, we also provide our model as a high quality baseline for the arbitrary length image tagging task. Moreover, the results of our experiments show that the order of tags in training phase has a great impact on the final annotation performance.",
"title": ""
},
{
"docid": "b0575058a6950bc17a976504145dca0e",
"text": "BACKGROUND\nCitation screening is time consuming and inefficient. We sought to evaluate the performance of Abstrackr, a semi-automated online tool for predictive title and abstract screening.\n\n\nMETHODS\nFour systematic reviews (aHUS, dietary fibre, ECHO, rituximab) were used to evaluate Abstrackr. Citations from electronic searches of biomedical databases were imported into Abstrackr, and titles and abstracts were screened and included or excluded according to the entry criteria. This process was continued until Abstrackr predicted and classified the remaining unscreened citations as relevant or irrelevant. These classification predictions were checked for accuracy against the original review decisions. Sensitivity analyses were performed to assess the effects of including case reports in the aHUS dataset whilst screening and the effects of using larger imbalanced datasets with the ECHO dataset. The performance of Abstrackr was calculated according to the number of relevant studies missed, the workload saving, the false negative rate, and the precision of the algorithm to correctly predict relevant studies for inclusion, i.e. further full text inspection.\n\n\nRESULTS\nOf the unscreened citations, Abstrackr's prediction algorithm correctly identified all relevant citations for the rituximab and dietary fibre reviews. However, one relevant citation in both the aHUS and ECHO reviews was incorrectly predicted as not relevant. The workload saving achieved with Abstrackr varied depending on the complexity and size of the reviews (9 % rituximab, 40 % dietary fibre, 67 % aHUS, and 57 % ECHO). The proportion of citations predicted as relevant, and therefore, warranting further full text inspection (i.e. the precision of the prediction) ranged from 16 % (aHUS) to 45 % (rituximab) and was affected by the complexity of the reviews. The false negative rate ranged from 2.4 to 21.7 %. Sensitivity analysis performed on the aHUS dataset increased the precision from 16 to 25 % and increased the workload saving by 10 % but increased the number of relevant studies missed. Sensitivity analysis performed with the larger ECHO dataset increased the workload saving (80 %) but reduced the precision (6.8 %) and increased the number of missed citations.\n\n\nCONCLUSIONS\nSemi-automated title and abstract screening with Abstrackr has the potential to save time and reduce research waste.",
"title": ""
},
{
"docid": "9feeeabb8491a06ae130c99086a9d069",
"text": "Dopamine (DA) is a key transmitter in the basal ganglia, yet DA transmission does not conform to several aspects of the classic synaptic doctrine. Axonal DA release occurs through vesicular exocytosis and is action potential- and Ca²⁺-dependent. However, in addition to axonal release, DA neurons in midbrain exhibit somatodendritic release by an incompletely understood, but apparently exocytotic, mechanism. Even in striatum, axonal release sites are controversial, with evidence for DA varicosities that lack postsynaptic specialization, and largely extrasynaptic DA receptors and transporters. Moreover, DA release is often assumed to reflect a global response to a population of activities in midbrain DA neurons, whether tonic or phasic, with precise timing and specificity of action governed by other basal ganglia circuits. This view has been reinforced by anatomical evidence showing dense axonal DA arbors throughout striatum, and a lattice network formed by DA axons and glutamatergic input from cortex and thalamus. Nonetheless, localized DA transients are seen in vivo using voltammetric methods with high spatial and temporal resolution. Mechanistic studies using similar methods in vitro have revealed local regulation of DA release by other transmitters and modulators, as well as by proteins known to be disrupted in Parkinson's disease and other movement disorders. Notably, the actions of most other striatal transmitters on DA release also do not conform to the synaptic doctrine, with the absence of direct synaptic contacts for glutamate, GABA, and acetylcholine (ACh) on striatal DA axons. Overall, the findings reviewed here indicate that DA signaling in the basal ganglia is sculpted by cooperation between the timing and pattern of DA input and those of local regulatory factors.",
"title": ""
},
{
"docid": "b2c299e13eff8776375c14357019d82e",
"text": "This paper is focused on the application of complementary split-ring resonators (CSRRs) to the suppression of the common (even) mode in microstrip differential transmission lines. By periodically and symmetrically etching CSRRs in the ground plane of microstrip differential lines, the common mode can be efficiently suppressed over a wide band whereas the differential signals are not affected. Throughout the paper, we present and discuss the principle for the selective common-mode suppression, the circuit model of the structure (including the models under even- and odd-mode excitation), the strategies for bandwidth enhancement of the rejected common mode, and a methodology for common-mode filter design. On the basis of the dispersion relation for the common mode, it is shown that the maximum achievable rejection bandwidth can be estimated. Finally, theory is validated by designing and measuring a differential line and a balanced bandpass filter with common-mode suppression, where double-slit CSRRs (DS-CSRRs) are used in order to enhance the common-mode rejection bandwidth. Due to the presence of DS-CSRRs, the balanced filter exhibits more than 40 dB of common-mode rejection within a 34% bandwidth around the filter pass band.",
"title": ""
},
{
"docid": "353f91c6e35cd5703b5b238f929f543e",
"text": "This paper provides an overview of prominent deep learning toolkits and, in particular, reports on recent publications that contributed open source software for implementing tasks that are common in intelligent user interfaces (IUI). We provide a scientific reference for researchers and software engineers who plan to utilise deep learning techniques within their IUI research and development projects. ACM Classification",
"title": ""
},
{
"docid": "7ddfa92cee856e2ef24caf3e88d92b93",
"text": "Applications are getting increasingly interconnected. Although the interconnectedness provide new ways to gather information about the user, not all user information is ready to be directly implemented in order to provide a personalized experience to the user. Therefore, a general model is needed to which users’ behavior, preferences, and needs can be connected to. In this paper we present our works on a personality-based music recommender system in which we use users’ personality traits as a general model. We identified relationships between users’ personality and their behavior, preferences, and needs, and also investigated different ways to infer users’ personality traits from user-generated data of social networking sites (i.e., Facebook, Twitter, and Instagram). Our work contributes to new ways to mine and infer personality-based user models, and show how these models can be implemented in a music recommender system to positively contribute to the user experience.",
"title": ""
},
{
"docid": "786ef1b656c182ab71f7a63e7f263b3f",
"text": "The spectrum of a first-order sentence is the set of cardinalities of its finite models. This paper is concerned with spectra of sentences over languages that contain only unary function symbols. In particular, it is shown that a set S of natural numbers is the spectrum of a sentence over the language of one unary function symbol precisely if S is an eventually periodic set.",
"title": ""
},
{
"docid": "a9b96c162e9a7f39a90c294167178c05",
"text": "The performance of automotive radar systems is expected to significantly increase in the near future. With enhanced resolution capabilities more accurate and denser point clouds of traffic participants and roadside infrastructure can be acquired and so the amount of gathered information is growing drastically. One main driver for this development is the global trend towards self-driving cars, which all rely on precise and fine-grained sensor information. New radar signal processing concepts have to be developed in order to provide this additional information. This paper presents a prototype high resolution radar sensor which helps to facilitate algorithm development and verification. The system is operational under real-time conditions and achieves excellent performance in terms of range, velocity and angular resolution. Complex traffic scenarios can be acquired out of a moving test vehicle, which is very close to the target application. First measurement runs on public roads are extremely promising and show an outstanding single-snapshot performance. Complex objects can be precisely located and recognized by their contour shape. In order to increase the possible recording time, the raw data rate is reduced by several orders of magnitude in real-time by means of constant false alarm rate (CFAR) processing. The number of target cells can still exceed more than 10 000 points in a single measurement cycle for typical road scenarios.",
"title": ""
},
{
"docid": "7b45559be60b099de0bcf109c9a539b7",
"text": "The split-heel technique has distinct advantages over the conventional medial or lateral approach in the operative debridement of extensive and predominantly plantar chronic calcaneal osteomyelitis in children above 5 years of age. We report three cases (age 5.5-11 years old) of chronic calcaneal osteomyelitis in children treated using the split-heel approach with 3-10 years follow-up showing excellent functional and cosmetic results.",
"title": ""
},
{
"docid": "b27e5e9540e625912a4e395079f6ac68",
"text": "We propose Cooperative Training (CoT) for training generative models that measure a tractable density for discrete data. CoT coordinately trains a generator G and an auxiliary predictive mediator M . The training target of M is to estimate a mixture density of the learned distribution G and the target distribution P , and that of G is to minimize the Jensen-Shannon divergence estimated through M . CoT achieves independent success without the necessity of pre-training via Maximum Likelihood Estimation or involving high-variance algorithms like REINFORCE. This low-variance algorithm is theoretically proved to be unbiased for both generative and predictive tasks. We also theoretically and empirically show the superiority of CoT over most previous algorithms in terms of generative quality and diversity, predictive generalization ability and computational cost.",
"title": ""
},
{
"docid": "6ceab65cc9505cf21824e9409cf67944",
"text": "Estimating the confidence for a link is a critical task for Knowledge Graph construction. Link prediction, or predicting the likelihood of a link in a knowledge graph based on prior state is a key research direction within this area. We propose a Latent Feature Embedding based link recommendation model for prediction task and utilize Bayesian Personalized Ranking based optimization technique for learning models for each predicate. Experimental results on largescale knowledge bases such as YAGO2 show that our approach achieves substantially higher performance than several state-of-art approaches. Furthermore, we also study the performance of the link prediction algorithm in terms of topological properties of the Knowledge Graph and present a linear regression model to reason about its expected level",
"title": ""
},
{
"docid": "c2d4f97913bb3acceb3703f1501547a8",
"text": "Pattern recognition is the discipline studying the design and operation of systems capable to recognize patterns with specific properties in data so urce . Intrusion detection, on the other hand, is in charge of identifying anomalou s activities by analyzing a data source, be it the logs of an operating system or in the network traffic. It is easy to find similarities between such research fields , and it is straightforward to think of a way to combine them. As to the descriptions abov e, we can imagine an Intrusion Detection System (IDS) using techniques prop er of the pattern recognition field in order to discover an attack pattern within the network traffic. What we propose in this work is such a system, which exp loits the results of research in the field of data mining, in order to discover poten tial attacks. The paper also presents some experimental results dealing with p erformance of our system in a real-world operational scenario.",
"title": ""
},
{
"docid": "32faa5a14922d44101281c783cf6defb",
"text": "A novel multifocus color image fusion algorithm based on the quaternion wavelet transform (QWT) is proposed in this paper, aiming at solving the image blur problem. The proposed method uses a multiresolution analysis procedure based on the quaternion wavelet transform. The performance of the proposed fusion scheme is assessed by some experiments, and the experimental results show that the proposed method is effective and performs better than the existing fusion methods.",
"title": ""
},
{
"docid": "b3ffb805b3dcffc4e5c9cec47f90e566",
"text": "Real-time ride-sharing, which enables on-the-fly matching between riders and drivers (even en-route), is an important problem due to its environmental and societal benefits. With the emergence of many ride-sharing platforms (e.g., Uber and Lyft), the design of a scalable framework to match riders and drivers based on their various constraints while maximizing the overall profit of the platform becomes a distinguishing business strategy.\n A key challenge of such framework is to satisfy both types of the users in the system, e.g., reducing both riders' and drivers' travel distances. However, the majority of the existing approaches focus only on minimizing the total travel distance of drivers which is not always equivalent to shorter trips for riders. Hence, we propose a fair pricing model that simultaneously satisfies both the riders' and drivers' constraints and desires (formulated as their profiles). In particular, we introduce a distributed auction-based framework where each driver's mobile app automatically bids on every nearby request taking into account many factors such as both the driver's and the riders' profiles, their itineraries, the pricing model, and the current number of riders in the vehicle. Subsequently, the server determines the highest bidder and assigns the rider to that driver. We show that this framework is scalable and efficient, processing hundreds of tasks per second in the presence of thousands of drivers. We compare our framework with the state-of-the-art approaches in both industry and academia through experiments on New York City's taxi dataset. Our results show that our framework can simultaneously match more riders to drivers (i.e., higher service rate) by engaging the drivers more effectively. Moreover, our frame-work schedules shorter trips for riders (i.e., better service quality). Finally, as a consequence of higher service rate and shorter trips, our framework increases the overall profit of the ride-sharing platforms.",
"title": ""
},
{
"docid": "32fe17034223a3ea9a7c52b4107da760",
"text": "With the prevalence of the internet, mobile devices and commercial streaming music services, the amount of digital music available is greater than ever. Sorting through all this music is an extremely time-consuming task. Music recommendation systems search through this music automatically and suggest new songs to users. Music recommendation systems have been developed in commercial and academic settings, but more research is needed. The perfect system would handle all the user’s listening needs while requiring only minimal user input. To this end, I have reviewed 20 articles within the field of music recommendation with the goal of finding how the field can be improved. I present a survey of music recommendation, including an explanation of collaborative and content-based filtering with their respective strengths and weaknesses. I propose a novel next-track recommendation system that incorporates techniques advocated by the literature. The system relies heavily on user skipping behavior to drive both a content-based and a collaborative approach. It uses active learning to balance the needs of exploration vs. exploitation in playing music for the user.",
"title": ""
}
] |
scidocsrr
|
67d11402a53a224307834eb226c43aa2
|
A new mobile-based multi-factor authentication scheme using pre-shared number, GPS location and time stamp
|
[
{
"docid": "6356a0272b95ade100ad7ececade9e36",
"text": "We describe a browser extension, PwdHash, that transparently produces a different password for each site, improving web password security and defending against password phishing and other attacks. Since the browser extension applies a cryptographic hash function to a combination of the plaintext password entered by the user, data associated with the web site, and (optionally) a private salt stored on the client machine, theft of the password received at one site will not yield a password that is useful at another site. While the scheme requires no changes on the server side, implementing this password method securely and transparently in a web browser extension turns out to be quite difficult. We describe the challenges we faced in implementing PwdHash and some techniques that may be useful to anyone facing similar security issues in a browser environment.",
"title": ""
},
{
"docid": "7b4e9043e11d93d8152294f410390f6d",
"text": "In this paper, we present a series of methods to authenticate a user with a graphical password. To that end, we employ the user¿s personal handheld device as the password decoder and the second factor of authentication. In our methods, a service provider challenges the user with an image password. To determine the appropriate click points and their order, the user needs some hint information transmitted only to her handheld device. We show that our method can overcome threats such as key-loggers, weak password, and shoulder surfing. With the increasing popularity of handheld devices such as cell phones, our approach can be leveraged by many organizations without forcing the user to memorize different passwords or carrying around different tokens.",
"title": ""
},
{
"docid": "679759d8f8e4c4ef5a2bb1356a61d7f5",
"text": "This paper describes a method of implementing two factor authentication using mobile phones. The proposed method guarantees that authenticating to services, such as online banking or ATM machines, is done in a very secure manner. The proposed system involves using a mobile phone as a software token for One Time Password generation. The generated One Time Password is valid for only a short user-defined period of time and is generated by factors that are unique to both, the user and the mobile device itself. Additionally, an SMS-based mechanism is implemented as both a backup mechanism for retrieving the password and as a possible mean of synchronization. The proposed method has been implemented and tested. Initial results show the success of the proposed method.",
"title": ""
},
{
"docid": "0f503bded2c4b0676de16345d4596280",
"text": "An emerging approach to the problem of reducing the identity theft is represented by the adoption of biometric authentication systems. Such systems however present however several challenges, related to privacy, reliability, security of the biometric data. Inter-operability is also required among the devices used for the authentication. Moreover, very often biometric authentication in itself is not sufficient as a conclusive proof of identity and has to be complemented with multiple other proofs of identity like passwords, SSN, or other user identifiers. Multi-factor authentication mechanisms are thus required to enforce strong authentication based on the biometric and identifiers of other nature.In this paper we provide a two-phase authentication mechanism for federated identity management systems. The first phase consists of a two-factor biometric authentication based on zero knowledge proofs. We employ techniques from vector-space model to generate cryptographic biometric keys. These keys are kept secret, thus preserving the confidentiality of the biometric data, and at the same time exploit the advantages of a biometric authentication. The second authentication combines several authentication factors in conjunction with the biometric to provide a strong authentication. A key advantage of our approach is that any unanticipated combination of factors can be used. Such authentication system leverages the information of the user that are available from the federated identity management system.",
"title": ""
}
] |
[
{
"docid": "13c7278393988ec2cfa9a396255e6ff3",
"text": "Finding good transfer functions for rendering medical volumes is difficult, non-intuitive, and time-consuming. We introduce a clustering-based framework for the automatic generation of transfer functions for volumetric data. The system first applies mean shift clustering to oversegment the volume boundaries according to their low-high (LH) values and their spatial coordinates, and then uses hierarchical clustering to group similar voxels. A transfer function is then automatically generated for each cluster such that the number of occlusions is reduced. The framework also allows for semi-automatic operation, where the user can vary the hierarchical clustering results or the transfer functions generated. The system improves the efficiency and effectiveness of visualizing medical images and is suitable for medical imaging applications.",
"title": ""
},
{
"docid": "3a80168bda1d5d92a5d767117581806a",
"text": "During the last years a wide range of algorithms and devices have been made available to easily acquire range images. The increasing abundance of depth data boosts the need for reliable and unsupervised analysis techniques, spanning from part registration to automated segmentation. In this context, we focus on the recognition of known objects in cluttered and incomplete 3D scans. Locating and fitting a model to a scene are very important tasks in many scenarios such as industrial inspection, scene understanding, medical imaging and even gaming. For this reason, these problems have been addressed extensively in the literature. Several of the proposed methods adopt local descriptor-based approaches, while a number of hurdles still hinder the use of global techniques. In this paper we offer a different perspective on the topic: We adopt an evolutionary selection algorithm that seeks global agreement among surface points, while operating at a local level. The approach effectively extends the scope of local descriptors by actively selecting correspondences that satisfy global consistency constraints, allowing us to attack a more challenging scenario where model and scene have different, unknown scales. This leads to a novel and very effective pipeline for 3D object recognition, which is validated with an extensive set of experiments and comparisons with recent techniques at the state of the art.",
"title": ""
},
{
"docid": "104cf54cfa4bc540b17176593cdb77d8",
"text": "Nonlinear manifold learning from unorganized data points is a very challenging unsupervised learning and data visualization problem with a great variety of applications. In this paper we present a new algorithm for manifold learning and nonlinear dimension reduction. Based on a set of unorganized data points sampled with noise from the manifold, we represent the local geometry of the manifold using tangent spaces learned by fitting an affine subspace in a neighborhood of each data point. Those tangent spaces are aligned to give the internal global coordinates of the data points with respect to the underlying manifold by way of a partial eigendecomposition of the neighborhood connection matrix. We present a careful error analysis of our algorithm and show that the reconstruction errors are of second-order accuracy. We illustrate our algorithm using curves and surfaces both in 2D/3D and higher dimensional Euclidean spaces, and 64-by-64 pixel face images with various pose and lighting conditions. We also address several theoretical and algorithmic issues for further research and improvements.",
"title": ""
},
{
"docid": "7c1af982b6ac6aa6df4549bd16c1964c",
"text": "This paper deals with the problem of estimating the position of emitters using only direction of arrival information. We propose an improvement of newly developed algorithm for position finding of a stationary emitter called sensitivity analysis. The proposed method uses Taylor series expansion iteratively to enhance the estimation of the emitter location and reduce position finding error. Simulation results show that our proposed method makes a great improvement on accuracy of position finding with respect to sensitivity analysis method.",
"title": ""
},
{
"docid": "afadbcb8c025ad6feca693c05ce7b43f",
"text": "A data structure that implements a mergeable double-ended priority queue, namely therelaxed min-max heap, is presented. A relaxed min-max heap ofn items can be constructed inO(n) time. In the worst case, operationsfind_min() andfind_max() can be performed in constant time, while each of the operationsmerge(),insert(),delete_min(),delete_max(),decrease_key(), anddelete_key() can be performed inO(logn) time. Moreover,insert() hasO(1) amortized running time. If lazy merging is used,merge() will also haveO(1) worst-case and amortized time. The relaxed min-max heap is the first data structure that achieves these bounds using only two pointers (puls one bit) per item.",
"title": ""
},
{
"docid": "f591ae6217c769d3bca2c15a021125cc",
"text": "Recent years have witnessed an explosive growth of mobile devices. Mobile devices are permeating every aspect of our daily lives. With the increasing usage of mobile devices and intelligent applications, there is a soaring demand for mobile applications with machine learning services. Inspired by the tremendous success achieved by deep learning in many machine learning tasks, it becomes a natural trend to push deep learning towards mobile applications. However, there exist many challenges to realize deep learning in mobile applications, including the contradiction between the miniature nature of mobile devices and the resource requirement of deep neural networks, the privacy and security concerns about individuals' data, and so on. To resolve these challenges, during the past few years, great leaps have been made in this area. In this paper, we provide an overview of the current challenges and representative achievements about pushing deep learning on mobile devices from three aspects: training with mobile data, efficient inference on mobile devices, and applications of mobile deep learning. The former two aspects cover the primary tasks of deep learning. Then, we go through our two recent applications that apply the data collected by mobile devices to inferring mood disturbance and user identification. Finally, we conclude this paper with the discussion of the future of this area.",
"title": ""
},
{
"docid": "a70fa8bc2a48b3cf38bd99b6d1251140",
"text": "In many of today's online applications that facilitate data exploration, results from information filters such as recommender systems are displayed alongside traditional search tools. However, the effect of prediction algorithms on users who are performing open-ended data exploration tasks through a search interface is not well understood. This paper describes a study of three interface variations of a tool for analyzing commuter traffic anomalies in the San Francisco Bay Area. The system supports novel interaction between a prediction algorithm and a human analyst, and is designed to explore the boundaries, limitations and synergies of both. The degree of explanation of underlying data and algorithmic process was varied experimentally across each interface. The experiment (N=197) was performed to assess the impact of algorithm transparency/explanation on data analysis tasks in terms of search success, general insight into the underlying data set and user experience. Results show that 1) presence of recommendations in the user interface produced a significant improvement in recall of anomalies, 2) participants were able to detect anomalies in the data that were missed by the algorithm, 3) participants who used the prediction algorithm performed significantly better when estimating quantities in the data, and 4) participants in the most explanatory condition were the least biased by the algorithm's predictions when estimating quantities.",
"title": ""
},
{
"docid": "85c74646e74aaff7121042beaded5bfe",
"text": "We consider the sampling bias introduced in the study of online networks when collecting data through publicly available APIs (application programming interfaces). We assess differences between three samples of Twitter activity; the empirical context is given by political protests taking place in May 2012. We track online communication around these protests for the period of one month, and reconstruct the network of mentions and re-tweets according to the search and the streaming APIs, and to different filraph comparison tering parameters. We find that smaller samples do not offer an accurate picture of peripheral activity; we also find that the bias is greater for the network of mentions, partly because of the higher influence of snowballing in identifying relevant nodes. We discuss the implications of this bias for the study of diffusion dynamics and political communication through social media, and advocate the need for more uniform sampling procedures to study online communication. © 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "33a1450fa00705d5ef20780b4e1de6b3",
"text": "This paper reviews the range of sensors used in electronic nose (e-nose) systems to date. It outlines the operating principles and fabrication methods of each sensor type as well as the applications in which the different sensors have been utilised. It also outlines the advantages and disadvantages of each sensor for application in a cost-effective low-power handheld e-nose system.",
"title": ""
},
{
"docid": "9f2d6c872761d8922cac8a3f30b4b7ba",
"text": "Recently, CNN reported on the future of brain-computer interfaces (BCIs). BCIs are devices that process a user's brain signals to allow direct communication and interaction with the environment. BCIs bypass the normal neuromuscular output pathways and rely on digital signal processing and machine learning to translate brain signals to action (Figure 1). Historically, BCIs were developed with biomedical applications in mind, such as restoring communication in completely paralyzed individuals and replacing lost motor function. More recent applications have targeted nondisabled individuals by exploring the use of BCIs as a novel input device for entertainment and gaming. The task of the BCI is to identify and predict behaviorally induced changes or \"cognitive states\" in a user's brain signals. Brain signals are recorded either noninvasively from electrodes placed on the scalp [electroencephalogram (EEG)] or invasively from electrodes placed on the surface of or inside the brain. BCIs based on these recording techniques have allowed healthy and disabled individuals to control a variety of devices. In this article, we will describe different challenges and proposed solutions for noninvasive brain-computer interfacing.",
"title": ""
},
{
"docid": "dcd705e131eb2b60c54ff5cb6ae51555",
"text": "Comprehension is one fundamental process in the software life cycle. Although necessary, this comprehension is difficult to obtain due to amount and complexity of information related to software. Thus, software visualization techniques and tools have been proposed to facilitate the comprehension process and to reduce maintenance costs. This paper shows the results from a Literature Systematic Review to identify software visualization techniques and tools. We analyzed 52 papers and we identified 28 techniques and 33 tools for software visualization. Among these techniques, 71% have been implemented and available to users, 48% use 3D visualization and 80% are generated using static analysis.",
"title": ""
},
{
"docid": "74686e9acab0a4d41c87cadd7da01889",
"text": "Automatic analysis of biomedical time series such as electroencephalogram (EEG) and electrocardiographic (ECG) signals has attracted great interest in the community of biomedical engineering due to its important applications in medicine. In this work, a simple yet effective bag-of-words representation that is able to capture both local and global structure similarity information is proposed for biomedical time series representation. In particular, similar to the bag-of-words model used in text document domain, the proposed method treats a time series as a text document and extracts local segments from the time series as words. The biomedical time series is then represented as a histogram of codewords, each entry of which is the count of a codeword appeared in the time series. Although the temporal order of the local segments is ignored, the bag-of-words representation is able to capture high-level structural information because both local and global structural information are well utilized. The performance of the bag-of-words model is validated on three datasets extracted from real EEG and ECG signals. The experimental results demonstrate that the proposed method is not only insensitive to parameters of the bag-of-words model such as local segment length and codebook size, but also robust to noise.",
"title": ""
},
{
"docid": "548b9580c2b36bd1730392a92f6640c2",
"text": "Image segmentation is an indispensable process in the visualization of human tissues, particularly during clinical analysis of magnetic resonance (MR) images. Unfortunately, MR images always contain a significant amount of noise caused by operator performance, equipment, and the environment, which can lead to serious inaccuracies with segmentation. A robust segmentation technique based on an extension to the traditional fuzzy c-means (FCM) clustering algorithm is proposed in this paper. A neighborhood attraction, which is dependent on the relative location and features of neighboring pixels, is shown to improve the segmentation performance dramatically. The degree of attraction is optimized by a neural-network model. Simulated and real brain MR images with different noise levels are segmented to demonstrate the superiority of the proposed technique compared to other FCM-based methods. This segmentation method is a key component of an MR image-based classification system for brain tumors, currently being developed.",
"title": ""
},
{
"docid": "c59652c2166aefb00469517cd270dea2",
"text": "Intrusion detection systems have traditionally been based on the characterization of an attack and the tracking of the activity on the system to see if it matches that characterization. Recently, new intrusion detection systems based on data mining are making their appearance in the field. This paper describes the design and experiences with the ADAM (Audit Data Analysis and Mining) system, which we use as a testbed to study how useful data mining techniques can be in intrusion detection.",
"title": ""
},
{
"docid": "681d0a6dcad967340cfb3ebe9cf7b779",
"text": "We demonstrate an integrated buck dc-dc converter for multi-V/sub CC/ microprocessors. At nominal conditions, the converter produces a 0.9-V output from a 1.2-V input. The circuit was implemented in a 90-nm CMOS technology. By operating at high switching frequency of 100 to 317 MHz with four-phase topology and fast hysteretic control, we reduced inductor and capacitor sizes by three orders of magnitude compared to previously published dc-dc converters. This eliminated the need for the inductor magnetic core and enabled integration of the output decoupling capacitor on-chip. The converter achieves 80%-87% efficiency and 10% peak-to-peak output noise for a 0.3-A output current and 2.5-nF decoupling capacitance. A forward body bias of 500 mV applied to PMOS transistors in the bridge improves efficiency by 0.5%-1%.",
"title": ""
},
{
"docid": "aa6c54a142442ee1de03c57f9afe8972",
"text": "Objectives: We present our 3 years experience with alar batten grafts, using a modified technique, for non-iatrogenic nasal valve/alar",
"title": ""
},
{
"docid": "714df72467bc3e919b7ea7424883cf26",
"text": "Although a lot of attention has been paid to software cost estimation since 1960, making accurate effort and schedule estimation is still a challenge. To collect evidence and identify potential areas of improvement in software cost estimation, it is important to investigate the estimation accuracy, the estimation method used, and the factors influencing the adoption of estimation methods in current industry. This paper analyzed 112 projects from the Chinese software project benchmarking dataset and conducted questionnaire survey on 116 organizations to investigate the above information. The paper presents the current situations related to software project estimation in China and provides evidence-based suggestions on how to improve software project estimation. Our survey results suggest, e.g., that large projects were more prone to cost and schedule overruns, that most computing managers and professionals were neither satisfied nor dissatisfied with the project estimation, that very few organizations (15%) used model-based methods, and that the high adoption cost and insignificant benefit after adoption were the main causes for low use of model-based methods.",
"title": ""
},
{
"docid": "c30ea570f744f576014aeacf545b027c",
"text": "We aimed to examine the effect of different doses of lutein supplementation on visual function in subjects with long-term computer display light exposure. Thirty-seven healthy subjects with long-term computer display light exposure ranging in age from 22 to 30 years were randomly assigned to one of three groups: Group L6 (6 mg lutein/d, n 12); Group L12 (12 mg lutein/d, n 13); and Group Placebo (maltodextrin placebo, n 12). Levels of serum lutein and visual performance indices such as visual acuity, contrast sensitivity and glare sensitivity were measured at weeks 0 and 12. After 12-week lutein supplementation, serum lutein concentrations of Groups L6 and L12 increased from 0.356 (SD 0.117) to 0.607 (SD 0.176) micromol/l, and from 0.328 (SD 0.120) to 0.733 (SD 0.354) micromol/l, respectively. No statistical changes from baseline were observed in uncorrected visual acuity and best-spectacle corrected visual acuity, whereas there was a trend toward increase in visual acuity in Group L12. Contrast sensitivity in Groups L6 and L12 increased with supplementation, and statistical significance was reached at most visual angles of Group L12. No significant change was observed in glare sensitivity over time. Visual function in healthy subjects who received the lutein supplement improved, especially in contrast sensitivity, suggesting that a higher intake of lutein may have beneficial effects on the visual performance.",
"title": ""
},
{
"docid": "decbbd09bcf7a36a3886d52864e9a08c",
"text": "INTRODUCTION\nBirth preparedness and complication readiness (BPCR) is a strategy to promote timely use of skilled maternal and neonatal care during childbirth. According to World Health Organization, BPCR should be a key component of focused antenatal care. Dakshina Kannada, a coastal district of Karnataka state, is categorized as a high-performing district (institutional delivery rate >25%) under the National Rural Health Mission. However, a substantial proportion of women in the district experience complications during pregnancy (58.3%), childbirth (45.7%), and postnatal (17.4%) period. There is a paucity of data on BPCR practice and the factors associated with it in the district. Exploring this would be of great use in the evidence-based fine-tuning of ongoing maternal and child health interventions.\n\n\nOBJECTIVE\nTo assess BPCR practice and the factors associated with it among the beneficiaries of two rural Primary Health Centers (PHCs) of Dakshina Kannada district, Karnataka, India.\n\n\nMETHODS\nA facility-based cross-sectional study was conducted among 217 pregnant (>28 weeks of gestation) and recently delivered (in the last 6 months) women in two randomly selected PHCs from June -September 2013. Exit interviews were conducted using a pre-designed semi-structured interview schedule. Information regarding socio-demographic profile, obstetric variables, and knowledge of key danger signs was collected. BPCR included information on five key components: identified the place of delivery, saved money to pay for expenses, mode of transport identified, identified a birth companion, and arranged a blood donor if the need arises. In this study, a woman who recalled at least two key danger signs in each of the three phases, i.e., pregnancy, childbirth, and postpartum (total six) was considered as knowledgeable on key danger signs. Optimal BPCR practice was defined as following at least three out of five key components of BPCR.\n\n\nOUTCOME MEASURES\nProportion, Odds ratio, and adjusted Odds ratio (adj OR) for optimal BPCR practice.\n\n\nRESULTS\nA total of 184 women completed the exit interview (mean age: 26.9±3.9 years). Optimal BPCR practice was observed in 79.3% (95% CI: 73.5-85.2%) of the women. Multivariate logistic regression revealed that age >26 years (adj OR = 2.97; 95%CI: 1.15-7.7), economic status of above poverty line (adj OR = 4.3; 95%CI: 1.12-16.5), awareness of minimum two key danger signs in each of the three phases, i.e., pregnancy, childbirth, and postpartum (adj OR = 3.98; 95%CI: 1.4-11.1), preference to private health sector for antenatal care/delivery (adj OR = 2.9; 95%CI: 1.1-8.01), and woman's discussion about the BPCR with her family members (adj OR = 3.4; 95%CI: 1.1-10.4) as the significant factors associated with optimal BPCR practice.\n\n\nCONCLUSION\nIn this study population, BPCR practice was better than other studies reported from India. Healthcare workers at the grassroots should be encouraged to involve women's family members while explaining BPCR and key danger signs with a special emphasis on young (<26 years) and economically poor women. Ensuring a reinforcing discussion between woman and her family members may further enhance the BPCR practice.",
"title": ""
}
] |
scidocsrr
|
376bfe9f5ee3773a8181c7b3fac2890d
|
A Hybrid Spectral Clustering and Deep Neural Network Ensemble Algorithm for Intrusion Detection in Sensor Networks
|
[
{
"docid": "56a8d24e4335841cf488373e79cdeaef",
"text": "Weather forecasting is a canonical predictive challenge that has depended primarily on model-based methods. We explore new directions with forecasting weather as a data-intensive challenge that involves inferences across space and time. We study specifically the power of making predictions via a hybrid approach that combines discriminatively trained predictive models with a deep neural network that models the joint statistics of a set of weather-related variables. We show how the base model can be enhanced with spatial interpolation that uses learned long-range spatial dependencies. We also derive an efficient learning and inference procedure that allows for large scale optimization of the model parameters. We evaluate the methods with experiments on real-world meteorological data that highlight the promise of the approach.",
"title": ""
}
] |
[
{
"docid": "8a6c3614d35b21a3e6c077d20309a0bd",
"text": "A multitude of different probabilistic programming languages exists today, all extending a traditional programming language with primitives to support modeling of complex, structured probability distributions. Each of these languages employs its own probabilistic primitives, and comes with a particular syntax, semantics and inference procedure. This makes it hard to understand the underlying programming concepts and appreciate the differences between the different languages. To obtain a better understanding of probabilistic programming, we identify a number of core programming concepts underlying the primitives used by various probabilistic languages, discuss the execution mechanisms that they require and use these to position and survey state-of-the-art probabilistic languages and their implementation. While doing so, we focus on probabilistic extensions of logic programming languages such as Prolog, which have been considered for over 20 years.",
"title": ""
},
{
"docid": "7ec12c0bf639c76393954baae196a941",
"text": "Honeynets have now become a standard part of security measures within the organization. Their purpose is to protect critical information systems and information; this is complemented by acquisition of information about the network threats, attackers and attacks. It is very important to consider issues affecting the deployment and usage of the honeypots and honeynets. This paper discusses the legal issues of honeynets considering their generations. Paper focuses on legal issues of core elements of honeynets, especially data control, data capture and data collection. Paper also draws attention on the issues pertaining to privacy and liability. The analysis of legal issues is based on EU law and it is supplemented by a review of the research literature, related to legal aspects of honeypots and honeynets.",
"title": ""
},
{
"docid": "706e2131c7ebcde981e140241420116f",
"text": "Most commonly used distributed machine learning systems are either synchronous or centralized asynchronous. Synchronous algorithms like AllReduceSGD perform poorly in a heterogeneous environment, while asynchronous algorithms using a parameter server suffer from 1) communication bottleneck at parameter servers when workers are many, and 2) significantly worse convergence when the traffic to parameter server is congested. Can we design an algorithm that is robust in a heterogeneous environment, while being communication efficient and maintaining the best-possible convergence rate? In this paper, we propose an asynchronous decentralized stochastic gradient decent algorithm (AD-PSGD) satisfying all above expectations. Our theoretical analysis shows AD-PSGD converges at the optimal O(1/ √ K) rate as SGD and has linear speedup w.r.t. number of workers. Empirically, ADPSGD outperforms the best of decentralized parallel SGD (D-PSGD), asynchronous parallel SGD (APSGD), and standard data parallel SGD (AllReduceSGD), often by orders of magnitude in a heterogeneous environment. When training ResNet-50 on ImageNet with up to 128 GPUs, AD-PSGD converges (w.r.t epochs) similarly to the AllReduce-SGD, but each epoch can be up to 4-8× faster than its synchronous counterparts in a network-sharing HPC environment.",
"title": ""
},
{
"docid": "8d2d1f1a1f8e6bca488579ed35fead00",
"text": "Failure Mode and Effects Analysis (FMEA) is a well-known technique for evaluating the effects of potential failure modes of components of a system. It is a crucial reliability and safety engineering activity for critical systems requiring systematic inductive reasoning from postulated component failures. We present an approach based on SysML and Prolog to support the tasks of an FMEA analyst. SysML block diagrams of the system under analysis are annotated with valid and error states of components and of their input flows, as well as with the logical conditions that may determine erroneous outputs. From the annotated model, a Prolog knowledge base is automatically built, transparently to the analyst. This can then be queried, e.g., to obtain the flows' and blocks' states that lead to system failures, or to trace the propagation of faults. The approach is suited for integration in modern model-driven system design processes. We describe a proof-of-concept implementation based on the Papyrus modeling tool under Eclipse, and show a demo example.",
"title": ""
},
{
"docid": "e5c602d9996109ea713eba551a9bf94b",
"text": "Several focus measures were studied in this paper as the measures of image clarity, in the field of multi-focus image fusion. All these focus measures are defined in the spatial domain and can be implemented in real-time fusion systems with fast response and robustness. This paper proposed a method to assess focus measures according to focus measures’ capability of distinguishing focused image blocks from defocused image blocks. Experiments were conducted on several sets of images and results show that sum-modified-Laplacian (SML) can provide better performance than other focus measures, when the execution time is not included in the evaluation. 2006 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "47fb3483c8f4a5c0284fec3d3a309c09",
"text": "The Knowledge Base Population (KBP) track at the Text Analysis Conference 2010 marks the second year of this important information extraction evaluation. This paper describes the design and implementation of LCC’s systems which participated in the tasks of Entity Linking, Slot Filling, and the new task of Surprise Slot Filling. For the entity linking task, our top score was achieved through a robust context modeling approach which incorporates topical evidence. For slot filling, we used the output of the entity linking system together with a combination of different types of relation extractors. For surprise slot filling, our customizable extraction system was extremely useful due to the time sensitive nature of the task.",
"title": ""
},
{
"docid": "ac02caa1c12ea9d883cf6599f60902c6",
"text": "Biometrics already form a significant component of current and emerging identification technologies. Biometrics systems aim to determine or verify the identity of an individual from their behavioral and/or biological characteristics. Despite significant progress, some biometric systems fail to meet the multitude of stringent security and robustness requirements to support their deployment in some practical scenarios. Among current concerns are vulnerabilities to spoofing?persons who masquerade as others to gain illegitimate accesses to protected data, services, or facilities. While the study of spoofing, or rather antispoofing, has attracted growing interest in recent years, the problem is far from being solved and will require far greater attention in the coming years. This tutorial article presents an introduction to spoofing and antispoofing research. It describes the vulnerabilities, presents an evaluation methodology for the assessment of spoofing and countermeasures, and outlines research priorities for the future.",
"title": ""
},
{
"docid": "8552f08b2c98bcf201f623e95073f9e3",
"text": "The power sensitivity of passive Radio Frequency Identification (RFID) tags heavily affects the read reliability and range. Inventory tracking systems rely heavily on strong read reliability while animal tracking in large fields rely heavily on long read range. Power Optimized Waveforms (POWs) provide a solution to improving both read reliability and read range by increasing RFID tag RF to DC power conversion efficiency. This paper presents a survey of the increases and decreases to read range of common RFID tags from Alien and Impinj with Higgs, Higgs 2, Higgs 3, Monza 3, and Monza 4 RFICs. In addition, POWs are explained in detail with examples and methods of integration into a reader.",
"title": ""
},
{
"docid": "175eef49732b11de4c9542663fc9e93f",
"text": "In this paper we outline a method of procedurally generating maps using Markov Chains. Our method attempts to learn what makes a “good” map from a set of given human-authored maps, and then uses those learned patterns to generate new maps. We present an empirical evaluation using the game Super Mario Bros., showing encouraging results.",
"title": ""
},
{
"docid": "4b854c1c1ed2ece94e88b7300b1395fa",
"text": "Spam web pages intend to achieve higher-than-deserved ranking by various techniques. While human experts could easily identify spam web pages, the manual evaluating process of a large number of pages is still time consuming and cost consuming. To assist manual evaluation, we propose an algorithm to assign spam values to web pages and semi-automatically select potential spam web pages. We first manually select a small set of spam pages as seeds. Then, based on the link structure of the web, the initial R-SpamRank values assigned to the seed pages propagate through links and distribute among the whole web page set. After sorting the pages according to their R-SpamRank values, the pages with high values are selected. Our experiments and analyses show that the algorithm is highly successful in identifying spam pages, which gains a precision of 99.1% in the top 10,000 web pages with the highest R-SpamRank values.",
"title": ""
},
{
"docid": "ce2e955ef4fba68411cafab52d206b52",
"text": "Voice-enabled user interfaces have become a popular means of interaction with various kinds of applications and services. In addition to more traditional interaction paradigms such as keyword search, voice interaction can be a convenient means of communication for many groups of users. Amazon Alexa has become a valuable tool for building custom voice-enabled applications. In this demo paper we describe how we use Amazon Alexa technologies to build a Semantic Web applications able to answer factual questions using the Wikidata knowledge graph. We describe how the Amazon Alexa voice interface allows the user to communicate with the metaphactory knowledge graph management platform and a reusable procedure for producing the Alexa application configuration from semantic data in an automated way.",
"title": ""
},
{
"docid": "dc259f1208eac95817d067b9cd13fa7c",
"text": "This paper introduces a novel approach to texture synthesis based on generative adversarial networks (GAN) (Goodfellow et al., 2014). We extend the structure of the input noise distribution by constructing tensors with different types of dimensions. We call this technique Periodic Spatial GAN (PSGAN). The PSGAN has several novel abilities which surpass the current state of the art in texture synthesis. First, we can learn multiple textures from datasets of one or more complex large images. Second, we show that the image generation with PSGANs has properties of a texture manifold: we can smoothly interpolate between samples in the structured noise space and generate novel samples, which lie perceptually between the textures of the original dataset. In addition, we can also accurately learn periodical textures. We make multiple experiments which show that PSGANs can flexibly handle diverse texture and image data sources. Our method is highly scalable and it can generate output images of arbitrary large size.",
"title": ""
},
{
"docid": "00fb9b734170743d958a79f12c4d529c",
"text": "Loneliness and perceived social support were examined in 39 adolescent boys with autism spectrum disorders (ASD) by means of a self-labeling loneliness measure, the UCLA Loneliness Scale (third version), and the Social Support Scale for Children. Twenty-one percent of the boys with ASD described themselves as often or always feeling lonely. Compared with 199 boys from regular schools in a national probability study, ASD was strongly associated with often or always feeling lonely (OR: 7.08, p < .0005), as well as with a higher degree of loneliness (F(1,229) = 11.1, p < .005). Perceived social support from classmates, parents, and a close friend correlated negatively with loneliness in ASD. The study, therefore, indicates a high occurrence of loneliness among adolescent boys with ASD and points at perceived social support as an important protective factor.",
"title": ""
},
{
"docid": "7f52960fb76c3c697ef66ffee91b13ee",
"text": "The aim of this work was to explore the feasibility of combining hot melt extrusion (HME) with 3D printing (3DP) technology, with a view to producing different shaped tablets which would be otherwise difficult to produce using traditional methods. A filament extruder was used to obtain approx. 4% paracetamol loaded filaments of polyvinyl alcohol with characteristics suitable for use in fused-deposition modelling 3DP. Five different tablet geometries were successfully 3D-printed-cube, pyramid, cylinder, sphere and torus. The printing process did not affect the stability of the drug. Drug release from the tablets was not dependent on the surface area but instead on surface area to volume ratio, indicating the influence that geometrical shape has on drug release. An erosion-mediated process controlled drug release. This work has demonstrated the potential of 3DP to manufacture tablet shapes of different geometries, many of which would be challenging to manufacture by powder compaction.",
"title": ""
},
{
"docid": "ce098e1e022235a2c322a231bff8da6c",
"text": "In recent years, due to the development of three-dimensional scanning technology, the opportunities for real objects to be three-dimensionally measured, taken into the PC as point cloud data, and used for various contents are increasing. However, the point cloud data obtained by three-dimensional scanning has many problems such as data loss due to occlusion or the material of the object to be measured, and occurrence of noise. Therefore, it is necessary to edit the point cloud data obtained by scanning. Particularly, since the point cloud data obtained by scanning contains many data missing, it takes much time to fill holes. Therefore, we propose a method to automatically filling hole obtained by three-dimensional scanning. In our method, a surface is generated from a point in the vicinity of a hole, and a hole region is filled by generating a point sequence on the surface. This method is suitable for processing to fill a large number of holes because point sequence interpolation can be performed automatically for hole regions without requiring user input.",
"title": ""
},
{
"docid": "dd9d776dbc470945154d460921005204",
"text": "The Ant Colony System (ACS) is, next to Ant Colony Optimization (ACO) and the MAX-MIN Ant System (MMAS), one of the most efficient metaheuristic algorithms inspired by the behavior of ants. In this article we present three novel parallel versions of the ACS for the graphics processing units (GPUs). To the best of our knowledge, this is the first such work on the ACS which shares many key elements of the ACO and the MMAS, but differences in the process of building solutions and updating the pheromone trails make obtaining an efficient parallel version for the GPUs a difficult task. The proposed parallel versions of the ACS differ mainly in their implementations of the pheromone memory. The first two use the standard pheromone matrix, and the third uses a novel selective pheromone memory. Computational experiments conducted on several Travelling Salesman Problem (TSP) instances of sizes ranging from 198 to 2392 cities showed that the parallel ACS on Nvidia Kepler GK104 GPU (1536 CUDA cores) is able to obtain a speedup up to 24.29x vs the sequential ACS running on a single core of Intel Xeon E5-2670 CPU. The parallel ACS with the selective pheromone memory achieved speedups up to 16.85x, but in most cases the obtained solutions were of significantly better quality than for the sequential ACS.",
"title": ""
},
{
"docid": "ff6a487e49d1fed033ad082ad7cd0524",
"text": "We present a novel technique for shadow removal based on an information theoretic approach to intrinsic image analysis. Our key observation is that any illumination change in the scene tends to increase the entropy of observed texture intensities. Similarly, the presence of texture in the scene increases the entropy of the illumination function. Consequently, we formulate the separation of an image into texture and illumination components as minimization of entropies of each component. We employ a non-parametric kernel-based quadratic entropy formulation, and present an efficient multi-scale iterative optimization algorithm for minimization of the resulting energy functional. Our technique may be employed either fully automatically, using a proposed learning based method for automatic initialization, or alternatively with small amount of user interaction. As we demonstrate, our method is particularly suitable for aerial images, which consist of either distinctive texture patterns, e.g. building facades, or soft shadows with large diffuse regions, e.g. cloud shadows.",
"title": ""
},
{
"docid": "d2e6aa2ab48cdd1907f3f373e0627fa8",
"text": "We address the issue of speeding up the training of convolutional networks. Here we study a distributed method adapted to stochastic gradient descent (SGD). The parallel optimization setup uses several threads, each applying individual gradient descents on a local variable. We propose a new way to share information between different threads inspired by gossip algorithms and showing good consensus convergence properties. Our method called GoSGD has the advantage to be fully asynchronous and decentralized. We compared our method to the recent EASGD in [17] on CIFAR-10 show encouraging results.",
"title": ""
},
{
"docid": "055660c14bbfa430703ce8b1294a0a75",
"text": "String data is ubiquitous, and its management has taken on particular importance in the past few years. Approximate queries are very important on string data especially for more complex queries involving joins. This is due, for example, to the prevalence of typographical errors in data, and multiple conventions for recording attributes such as name and address. Commercial databases do not support approximate string joins directly, and it is a challenge to implement this functionality efficiently with user-defined functions (UDFs). In this paper, we develop a technique for building approximate string join capabilities on top of commercial databases by exploiting facilities already available in them. At the core, our technique relies on matching short substrings of length , called -grams, and taking into account both positions of individual matches and the total number of such matches. Our approach applies to both approximate full string matching and approximate substring matching, with a variety of possible edit distance functions. The approximate string match predicate, with a suitable edit distance threshold, can be mapped into a vanilla relational expression and optimized by conventional relational optimizers. We demonstrate experimentally the benefits of our technique over the direct use of UDFs, using commercial database systems and real data. To study the I/O and CPU behavior of approximate string join algorithms with variations in edit distance and -gram length, we also describe detailed experiments based on a prototype implementation. Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the VLDB copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Very Large Data Base Endowment. To copy otherwise, or to republish, requires a fee and/or special permission from the Endowment. Proceedings of the 27th VLDB Conference, Roma, Italy, 2001",
"title": ""
},
{
"docid": "1c3c5c01fbcb3a07274b49b8df0f2963",
"text": "E D P R Demand forecasts play a crucial role for supply chain management. The future demand for a certain product is the basis for the respective replenishment systems. Several forecasting techniques have been developed, each one with its particular advantages and disadvantages compared to other approaches. This motivates the development of hybrid systems combining different techniques and their respective strengths. In this paper, we present a hybrid intelligent system combining Autoregressive Integrated Moving Average (ARIMA) models and neural networks for demand forecasting. We show improvements in forecasting accuracy and propose a replenishment system for a Chilean supermarket, which leads simultaneously to fewer sales failures and lower inventory levels than the previous solution. # 2005 Published by Elsevier B.V.",
"title": ""
}
] |
scidocsrr
|
047e00e7538272aee2095920e129dbe8
|
Random Walks for Text Semantic Similarity
|
[
{
"docid": "a12769e78530516b382fbc18fe4ec052",
"text": "Roget’s Thesaurus has not been sufficiently appreciated in Natural Language Processing. We show that Roget's and WordNet are birds of a feather. In a few typical tests, we compare how the two resources help measure semantic similarity. One of the benchmarks is Miller and Charles’ list of 30 noun pairs to which human judges had assigned similarity measures. We correlate these measures with those computed by several NLP systems. The 30 pairs can be traced back to Rubenstein and Goodenough’s 65 pairs, which we have also studied. Our Roget’sbased system gets correlations of .878 for the smaller and .818 for the larger list of noun pairs; this is quite close to the .885 that Resnik obtained when he employed humans to replicate the Miller and Charles experiment. We further evaluate our measure by using Roget’s and WordNet to answer 80 TOEFL, 50 ESL and 300 Reader’s Digest questions: the correct synonym must be selected amongst a group of four words. Our system gets 78.75%, 82.00% and 74.33% of the questions respectively, better than any published results.",
"title": ""
},
{
"docid": "d8056ee6b9d1eed4bc25e302c737780c",
"text": "This survey reviews the research related to PageRank computing. Components of a PageRank vector serve as authority weights for Web pages independent of their textual content, solely based on the hyperlink structure of the Web. PageRank is typically used as a Web Search ranking component. This defines the importance of the model and the data structures that underly PageRank processing. Computing even a single PageRank is a difficult computational task. Computing many PageRanks is a much more complex challenge. Recently, significant effort has been invested in building sets of personalized PageRank vectors. PageRank is also used in many diverse applications other than ranking. Below we are interested in the theoretical foundations of the PageRank formulation, in accelerating of PageRank computing, in the effects of particular aspects of Web graph structure on optimal organization of computations, and in PageRank stability. We also review alternative models that lead to authority indices similar to PageRank and the role of such indices in applications other than Web Search. We also discuss link-based search personalization and outline some aspects of PageRank infrastructure from associated measures of convergence to link preprocessing. Content",
"title": ""
}
] |
[
{
"docid": "4d08bbbe59654c1e1140faebcc33701e",
"text": "Muenke Syndrome (FGFR3-Related Craniosynostosis): Expansion of the Phenotype and Review of the Literature Emily S. Doherty, Felicitas Lacbawan, Donald W. Hadley, Carmen Brewer, Christopher Zalewski, H. Jeff Kim, Beth Solomon, Kenneth Rosenbaum, Demetrio L. Domingo, Thomas C. Hart, Brian P. Brooks, LaDonna Immken, R. Brian Lowry, Virginia Kimonis, Alan L. Shanske, Fernanda Sarquis Jehee, Maria Rita Passos Bueno, Carol Knightly, Donna McDonald-McGinn, Elaine H. Zackai, and Maximilian Muenke* National Human Genome Research Institute, National Institutes of Health, Bethesda, Maryland National Institute on Deafness and Other Communication Disorders, National Institutes of Health, Bethesda, Maryland Warren Grant Magnuson Clinical Center, National Institutes of Health, Bethesda, Maryland Children’s National Medical Center, Washington, District of Columbia National Institute of Dental and Craniofacial Research, National Institutes of Health, Bethesda, Maryland National Eye Institute, National Institutes of Health, Bethesda, Maryland Specially for Children, Austin, Texas Department of Medical Genetics, Alberta Children’s Hospital and University of Calgary, Calgary, Alberta, Canada Children’s Hospital Boston, Boston, Massachusetts Children’s Hospital Montefiore, Bronx, New York University of São Paulo, São Paulo, Brazil The Children’s Hospital of Philadelphia, Philadelphia, Pennsylvania Carilion Clinic, Roanoke, Virginia",
"title": ""
},
{
"docid": "2c2942905010e71cda5f8b0f41cf2dd0",
"text": "1 Focus and anaphoric destressing Consider a pronunciation of (1) with prominence on the capitalized noun phrases. In terms of a relational notion of prominence, the subject NP she] is prominent within the clause S she beats me], and NP Sue] is prominent within the clause S Sue beats me]. This prosody seems to have the pragmatic function of putting the two clauses into opposition, with prominences indicating where they diier, and prosodic reduction of the remaining parts indicating where the clauses are invariant. (1) She beats me more often than Sue beats me Car84], Roc86] and Roo92] propose theories of focus interpretation which formalize the idea just outlined. Under my assumptions, the prominences are the correlates of a syntactic focus features on the two prominent NPs, written as F subscripts. Further, the grammatical representation of (1) includes operators which interpret the focus features at the level of the minimal dominating S nodes. In the logical form below, each focus feature is interpreted by an operator written .",
"title": ""
},
{
"docid": "6bbff9d65a0e80fcbf0a6f840266accf",
"text": "This paper presents a complete methodology for the design of AC permanent magnet motors for electric vehicle traction. Electromagnetic, thermal and mechanical performance aspects are considered and modern CAD tools are utilised throughout the methodology. A 36 slot 10 pole interior permanent magnet design example is used throughout the analysis.",
"title": ""
},
{
"docid": "b6cd09d268aa8e140bef9fc7890538c3",
"text": "XML is quickly becoming the de facto standard for data exchange over the Internet. This is creating a new set of data management requirements involving XML, such as the need to store and query XML documents. Researchers have proposed using relational database systems to satisfy these requirements by devising ways to \"shred\" XML documents into relations, and translate XML queries into SQL queries over these relations. However, a key issue with such an approach, which has largely been ignored in the research literature, is how (and whether) the ordered XML data model can be efficiently supported by the unordered relational data model. This paper shows that XML's ordered data model can indeed be efficiently supported by a relational database system. This is accomplished by encoding order as a data value. We propose three order encoding methods that can be used to represent XML order in the relational data model, and also propose algorithms for translating ordered XPath expressions into SQL using these encoding methods. Finally, we report the results of an experimental study that investigates the performance of the proposed order encoding methods on a workload of ordered XML queries and updates.",
"title": ""
},
{
"docid": "888de1004e212e1271758ac35ff9807d",
"text": "We present the design and implementation of iVoLVER, a tool that allows users to create visualizations without textual programming. iVoLVER is designed to enable flexible acquisition of many types of data (text, colors, shapes, quantities, dates) from multiple source types (bitmap charts, webpages, photographs, SVGs, CSV files) and, within the same canvas, supports transformation of that data through simple widgets to construct interactive animated visuals. Aside from the tool, which is web-based and designed for pen and touch, we contribute the design of the interactive visual language and widgets for extraction, transformation, and representation of data. We demonstrate the flexibility and expressive power of the tool through a set of scenarios, and discuss some of the challenges encountered and how the tool fits within the current infovis tool landscape.",
"title": ""
},
{
"docid": "fae60b86d98a809f876117526106719d",
"text": "Big Data security analysis is commonly used for the analysis of large volume security data from an organisational perspective, requiring powerful IT infrastructure and expensive data analysis tools. Therefore, it can be considered to be inaccessible to the vast majority of desktop users and is difficult to apply to their rapidly growing data sets for security analysis. A number of commercial companies offer a desktop-oriented big data security analysis solution; however, most of them are prohibitive to ordinary desktop users with respect to cost and IT processing power. This paper presents an intuitive and inexpensive big data security analysis approach using Computational Intelligence (CI) techniques for Windows desktop users, where the combination of Windows batch programming, EmEditor and R are used for the security analysis. The simulation is performed on a real dataset with more than 10 million observations, which are collected from Windows Firewall logs to demonstrate how a desktop user can gain insight into their abundant and untouched data and extract useful information to prevent their system from current and future security threats. This CI-based big data security analysis approach can also be extended to other types of security logs such as event logs, application logs and web logs.",
"title": ""
},
{
"docid": "62686423e15ef0cac3a3bbe8f33e3367",
"text": "Most of the existing deep learning-based methods for 3D hand and human pose estimation from a single depth map are based on a common framework that takes a 2D depth map and directly regresses the 3D coordinates of keypoints, such as hand or human body joints, via 2D convolutional neural networks (CNNs). The first weakness of this approach is the presence of perspective distortion in the 2D depth map. While the depth map is intrinsically 3D data, many previous methods treat depth maps as 2D images that can distort the shape of the actual object through projection from 3D to 2D space. This compels the network to perform perspective distortion-invariant estimation. The second weakness of the conventional approach is that directly regressing 3D coordinates from a 2D image is a highly nonlinear mapping, which causes difficulty in the learning procedure. To overcome these weaknesses, we firstly cast the 3D hand and human pose estimation problem from a single depth map into a voxel-to-voxel prediction that uses a 3D voxelized grid and estimates the per-voxel likelihood for each keypoint. We design our model as a 3D CNN that provides accurate estimates while running in real-time. Our system outperforms previous methods in almost all publicly available 3D hand and human pose estimation datasets and placed first in the HANDS 2017 frame-based 3D hand pose estimation challenge. The code is available in1.",
"title": ""
},
{
"docid": "2f54746f666befe19af1391f1d90aca8",
"text": "The Internet of Things has drawn lots of research attention as the growing number of devices connected to the Internet. Long Term Evolution-Advanced (LTE-A) is a promising technology for wireless communication and it's also promising for IoT. The main challenge of incorporating IoT devices into LTE-A is a large number of IoT devices attempting to access the network in a short period which will greatly reduce the network performance. In order to improve the network utilization, we adopted a hierarchy architecture using a gateway for connecting the devices to the eNB and proposed a multiclass resource allocation algorithm for LTE based IoT communication. Simulation results show that the proposed algorithm can provide good performance both on data rate and latency for different QoS applications both in saturated and unsaturated environment.",
"title": ""
},
{
"docid": "dc94e340ceb76a0c9fda47bac4be9920",
"text": "Mobile health (mHealth) apps are an ideal tool for monitoring and tracking long-term health conditions; they are becoming incredibly popular despite posing risks to personal data privacy and security. In this paper, we propose a testing method for Android mHealth apps which is designed using a threat analysis, considering possible attack scenarios and vulnerabilities specific to the domain. To demonstrate the method, we have applied it to apps for managing hypertension and diabetes, discovering a number of serious vulnerabilities in the most popular applications. Here we summarise the results of that case study, and discuss the experience of using a testing method dedicated to the domain, rather than out-of-the-box Android security testing methods. We hope that details presented here will help design further, more automated, mHealth security testing tools and methods.",
"title": ""
},
{
"docid": "a83931702879dc41a3d7007ac4c32716",
"text": "We propose a query-based generative model for solving both tasks of question generation (QG) and question answering (QA). The model follows the classic encoderdecoder framework. The encoder takes a passage and a query as input then performs query understanding by matching the query with the passage from multiple perspectives. The decoder is an attention-based Long Short Term Memory (LSTM) model with copy and coverage mechanisms. In the QG task, a question is generated from the system given the passage and the target answer, whereas in the QA task, the answer is generated given the question and the passage. During the training stage, we leverage a policy-gradient reinforcement learning algorithm to overcome exposure bias, a major problem resulted from sequence learning with cross-entropy loss. For the QG task, our experiments show higher performances than the state-of-the-art results. When used as additional training data, the automatically generated questions even improve the performance of a strong extractive QA system. In addition, our model shows better performance than the state-of-the-art baselines of the generative QA task.",
"title": ""
},
{
"docid": "d75d453181293c92ec9bab800029e366",
"text": "For a majority of applications implemented today, the Intermediate Bus Architecture (IBA) has been the preferred power architecture. This power architecture has led to the development of the isolated, semi-regulated DC/DC converter known as the Intermediate Bus Converter (IBC). Fixed ratio Bus Converters that employ a new power topology known as the Sine Amplitude Converter (SAC) offer dramatic improvements in power density, noise reduction, and efficiency over the existing IBC products. As electronic systems continue to trend toward lower voltages with higher currents and as the speed of contemporary loads - such as state-of-the-art processors and memory - continues to increase, the power systems designer is challenged to provide small, cost effective and efficient solutions that offer the requisite performance. Traditional power architectures cannot, in the long run, provide the required performance. Vicor's Factorized Power Architecture (FPA), and the implementation of V·I Chips, provides a revolutionary new and optimal power conversion solution that addresses the challenge in every respect. The technology behind these power conversion engines used in the IBC and V·I Chips is analyzed and contextualized in a system perspective.",
"title": ""
},
{
"docid": "eade87f676c023cd3024226b48131ffb",
"text": "Finding the dense regions of a graph and relations among them is a fundamental task in network analysis. Nucleus decomposition is a principled framework of algorithms that generalizes the k-core and k-truss decompositions. It can leverage the higher-order structures to locate the dense subgraphs with hierarchical relations. Computation of the nucleus decomposition is performed in multiple steps, known as the peeling process, and it requires global information about the graph at any time. This prevents the scalable parallelization of the computation. Also, it is not possible to compute approximate and fast results by the peeling process, because it does not produce the densest regions until the algorithm is complete. In a previous work, Lu et al. proposed to iteratively compute the h-indices of vertex degrees to obtain the core numbers and prove that the convergence is obtained after a finite number of iterations. In this work, we generalize the iterative h-index computation for any nucleus decomposition and prove convergence bounds. We present a framework of local algorithms to obtain the exact and approximate nucleus decompositions. Our algorithms are pleasingly parallel and can provide approximations to explore time and quality trade-offs. Our shared-memory implementation verifies the efficiency, scalability, and effectiveness of our algorithms on real-world networks. In particular, using 24 threads, we obtain up to 4.04x and 7.98x speedups for k-truss and (3, 4) nucleus decompositions.",
"title": ""
},
{
"docid": "0344917c6b44b85946313957a329bc9c",
"text": "Recently, Haas and Hellerstein proposed the hash ripple join algorithm in the context of online aggregation. Although the algorithm rapidly gives a good estimate for many join-aggregate problem instances, the convergence can be slow if the number of tuples that satisfy the join predicate is small or if there are many groups in the output. Furthermore, if memory overflows (for example, because the user allows the algorithm to run to completion for an exact answer), the algorithm degenerates to block ripple join and performance suffers. In this paper, we build on the work of Haas and Hellerstein and propose a new algorithm that (a) combines parallelism with sampling to speed convergence, and (b) maintains good performance in the presence of memory overflow. Results from a prototype implementation in a parallel DBMS show that its rate of convergence scales with the number of processors, and that when allowed to run to completion, even in the presence of memory overflow, it is competitive with the traditional parallel hybrid hash join algorithm.",
"title": ""
},
{
"docid": "7965e8074a84c64c971e22995caaab6b",
"text": "Mechanical details as well as electrical models of FDR (frequency domain reflectometry) sensors for the measurement of the complex dielectric permittivity of porous materials are presented. The sensors are formed from two stainless steel parallel waveguides of various lengths. Using the data from VNA (vector network analyzer) with the connected FDR sensor and selected models of the applied sensor it was possible obtain the frequency spectrum of dielectric permittivity from 10 to 500 MHz of reference liquids and soil samples of various moisture and salinity. The performance of the analyzed sensors were compared with TDR (time domain reflectometry) ones of similar mechanical construction.",
"title": ""
},
{
"docid": "5fbb54e63158066198cdf59e1a8e9194",
"text": "In this paper, we present results of a study of the data rate fairness among nodes within a LoRaWAN cell. Since LoRa/LoRaWAN supports various data rates, we firstly derive the fairest ratios of deploying each data rate within a cell for a fair collision probability. LoRa/LoRaWan, like other frequency modulation based radio interfaces, exhibits the capture effect in which only the stronger signal of colliding signals will be extracted. This leads to unfairness, where far nodes or nodes experiencing higher attenuation are less likely to see their packets received correctly. Therefore, we secondly develop a transmission power control algorithm to balance the received signal powers from all nodes regardless of their distances from the gateway for a fair data extraction. Simulations show that our approach achieves higher fairness in data rate than the state-of-art in almost all network configurations.",
"title": ""
},
{
"docid": "5b984d57ad0940838b703eadd7c733b3",
"text": "Neural sequence generation is commonly approached by using maximumlikelihood (ML) estimation or reinforcement learning (RL). However, it is known that they have their own shortcomings; ML presents training/testing discrepancy, whereas RL suffers from sample inefficiency. We point out that it is difficult to resolve all of the shortcomings simultaneously because of a tradeoff between ML and RL. In order to counteract these problems, we propose an objective function for sequence generation using α-divergence, which leads to an ML-RL integrated method that exploits better parts of ML and RL. We demonstrate that the proposed objective function generalizes ML and RL objective functions because it includes both as its special cases (ML corresponds to α→ 0 and RL to α→ 1). We provide a proposition stating that the difference between the RL objective function and the proposed one monotonically decreases with increasing α. Experimental results on machine translation tasks show that minimizing the proposed objective function achieves better sequence generation performance than ML-based methods.",
"title": ""
},
{
"docid": "549d486d6ff362bc016c6ce449e29dc9",
"text": "Aging is very often associated with magnesium (Mg) deficit. Total plasma magnesium concentrations are remarkably constant in healthy subjects throughout life, while total body Mg and Mg in the intracellular compartment tend to decrease with age. Dietary Mg deficiencies are common in the elderly population. Other frequent causes of Mg deficits in the elderly include reduced Mg intestinal absorption, reduced Mg bone stores, and excess urinary loss. Secondary Mg deficit in aging may result from different conditions and diseases often observed in the elderly (i.e. insulin resistance and/or type 2 diabetes mellitus) and drugs (i.e. use of hypermagnesuric diuretics). Chronic Mg deficits have been linked to an increased risk of numerous preclinical and clinical outcomes, mostly observed in the elderly population, including hypertension, stroke, atherosclerosis, ischemic heart disease, cardiac arrhythmias, glucose intolerance, insulin resistance, type 2 diabetes mellitus, endothelial dysfunction, vascular remodeling, alterations in lipid metabolism, platelet aggregation/thrombosis, inflammation, oxidative stress, cardiovascular mortality, asthma, chronic fatigue, as well as depression and other neuropsychiatric disorders. Both aging and Mg deficiency have been associated to excessive production of oxygen-derived free radicals and low-grade inflammation. Chronic inflammation and oxidative stress are also present in several age-related diseases, such as many vascular and metabolic conditions, as well as frailty, muscle loss and sarcopenia, and altered immune responses, among others. Mg deficit associated to aging may be at least one of the pathophysiological links that may help to explain the interactions between inflammation and oxidative stress with the aging process and many age-related diseases.",
"title": ""
},
{
"docid": "7a8a98b91680cbc63594cd898c3052c8",
"text": "Policy-based access control is a technology that achieves separation of concerns through evaluating an externalized policy at each access attempt. While this approach has been well-established for request-response applications, it is not supported for database queries of data-driven applications, especially for attribute-based policies. In particular, search operations for such applications involve poor scalability with regard to the data set size for this approach, because they are influenced by dynamic runtime conditions. This paper proposes a scalable application-level middleware solution that performs runtime injection of the appropriate rules into the original search query, so that the result set of the search includes only items to which the subject is entitled. Our evaluation shows that our method scales far better than current state of practice approach that supports policy-based access control.",
"title": ""
},
{
"docid": "93e5ed1d67fe3d20c7b0177539e509c4",
"text": "Business models that rely on social media and user-generated content have shifted from the more traditional business model, where value for the organization is derived from the one-way delivery of products and/or services, to the provision of intangible value based on user engagement. This research builds a model that hypothesizes that the user experiences from social interactions among users, operationalized as personalization, transparency, access to social resources, critical mass of social acquaintances, and risk, as well as with the technical features of the social media platform, operationalized as the completeness, flexibility, integration, and evolvability, influence user engagement and subsequent usage behavior. Using survey responses from 408 social media users, findings suggest that both social and technical factors impact user engagement and ultimately usage with additional direct impacts on usage by perceptions of the critical mass of social acquaintances and risk. KEywORdS Social Interactions, Social Media, Social Networking, Technical Features, Use, User Engagement, User Experience",
"title": ""
}
] |
scidocsrr
|
ba57363cbbff05ab46ab5b39ac1b8369
|
Generating Different Story Tellings from Semantic Representations of Narrative
|
[
{
"docid": "d849c7a7fd0e475b4f0f64ffdeaf2790",
"text": "In this paper we provide a description of TimeML, a rich specification language for event and temporal expressions in natural language text, developed in the context of the AQUAINT program on Question Answering Systems. Unlike most previous work on event annotation, TimeML captures three distinct phenomena in temporal markup: (1) it systematically anchors event predicates to a broad range of temporally denotating expressions; (2) it orders event expressions in text relative to one another, both intrasententially and in discourse; and (3) it allows for a delayed (underspecified) interpretation of partially determined temporal expressions. We demonstrate the expressiveness of TimeML for a broad range of syntactic and semantic contexts, including aspectual predication, modal subordination, and an initial treatment of lexical and constructional causation in text.",
"title": ""
},
{
"docid": "beea84b0d96da0f4b29eabf3b242a55c",
"text": "Recent years have seen a growing interest in creating virtual agents to populate the cast of characters for interactive narrative. A key challenge posed by interactive characters for narrative environments is devising expressive dialogue generators. To be effective, character dialogue generators must be able to simultaneously take into account multiple sources of information that bear on dialogue, including character attributes, plot development, and communicative goals. Building on the narrative theory of character archetypes, we propose an archetype-driven character dialogue generator that uses a probabilistic unification framework to generate dialogue motivated by character personality and narrative history to achieve communicative goals. The generator’s behavior is illustrated with character dialogue generation in a narrative-centered learning environment, CRYSTAL ISLAND.",
"title": ""
}
] |
[
{
"docid": "5e68152b3577dfdb60a53c77086692f6",
"text": "We analyze trading opportunities that arise from differences between the bond and the CDS market. By simultaneously entering a position in a CDS contract and the underlying bond, traders can build a default-risk free position that allows them to repeatedly earn the difference between the bond asset swap spread and the CDS, known as the basis. We show that the basis size is closely related to measures of company-specific credit risk and liquidity, and to market conditions. In analyzing the aggregate profits of these basis trading strategies, we document that dissolving a position leads to significant profit variations, but that attractive risk-return characteristics still apply. The aggregate profits depend on the credit risk, liquidity, and market measures even more strongly than the basis itself, and we show which conditions make long and short basis trades more profitable. Finally, we document the impact of the financial crisis on the profits of long and short basis trades, and show that the formerly more profitable long basis trades experienced stronger profit decreases than short basis trades. JEL classification: C31, C32, G12, G13, G14, G32",
"title": ""
},
{
"docid": "3973a575bae986eb0410df18b0de8a5a",
"text": "The design and operation along with verifying measurements of a harmonic radar transceiver, or tag, developed for insect tracking are presented. A short length of wire formed the antenna while a beam lead Schottky diode across a resonant loop formed the frequency doubler circuit yielding a total tag mass of less than 3 mg. Simulators using the method-of-moments for the antenna, finite-integral time-domain for the loop, and harmonic balance for the nonlinear diode element were used to predict and optimize the transceiver performance. This performance is compared to the ideal case and to measurements performed using a pulsed magnetron source within an anechoic chamber. A method for analysis of the tag is presented and used to optimize the design by creating the largest possible return signal at the second harmonic frequency for a particular incident power density. These methods were verified through measurement of tags both in isolation and mounted on insects. For excitation at 9.41 GHz the optimum tag in isolation had an antenna length of 12 mm with a loop diameter of 1 mm which yielded a harmonic cross-section of 40 mm/sup 2/. For tags mounted on Colorado potato beetles, optimum performance was achieved with an 8 mm dipole fed 2 mm from the beetle attached end. A theory is developed that describes harmonic radar in a fashion similar to the conventional radar range equation but with harmonic cross-section replacing the conventional radar cross-section. This method provides a straightforward description of harmonic radar system performance as well as provides a means to describe harmonic radar tag performance.",
"title": ""
},
{
"docid": "b9e4a201050b379500e5e8a2bca81025",
"text": "On the basis of a longitudinal field study of domestic communication, we report some essential constituents of the user experience of awareness of others who are distant in space or time, i.e. presence-in-absence. We discuss presence-in-absence in terms of its social (Contact) and informational (Content) facets, and the circumstances of the experience (Context). The field evaluation of a prototype, 'The Cube', designed to support presence-in-absence, threw up issues in the interrelationships between contact, content and context; issues that the designers of similar social artifacts will need to address.",
"title": ""
},
{
"docid": "aead9a7a19551a445584064a669b191a",
"text": "The purpose of this paper is to study the impact of tourism marketing mix and how it affects tourism in Jordan, and to determine which element of the marketing mix has the strongest impact on Jordanian tourism and how it will be used to better satisfy tourists. The paper will focus on foreign tourists coming to Jordan; a field survey will be used by using questionnaires to collect data. Three hundred questionnaires will be collected from actual tourists who visited Jordan, the data will be collected from selected tourism sites like (Petra, Jarash,.... etc.) and classified from one to five stars hotels in Jordan. The questionnaire will be designed in different languages (English, French and Arabic) to meet all tourists from different countries. The study established that from all the marketing mix elements, the researcher studied, product & promotion had the strongest effect on foreign tourist's satisfaction, where price and distribution were also effective significant factors. The research recommends suitable marketing strategies for all elements especially product & promotion.",
"title": ""
},
{
"docid": "a798db9dfcfec4b8149de856c7e69b48",
"text": "Compared to scanned images, document pictures captured by camera can suffer from distortions due to perspective and page warping. It is necessary to restore a frontal planar view of the page before other OCR techniques can be applied. In this paper we describe a novel approach for flattening a curved document in a single picture captured by an uncalibrated camera. To our knowledge this is the first reported method able to process general curved documents in images without camera calibration. We propose to model the page surface by a developable surface, and exploit the properties (parallelism and equal line spacing) of the printed textual content on the page to recover the surface shape. Experiments show that the output images are much more OCR friendly than the original ones. While our method is designed to work with any general developable surfaces, it can be adapted for typical special cases including planar pages, scans of thick books, and opened books.",
"title": ""
},
{
"docid": "a7f4f3d03b69fc339b4908e247a36f30",
"text": "In this letter, we present a novel feature extraction method for sound event classification, based on the visual signature extracted from the sound's time-frequency representation. The motivation stems from the fact that spectrograms form recognisable images, that can be identified by a human reader, with perception enhanced by pseudo-coloration of the image. The signal processing in our method is as follows. 1) The spectrogram is normalised into greyscale with a fixed range. 2) The dynamic range is quantized into regions, each of which is then mapped to form a monochrome image. 3) The monochrome images are partitioned into blocks, and the distribution statistics in each block are extracted to form the feature. The robustness of the proposed method comes from the fact that the noise is normally more diffuse than the signal and therefore the effect of the noise is limited to a particular quantization region, leaving the other regions less changed. The method is tested on a database of 60 sound classes containing a mixture of collision, action and characteristic sounds and shows a significant improvement over other methods in mismatched conditions, without the need for noise reduction.",
"title": ""
},
{
"docid": "aabf75855e39682b353c46332bc218db",
"text": "Semantic Web Mining is the outcome of two new and fast developing domains: Semantic Web and Data Mining. The Semantic Web is an extension of the current web in which information is given well-defined meaning, better enabling computers and people to work in cooperation. Data Mining is the nontrivial process of identifying valid, previously unknown, potentially useful patterns in data. Semantic Web Mining refers to the application of data mining techniques to extract knowledge from World Wide Web or the area of data mining that refers to the use of algorithms for extracting patterns from resources distributed over in the web. The aim of Semantic Web Mining is to discover and retrieve useful and interesting patterns from a huge set of web data. This web data consists of different kind of information, including web structure data, web log data and user profiles data. Semantic Web Mining is a relatively new area, broadly interdisciplinary, attracting researchers from: computer science, information retrieval specialists and experts from business studies fields. Web data mining includes web content mining, web structure mining and web usage mining. All of these approaches attempt to extract knowledge from the web, produce some useful results from the knowledge extracted and apply these results to the real world problems. To improve the internet service quality and increase the user click rate on a specific website, it is necessary for a web developer to know what the user really want to do, predict which pages the user is potentially interested in. In this paper, various techniques for Semantic Web mining like web content mining, web usage mining and web structure mining are discussed. Our main focus is on web usage mining and its application in web personalization. Study shows that the accuracy of recommendation system has improved significantly with the use of semantic web mining in web personalization.",
"title": ""
},
{
"docid": "084d376f9aa8d56d6dfec7d78e2c807f",
"text": "A comprehensive model is presented which enables the effects of ionizing radiation on bulk CMOS devices and integrated circuits to be simulated with closed form functions. The model adapts general equations for defect formation in uniform SiO2 films to facilitate analytical calculations of trapped charge and interface trap buildup in structurally irregular and radiation sensitive shallow trench isolation (STI) oxides. A new approach whereby non-uniform defect distributions along the STI sidewall are calculated, integrated into implicit surface potential equations, and ultimately used to model radiation-induced ldquoedgerdquo leakage currents in n-channel MOSFETs is described. The results of the modeling approach are compared to experimental data obtained on 130 nm and 90 nm devices. The features having the greatest impact on the increased radiation tolerance of advanced deep-submicron bulk CMOS technologies are also discussed. These features include increased doping levels along the STI sidewall.",
"title": ""
},
{
"docid": "fc6382579f90ffbc2e54498ad2034d3b",
"text": "Features extracted by deep networks have been popular in many visual search tasks. This article studies deep network structures and training schemes for mobile visual search. The goal is to learn an effective yet portable feature representation that is suitable for bridging the domain gap between mobile user photos and (mostly) professionally taken product images while keeping the computational cost acceptable for mobile-based applications. The technical contributions are twofold. First, we propose an alternative of the contrastive loss popularly used for training deep Siamese networks, namely robust contrastive loss, where we relax the penalty on some positive and negative pairs to alleviate overfitting. Second, a simple multitask fine-tuning scheme is leveraged to train the network, which not only utilizes knowledge from the provided training photo pairs but also harnesses additional information from the large ImageNet dataset to regularize the fine-tuning process. Extensive experiments on challenging real-world datasets demonstrate that both the robust contrastive loss and the multitask fine-tuning scheme are effective, leading to very promising results with a time cost suitable for mobile product search scenarios.",
"title": ""
},
{
"docid": "17b8bff80cf87fb7e3c6c729bb41c99e",
"text": "Off-policy reinforcement learning enables near-optimal policy from suboptimal experience, thereby provisions opportunity for artificial intelligence applications in healthcare. Previous works have mainly framed patient-clinician interactions as Markov decision processes, while true physiological states are not necessarily fully observable from clinical data. We capture this situation with partially observable Markov decision process, in which an agent optimises its actions in a belief represented as a distribution of patient states inferred from individual history trajectories. A Gaussian mixture model is fitted for the observed data. Moreover, we take into account the fact that nuance in pharmaceutical dosage could presumably result in significantly different effect by modelling a continuous policy through a Gaussian approximator directly in the policy space, i.e. the actor. To address the challenge of infinite number of possible belief states which renders exact value iteration intractable, we evaluate and plan for only every encountered belief, through heuristic search tree by tightly maintaining lower and upper bounds of the true value of belief. We further resort to function approximations to update value bounds estimation, i.e. the critic, so that the tree search can be improved through more compact bounds at the fringe nodes that will be back-propagated to the root. Both actor and critic parameters are learned via gradient-based approaches. Our proposed policy trained from real intensive care unit data is capable of dictating dosing on vasopressors and intravenous fluids for sepsis patients that lead to the best patient outcomes.",
"title": ""
},
{
"docid": "26c58183e71f916f37d67f1cf848f021",
"text": "With the increasing popularity of herbomineral preparations in healthcare, a new proprietary herbomineral formulation was formulated with ashwagandha root extract and three minerals viz. zinc, magnesium, and selenium. The aim of the study was to evaluate the immunomodulatory potential of Biofield Energy Healing (The Trivedi Effect ® ) on the herbomineral formulation using murine splenocyte cells. The test formulation was divided into two parts. One was the control without the Biofield Energy Treatment. The other part was labelled the Biofield Energy Treated sample, which received the Biofield Energy Healing Treatment remotely by twenty renowned Biofield Energy Healers. Through MTT assay, all the test formulation concentrations from 0.00001053 to 10.53 μg/mL were found to be safe with cell viability ranging from 102.61% to 194.57% using splenocyte cells. The Biofield Treated test formulation showed a significant (p≤0.01) inhibition of TNF-α expression by 15.87%, 20.64%, 18.65%, and 20.34% at 0.00001053, 0.0001053, 0.01053, and 0.1053, μg/mL, respectively as compared to the vehicle control (VC) group. The level of TNF-α was reduced by 8.73%, 19.54%, and 14.19% at 0.001053, 0.01053, and 0.1053 μg/mL, respectively in the Biofield Treated test formulation compared to the untreated test formulation. The expression of IL-1β reduced by 22.08%, 23.69%, 23.00%, 16.33%, 25.76%, 16.10%, and 23.69% at 0.00001053, 0.0001053, 0.001053, 0.01053, 0.1053, 1.053 and 10.53 μg/mL, respectively compared to the VC. Additionally, the expression of MIP-1α significantly (p≤0.001) reduced by 13.35%, 22.96%, 25.11%, 22.71%, and 21.83% at 0.00001053, 0.0001053, 0.01053, 1.053, and 10.53 μg/mL, respectively in the Biofield Treated test formulation compared to the VC. The Biofield Treated test formulation significantly down-regulated the MIP-1α expression by 10.75%, 9.53%, 9.57%, and 10.87% at 0.00001053, 0.01053, 0.1053 and 1.053 μg/mL, respectively compared to the untreated test formulation. The results showed the IFN-γ expression was also significantly (p≤0.001) reduced by 39.16%, 40.34%, 27.57%, 26.06%, 42.53%, and 48.91% at 0.0001053, 0.001053, 0.01053, 0.1053, 1.053, and 10.53 μg/mL, respectively in the Biofield Treated test formulation compared to the VC. The Biofield Treated test formulation showed better suppression of IFN-γ expression by 15.46%, 13.78%, International Journal of Biomedical Engineering and Clinical Science 2016; 2(1): 8-17 9 17.14%, and 13.11% at concentrations 0.001053, 0.01053, 0.1053, and 10.53 μg/mL, respectively compared to the untreated test formulation. Overall, the results demonstrated that The Trivedi Effect ® Biofield Energy Healing (TEBEH) has the capacity to potentiate the immunomodulatory and anti-inflammatory activity of the test formulation. Biofield Energy may also be useful in organ transplants, anti-aging, and stress management by improving overall health and quality of life.",
"title": ""
},
{
"docid": "f8aeaf04486bdbc7254846d95e3cab24",
"text": "In this paper, we present a novel wearable RGBD camera based navigation system for the visually impaired. The system is composed of a smartphone user interface, a glass-mounted RGBD camera device, a real-time navigation algorithm, and haptic feedback system. A smartphone interface provides an effective way to communicate to the system using audio and haptic feedback. In order to extract orientational information of the blind users, the navigation algorithm performs real-time 6-DOF feature based visual odometry using a glass-mounted RGBD camera as an input device. The navigation algorithm also builds a 3D voxel map of the environment and analyzes 3D traversability. A path planner of the navigation algorithm integrates information from the egomotion estimation and mapping and generates a safe and an efficient path to a waypoint delivered to the haptic feedback system. The haptic feedback system consisting of four micro-vibration motors is designed to guide the visually impaired user along the computed path and to minimize cognitive loads. The proposed system achieves real-time performance faster than 30Hz in average on a laptop, and helps the visually impaired extends the range of their activities and improve the mobility performance in a cluttered environment. The experiment results show that navigation in indoor environments with the proposed system avoids collisions successfully and improves mobility performance of the user compared to conventional and state-of-the-art mobility aid devices.",
"title": ""
},
{
"docid": "3f7d77aafcc5c256394bb97e0b1fdc77",
"text": "Ischiofemoral impingement (IFI) is the entrapment of the quadratus femoris muscle (QFM) between the trochanter minor of the femur and the ischium-hamstring tendon. Patients with IFI generally present with hip pain, which may radiate toward the knee. Although there is no specific diagnostic clinical test for this disorder, the presence of QFM edema/fatty replacement and narrowing of the ischiofemoral space and the quadratus femoris space on magnetic resonance imaging (MRI) are suggestive of IFI. The optimal treatment strategy of this syndrome remains obscure. Patients may benefit from a conservative treatment regimen that includes rest, activity restriction, nonsteroidal anti-inflammatory drugs, and rehabilitation procedures, just as with other impingement syndromes. Herein we report an 11-year-old girl with IFI who was successfully treated conservatively. To our knowledge, our case is the youngest patient reported in the English literature. MRI remains an important tool in the diagnosis of IFI, and radiologists should be aware of the specific features of this entity.",
"title": ""
},
{
"docid": "823d838471a475ec32d460711b9805b4",
"text": "Marketing has a tradition in conducting scientific research with cutting-edge techniques developed in management science, such as data envelopment analysis (DEA) (Charnes et al. 1985). Two decades ago, Kamakura, Ratchford, and Agrawal (1988) applied DEA to examine market efficiency and consumer welfare loss. In this review of three new books, my purpose is (1) to provide a background of DEA for marketing scholars and executives and (2) to motivate them with exciting DEA advances for marketing theory and practice. All three books provide brief descriptions of DEA’s history, origin, and basic models. For beginners, Ramanathan’s work, An Introduction to Data Envelopment Analysis, is a good source, offering basic concepts of DEA’s efficiency and programming formulations in a straightforward manner (with some illustrations) in the first three chapters. A unique feature of this book is that it dedicates a chapter to discussing as many as 11 DEA computer software programs and explaining some noncommercial DEA packages that are available free on the Internet for academic purposes. As Ramanathan states (p. 111), “[I]n this computer era, it is important that any management science technique has adequate software support so that potential users are encouraged to use it. Software harnesses the computing power of [personal computers] for use in practical decision-making situations. It can also expedite the implementation of a method.” EDITOR: Naveen Donthu",
"title": ""
},
{
"docid": "051d402ce90d7d326cc567e228c8411f",
"text": "CDM ESD event has become the main ESD reliability concern for integrated-circuits products using nanoscale CMOS technology. A novel CDM ESD protection design, using self-biased current trigger (SBCT) and source pumping, has been proposed and successfully verified in 0.13-lm CMOS technology to achieve 1-kV CDM ESD robustness. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "abb7dceb1bd532c31029b5030c9a12e3",
"text": "In this paper, we present a real time method based on some video and image processing algorithms for eye blink detection. The motivation of this research is the need of disabling who cannot control the calls with human mobile interaction directly without the need of hands. A Haar Cascade Classifier is applied for face and eye detection for getting eye and facial axis information. In addition, the same classifier is used based on Haarlike features to find out the relationship between the eyes and the facial axis for positioning the eyes. An efficient eye tracking method is proposed which uses the position of detected face. Finally, an eye blinking detection based on eyelids state (close or open) is used for controlling android mobile phones. The method is used with and without smoothing filter to show the improvement of detection accuracy. The application is used in real time for studying the effect of light and distance between the eyes and the mobile device in order to evaluate the accuracy detection and overall accuracy of the system. Test results show that our proposed method provides a 98% overall accuracy and 100% detection accuracy for a distance of 35 cm and an artificial light. Keywords—eye detection; eye tracking; eye blinking; smoothing filter; detection accuracy",
"title": ""
},
{
"docid": "7067fbd4d551320c9054b2b258ea4e8f",
"text": "Until the era of the information society, information was a concern mainly for organizations whose line of business demanded a high degree of security. However, the growing use of information technology is affecting the status of information security so that it is gradually becoming an area that plays an important role in our everyday lives. As a result, information security issues should now be regarded on a par with other security issues. Using this assertion as the point of departure, this paper outlines the dimensions of information security awareness, namely its organizational, general public, socio-political, computer ethical and institutional education dimensions, along with the categories (or target groups) within each dimension.",
"title": ""
},
{
"docid": "50f369f80405f7142e557c7f6bc405c8",
"text": "Microbes rely on diverse defense mechanisms that allow them to withstand viral predation and exposure to invading nucleic acid. In many Bacteria and most Archaea, clustered regularly interspaced short palindromic repeats (CRISPR) form peculiar genetic loci, which provide acquired immunity against viruses and plasmids by targeting nucleic acid in a sequence-specific manner. These hypervariable loci take up genetic material from invasive elements and build up inheritable DNA-encoded immunity over time. Conversely, viruses have devised mutational escape strategies that allow them to circumvent the CRISPR/Cas system, albeit at a cost. CRISPR features may be exploited for typing purposes, epidemiological studies, host-virus ecological surveys, building specific immunity against undesirable genetic elements, and enhancing viral resistance in domesticated microbes.",
"title": ""
}
] |
scidocsrr
|
83b3fc480d0f133d676e7037eb95e92a
|
UAV Depth Perception from Visual , Images using a Deep Convolutional Neural Network
|
[
{
"docid": "4421a42fc5589a9b91215b68e1575a3f",
"text": "We present a method for extracting depth information from a rectified image pair. Our approach focuses on the first stage of many stereo algorithms: the matching cost computation. We approach the problem by learning a similarity measure on small image patches using a convolutional neural network. Training is carried out in a supervised manner by constructing a binary classification data set with examples of similar and dissimilar pairs of patches. We examine two network architectures for this task: one tuned for speed, the other for accuracy. The output of the convolutional neural network is used to initialize the stereo matching cost. A series of post-processing steps follow: cross-based cost aggregation, semiglobal matching, a left-right consistency check, subpixel enhancement, a median filter, and a bilateral filter. We evaluate our method on the KITTI 2012, KITTI 2015, and Middlebury stereo data sets and show that it outperforms other approaches on all three data sets.",
"title": ""
},
{
"docid": "7af26168ae1557d8633a062313d74b78",
"text": "This paper addresses the problem of estimating the depth map of a scene given a single RGB image. We propose a fully convolutional architecture, encompassing residual learning, to model the ambiguous mapping between monocular images and depth maps. In order to improve the output resolution, we present a novel way to efficiently learn feature map up-sampling within the network. For optimization, we introduce the reverse Huber loss that is particularly suited for the task at hand and driven by the value distributions commonly present in depth maps. Our model is composed of a single architecture that is trained end-to-end and does not rely on post-processing techniques, such as CRFs or other additional refinement steps. As a result, it runs in real-time on images or videos. In the evaluation, we show that the proposed model contains fewer parameters and requires fewer training data than the current state of the art, while outperforming all approaches on depth estimation. Code and models are publicly available.",
"title": ""
},
{
"docid": "b0c62e2049ea4f8ada0d506e06adb4bb",
"text": "In the past year, convolutional neural networks have been shown to perform extremely well for stereo estimation. However, current architectures rely on siamese networks which exploit concatenation followed by further processing layers, requiring a minute of GPU computation per image pair. In contrast, in this paper we propose a matching network which is able to produce very accurate results in less than a second of GPU computation. Towards this goal, we exploit a product layer which simply computes the inner product between the two representations of a siamese architecture. We train our network by treating the problem as multi-class classification, where the classes are all possible disparities. This allows us to get calibrated scores, which result in much better matching performance when compared to existing approaches.",
"title": ""
}
] |
[
{
"docid": "61faedf71a99d41a22a19037f0106367",
"text": "We aimed to clarify the mechanical determinants of sprinting performance during acceleration and maximal speed phases of a single sprint, using ground reaction forces (GRFs). While 18 male athletes performed a 60-m sprint, GRF was measured at every step over a 50-m distance from the start. Variables during the entire acceleration phase were approximated with a fourth-order polynomial. Subsequently, accelerations at 55%, 65%, 75%, 85%, and 95% of maximal speed, and running speed during the maximal speed phase were determined as sprinting performance variables. Ground reaction impulses and mean GRFs during the acceleration and maximal speed phases were selected as independent variables. Stepwise multiple regression analysis selected propulsive and braking impulses as contributors to acceleration at 55%-95% (β > 0.72) and 75%-95% (β > 0.18), respectively, of maximal speed. Moreover, mean vertical force was a contributor to maximal running speed (β = 0.48). The current results demonstrate that exerting a large propulsive force during the entire acceleration phase, suppressing braking force when approaching maximal speed, and producing a large vertical force during the maximal speed phase are essential for achieving greater acceleration and maintaining higher maximal speed, respectively.",
"title": ""
},
{
"docid": "d4cf614c352b3bbef18d7f219a3da2d1",
"text": "In recent years there has been growing interest on the occurrence and the fate of pharmaceuticals in the aquatic environment. Nevertheless, few data are available covering the fate of the pharmaceuticals in the water/sediment compartment. In this study, the environmental fate of 10 selected pharmaceuticals and pharmaceutical metabolites was investigated in water/sediment systems including both the analysis of water and sediment. The experiments covered the application of four 14C-labeled pharmaceuticals (diazepam, ibuprofen, iopromide, and paracetamol) for which radio-TLC analysis was used as well as six nonlabeled compounds (carbamazepine, clofibric acid, 10,11-dihydro-10,11-dihydroxycarbamazepine, 2-hydroxyibuprofen, ivermectin, and oxazepam), which were analyzed via LC-tandem MS. Ibuprofen, 2-hydroxyibuprofen, and paracetamol displayed a low persistence with DT50 values in the water/sediment system < or =20 d. The sediment played a key role in the elimination of paracetamol due to the rapid and extensive formation of bound residues. A moderate persistence was found for ivermectin and oxazepam with DT50 values of 15 and 54 d, respectively. Lopromide, for which no corresponding DT50 values could be calculated, also exhibited a moderate persistence and was transformed into at least four transformation products. For diazepam, carbamazepine, 10,11-dihydro-10,11-dihydroxycarbamazepine, and clofibric acid, system DT90 values of >365 d were found, which exhibit their high persistence in the water/sediment system. An elevated level of sorption onto the sediment was observed for ivermectin, diazepam, oxazepam, and carbamazepine. Respective Koc values calculated from the experimental data ranged from 1172 L x kg(-1) for ivermectin down to 83 L x kg(-1) for carbamazepine.",
"title": ""
},
{
"docid": "a42f7e9efc4c0e2d56107397f98b15f1",
"text": "Recently, much advance has been made in image captioning, and an encoder-decoder framework has achieved outstanding performance for this task. In this paper, we propose an extension of the encoder-decoder framework by adding a component called guiding network. The guiding network models the attribute properties of input images, and its output is leveraged to compose the input of the decoder at each time step. The guiding network can be plugged into the current encoder-decoder framework and trained in an end-to-end manner. Hence, the guiding vector can be adaptively learned according to the signal from the decoder, making itself to embed information from both image and language. Additionally, discriminative supervision can be employed to further improve the quality of guidance. The advantages of our proposed approach are verified by experiments carried out on the MS COCO dataset.",
"title": ""
},
{
"docid": "64ed3c6997ed68894db5c30bc91e95cd",
"text": "Affine moment invariant (AMI) is a kind of hand-crafted image feature, which is invariant to affine transformations. This property is precisely what the standard convolution neural network (CNN) is difficult to achieve. In this letter, we present a kind of network architecture to introduce AMI into CNN, which is called AMI-Net. We achieved this by calculating AMI on the feature maps of the hidden layers. These AMIs will be concatenated with the standard CNN's FC layer to determine the network's final output. By calculating AMI on the feature maps, we can not only extend the dimension of AMIs, but also introduce affine transformation invariant into CNN. Two network architectures and training strategies of AMI-Net are illuminated, one is two-stage, and the other is end-to-end. To prove the effectiveness of the AMI-Net, several experiments have been conducted on common image datasets, MNIST, MNIST-rot, affNIST, SVHN, and CIFAR-10. By comparing with the corresponding standard CNN, respectively, we verify the validity of AMI-net.",
"title": ""
},
{
"docid": "c528ea5c333c63504b1221825597a382",
"text": "This paper introduces our domain independent approach to “free generation” from single RDF triples without using any domain dependent knowledge. Our approach is developed based on our argument that RDF representations carry rich linguistic information, which can be used to achieve readable domain independent generation. In order to examine to what extent our argument is realistic, we carry out an evaluation experiment, which is the first evaluation of this kind of domain independent generation in the field.",
"title": ""
},
{
"docid": "ce848a090d33763e4612aa04437b7ebd",
"text": "Loving-kindness meditation is a practice designed to enhance feelings of kindness and compassion for self and others. Loving-kindness meditation involves repetition of phrases of positive intention for self and others. We undertook an open pilot trial of loving-kindness meditation for veterans with posttraumatic stress disorder (PTSD). Measures of PTSD, depression, self-compassion, and mindfulness were obtained at baseline, after a 12-week loving-kindness meditation course, and 3 months later. Effect sizes were calculated from baseline to each follow-up point, and self-compassion was assessed as a mediator. Attendance was high; 74% attended 9-12 classes. Self-compassion increased with large effect sizes and mindfulness increased with medium to large effect sizes. A large effect size was found for PTSD symptoms at 3-month follow-up (d = -0.89), and a medium effect size was found for depression at 3-month follow-up (d = -0.49). There was evidence of mediation of reductions in PTSD symptoms and depression by enhanced self-compassion. Overall, loving-kindness meditation appeared safe and acceptable and was associated with reduced symptoms of PTSD and depression. Additional study of loving-kindness meditation for PTSD is warranted to determine whether the changes seen are due to the loving-kindness meditation intervention versus other influences, including concurrent receipt of other treatments.",
"title": ""
},
{
"docid": "4772fb61d2a967470bdd0e9b3f2ead07",
"text": "This study examined the relationships of three levels of reading fluency, the individual word, the syntactic unit, and the whole passage, to reading comprehension among 278 fifth graders heterogeneous in reading ability. Hierarchical regression analyses revealed that reading fluency at each level related uniquely to performance on a standardized reading comprehension test in a model including inferencing skill and background knowledge. The study supported an automaticity effect for word recognition speed and an automaticity-like effect related to syntactic processing skill. Additionally, hierarchical regressions using longitudinal data suggested that fluency and reading comprehension had a bidirectional relationship. The discussion emphasizes the theoretical expansion of reading fluency to three levels of cognitive processes and the relations of these processes to reading comprehension.",
"title": ""
},
{
"docid": "b91c387335e7f63b720525d0ee28dbd6",
"text": "Road condition acquisition and assessment are the key to guarantee their permanent availability. In order to maintain a country's whole road network, millions of high-resolution images have to be analyzed annually. Currently, this requires cost and time excessive manual labor. We aim to automate this process to a high degree by applying deep neural networks. Such networks need a lot of data to be trained successfully, which are not publicly available at the moment. In this paper, we present the GAPs dataset, which is the first freely available pavement distress dataset of a size, large enough to train high-performing deep neural networks. It provides high quality images, recorded by a standardized process fulfilling German federal regulations, and detailed distress annotations. For the first time, this enables a fair comparison of research in this field. Furthermore, we present a first evaluation of the state of the art in pavement distress detection and an analysis of the effectiveness of state of the art regularization techniques on this dataset.",
"title": ""
},
{
"docid": "bfc663107f88522f438bd173db2b85ce",
"text": "While much progress has been made in how to encode a text sequence into a sequence of vectors, less attention has been paid to how to aggregate these preceding vectors (outputs of RNN/CNN) into fixed-size encoding vector. Usually, a simple max or average pooling is used, which is a bottom-up and passive way of aggregation and lack of guidance by task information. In this paper, we propose an aggregation mechanism to obtain a fixed-size encoding with a dynamic routing policy. The dynamic routing policy is dynamically deciding that what and how much information need be transferred from each word to the final encoding of the text sequence. Following the work of Capsule Network, we design two dynamic routing policies to aggregate the outputs of RNN/CNN encoding layer into a final encoding vector. Compared to the other aggregation methods, dynamic routing can refine the messages according to the state of final encoding vector. Experimental results on five text classification tasks show that our method outperforms other aggregating models by a significant margin. Related source code is released on our github page1.",
"title": ""
},
{
"docid": "309f7b25ebf83f27a7f9c120e6e8bd27",
"text": "Human-robotinteractionis becominganincreasinglyimportant researcharea. In this paper , we presentour work on designinga human-robotsystemwith adjustableautonomy anddescribenotonly theprototypeinterfacebut alsothecorrespondingrobot behaviors. In our approach,we grant the humanmeta-level control over the level of robot autonomy, but we allow the robot a varying amountof self-direction with eachlevel. Within this framework of adjustableautonomy, we explore appropriateinterfaceconceptsfor controlling multiple robotsfrom multiple platforms.",
"title": ""
},
{
"docid": "96010bf04c08ace7932fb5c48b2f8798",
"text": "Spatio-temporal databases aim to support extensions to existing models of Spatial Information Systems (SIS) to include time in order to better describe our dynamic environment. Although interest into this area has increased in the past decade, a number of important issues remain to be investigated. With the advances made in temporal database research, we can expect a more uni®ed approach towards aspatial temporal data in SIS and a wider discussion on spatio-temporal data models. This paper provides an overview of previous achievements within the ®eld and highlights areas currently receiving or requiring further investigation.",
"title": ""
},
{
"docid": "2183a666f69591dc4c00962ba1f90ea6",
"text": "The accurate estimation of students’ grades in future courses is important as it can inform the selection of next term’s courses and create personalized degree pathways to facilitate successful and timely graduation. This paper presents future-course grade predictions methods based on sparse linear and low-rank matrix factorization models that are specific to each course or student-course tuple. These methods identify the predictive subsets of prior courses on a courseby-course basis and better address problems associated with the not-missing-at-random nature of the studentcourse historical grade data. The methods were evaluated on a dataset obtained from the University of Minnesota, for two different departments with different characteristics. This evaluation showed that focusing on course specific data improves the accuracy of grade prediction.",
"title": ""
},
{
"docid": "517a88f2aeb4d2884edfb6a9a64b1e8b",
"text": "Metformin (dimethylbiguanide) features as a current first-line pharmacological treatment for type 2 diabetes (T2D) in almost all guidelines and recommendations worldwide. It has been known that the antihyperglycemic effect of metformin is mainly due to the inhibition of hepatic glucose output, and therefore, the liver is presumably the primary site of metformin function.However, in this issue of Diabetes Care, Fineman and colleagues (1) demonstrate surprising results from their clinical trials that suggest the primary effect of metformin resides in the human gut. Metformin is an orally administered drug used for lowering blood glucose concentrations in patients with T2D, particularly in those overweight and obese as well as those with normal renal function. Pharmacologically, metformin belongs to the biguanide class of antidiabetes drugs. The history of biguanides can be traced from the use of Galega officinalis (commonly known as galega) for treating diabetes in medieval Europe (2). Guanidine, the active component of galega, is the parent compound used to synthesize the biguanides. Among three main biguanides introduced for diabetes therapy in late 1950s, metformin (Fig. 1A) has a superior safety profile and is well tolerated. The other two biguanides, phenformin and buformin, were withdrawn in the early 1970s due to the risk of lactic acidosis and increased cardiac mortality. The incidence of lactic acidosis with metformin at therapeutic doses is rare (less than three cases per 100,000 patient-years) and is not greater than with nonmetformin therapies (3). Major clinical advantages of metformin include specific reduction of hepatic glucose output, with subsequent improvement of peripheral insulin sensitivity, and remarkable cardiovascular safety, but without increasing islet insulin secretion, inducingweight gain, or posing a risk of hypoglycemia. Moreover, metformin has also shown benefits in reducing cancer risk and improving cancer prognosis (4,5), as well as counteracting the cardiovascular complications associated with diabetes (6). Although metformin has been widely prescribed to patients with T2D for over 50 years and has been found to be safe and efficacious both as monotherapy and in combination with other oral antidiabetes agents and insulin, the mechanism of metformin action is only partially explored and remains controversial. In mammals, oral bioavailability of metformin is;50% and is absorbed through the upper small intestine (duodenum and jejunum) (7) and then is delivered to the liver, circulates unbound essentially, and finally is eliminated by the kidneys. Note that metformin is not metabolized and so is unchanged throughout the journey in the body. The concentration of metformin in the liver is threeto fivefold higher than that in the portal vein (40–70 mmol/L) after single therapeutic dose (20 mg/kg/day in humans or 250 mg/kg/day in mice) (3,8), and metformin in general circulation is 10–40 mmol/L (8). As the antihyperglycemic effect of metformin is mainly due to the inhibition of hepatic glucose output and the concentration ofmetformin in the hepatocytes is much higher than in the blood, the liver is therefore presumed to be the primary site of metformin function. 
Indeed, the liver has been the focus of themajority of metformin research by far, and hepatic mechanisms of metformin that have been suggested include the activation of AMPK through liver kinase B1 and decreased energy charge (9,10), the inhibition of glucagon-induced cAMP production by blocking adenylyl cyclase (11), the increase of the AMP/ATP ratio by restricting NADHcoenzymeQ oxidoreductase (complex I) in themitochondrial electron transport chain (12) (albeit at high metformin concentrations,;5mmol/L), and,more recently, the reduction of lactate and glycerol metabolism to glucose through a redox change by inhibitingmitochondrial glycerophosphate dehydrogenase (13). It is noteworthy that the remaining ;50%ofmetformin,which is unabsorbed, accumulates in the gut mucosa of the distal small intestine at concentrations 30to",
"title": ""
},
{
"docid": "af48f00757d8e95d92facca57cd9d13c",
"text": "Remaining useful life (RUL) prediction allows for predictive maintenance of machinery, thus reducing costly unscheduled maintenance. Therefore, RUL prediction of machinery appears to be a hot issue attracting more and more attention as well as being of great challenge. This paper proposes a model-based method for predicting RUL of machinery. The method includes two modules, i.e., indicator construction and RUL prediction. In the first module, a new health indicator named weighted minimum quantization error is constructed, which fuses mutual information from multiple features and properly correlates to the degradation processes of machinery. In the second module, model parameters are initialized using the maximum-likelihood estimation algorithm and RUL is predicted using a particle filtering-based algorithm. The proposed method is demonstrated using vibration signals from accelerated degradation tests of rolling element bearings. The prediction result identifies the effectiveness of the proposed method in predicting RUL of machinery.",
"title": ""
},
{
"docid": "fa69a8a67ab695fd74e3bfc25206c94c",
"text": "Recent advances in recurrent neural nets (RNNs) have shown much promise in many applications in natural language processing. For most of these tasks, such as sentiment analysis of customer reviews, a recurrent neural net model parses the entire review before forming a decision. We argue that reading the entire input is not always necessary in practice, since a lot of reviews are often easy to classify, i.e., a decision can be formed after reading some crucial sentences or words in the provided text. In this paper, we present an approach of fast reading for text classification. Inspired by several well-known human reading techniques, our approach implements an intelligent recurrent agent which evaluates the importance of the current snippet in order to decide whether to make a prediction, or to skip some texts, or to re-read part of the sentence. Our agent uses an RNN module to encode information from the past and the current tokens, and applies a policy module to form decisions. With an end-to-end training algorithm based on policy gradient, we train and test our agent on several text classification datasets and achieve both higher efficiency and better accuracy compared to previous approaches.",
"title": ""
},
{
"docid": "c89ce2fb6180961cdfee8120b0c17dd8",
"text": "Anti-forensics (AF) is a multi-headed demon with a range of weapons in its arsenal. Sarah Hilley looks at a set of hell-raising attacks directed at prominent forensic tools. Major forensic programs have started to attract unwanted attention from hackers aka security researchers of a type that have plagued mainstream software developers for years. This report focuses on the development of the Metasploit Anti-Forensic Investigation Arsenal (MAFIA).",
"title": ""
},
{
"docid": "ba672ac758f45071c23ed53334add3ac",
"text": "Detecting objects in aerial images is challenged by variance of object colors, aspect ratios, cluttered backgrounds, and in particular, undetermined orientations. In this paper, we propose to use Deep Convolutional Neural Network (DCNN) features from combined layers to perform orientation robust aerial object detection. We explore the inherent characteristics of DC-NN as well as relate the extracted features to the principle of disentangling feature learning. An image segmentation based approach is used to localize ROIs of various aspect ratios, and ROIs are further classified into positives or negatives using an SVM classifier trained on DCNN features. With experiments on two datasets collected from Google Earth, we demonstrate that the proposed aerial object detection approach is simple but effective.",
"title": ""
},
{
"docid": "44e310ba974f371605f6b6b6cd0146aa",
"text": "This section is a collection of shorter “Issue and Opinions” pieces that address some of the critical challenges around the evolution of digital business strategy. These voices and visions are from thought leaders who, in addition to their scholarship, have a keen sense of practice. They outline through their opinion pieces a series of issues that will need attention from both research and practice. These issues have been identified through their observation of practice with the eye of a scholar. They provide fertile opportunities for scholars in information systems, strategic management, and organizational theory.",
"title": ""
},
{
"docid": "d88043824732d96340028c74489e01a0",
"text": "Removing perspective distortion from hand held camera captured document images is one of the primitive tasks in document analysis, but unfortunately no such method exists that can reliably remove the perspective distortion from document images automatically. In this paper, we propose a convolutional neural network based method for recovering homography from hand-held camera captured documents. Our proposed method works independent of document’s underlying content and is trained end-to-end in a fully automatic way. Specifically, this paper makes following three contributions: firstly, we introduce a large scale synthetic dataset for recovering homography from documents images captured under different geometric and photometric transformations; secondly, we show that a generic convolutional neural network based architecture can be successfully used for regressing the corners positions of documents captured under wild settings; thirdly, we show that L1 loss can be reliably used for corners regression. Our proposed method gives state-of-the-art performance on the tested datasets, and has potential to become an integral part of document analysis pipeline.",
"title": ""
},
{
"docid": "fefc18d1dacd441bd3be641a8ca4a56d",
"text": "This paper proposes a new residual convolutional neural network (CNN) architecture for single image depth estimation. Compared with existing deep CNN based methods, our method achieves much better results with fewer training examples and model parameters. The advantages of our method come from the usage of dilated convolution, skip connection architecture and soft-weight-sum inference. Experimental evaluation on the NYU Depth V2 dataset shows that our method outperforms other state-of-the-art methods by a margin.",
"title": ""
}
] |
scidocsrr
|
6152ef013710e91ee1b49d780ac4d16d
|
TR 09-004 Detecting Anomalies in a Time Series Database
|
[
{
"docid": "ba9122284ddc43eb3bc4dff89502aa9d",
"text": "Recent advancements in sensor technology have made it possible to collect enormous amounts of data in real time. However, because of the sheer volume of data most of it will never be inspected by an algorithm, much less a human being. One way to mitigate this problem is to perform some type of anomaly (novelty /interestingness/surprisingness) detection and flag unusual patterns for further inspection by humans or more CPU intensive algorithms. Most current solutions are “custom made” for particular domains, such as ECG monitoring, valve pressure monitoring, etc. This customization requires extensive effort by domain expert. Furthermore, hand-crafted systems tend to be very brittle to concept drift. In this demonstration, we will show an online anomaly detection system that does not need to be customized for individual domains, yet performs with exceptionally high precision/recall. The system is based on the recently introduced idea of time series bitmaps. To demonstrate the universality of our system, we will allow testing on independently annotated datasets from domains as diverse as ECGs, Space Shuttle telemetry monitoring, video surveillance, and respiratory data. In addition, we invite attendees to test our system with any dataset available on the web.",
"title": ""
}
] |
[
{
"docid": "21cde70c4255e706cb05ff38aec99406",
"text": "In this paper, a multiple classifier machine learning (ML) methodology for predictive maintenance (PdM) is presented. PdM is a prominent strategy for dealing with maintenance issues given the increasing need to minimize downtime and associated costs. One of the challenges with PdM is generating the so-called “health factors,” or quantitative indicators, of the status of a system associated with a given maintenance issue, and determining their relationship to operating costs and failure risk. The proposed PdM methodology allows dynamical decision rules to be adopted for maintenance management, and can be used with high-dimensional and censored data problems. This is achieved by training multiple classification modules with different prediction horizons to provide different performance tradeoffs in terms of frequency of unexpected breaks and unexploited lifetime, and then employing this information in an operating cost-based maintenance decision system to minimize expected costs. The effectiveness of the methodology is demonstrated using a simulated example and a benchmark semiconductor manufacturing maintenance problem.",
"title": ""
},
{
"docid": "0f49e229c08672dfba4026ec5ebca3bc",
"text": "A grid array antenna is presented in this paper with sub grid arrays and multiple feed points, showing enhanced radiation characteristics and sufficient design flexibility. For instance, the grid array antenna can be easily designed as a linearly- or circularly-polarized, unbalanced or balanced antenna. A design example is given for a linearly-polarized unbalanced grid array antenna in Ferro A6M low temperature co-fired ceramic technology for 60-GHz radios to operate from 57 to 66 GHz (≈ 14.6% at 61.5 GHz ). It consists of 4 sub grid arrays and 4 feed points that are connected to a single-ended 50-Ω source by a quarter-wave matched T-junction network. The simulated results indicate that the grid array antenna has the maximum gain of 17.7 dBi at 59 GHz , an impedance bandwidth (|S11| ≤ -10 dB) nearly from 56 to 67.5 GHz (or 18.7%), a 3-dB gain bandwidth from 55.4 to 66 GHz (or 17.2%), and a vertical beam bandwidth in the broadside direction from 57 to 66 GHz (14.6%). The measured results are compared with the simulated ones. Discrepancies and their causes are identified with a tolerance analysis on the fabrication process.",
"title": ""
},
{
"docid": "a1d300bd5ac779e1b21a7ed20b3b01ad",
"text": "a r t i c l e i n f o Keywords: Luxury brands Perceived social media marketing (SMM) activities Value equity Relationship equity Brand equity Customer equity Purchase intention In light of a growing interest in the use of social media marketing (SMM) among luxury fashion brands, this study set out to identify attributes of SMM activities and examine the relationships among those perceived activities, value equity, relationship equity, brand equity, customer equity, and purchase intention through a structural equation model. Five constructs of perceived SSM activities of luxury fashion brands are entertainment , interaction, trendiness, customization, and word of mouth. Their effects on value equity, relationship equity, and brand equity are significantly positive. For the relationship between customer equity drivers and customer equity, brand equity has significant negative effect on customer equity while value equity and relationship equity show no significant effect. As for purchase intention, value equity and relationship equity had significant positive effects, while relationship equity had no significant influence. Finally, the relationship between purchase intention and customer equity has significance. The findings of this study can enable luxury brands to forecast the future purchasing behavior of their customers more accurately and provide a guide to managing their assets and marketing activities as well. The luxury market has attained maturity, along with the gradual expansion of the scope of its market and a rapid growth in the number of customers. Luxury market is a high value-added industry basing on high brand assets. Due to the increased demand for luxury in emerging markets such as China, India, and the Middle East, opportunities abound to expand the business more than ever. In the past, luxury fashion brands could rely on strong brand assets and secure regular customers. However, the recent entrance of numerous fashion brands into the luxury market, followed by heated competition, signals unforeseen changes in the market. A decrease in sales related to a global economic downturn drives luxury businesses to change. Now they can no longer depend solely on their brand symbol but must focus on brand legacy, quality, esthetic value, and trustworthy customer relationships in order to succeed. A key element to luxury industry becomes providing values to customers in every way possible. As a means to constitute customer assets through effective communication with consumers, luxury brands have tilted their eyes toward social media. Marketing communication using social media such as Twitter, Facebook, and …",
"title": ""
},
{
"docid": "765e766515c9c241ffd2d84572fd887f",
"text": "The cost of reconciling consistency and state management with high availability is highly magnified by the unprecedented scale and robustness requirements of today’s Internet applications. We propose two strategies for improving overall availability using simple mechanisms that scale over large applications whose output behavior tolerates graceful degradation. We characterize this degradation in terms of harvest and yield, and map it directly onto engineering mechanisms that enhance availability by improving fault isolation, and in some cases also simplify programming. By collecting examples of related techniques in the literature and illustrating the surprising range of applications that can benefit from these approaches, we hope to motivate a broader research program in this area. 1. Motivation, Hypothesis, Relevance Increasingly, infrastructure services comprise not only routing, but also application-level resources such as search engines [15], adaptation proxies [8], and Web caches [20]. These applications must confront the same operational expectations and exponentially-growing user loads as the routing infrastructure, and consequently are absorbing comparable amounts of hardware and software. The current trend of harnessing commodity-PC clusters for scalability and availability [9] is reflected in the largest web server installations. These sites use tens to hundreds of PC’s to deliver 100M or more read-mostly page views per day, primarily using simple replication or relatively small data sets to increase throughput. The scale of these applications is bringing the wellknown tradeoff between consistency and availability [4] into very sharp relief. In this paper we propose two general directions for future work in building large-scale robust systems. Our approaches tolerate partial failures by emphasizing simple composition mechanisms that promote fault containment, and by translating possible partial failure modes into engineering mechanisms that provide smoothlydegrading functionality rather than lack of availability of the service as a whole. The approaches were developed in the context of cluster computing, where it is well accepted [22] that one of the major challenges is the nontrivial software engineering required to automate partial-failure handling in order to keep system management tractable. 2. Related Work and the CAP Principle In this discussion, strong consistency means singlecopy ACID [13] consistency; by assumption a stronglyconsistent system provides the ability to perform updates, otherwise discussing consistency is irrelevant. High availability is assumed to be provided through redundancy, e.g. data replication; data is considered highly available if a given consumer of the data can always reach some replica. Partition-resilience means that the system as whole can survive a partition between data replicas. Strong CAP Principle. Strong Consistency, High Availability, Partition-resilience: Pick at most 2. The CAP formulation makes explicit the trade-offs in designing distributed infrastructure applications. It is easy to identify examples of each pairing of CAP, outlining the proof by exhaustive example of the Strong CAP Principle: CA without P: Databases that provide distributed transactional semantics can only do so in the absence of a network partition separating server peers. 
CP without A: In the event of a partition, further transactions to an ACID database may be blocked until the partition heals, to avoid the risk of introducing merge conflicts (and thus inconsistency). AP without C: HTTP Web caching provides clientserver partition resilience by replicating documents, but a client-server partition prevents verification of the freshness of an expired replica. In general, any distributed database problem can be solved with either expiration-based caching to get AP, or replicas and majority voting to get PC (the minority is unavailable). In practice, many applications are best described in terms of reduced consistency or availability. For example, weakly-consistent distributed databases such as Bayou [5] provide specific models with well-defined consistency/availability tradeoffs; disconnected filesystems such as Coda [16] explicitly argued for availability over strong consistency; and expiration-based consistency mechanisms such as leases [12] provide fault-tolerant consistency management. These examples suggest that there is a Weak CAP Principle which we have yet to characterize precisely: The stronger the guarantees made about any two of strong consistency, high availability, or resilience to partitions, the weaker the guarantees that can be made about the third. 3. Harvest, Yield, and the CAP Principle Both strategies we propose for improving availability with simple mechanisms rely on the ability to broaden our notion of “correct behavior” for the target application, and then exploit the tradeoffs in the CAP principle to improve availability at large scale. We assume that clients make queries to servers, in which case there are at least two metrics for correct behavior: yield, which is the probability of completing a request, and harvest, which measures the fraction of the data reflected in the response, i.e. the completeness of the answer to the query. Yield is the common metric and is typically measured in “nines”: “four-nines availability” means a completion probability of . In practice, good HA systems aim for four or five nines. In the presence of faults there is typically a tradeoff between providing no answer (reducing yield) and providing an imperfect answer (maintaining yield, but reducing harvest). Some applications do not tolerate harvest degradation because any deviation from the single well-defined correct behavior renders the result useless. For example, a sensor application that must provide a binary sensor reading (presence/absence) does not tolerate degradation of the output.1 On the other hand, some applications tolerate graceful degradation of harvest: online aggregation [14] allows a user to explicitly trade running time for precision and confidence in performing arithmetic aggregation queries over a large dataset, thereby smoothly trading harvest for response time, which is particularly useful for approximate answers and for avoiding work that looks unlikely to be worthwhile based on preliminary results. At first glance, it would appear that this kind of degradation applies only to queries and not to updates. However, the model can be applied in the case of “single-location” updates: those changes that are localized to a single node (or technically a single partition). In this case, updates that 1This is consistent with the use of the term yield in semiconductor manufacturing: typically, each die on a wafer is intolerant to harvest degradation, and yield is defined as the fraction of working dice on a wafer. 
affect reachable nodes occur correctly but have limited visibility (a form of reduced harvest), while those that require unreachable nodes fail (reducing yield). These localized changes are consistent exactly because the new values are not available everywhere. This model of updates fails for global changes, but it is still quite useful for many practical applications, including personalization databases and collaborative filtering. 4. Strategy 1: Trading Harvest for Yield— Probabilistic Availability Nearly all systems are probabilistic whether they realize it or not. In particular, any system that is 100% available under single faults is probabilistically available overall (since there is a non-zero probability of multiple failures), and Internet-based servers are dependent on the best-effort Internet for true availability. Therefore availability maps naturally to probabilistic approaches, and it is worth addressing probabilistic systems directly, so that we can understand and limit the impact of faults. This requires some basic decisions about what needs to be available and the expected nature of faults. For example, node faults in the Inktomi search engine remove a proportional fraction of the search database. Thus in a 100-node cluster a single-node fault reduces the harvest by 1% during the duration of the fault (the overall harvest is usually measured over a longer interval). Implicit in this approach is graceful degradation under multiple node faults, specifically, linear degradation in harvest. By randomly placing data on nodes, we can ensure that the 1% lost is a random 1%, which makes the average-case and worstcase fault behavior the same. In addition, by replicating a high-priority subset of data, we reduce the probability of losing that data. This gives us more precise control of harvest, both increasing it and reducing the practical impact of missing data. Of course, it is possible to replicate all data, but doing so may have relatively little impact on harvest and yield despite significant cost, and in any case can never ensure 100% harvest or yield because of the best-effort Internet protocols the service relies on. As a similar example, transformation proxies for thin clients [8] also trade harvest for yield, by degrading results on demand to match the capabilities of clients that might otherwise be unable to get results at all. Even when the 100%-harvest answer is useful to the client, it may still be preferable to trade response time for harvest when clientto-server bandwidth is limited, for example, by intelligent degradation to low-bandwidth formats [7]. 5. Strategy 2: Application Decomposition and Orthogonal Mechanisms Some large applications can be decomposed into subsystems that are independently intolerant to harvest degradation (i.e. they fail by reducing yield), but whose independent failure allows the overall application to continue functioning with reduced utility. The application as a whole is then tolerant of harvest degradation. A good decomposition has at least one actual benefit and one potential benefit. The actual benefi",
"title": ""
},
{
"docid": "627587e2503a2555846efb5f0bca833b",
"text": "Image generation has been successfully cast as an autoregressive sequence generation or transformation problem. Recent work has shown that self-attention is an effective way of modeling textual sequences. In this work, we generalize a recently proposed model architecture based on self-attention, the Transformer, to a sequence modeling formulation of image generation with a tractable likelihood. By restricting the selfattention mechanism to attend to local neighborhoods we significantly increase the size of images the model can process in practice, despite maintaining significantly larger receptive fields per layer than typical convolutional neural networks. While conceptually simple, our generative models significantly outperform the current state of the art in image generation on ImageNet, improving the best published negative log-likelihood on ImageNet from 3.83 to 3.77. We also present results on image super-resolution with a large magnification ratio, applying an encoder-decoder configuration of our architecture. In a human evaluation study, we find that images generated by our super-resolution model fool human observers three times more often than the previous state of the art.",
"title": ""
},
{
"docid": "88ff3300dafab6b87d770549a1dc4f0e",
"text": "Novelty search is a recent algorithm geared toward exploring search spaces without regard to objectives. When the presence of constraints divides a search space into feasible space and infeasible space, interesting implications arise regarding how novelty search explores such spaces. This paper elaborates on the problem of constrained novelty search and proposes two novelty search algorithms which search within both the feasible and the infeasible space. Inspired by the FI-2pop genetic algorithm, both algorithms maintain and evolve two separate populations, one with feasible and one with infeasible individuals, while each population can use its own selection method. The proposed algorithms are applied to the problem of generating diverse but playable game levels, which is representative of the larger problem of procedural game content generation. Results show that the two-population constrained novelty search methods can create, under certain conditions, larger and more diverse sets of feasible game levels than current methods of novelty search, whether constrained or unconstrained. However, the best algorithm is contingent on the particularities of the search space and the genetic operators used. Additionally, the proposed enhancement of offspring boosting is shown to enhance performance in all cases of two-population novelty search.",
"title": ""
},
{
"docid": "3899c40009ac15e213e74bd08392ecec",
"text": "In the past decade, research in person re-identification (re-id) has exploded due to its broad use in security and surveillance applications. Issues such as inter-camera viewpoint, illumination and pose variations make it an extremely difficult problem. Consequently, many algorithms have been proposed to tackle these issues. To validate the efficacy of re-id algorithms, numerous benchmarking datasets have been constructed. While early datasets contained relatively few identities and images, several large-scale datasets have recently been proposed, motivated by data-driven machine learning. In this paper, we introduce a new large-scale real-world re-id dataset, DukeMTMC4ReID, using 8 disjoint surveillance camera views covering parts of the Duke University campus. The dataset was created from the recently proposed fully annotated multi-target multi-camera tracking dataset DukeMTMC[36]. A benchmark summarizing extensive experiments with many combinations of existing re-id algorithms on this dataset is also provided for an up-to-date performance analysis.",
"title": ""
},
{
"docid": "17752f2b561d81643b35b6d2d10e4e46",
"text": "This randomised controlled trial was undertaken to evaluate the effectiveness of acupuncture as a treatment for frozen shoulder. Thirty-five patients with a diagnosis of frozen shoulder were randomly allocated to an exercise group or an exercise plus acupuncture group and treated for a period of 6 weeks. Functional mobility, power, and pain were assessed by a blinded assessor using the Constant Shoulder Assessment, at baseline, 6 weeks and 20 weeks. Analysis was based on the intention-to-treat principle. Compared with the exercise group, the exercise plus acupuncture group experienced significantly greater improvement with treatment. Improvements in scores by 39.8% (standard deviation, 27.1) and 76.4% (55.0) were seen for the exercise and the exercise plus acupuncture groups, respectively at 6 weeks (P=0.048), and were sustained at the 20-week re-assessment (40.3% [26.7] and 77.2% [54.0], respectively; P=0.025). We conclude that the combination of acupuncture with shoulder exercise may offer effective treatment for frozen shoulder.",
"title": ""
},
{
"docid": "f3641aadeaf2ccd31f96e2db8d33f936",
"text": "This paper proposes a novel approach to dynamically manage the traffic lights cycles and phases in an isolated intersection. The target of the work is a system that, comparing with previous solutions, offers improved performance, is flexible and can be implemented on off-the-shelf components. The challenge here is to find an effective design that achieves the target while avoiding complex and computationally expensive solutions, which would not be appropriate for the problem at hand and would impair the practical applicability of the approach in real scenarios. The proposed solution is a traffic lights dynamic control system that combines an IEEE 802.15.4 Wireless Sensor Network (WSN) for real-time traffic monitoring with multiple fuzzy logic controllers, one for each phase, that work in parallel. Each fuzzy controller addresses vehicles turning movements and dynamically manages both the phase and the green time of traffic lights. The proposed system combines the advantages of the WSN, such as easy deployment and maintenance, flexibility, low cost, noninvasiveness, and scalability, with the benefits of using four parallel fuzzy controllers, i.e., better performance, fault-tolerance, and support for phase-specific management. Simulation results show that the proposed system outperforms other solutions in the literature, significantly reducing the vehicles waiting times. A proof-of-concept implementation on an off-the-shelf device proves that the proposed controller does not require powerful hardware and can be easily implemented on a low-cost device, thus paving the way for extensive usage in practice. 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "c1b34059a896564df02ef984085b93a0",
"text": "Robotics has become a standard tool in outreaching to grades K-12 and attracting students to the STEM disciplines. Performing these activities in the class room usually requires substantial time commitment by the teacher and integration into the curriculum requires major effort, which makes spontaneous and short-term engagements difficult. This paper studies using “Cubelets”, a modular robotic construction kit, which requires virtually no setup time and allows substantial engagement and change of perception of STEM in as little as a 1-hour session. This paper describes the constructivist curriculum and provides qualitative and quantitative results on perception changes with respect to STEM and computer science in particular as a field of study.",
"title": ""
},
{
"docid": "5093e3d152d053a9f3322b34096d3e4e",
"text": "To create conversational systems working in actual situations, it is crucial to assume that they interact with multiple agents. In this work, we tackle addressee and response selection for multi-party conversation, in which systems are expected to select whom they address as well as what they say. The key challenge of this task is to jointly model who is talking about what in a previous context. For the joint modeling, we propose two modeling frameworks: 1) static modeling and 2) dynamic modeling. To show benchmark results of our frameworks, we created a multi-party conversation corpus. Our experiments on the dataset show that the recurrent neural network based models of our frameworks robustly predict addressees and responses in conversations with a large number of agents.",
"title": ""
},
{
"docid": "7e4b634b7b16d152fefe476d264c6726",
"text": "We introduce openXBOW, an open-source toolkit for the generation of bag-of-words (BoW) representations from multimodal input. In the BoW principle, word histograms were first used as features in document classification, but the idea was and can easily be adapted to, e. g., acoustic or visual low-level descriptors, introducing a prior step of vector quantisation. The openXBOW toolkit supports arbitrary numeric input features and text input and concatenates computed subbags to a final bag. It provides a variety of extensions and options. To our knowledge, openXBOW is the first publicly available toolkit for the generation of crossmodal bags-of-words. The capabilities of the tool are exemplified in two sample scenarios: time-continuous speech-based emotion recognition and sentiment analysis in tweets where improved results over other feature representation forms were observed.",
"title": ""
},
{
"docid": "3e0741fb69ee9bdd3cc455577aab4409",
"text": "Recurrent neural network architectures have been shown to efficiently model long term temporal dependencies between acoustic events. However the training time of recurrent networks is higher than feedforward networks due to the sequential nature of the learning algorithm. In this paper we propose a time delay neural network architecture which models long term temporal dependencies with training times comparable to standard feed-forward DNNs. The network uses sub-sampling to reduce computation during training. On the Switchboard task we show a relative improvement of 6% over the baseline DNN model. We present results on several LVCSR tasks with training data ranging from 3 to 1800 hours to show the effectiveness of the TDNN architecture in learning wider temporal dependencies in both small and large data scenarios.",
"title": ""
},
{
"docid": "a8cad81570a7391175acdcf82bc9040b",
"text": "A model of Convolutional Fuzzy Neural Network for real world objects and scenes images classification is proposed. The Convolutional Fuzzy Neural Network consists of convolutional, pooling and fully-connected layers and a Fuzzy Self Organization Layer. The model combines the power of convolutional neural networks and fuzzy logic and is capable of handling uncertainty and impreciseness in the input pattern representation. The Training of The Convolutional Fuzzy Neural Network consists of three independent steps for three components of the net.",
"title": ""
},
{
"docid": "2462af24189262b0145a6559d4aa6b3d",
"text": "A 30-MHz voltage-mode buck converter using a delay-line-based pulse-width-modulation controller is proposed in this brief. Two voltage-to-delay cells are used to convert the voltage difference to delay-time difference. A charge pump is used to charge or discharge the loop filter, depending on whether the feedback voltage is larger or smaller than the reference voltage. A delay-line-based voltage-to-duty-cycle (V2D) controller is used to replace the classical ramp-comparator-based V2D controller to achieve wide duty cycle. A type-II compensator is implemented in this design with a capacitor and resistor in the loop filter. The prototype buck converter was fabricated using a 0.18-<inline-formula> <tex-math notation=\"LaTeX\">${\\mu }\\text{m}$ </tex-math></inline-formula> CMOS process. It occupies an active area of 0.834 mm<sup>2</sup> including the testing PADs. The tunable duty cycle ranges from 11.9%–86.3%, corresponding to 0.4 V–2.8 V output voltage with 3.3 V input. With a step of 400 mA in the load current, the settling time is around 3 <inline-formula> <tex-math notation=\"LaTeX\">${\\mu }\\text{s}$ </tex-math></inline-formula>. The peak efficiency is as high as 90.2% with 2.4 V output and the maximum load current is 800 mA.",
"title": ""
},
{
"docid": "ab33dcd4172dec6cc88e13af867fed88",
"text": "It is necessary to understand the content of articles and user preferences to make effective news recommendations. While ID-based methods, such as collaborative filtering and low-rank factorization, are well known for making recommendations, they are not suitable for news recommendations because candidate articles expire quickly and are replaced with new ones within short spans of time. Word-based methods, which are often used in information retrieval settings, are good candidates in terms of system performance but have issues such as their ability to cope with synonyms and orthographical variants and define \"queries\" from users' historical activities. This paper proposes an embedding-based method to use distributed representations in a three step end-to-end manner: (i) start with distributed representations of articles based on a variant of a denoising autoencoder, (ii) generate user representations by using a recurrent neural network (RNN) with browsing histories as input sequences, and (iii) match and list articles for users based on inner-product operations by taking system performance into consideration. The proposed method performed well in an experimental offline evaluation using past access data on Yahoo! JAPAN's homepage. We implemented it on our actual news distribution system based on these experimental results and compared its online performance with a method that was conventionally incorporated into the system. As a result, the click-through rate (CTR) improved by 23% and the total duration improved by 10%, compared with the conventionally incorporated method. Services that incorporated the method we propose are already open to all users and provide recommendations to over ten million individual users per day who make billions of accesses per month.",
"title": ""
},
{
"docid": "ef53fb4fa95575c6472173db51d77a65",
"text": "I review existing knowledge, unanswered questions, and new directions in research on stress, coping resource, coping strategies, and social support processes. New directions in research on stressors include examining the differing impacts of stress across a range of physical and mental health outcomes, the \"carry-overs\" of stress from one role domain or stage of life into another, the benefits derived from negative experiences, and the determinants of the meaning of stressors. Although a sense of personal control and perceived social support influence health and mental health both directly and as stress buffers, the theoretical mechanisms through which they do so still require elaboration and testing. New work suggests that coping flexibility and structural constraints on individuals' coping efforts may be important to pursue. Promising new directions in social support research include studies of the negative effects of social relationships and of support giving, mutual coping and support-giving dynamics, optimal \"matches\" between individuals' needs and support received, and properties of groups which can provide a sense of social support. Qualitative comparative analysis, optimal matching analysis, and event-structure analysis are new techniques which may help advance research in these broad topic areas. To enhance the effectiveness of coping and social support interventions, intervening mechanisms need to be better understood. Nevertheless, the policy implications of stress research are clear and are important given current interest in health care reform in the United States.",
"title": ""
},
{
"docid": "12bdec4e6f70a7fe2bd4c750752287c3",
"text": "Rapid growth in the Internet of Things (IoT) has resulted in a massive growth of data generated by these devices and sensors put on the Internet. Physical-cyber-social (PCS) big data consist of this IoT data, complemented by relevant Web-based and social data of various modalities. Smart data is about exploiting this PCS big data to get deep insights and make it actionable, and making it possible to facilitate building intelligent systems and applications. This article discusses key AI research in semantic computing, cognitive computing, and perceptual computing. Their synergistic use is expected to power future progress in building intelligent systems and applications for rapidly expanding markets in multiple industries. Over the next two years, this column on IoT will explore many challenges and technologies on intelligent use and applications of IoT data.",
"title": ""
},
{
"docid": "940b907c28adeaddc2515f304b1d885e",
"text": "In this study, we intend to identify the evolutionary footprints of the South Iberian population focusing on the Berber and Arab influence, which has received little attention in the literature. Analysis of the Y-chromosome variation represents a convenient way to assess the genetic contribution of North African populations to the present-day South Iberian genetic pool and could help to reconstruct other demographic events that could have influenced on that region. A total of 26 Y-SNPs and 17 Y-STRs were genotyped in 144 samples from 26 different districts of South Iberia in order to assess the male genetic composition and the level of substructure of male lineages in this area. To obtain a more comprehensive picture of the genetic structure of the South Iberian region as a whole, our data were compared with published data on neighboring populations. Our analyses allow us to confirm the specific impact of the Arab and Berber expansion and dominion of the Peninsula. Nevertheless, our results suggest that this influence is not bigger in Andalusia than in other Iberian populations.",
"title": ""
},
{
"docid": "eb71ba791776ddfe0c1ddb3dc66f6e06",
"text": "An enterprise resource planning (ERP) is an enterprise-wide application software package that integrates all necessary business functions into a single system with a common database. In order to implement an ERP project successfully in an organization, it is necessary to select a suitable ERP system. This paper presents a new model, which is based on linguistic information processing, for dealing with such a problem. In the study, a similarity degree based algorithm is proposed to aggregate the objective information about ERP systems from some external professional organizations, which may be expressed by different linguistic term sets. The consistency and inconsistency indices are defined by considering the subject information obtained from internal interviews with ERP vendors, and then a linear programming model is established for selecting the most suitable ERP system. Finally, a numerical example is given to demonstrate the application of the",
"title": ""
}
] |
scidocsrr
|
1f49b5eb014ac3afa920d4b6e4f9347c
|
Blind Navigation Support System based on Microsoft Kinect
|
[
{
"docid": "889e3d786a27a3e75972573e30f02e9b",
"text": "We present part of a vision system for blind and visually impaired people. It detects obstacles on sidewalks and provides guidance to avoid them. Obstacles are trees, light poles, trash cans, holes, branches, stones and other objects at a distance of 3 to 5 meters from the camera position. The system first detects the sidewalk borders, using edge information in combination with a tracking mask, to obtain straight lines with their slopes and the vanishing point. Once the borders are found, a rectangular window is defined within which two obstacle detection methods are applied. The first determines the variation of the maxima and minima of the gray levels of the pixels. The second uses the binary edge image and searches in the vertical and horizontal histograms for discrepancies of the number of edge points. Together, these methods allow to detect possible obstacles with their position and size, such that the user can be alerted and informed about the best way to avoid them. The system works in realtime and complements normal navigation with the cane.",
"title": ""
}
] |
[
{
"docid": "66e43ce62fd7e9cf78c4ff90b82afb8d",
"text": "BACKGROUND\nConcern over the frequency of unintended harm to patients has focused attention on the importance of teamwork and communication in avoiding errors. This has led to experiments with teamwork training programmes for clinical staff, mostly based on aviation models. These are widely assumed to be effective in improving patient safety, but the extent to which this assumption is justified by evidence remains unclear.\n\n\nMETHODS\nA systematic literature review on the effects of teamwork training for clinical staff was performed. Information was sought on outcomes including staff attitudes, teamwork skills, technical performance, efficiency and clinical outcomes.\n\n\nRESULTS\nOf 1036 relevant abstracts identified, 14 articles were analysed in detail: four randomized trials and ten non-randomized studies. Overall study quality was poor, with particular problems over blinding, subjective measures and Hawthorne effects. Few studies reported on every outcome category. Most reported improved staff attitudes, and six of eight reported significantly better teamwork after training. Five of eight studies reported improved technical performance, improved efficiency or reduced errors. Three studies reported evidence of clinical benefit, but this was modest or of borderline significance in each case. Studies with a stronger intervention were more likely to report benefits than those providing less training. None of the randomized trials found evidence of technical or clinical benefit.\n\n\nCONCLUSION\nThe evidence for technical or clinical benefit from teamwork training in medicine is weak. There is some evidence of benefit from studies with more intensive training programmes, but better quality research and cost-benefit analysis are needed.",
"title": ""
},
{
"docid": "b8bee026b35868b62ef2ffe5029bfb7b",
"text": "In this paper, we propose a novel network architecture, a recurrent convolutional neural network, which is trained to learn a joint spectral-spatial-temporal feature representation in a unified framework for change detection of multispectral images. To this end, we bring together a convolutional neural network (CNN) and a recurrent neural network (RNN) into one end-to-end network. The former is able to generate rich spectral-spatial feature representations while the latter effectively analyzes temporal dependency in bi-temporal images. Although both CNN and RNN are well-established techniques for remote sensing applications, to the best of our knowledge, we are the first to combine them for multitemporal data analysis in the remote sensing community. Both visual and quantitative analysis of experimental results demonstrates competitive performance in the proposed mode.",
"title": ""
},
{
"docid": "ed75192dcb1356820fdb6411593dd233",
"text": "We introduce QVEC-CCA—an intrinsic evaluation metric for word vector representations based on correlations of learned vectors with features extracted from linguistic resources. We show that QVECCCA scores are an effective proxy for a range of extrinsic semantic and syntactic tasks. We also show that the proposed evaluation obtains higher and more consistent correlations with downstream tasks, compared to existing approaches to intrinsic evaluation of word vectors that are based on word similarity.",
"title": ""
},
{
"docid": "a169848d576e25967f7a223dca4737ea",
"text": "Reengineering of the aircraft structural life prediction process to fully exploit advances in very high performance digital computing is proposed. The proposed process utilizes an ultrahigh fidelity model of individual aircraft by tail number, a Digital Twin, to integrate computation of structural deflections and temperatures in response to flight conditions, with resulting local damage and material state evolution. A conceptual model of how the Digital Twin can be used for predicting the life of aircraft structure and assuring its structural integrity is presented. The technical challenges to developing and deploying a Digital Twin are discussed in detail.",
"title": ""
},
{
"docid": "6eeeb343309fc24326ed42b62d5524b1",
"text": "We introduce a novel machine learning framework based on recursive autoencoders for sentence-level prediction of sentiment label distributions. Our method learns vector space representations for multi-word phrases. In sentiment prediction tasks these representations outperform other state-of-the-art approaches on commonly used datasets, such as movie reviews, without using any pre-defined sentiment lexica or polarity shifting rules. We also evaluate the model’s ability to predict sentiment distributions on a new dataset based on confessions from the experience project. The dataset consists of personal user stories annotated with multiple labels which, when aggregated, form a multinomial distribution that captures emotional reactions. Our algorithm can more accurately predict distributions over such labels compared to several competitive baselines.",
"title": ""
},
{
"docid": "c91e966b803826908ae4dd82cc4a483e",
"text": "Many shallow natural language understanding tasks use dependency trees to extract relations between content words. However, strict surface-structure dependency trees tend to follow the linguistic structure of sentences too closely and frequently fail to provide direct relations between content words. To mitigate this problem, the original Stanford Dependencies representation also defines two dependency graph representations which contain additional and augmented relations that explicitly capture otherwise implicit relations between content words. In this paper, we revisit and extend these dependency graph representations in light of the recent Universal Dependencies (UD) initiative and provide a detailed account of an enhanced and an enhanced++ English UD representation. We further present a converter from constituency to basic, i.e., strict surface structure, UD trees, and a converter from basic UD trees to enhanced and enhanced++ English UD graphs. We release both converters as part of Stanford CoreNLP and the Stanford Parser.",
"title": ""
},
{
"docid": "46a775db27136b8f2c2b39fa14a3e22b",
"text": "Many defect prediction techniques have been proposed. While they often take the author of the code into consideration, none of these techniques build a separate prediction model for each developer. Different developers have different coding styles, commit frequencies, and experience levels, causing different defect patterns. When the defects of different developers are combined, such differences are obscured, hurting prediction performance. This paper proposes personalized defect prediction-building a separate prediction model for each developer to predict software defects. As a proof of concept, we apply our personalized defect prediction to classify defects at the file change level. We evaluate our personalized change classification technique on six large software projects written in C and Java-the Linux kernel, PostgreSQL, Xorg, Eclipse, Lucene and Jackrabbit. Our personalized approach can discover up to 155 more bugs than the traditional change classification (210 versus 55) if developers inspect the top 20% lines of code that are predicted buggy. In addition, our approach improves the F1-score by 0.01-0.06 compared to the traditional change classification.",
"title": ""
},
{
"docid": "d31cd5f7dbdbd3dd7e5d8895d359a958",
"text": "AIM\nThe aim of this cross-sectional descriptive study was to compare the different leadership styles based on perceptions of nurse managers and their staff.\n\n\nBACKGROUND\nNurse managers' styles are fundamental to improving subordinates' performance and achieving goals at health-care institutions.\n\n\nMETHODS\nThis was a cross-sectional study. A questionnaire developed by Ekvall & Arvonen, considering three leadership domains (Change, Production and Employee relations), was administered to all nurse managers and to their subordinates at a city hospital in north-east Italy.\n\n\nRESULTS\nThe comparison between the leadership styles actually adopted and those preferred by the nurse managers showed that the preferred style always scored higher than the style adopted, the difference reaching statistical significance for Change and Production. The leadership styles preferred by subordinates always scored higher than the styles their nurse managers actually adopted; in the subordinates' opinion, the differences being statistically significant in all three leadership domains.\n\n\nIMPLICATION FOR NURSING MANAGEMENT\nThe study showed that nurse managers' expectations in relation to their leadership differ from those of their subordinates. These findings should be borne in mind when selecting and training nurse managers and other personnel, and they should influence the hospital's strategic management of nurses.",
"title": ""
},
{
"docid": "1c3a87fd2e10a9799e7c0a79be635816",
"text": "According to Network Effect literature network externalities lead to market failure due to Pareto-inferior coordination results. We show that the assumptions and simplifications implicitly used for modeling standardization processes fail to explain the real-world variety of diffusion courses in today’s dynamic IT markets and derive requirements for a more general model of network effects. We argue that Agent-based Computational Economics provides a solid basis for meeting these requirements by integrating evolutionary models from Game Theory and Institutional Economics.",
"title": ""
},
{
"docid": "0f17262293f98685383c71381ca10bd9",
"text": "This paper presents the application of frequency selective surfaces in antenna arrays as an alternative to improve radiation parameters of the array. A microstrip antenna array between two FSS was proposed for application in WLAN and LTE 4G systems. Several parameters have been significantly improved, in particular the bandwidth, gain and radiation efficiency, compared with a conventional array. Numerical and measured results are presented.",
"title": ""
},
{
"docid": "03cd77a9a08e4a0f7836815f71bfbb89",
"text": "We consider the problem of learning deep neural networks (DNNs) for object category segmentation, where the goal is to label each pixel in an image as being part of a given object (foreground) or not (background). Deep neural networks are usually trained with simple loss functions (e.g., softmax loss). These loss functions are appropriate for standard classification problems where the performance is measured by the overall classification accuracy. For object category segmentation, the two classes (foreground and background) are very imbalanced. The intersectionover-union (IoU) is usually used to measure the performance of any object category segmentation method. In this paper, we propose an approach for directly optimizing this IoU measure in deep neural networks. Our experimental results on two object category segmentation datasets demonstrate that our approach outperforms DNNs trained with standard softmax loss.",
"title": ""
},
{
"docid": "5ded801b3c778d012a78aa467e01bd89",
"text": "To overcome limitations of fusion welding of the AA7050-T7451aluminum alloy friction stir welding (FSW) has become a prominent process which uses a non-consumable FSW tool to weld the two abutting plates of the workpiece. The FSW produces a joint with advantages of high joint strength, lower distortion and absence of metallurgical defects. Process parameters such as tool rotational speed, tool traverse speed and axial force and tool dimensions play an important role in obtaining a specific temperature distribution and subsequent flow stresses within the material being welded. Friction stir welding of AA7050-T7451 aluminum alloy has been simulated to obtain the temperature profiles & flow stresses using a recent FEA software called HyperWorks.; the former controlling the microstruture and in turn, mechanical properties and later, the flow of material which depends up on the peak temperatures obtained during FSW. A software based study has been carried out to avoid the difficulty in measuring the temperatures directly and explore the capabilities of the same to provide a basis for further research work related to the said aluminum alloy.",
"title": ""
},
{
"docid": "837803a140450d594d5693a06ba3be4b",
"text": "Allocation of very scarce medical interventions such as organs and vaccines is a persistent ethical challenge. We evaluate eight simple allocation principles that can be classified into four categories: treating people equally, favouring the worst-off, maximising total benefits, and promoting and rewarding social usefulness. No single principle is sufficient to incorporate all morally relevant considerations and therefore individual principles must be combined into multiprinciple allocation systems. We evaluate three systems: the United Network for Organ Sharing points systems, quality-adjusted life-years, and disability-adjusted life-years. We recommend an alternative system-the complete lives system-which prioritises younger people who have not yet lived a complete life, and also incorporates prognosis, save the most lives, lottery, and instrumental value principles.",
"title": ""
},
{
"docid": "d6edac3a6675c9edb2b36e75ac356ebd",
"text": "Ranking web pages for presenting the most relevant web pages to user’s queries is one of the main issues in any search engine. In this paper, two new ranking algorithms are offered, using Reinforcement Learning (RL) concepts. RL is a powerful technique of modern artificial intelligence that tunes agent’s parameters, interactively. In the first step, with formulation of ranking as an RL problem, a new connectivity-based ranking algorithm, called RL Rank, is proposed. In RL Rank, agent is considered as a surfer who travels between web pages by clicking randomly on a link in the current page. Each web page is considered as a",
"title": ""
},
{
"docid": "aba7cb0f5f50a062c42b6b51457eb363",
"text": "Nowadays, there is increasing interest in the development of teamwork skills in the educational context. This growing interest is motivated by its pedagogical effectiveness and the fact that, in labour contexts, enterprises organize their employees in teams to carry out complex projects. Despite its crucial importance in the classroom and industry, there is a lack of support for the team formation process. Not only do many factors influence team performance, but the problem becomes exponentially costly if teams are to be optimized. In this article, we propose a tool whose aim it is to cover such a gap. It combines artificial intelligence techniques such as coalition structure generation, Bayesian learning, and Belbin’s role theory to facilitate the generation of working groups in an educational context. This tool improves current state of the art proposals in three ways: i) it takes into account the feedback of other teammates in order to establish the most predominant role of a student instead of self-perception questionnaires; ii) it handles uncertainty with regard to each student’s predominant team role; iii) it is iterative since it considers information from several interactions in order to improve the estimation of role assignments. We tested the performance of the proposed tool in an experiment involving students that took part in three different team activities. The experiments suggest that the proposed tool is able to improve different teamwork aspects such as team dynamics and student satisfaction.",
"title": ""
},
{
"docid": "3ade96c73db1f06d7e0c1f48a0b33387",
"text": "To achieve enduring retention, people must usually study information on multiple occasions. How does the timing of study events affect retention? Prior research has examined this issue only in a spotty fashion, usually with very short time intervals. In a study aimed at characterizing spacing effects over significant durations, more than 1,350 individuals were taught a set of facts and--after a gap of up to 3.5 months--given a review. A final test was administered at a further delay of up to 1 year. At any given test delay, an increase in the interstudy gap at first increased, and then gradually reduced, final test performance. The optimal gap increased as test delay increased. However, when measured as a proportion of test delay, the optimal gap declined from about 20 to 40% of a 1-week test delay to about 5 to 10% of a 1-year test delay. The interaction of gap and test delay implies that many educational practices are highly inefficient.",
"title": ""
},
{
"docid": "1b3d6129c05d8880cf7cba98b78bc720",
"text": "The underlying paradigm of big data-driven machine learning reflects the desire of deriving better conclusions from simply analyzing more data, without the necessity of looking at theory and models. Is having simply more data always helpful? In 1936, The Literary Digest collected 2.3M filled in questionnaires to predict the outcome of that year’s US presidential election. The outcome of this big data prediction proved to be entirely wrong, whereas George Gallup only needed 3K handpicked people to make an accurate prediction. Generally, biases occur in machine learning whenever the distributions of training set and test set are different. In this work, we provide a review of different sorts of biases in (big) data sets in machine learning. We provide definitions and discussions of the most commonly appearing biases in machine learning: class imbalance and covariate shift. We also show how these biases can be quantified and corrected. This work is an introductory text for both researchers and practitioners to become more aware of this topic and thus to derive more reliable models for their learning problems.",
"title": ""
},
{
"docid": "24544d2f92f8b736969dbcce0e6fd5d6",
"text": "This paper reports emotion recognition results from speech signals, with particular focus on extracting emotion features from the short utterances typical of Interactive Voice Response (IVR) applications. We focus on distinguishing anger versus neutral speech, which is salient to call center applications. We report on classification of other types of emotions such as sadness, boredom, happy, and cold anger. We compare results from using neural networks, Support Vector Machines (SVM), K-Nearest Neighbors, and decision trees. We use a database from the Linguistic Data Consortium at University of Pennsylvania, which is recorded by 8 actors expressing 15 emotions. Results indicate that hot anger and neutral utterances can be distinguished with over 90% accuracy. We show results from recognizing other emotions. We also illustrate which emotions can be clustered together using the selected prosodic features.",
"title": ""
},
{
"docid": "7adffc2dd1d6412b4bb01b38ced51c24",
"text": "With the popularity of the Internet and mobile intelligent terminals, the number of mobile applications is exploding. Mobile intelligent terminals trend to be the mainstream way of people's work and daily life online in place of PC terminals. Mobile application system brings some security problems inevitably while it provides convenience for people, and becomes a main target of hackers. Therefore, it is imminent to strengthen the security detection of mobile applications. This paper divides mobile application security detection into client security detection and server security detection. We propose a combining static and dynamic security detection method to detect client-side. We provide a method to get network information of server by capturing and analyzing mobile application traffic, and propose a fuzzy testing method based on HTTP protocol to detect server-side security vulnerabilities. Finally, on the basis of this, an automated platform for security detection of mobile application system is developed. Experiments show that the platform can detect the vulnerabilities of mobile application client and server effectively, and realize the automation of mobile application security detection. It can also reduce the cost of mobile security detection and enhance the security of mobile applications.",
"title": ""
},
{
"docid": "eba25ae59603328f3ef84c0994d46472",
"text": "We address the problem of how to personalize educational content to students in order to maximize their learning gains over time. We present a new computational approach to this problem called MAPLE (Multi-Armed Bandits based Personalization for Learning Environments) that combines difficulty ranking with multi-armed bandits. Given a set of target questions MAPLE estimates the expected learning gains for each question and uses an exploration-exploitation strategy to choose the next question to pose to the student. It maintains a personalized ranking over the difficulties of question in the target set and updates it in real-time according to students’ progress. We show in simulations that MAPLE was able to improve students’ learning gains compared to approaches that sequence questions in increasing level of difficulty, or rely on content experts. When implemented in a live e-learning system in the wild, MAPLE showed promising initial results.",
"title": ""
}
] |
scidocsrr
|
a78be6c9a0927113b9fa7925014fab58
|
End-to-end visual speech recognition with LSTMS
|
[
{
"docid": "6af09f57f2fcced0117dca9051917a0d",
"text": "We present a novel per-dimension learning rate method for gradient descent called ADADELTA. The method dynamically adapts over time using only first order information and has minimal computational overhead beyond vanilla stochastic gradient descent. The method requires no manual tuning of a learning rate and appears robust to noisy gradient information, different model architecture choices, various data modalities and selection of hyperparameters. We show promising results compared to other methods on the MNIST digit classification task using a single machine and on a large scale voice dataset in a distributed cluster environment.",
"title": ""
},
{
"docid": "7d78ca30853ed8a84bbb56fe82e3b9ba",
"text": "Deep belief networks (DBN) have shown impressive improvements over Gaussian mixture models for automatic speech recognition. In this work we use DBNs for audio-visual speech recognition; in particular, we use deep learning from audio and visual features for noise robust speech recognition. We test two methods for using DBNs in a multimodal setting: a conventional decision fusion method that combines scores from single-modality DBNs, and a novel feature fusion method that operates on mid-level features learned by the single-modality DBNs. On a continuously spoken digit recognition task, our experiments show that these methods can reduce word error rate by as much as 21% relative over a baseline multi-stream audio-visual GMM/HMM system.",
"title": ""
}
] |
[
{
"docid": "14b48440dd0b797cec04bbc249ee9940",
"text": "T cells use integrins in essentially all of their functions. They use integrins to migrate in and out of lymph nodes and, following infection, to migrate into other tissues. At the beginning of an immune response, integrins also participate in the immunological synapse formed between T cells and antigen-presenting cells. Because the ligands for integrins are widely expressed, integrin activity on T cells must be tightly controlled. Integrins become active following signalling through other membrane receptors, which cause both affinity alteration and an increase in integrin clustering. Lipid raft localization may increase integrin activity. Signalling pathways involving ADAP, Vav-1 and SKAP-55, as well as Rap1 and RAPL, cause clustering of leukocyte function-associated antigen-1 (LFA-1; integrin alphaLbeta2). T-cell integrins can also signal, and the pathways dedicated to the migratory activity of T cells have been the most investigated so far. Active LFA-1 causes T-cell attachment and lamellipodial movement induced by myosin light chain kinase at the leading edge, whereas RhoA and ROCK cause T-cell detachment at the trailing edge. Another important signalling pathway acts through CasL/Crk, which might regulate the activity of the GTPases Rac and Rap1 that have important roles in T-cell migration.",
"title": ""
},
{
"docid": "541075ddb29dd0acdf1f0cf3784c220a",
"text": "Many recent works on knowledge distillation have provided ways to transfer the knowledge of a trained network for improving the learning process of a new one, but finding a good technique for knowledge distillation is still an open problem. In this paper, we provide a new perspective based on a decision boundary, which is one of the most important component of a classifier. The generalization performance of a classifier is closely related to the adequacy of its decision boundary, so a good classifier bears a good decision boundary. Therefore, transferring information closely related to the decision boundary can be a good attempt for knowledge distillation. To realize this goal, we utilize an adversarial attack to discover samples supporting a decision boundary. Based on this idea, to transfer more accurate information about the decision boundary, the proposed algorithm trains a student classifier based on the adversarial samples supporting the decision boundary. Experiments show that the proposed method indeed improves knowledge distillation and achieves the stateof-the-arts performance. 1",
"title": ""
},
{
"docid": "c071d5a7ff1dbfd775e9ffdee1b07662",
"text": "OBJECTIVES\nComplete root coverage is the primary objective to be accomplished when treating gingival recessions in patients with aesthetic demands. Furthermore, in order to satisfy patient demands fully, root coverage should be accomplished by soft tissue, the thickness and colour of which should not be distinguishable from those of adjacent soft tissue. The aim of the present split-mouth study was to compare the treatment outcome of two surgical approaches of the bilaminar procedure in terms of (i) root coverage and (ii) aesthetic appearance of the surgically treated sites.\n\n\nMATERIAL AND METHODS\nFifteen young systemically and periodontally healthy subjects with two recession-type defects of similar depth affecting contralateral teeth in the aesthetic zone of the maxilla were enrolled in the study. All recessions fall into Miller class I or II. Randomization for test and control treatment was performed by coin toss immediately prior to surgery. All defects were treated with a bilaminar surgical technique: differences between test and control sites resided in the size, thickness and positioning of the connective tissue graft. The clinical re-evaluation was made 1 year after surgery.\n\n\nRESULTS\nThe two bilaminar techniques resulted in a high percentage of root coverage (97.3% in the test and 94.7% in the control group) and complete root coverage (gingival margin at the cemento-enamel junction (CEJ)) (86.7% in the test and 80% in the control teeth), with no statistically significant difference between them. Conversely, better aesthetic outcome and post-operative course were indicated by the patients for test compared to control sites.\n\n\nCONCLUSIONS\nThe proposed modification of the bilaminar technique improved the aesthetic outcome. The reduced size and minimal thickness of connective tissue graft, together with its positioning apical to the CEJ, facilitated graft coverage by means of the coronally advanced flap.",
"title": ""
},
{
"docid": "ab50f458d919ba3ac3548205418eea62",
"text": "Department of Microbiology, School of Life Sciences, Bharathidasan University, Tiruchirappali 620 024, Tamilnadu, India. Department of Medical Biotechnology, Sri Ramachandra University, Porur, Chennai 600 116, Tamilnadu, India. CAS Marine Biology, Annamalai University, Parangipettai 608 502, Tamilnadu, India. Department of Zoology, DDE, Annamalai University, Annamalai Nagar 608 002, Tamilnadu, India Asian Pacific Journal of Tropical Disease (2012)S291-S295",
"title": ""
},
{
"docid": "531ac7d6500373005bae464c49715288",
"text": "We have used acceleration sensors to monitor the heart motion during surgery. A three-axis accelerometer was made from two commercially available two-axis sensors, and was used to measure the heart motion in anesthetized pigs. The heart moves due to both respiration and heart beating. The heart beating was isolated from respiration by high-pass filtering at 1.0 Hz, and heart wall velocity and position were calculated by numerically integrating the filtered acceleration traces. The resulting curves reproduced the heart motion in great detail, noise was hardly visible. Events that occurred during the measurements, e.g. arrhythmias and fibrillation, were recognized in the curves, and confirmed by comparison with synchronously recorded ECG data. We conclude that acceleration sensors are able to measure heart motion with good resolution, and that such measurements can reveal patterns that may be an indication of heart circulation failure.",
"title": ""
},
{
"docid": "b5eafe60989c0c4265fa910c79bbce41",
"text": "Little research has addressed IT professionals’ script debugging strategies, or considered whether there may be gender differences in these strategies. What strategies do male and female scripters use and what kinds of mechanisms do they employ to successfully fix bugs? Also, are scripters’ debugging strategies similar to or different from those of spreadsheet debuggers? Without the answers to these questions, tool designers do not have a target to aim at for supporting how male and female scripters want to go about debugging. We conducted a think-aloud study to bridge this gap. Our results include (1) a generalized understanding of debugging strategies used by spreadsheet users and scripters, (2) identification of the multiple mechanisms scripters employed to carry out the strategies, and (3) detailed examples of how these debugging strategies were employed by males and females to successfully fix bugs.",
"title": ""
},
{
"docid": "8505afb27c5ef73baeaa53dfe1c337ae",
"text": "The Osprey (Pandion haliaetus) is one of only six bird species with an almost world-wide distribution. We aimed at clarifying its phylogeographic structure and elucidating its taxonomic status (as it is currently separated into four subspecies). We tested six biogeographical scenarios to explain how the species’ distribution and differentiation took place in the past and how such a specialized raptor was able to colonize most of the globe. Using two mitochondrial genes (cyt b and ND2), the Osprey appeared structured into four genetic groups representing quasi non-overlapping geographical regions. The group Indo-Australasia corresponds to the cristatus ssp, as well as the group Europe-Africa to the haliaetus ssp. In the Americas, we found a single lineage for both carolinensis and ridgwayi ssp, whereas in north-east Asia (Siberia and Japan), we discovered a fourth new lineage. The four lineages are well differentiated, contrasting with the low genetic variability observed within each clade. Historical demographic reconstructions suggested that three of the four lineages experienced stable trends or slight demographic increases. Molecular dating estimates the initial split between lineages at about 1.16 Ma ago, in the Early Pleistocene. Our biogeographical inference suggests a pattern of colonization from the American continent towards the Old World. Populations of the Palearctic would represent the last outcomes of this colonization. At a global scale the Osprey complex may be composed of four different Evolutionary Significant Units, which should be treated as specific management units. Our study brought essential genetic clarifications, which have implications for conservation strategies in identifying distinct lineages across which birds should not be artificially moved through exchange/reintroduction schemes.",
"title": ""
},
{
"docid": "eb0ec729796a93f36d348e70e3fa9793",
"text": "This paper proposes a novel approach to measure the object size using a regular digital camera. Nowadays, the remote object-size measurement is very crucial to many multimedia applications. Our proposed computer-aided automatic object-size measurement technique is based on a new depth-information extraction (range finding) scheme using a regular digital camera. The conventional range finders are often carried out using the passive method such as stereo cameras or the active method such as ultrasonic and infrared equipment. They either require the cumbersome set-up or deal with point targets only. The proposed approach requires only a digital camera with certain image processing techniques and relies on the basic principles of visible light. Experiments are conducted to evaluate the performance of our proposed new object-size measurement mechanism. The average error-percentage of this method is below 2%. It demonstrates the striking effectiveness of our proposed new method.",
"title": ""
},
{
"docid": "21961041e3bf66d7e3f004c65ddc5da2",
"text": "A novel high step-up converter is proposed for a front-end photovoltaic system. Through a voltage multiplier module, an asymmetrical interleaved high step-up converter obtains high step-up gain without operating at an extreme duty ratio. The voltage multiplier module is composed of a conventional boost converter and coupled inductors. An extra conventional boost converter is integrated into the first phase to achieve a considerably higher voltage conversion ratio. The two-phase configuration not only reduces the current stress through each power switch, but also constrains the input current ripple, which decreases the conduction losses of metal-oxide-semiconductor field-effect transistors (MOSFETs). In addition, the proposed converter functions as an active clamp circuit, which alleviates large voltage spikes across the power switches. Thus, the low-voltage-rated MOSFETs can be adopted for reductions of conduction losses and cost. Efficiency improves because the energy stored in leakage inductances is recycled to the output terminal. Finally, the prototype circuit with a 40-V input voltage, 380-V output, and 1000- W output power is operated to verify its performance. The highest efficiency is 96.8%.",
"title": ""
},
{
"docid": "2a818337c472caa1e693edb05722954b",
"text": "UNLABELLED\nThis study focuses on the relationship between classroom ventilation rates and academic achievement. One hundred elementary schools of two school districts in the southwest United States were included in the study. Ventilation rates were estimated from fifth-grade classrooms (one per school) using CO(2) concentrations measured during occupied school days. In addition, standardized test scores and background data related to students in the classrooms studied were obtained from the districts. Of 100 classrooms, 87 had ventilation rates below recommended guidelines based on ASHRAE Standard 62 as of 2004. There is a linear association between classroom ventilation rates and students' academic achievement within the range of 0.9-7.1 l/s per person. For every unit (1 l/s per person) increase in the ventilation rate within that range, the proportion of students passing standardized test (i.e., scoring satisfactory or above) is expected to increase by 2.9% (95%CI 0.9-4.8%) for math and 2.7% (0.5-4.9%) for reading. The linear relationship observed may level off or change direction with higher ventilation rates, but given the limited number of observations, we were unable to test this hypothesis. A larger sample size is needed for estimating the effect of classroom ventilation rates higher than 7.1 l/s per person on academic achievement.\n\n\nPRACTICAL IMPLICATIONS\nThe results of this study suggest that increasing the ventilation rates toward recommended guideline ventilation rates in classrooms should translate into improved academic achievement of students. More studies are needed to fully understand the relationships between ventilation rate, other indoor environmental quality parameters, and their effects on students' health and achievement. Achieving the recommended guidelines and pursuing better understanding of the underlying relationships would ultimately support both sustainable and productive school environments for students and personnel.",
"title": ""
},
{
"docid": "bcab7b2f12f72c6db03446046586381e",
"text": "The key barrier to widespread uptake of cloud computing is the lack of trust in clouds by potential customers. While preventive controls for security and privacy are actively researched, there is still little focus on detective controls related to cloud accountability and audit ability. The complexity resulting from large-scale virtualization and data distribution carried out in current clouds has revealed an urgent research agenda for cloud accountability, as has the shift in focus of customer concerns from servers to data. This paper discusses key issues and challenges in achieving a trusted cloud through the use of detective controls, and presents the Trust Cloud framework, which addresses accountability in cloud computing via technical and policy-based approaches.",
"title": ""
},
{
"docid": "8f449e62b300c4c8ff62306d02f2f820",
"text": "The effects of adrenal corticosteroids on subsequent adrenocorticotropin secretion are complex. Acutely (within hours), glucocorticoids (GCs) directly inhibit further activity in the hypothalamo-pituitary-adrenal axis, but the chronic actions (across days) of these steroids on brain are directly excitatory. Chronically high concentrations of GCs act in three ways that are functionally congruent. (i) GCs increase the expression of corticotropin-releasing factor (CRF) mRNA in the central nucleus of the amygdala, a critical node in the emotional brain. CRF enables recruitment of a chronic stress-response network. (ii) GCs increase the salience of pleasurable or compulsive activities (ingesting sucrose, fat, and drugs, or wheel-running). This motivates ingestion of \"comfort food.\" (iii) GCs act systemically to increase abdominal fat depots. This allows an increased signal of abdominal energy stores to inhibit catecholamines in the brainstem and CRF expression in hypothalamic neurons regulating adrenocorticotropin. Chronic stress, together with high GC concentrations, usually decreases body weight gain in rats; by contrast, in stressed or depressed humans chronic stress induces either increased comfort food intake and body weight gain or decreased intake and body weight loss. Comfort food ingestion that produces abdominal obesity, decreases CRF mRNA in the hypothalamus of rats. Depressed people who overeat have decreased cerebrospinal CRF, catecholamine concentrations, and hypothalamo-pituitary-adrenal activity. We propose that people eat comfort food in an attempt to reduce the activity in the chronic stress-response network with its attendant anxiety. These mechanisms, determined in rats, may explain some of the epidemic of obesity occurring in our society.",
"title": ""
},
{
"docid": "3e691cf6055eb564dedca955b816a654",
"text": "Many Internet-based services have already been ported to the mobile-based environment, embracing the new services is therefore critical to deriving revenue for services providers. Based on a valence framework and trust transfer theory, we developed a trust-based customer decision-making model of the non-independent, third-party mobile payment services context. We empirically investigated whether a customer’s established trust in Internet payment services is likely to influence his or her initial trust in mobile payment services. We also examined how these trust beliefs might interact with both positive and negative valence factors and affect a customer’s adoption of mobile payment services. Our SEM analysis indicated that trust indeed had a substantial impact on the cross-environment relationship and, further, that trust in combination with the positive and negative valence determinants directly and indirectly influenced behavioral intention. In addition, the magnitudes of these effects on workers and students were significantly different from each other. 2011 Elsevier B.V. All rights reserved. * Corresponding author. Tel.: +86 27 8755 8100; fax: +86 27 8755 6437. E-mail addresses: [email protected] (Y. Lu), [email protected] (S. Yang), [email protected] (Patrick Y.K. Chau), [email protected] (Y. Cao). 1 Tel.: +86 27 8755 6448. 2 Tel.: +852 2859 1025. 3 Tel.: +86 27 8755 8100.",
"title": ""
},
{
"docid": "84a01029714dfef5d14bc4e2be78921e",
"text": "Integrating frequent pattern mining with interactive visualization for temporal event sequence analysis poses many interesting research questions and challenges. We review and reflect on some of these challenges based on our experiences working on event sequence data from two domains: web analytics and application logs. These challenges can be organized using a three-stage framework: pattern mining, pattern pruning and interactive visualization.",
"title": ""
},
{
"docid": "d0c940a651b1231c6ef4f620e7acfdcc",
"text": "Harvard Business School Working Paper Number 05-016. Working papers are distributed in draft form for purposes of comment and discussion only. They may not be reproduced without permission of the copyright holder. Copies of working papers are available from the author(s). Abstract Much recent research has pointed to the critical role of architecture in the development of a firm's products, services and technical capabilities. A common theme in these studies is the notion that specific characteristics of a product's design – for example, the degree of modularity it exhibits – can have a profound effect on among other things, its performance, the flexibility of the process used to produce it, the value captured by its producer, and the potential for value creation at the industry level. Unfortunately, this stream of work has been limited by the lack of appropriate tools, metrics and terminology for characterizing key attributes of a product's architecture in a robust fashion. As a result, there is little empirical evidence that the constructs emerging in the literature have power in predicting the phenomena with which they are associated. This paper reports data from a research project which seeks to characterize the differences in design structure between complex software products. In particular, we adopt a technique based upon Design Structure Matrices (DSMs) to map the dependencies between different elements of a design then develop metrics that allow us to compare the structures of these different DSMs. We demonstrate the power of this approach in two ways: First, we compare the design structures of two complex software products – the Linux operating system and the Mozilla web browser – that were developed via contrasting modes of organization: specifically, open source versus proprietary development. We find significant differences in their designs, consistent with an interpretation that Linux possesses a more \" modular \" architecture. We then track the evolution of Mozilla, paying particular attention to a major \" redesign \" effort that took place several months after its release as an open source product. We show that this effort resulted in a design structure that was significantly more modular than its predecessor, and indeed, more modular than that of a comparable version of Linux. Our findings demonstrate that it is possible to characterize the structure of complex product designs and draw meaningful conclusions about the precise ways in which they differ. We provide a description of a set of tools …",
"title": ""
},
{
"docid": "0dbca0a2aec1b27542463ff80fc4f59d",
"text": "An emerging research area named Learning-to-Rank (LtR) has shown that effective solutions to the ranking problem can leverage machine learning techniques applied to a large set of features capturing the relevance of a candidate document for the user query. Large-scale search systems must however answer user queries very fast, and the computation of the features for candidate documents must comply with strict back-end latency constraints. The number of features cannot thus grow beyond a given limit, and Feature Selection (FS) techniques have to be exploited to find a subset of features that both meets latency requirements and leads to high effectiveness of the trained models. In this paper, we propose three new algorithms for FS specifically designed for the LtR context where hundreds of continuous or categorical features can be involved. We present a comprehensive experimental analysis conducted on publicly available LtR datasets and we show that the proposed strategies outperform a well-known state-of-the-art competitor.",
"title": ""
},
{
"docid": "5757d96fce3e0b3b3303983b15d0030d",
"text": "Malicious applications pose a threat to the security of the Android platform. The growing amount and diversity of these applications render conventional defenses largely ineffective and thus Android smartphones often remain unprotected from novel malware. In this paper, we propose DREBIN, a lightweight method for detection of Android malware that enables identifying malicious applications directly on the smartphone. As the limited resources impede monitoring applications at run-time, DREBIN performs a broad static analysis, gathering as many features of an application as possible. These features are embedded in a joint vector space, such that typical patterns indicative for malware can be automatically identified and used for explaining the decisions of our method. In an evaluation with 123,453 applications and 5,560 malware samples DREBIN outperforms several related approaches and detects 94% of the malware with few false alarms, where the explanations provided for each detection reveal relevant properties of the detected malware. On five popular smartphones, the method requires 10 seconds for an analysis on average, rendering it suitable for checking downloaded applications directly on the device.",
"title": ""
},
{
"docid": "3038afba11844c31fefc30a8245bc61c",
"text": "Frame duplication is to duplicate a sequence of consecutive frames and insert or replace to conceal or imitate a specific event/content in the same source video. To automatically detect the duplicated frames in a manipulated video, we propose a coarse-to-fine deep convolutional neural network framework to detect and localize the frame duplications. We first run an I3D network [2] to obtain the most candidate duplicated frame sequences and selected frame sequences, and then run a Siamese network with ResNet network [6] to identify each pair of a duplicated frame and the corresponding selected frame. We also propose a heuristic strategy to formulate the video-level score. We then apply our inconsistency detector fine-tuned on the I3D network to distinguish duplicated frames from selected frames. With the experimental evaluation conducted on two video datasets, we strongly demonstrate that our proposed method outperforms the current state-of-the-art methods.",
"title": ""
},
{
"docid": "af5fe4ecd02d320477e2772d63b775dd",
"text": "Background: Blockchain technology is recently receiving a lot of attention from researchers as well as from many different industries. There are promising application areas for the logistics sector like digital document exchange and tracking of goods, but there is no existing research on these topics. This thesis aims to contribute to the research of information systems in logistics in combination with Blockchain technology. Purpose: The purpose of this research is to explore the capabilities of Blockchain technology regarding the concepts of privacy, transparency and trust. In addition, the requirements of information systems in logistics regarding the mentioned concepts are studied and brought in relation to the capabilities of Blockchain technology. The goal is to contribute to a theoretical discussion on the role of Blockchain technology in improving the flow of goods and the flow of information in logistics. Method: The research is carried out in the form of an explorative case study. Blockchain technology has not been studied previously in a logistics setting and therefore, an inductive research approach is chosen by using thematic analysis. The case study is based on a pilot test which had the goal to facilitate a Blockchain to exchange documents and track shipments. Conclusion: The findings reflect that the research on Blockchain technology is still in its infancy and that it still takes several years to facilitate the technology in a productive environment. The Blockchain has the capabilities to meet the requirements of information systems in logistics due to the ability to create trust and establish an organisation overarching platform to exchange information.",
"title": ""
}
] |
scidocsrr
|
92a6ff6616ba7c6622b8b1510ef7f142
|
Interactive whiteboards: Interactive or just whiteboards?
|
[
{
"docid": "e1d0c07f9886d3258f0c5de9dd372e17",
"text": "strategies and tools must be based on some theory of learning and cognition. Of course, crafting well-articulated views that clearly answer the major epistemological questions of human learning has exercised psychologists and educators for centuries. What is a mind? What does it mean to know something? How is our knowledge represented and manifested? Many educators prefer an eclectic approach, selecting “principles and techniques from the many theoretical perspectives in much the same way we might select international dishes from a smorgasbord, choosing those we like best and ending up with a meal which represents no nationality exclusively and a design technology based on no single theoretical base” (Bednar et al., 1995, p. 100). It is certainly the case that research within collaborative educational learning tools has drawn upon behavioral, cognitive information processing, humanistic, and sociocultural theory, among others, for inspiration and justification. Problems arise, however, when tools developed in the service of one epistemology, say cognitive information processing, are integrated within instructional systems designed to promote learning goals inconsistent with it. When concepts, strategies, and tools are abstracted from the theoretical viewpoint that spawned them, they are too often stripped of meaning and utility. In this chapter, we embed our discussion in learner-centered, constructivist, and sociocultural perspectives on collaborative technology, with a bias toward the third. The principles of these perspectives, in fact, provide the theoretical rationale for much of the research and ideas presented in this book. 2",
"title": ""
}
] |
[
{
"docid": "8a3dba8aa5aa8cf69da21079f7e36de6",
"text": "This letter presents a novel technique for synthesis of coupled-resonator filters with inter-resonator couplings varying linearly with frequency. The values of non-zero elements of the coupling matrix are found by solving a nonlinear least squares problem involving eigenvalues of matrix pencils derived from the coupling matrix and reference zeros and poles of scattering parameters. The proposed method was verified by numerical tests carried out for various coupling schemes including triplets and quadruplets for which the frequency-dependent coupling was found to produce an extra zero.",
"title": ""
},
{
"docid": "e55ad28c68a422ec959e8b247aade1b9",
"text": "Developing reliable methods for representing and managing information uncertainty remains a persistent and relevant challenge to GIScience. Information uncertainty is an intricate idea, and recent examinations of this concept have generated many perspectives on its representation and visualization, with perspectives emerging from a wide range of disciplines and application contexts. In this paper, we review and assess progress toward visual tools and methods to help analysts manage and understand information uncertainty. Specifically, we report on efforts to conceptualize uncertainty, decision making with uncertainty, frameworks for representing uncertainty, visual representation and user control of displays of information uncertainty, and evaluative efforts to assess the use and usability of visual methods of uncertainty. We conclude by identifying seven key research challenges in visualizing information uncertainty, particularly as it applies to decision making and analysis.",
"title": ""
},
{
"docid": "c8911f38bfd68baa54b49b9126c2ad22",
"text": "This document presents a performance comparison of three 2D SLAM techniques available in ROS: Gmapping, Hec-torSLAM and CRSM SLAM. These algorithms were evaluated using a Roomba 645 robotic platform with differential drive and a RGB-D Kinect sensor as an emulator of a scanner lasser. All tests were realized in static indoor environments. To improve the quality of the maps, some rosbag files were generated and used to build the maps in an off-line way.",
"title": ""
},
{
"docid": "baa0bf8fe429c4fe8bfb7ebf78a1ed94",
"text": "The weakly supervised object localization (WSOL) is to locate the objects in an image while only image-level labels are available during the training procedure. In this work, the Selective Feature Category Mapping (SFCM) method is proposed, which introduces the Feature Category Mapping (FCM) and the widely-used selective search method to solve the WSOL task. Our FCM replaces layers after the specific layer in the state-of-the-art CNNs with a set of kernels and learns the weighted pooling for previous feature maps. It is trained with only image-level labels and then map the feature maps to their corresponding categories in the test phase. Together with selective search method, the location of each object is finally obtained. Extensive experimental evaluation on ILSVRC2012 and PASCAL VOC2007 benchmarks shows that SFCM is simple but very effective, and it is able to achieve outstanding classification performance and outperform the state-of-the-art methods in the WSOL task.",
"title": ""
},
{
"docid": "b305e3504e3a99a5cd026e7845d98dab",
"text": "This paper provides a survey of modern nonlinear filtering methods for attitude estimation. Early applications relied mostly on the extended Kalman filter for attitude estimation. Since these applications, several new approaches have been developed that have proven to be superior to the extended Kalman filter. Several of these approaches maintain the basic structure of the extended Kalman filter, but employ various modifications in order to provide better convergence or improve other performance characteristics. Examples of such approaches include: filter QUEST, extended QUEST and the backwards-smoothing extended Kalman filter. Filters that propagate and update a discrete set of sigma points rather than using linearized equations for the mean and covariance are also reviewed. A twostep approach is discussed with a first-step state that linearizes the measurement model and an iterative second step to recover the desired attitude states. These approaches are all based on the Gaussian assumption that the probability density function is adequately specified by its mean and covariance. Other approaches that do not require this assumption are reviewed, Associate Professor, Department of Mechanical & Aerospace Engineering. Email: [email protected]. Associate Fellow AIAA. Aerospace Engineer, Guidance, Navigation and Control Systems Engineering Branch. Email: [email protected]. Fellow AIAA. Postdoctoral Research Fellow, Department of Mechanical & Aerospace Engineering. Email: [email protected]. Member AIAA.",
"title": ""
},
{
"docid": "fc79bfdb7fbbfa42d2e1614964113101",
"text": "Probability Theory, 2nd ed. Princeton, N. J.: 960. Van Nostrand, 1 121 T. T. Kadota, “Optimum reception of binary gaussian signals,” Bell Sys. Tech. J., vol. 43, pp. 2767-2810, November 1964. 131 T. T. Kadota. “Ootrmum recention of binarv sure and Gaussian signals,” Bell Sys. ?‘ech: J., vol. 44;~~. 1621-1658, October 1965. 141 U. Grenander, ‘Stochastic processes and statistical inference,” Arkiv fiir Matematik, vol. 17, pp. 195-277, 1950. 151 L. A. Zadeh and J. R. Ragazzini, “Optimum filters for the detection of signals in noise,” Proc. IRE, vol. 40, pp. 1223-1231, O,+nhm 1 a.63 161 J. H. Laning and R. H. Battin, Random Processes in Automatic Control. New York: McGraw-Hill. 1956. nn. 269-358. 171 C.. W. Helstrom, “ Solution of the dete&on integral equation for stationary filtered white noise,” IEEE Trans. on Information Theory, vol. IT-II, pp. 335-339, July 1965. 181 T. Kailath, “The detection of known signals in colored Gaussian noise,” Stanford Electronics Labs., Stanford Univ., Stanford, Calif. Tech. Rept. 7050-4, July 1965. 191 T. T. Kadota, “Optimum reception of nf-ary Gaussian signals in Gaussian noise,” Bell. Sys. Tech. J., vol. 44, pp. 2187-2197, November 1965. [lOI T. T. Kadota, “Term-by-term differentiability of Mercer’s expansion,” Proc. of Am. Math. Sot., vol. 18, pp. 69-72, February 1967.",
"title": ""
},
{
"docid": "ecea888d3b2d6b9ce0a26a4af6382db8",
"text": "Business Process Management (BPM) research resulted in a plethora of methods, techniques, and tools to support the design, enactment, management, and analysis of operational business processes. This survey aims to structure these results and provides an overview of the state-of-the-art in BPM. In BPM the concept of a process model is fundamental. Process models may be used to configure information systems, but may also be used to analyze, understand, and improve the processes they describe. Hence, the introduction of BPM technology has both managerial and technical ramifications, and may enable significant productivity improvements, cost savings, and flow-time reductions. The practical relevance of BPM and rapid developments over the last decade justify a comprehensive survey.",
"title": ""
},
{
"docid": "03ba329de93f763ff6f0a8c4c6e18056",
"text": "Nowadays, with the availability of massive amount of trade data collected, the dynamics of the financial markets pose both a challenge and an opportunity for high frequency traders. In order to take advantage of the rapid, subtle movement of assets in High Frequency Trading (HFT), an automatic algorithm to analyze and detect patterns of price change based on transaction records must be available. The multichannel, time-series representation of financial data naturally suggests tensor-based learning algorithms. In this work, we investigate the effectiveness of two multilinear methods for the mid-price prediction problem against other existing methods. The experiments in a large scale dataset which contains more than 4 millions limit orders show that by utilizing tensor representation, multilinear models outperform vector-based approaches and other competing ones.",
"title": ""
},
{
"docid": "4f631769d8267c81ea568c9eed71ac09",
"text": "To study a phenomenon scientifically, it must be appropriately described and measured. How mindfulness is conceptualized and assessed has considerable importance for mindfulness science, and perhaps in part because of this, these two issues have been among the most contentious in the field. In recognition of the growing scientific and clinical interest in",
"title": ""
},
{
"docid": "f1f72a6d5d2ab8862b514983ac63480b",
"text": "Grids are commonly used as histograms to process spatial data in order to detect frequent patterns, predict destinations, or to infer popular places. However, they have not been previously used for GPS trajectory similarity searches or retrieval in general. Instead, slower and more complicated algorithms based on individual point-pair comparison have been used. We demonstrate how a grid representation can be used to compute four different route measures: novelty, noteworthiness, similarity, and inclusion. The measures may be used in several applications such as identifying taxi fraud, automatically updating GPS navigation software, optimizing traffic, and identifying commuting patterns. We compare our proposed route similarity measure, C-SIM, to eight popular alternatives including Edit Distance on Real sequence (EDR) and Frechet distance. The proposed measure is simple to implement and we give a fast, linear time algorithm for the task. It works well under noise, changes in sampling rate, and point shifting. We demonstrate that by using the grid, a route similarity ranking can be computed in real-time on the Mopsi20141 route dataset, which consists of over 6,000 routes. This ranking is an extension of the most similar route search and contains an ordered list of all similar routes from the database. The real-time search is due to indexing the cell database and comes at the cost of spending 80% more memory space for the index. The methods are implemented inside the Mopsi2 route module.",
"title": ""
},
{
"docid": "68865e653e94d3366961434cc012363f",
"text": "Solving the problem of consciousness remains one of the biggest challenges in modern science. One key step towards understanding consciousness is to empirically narrow down neural processes associated with the subjective experience of a particular content. To unravel these neural correlates of consciousness (NCC) a common scientific strategy is to compare perceptual conditions in which consciousness of a particular content is present with those in which it is absent, and to determine differences in measures of brain activity (the so called \"contrastive analysis\"). However, this comparison appears not to reveal exclusively the NCC, as the NCC proper can be confounded with prerequisites for and consequences of conscious processing of the particular content. This implies that previous results cannot be unequivocally interpreted as reflecting the neural correlates of conscious experience. Here we review evidence supporting this conjecture and suggest experimental strategies to untangle the NCC from the prerequisites and consequences of conscious experience in order to further develop the otherwise valid and valuable contrastive methodology.",
"title": ""
},
{
"docid": "c224cc83b4c58001dbbd3e0ea44a768a",
"text": "We review the current status of research in dorsal-ventral (D-V) patterning in vertebrates. Emphasis is placed on recent work on Xenopus, which provides a paradigm for vertebrate development based on a rich heritage of experimental embryology. D-V patterning starts much earlier than previously thought, under the influence of a dorsal nuclear -Catenin signal. At mid-blastula two signaling centers are present on the dorsal side: The prospective neuroectoderm expresses bone morphogenetic protein (BMP) antagonists, and the future dorsal endoderm secretes Nodal-related mesoderm-inducing factors. When dorsal mesoderm is formed at gastrula, a cocktail of growth factor antagonists is secreted by the Spemann organizer and further patterns the embryo. A ventral gastrula signaling center opposes the actions of the dorsal organizer, and another set of secreted antagonists is produced ventrally under the control of BMP4. The early dorsal -Catenin signal inhibits BMP expression at the transcriptional level and promotes expression of secreted BMP antagonists in the prospective central nervous system (CNS). In the absence of mesoderm, expression of Chordin and Noggin in ectoderm is required for anterior CNS formation. FGF (fibroblast growth factor) and IGF (insulin-like growth factor) signals are also potent neural inducers. Neural induction by anti-BMPs such as Chordin requires mitogen-activated protein kinase (MAPK) activation mediated by FGF and IGF. These multiple signals can be integrated at the level of Smad1. Phosphorylation by BMP receptor stimulates Smad1 transcriptional activity, whereas phosphorylation by MAPK has the opposite effect. Neural tissue is formed only at very low levels of activity of BMP-transducing Smads, which require the combination of both low BMP levels and high MAPK signals. Many of the molecular players that regulate D-V patterning via regulation of BMP signaling have been conserved between Drosophila and the vertebrates.",
"title": ""
},
{
"docid": "aad34b3e8acc311d0d32964c6607a6e1",
"text": "This paper looks at the performance of photovoltaic modules in nonideal conditions and proposes topologies to minimize the degradation of performance caused by these conditions. It is found that the peak power point of a module is significantly decreased due to only the slightest shading of the module, and that this effect is propagated through other nonshaded modules connected in series with the shaded one. Based on this result, two topologies for parallel module connections have been outlined. In addition, dc/dc converter technologies, which are necessary to the design, are compared by way of their dynamic models, frequency characteristics, and component cost. Out of this comparison, a recommendation has been made",
"title": ""
},
{
"docid": "1ad06e5eee4d4f29dd2f0e8f0dd62370",
"text": "Recent research on map matching algorithms for land vehicle navigation has been based on either a conventional topological analysis or a probabilistic approach. The input to these algorithms normally comes from the global positioning system and digital map data. Although the performance of some of these algorithms is good in relatively sparse road networks, they are not always reliable for complex roundabouts, merging or diverging sections of motorways and complex urban road networks. In high road density areas where the average distance between roads is less than 100 metres, there may be many road patterns matching the trajectory of the vehicle reported by the positioning system at any given moment. Consequently, it may be difficult to precisely identify the road on which the vehicle is travelling. Therefore, techniques for dealing with qualitative terms such as likeliness are essential for map matching algorithms to identify a correct link. Fuzzy logic is one technique that is an effective way to deal with qualitative terms, linguistic vagueness, and human intervention. This paper develops a map matching algorithm based on fuzzy logic theory. The inputs to the proposed algorithm are from the global positioning system augmented with data from deduced reckoning sensors to provide continuous navigation. The algorithm is tested on different road networks of varying complexity. The validation of this algorithm is carried out using high precision positioning data obtained from GPS carrier phase observables. The performance of the developed map matching algorithm is evaluated against the performance of several well-accepted existing map matching algorithms. The results show that the fuzzy logic-based map matching algorithm provides a significant improvement over existing map matching algorithms both in terms of identifying correct links and estimating the vehicle position on the links.",
"title": ""
},
{
"docid": "39eaf3ad7373d36404e903a822a3d416",
"text": "We present HaptoMime, a mid-air interaction system that allows users to touch a floating virtual screen with hands-free tactile feedback. Floating images formed by tailored light beams are inherently lacking in tactile feedback. Here we propose a method to superpose hands-free tactile feedback on such a floating image using ultrasound. By tracking a fingertip with an electronically steerable ultrasonic beam, the fingertip encounters a mechanical force consistent with the floating image. We demonstrate and characterize the proposed transmission scheme and discuss promising applications with an emphasis that it helps us 'pantomime' in mid-air.",
"title": ""
},
{
"docid": "b0c5c8e88e9988b6548acb1c8ebb5edd",
"text": "We present a bottom-up aggregation approach to image segmentation. Beginning with an image, we execute a sequence of steps in which pixels are gradually merged to produce larger and larger regions. In each step, we consider pairs of adjacent regions and provide a probability measure to assess whether or not they should be included in the same segment. Our probabilistic formulation takes into account intensity and texture distributions in a local area around each region. It further incorporates priors based on the geometry of the regions. Finally, posteriors based on intensity and texture cues are combined using “ a mixture of experts” formulation. This probabilistic approach is integrated into a graph coarsening scheme, providing a complete hierarchical segmentation of the image. The algorithm complexity is linear in the number of the image pixels and it requires almost no user-tuned parameters. In addition, we provide a novel evaluation scheme for image segmentation algorithms, attempting to avoid human semantic considerations that are out of scope for segmentation algorithms. Using this novel evaluation scheme, we test our method and provide a comparison to several existing segmentation algorithms.",
"title": ""
},
{
"docid": "c1b5b1dcbb3e7ff17ea6ad125bbc4b4b",
"text": "This article focuses on a new type of wireless devices in the domain between RFIDs and sensor networks—Energy-Harvesting Active Networked Tags (EnHANTs). Future EnHANTs will be small, flexible, and self-powered devices that can be attached to objects that are traditionally not networked (e.g., books, furniture, toys, produce, and clothing). Therefore, they will provide the infrastructure for various tracking applications and can serve as one of the enablers for the Internet of Things. We present the design considerations for the EnHANT prototypes, developed over the past 4 years. The prototypes harvest indoor light energy using custom organic solar cells, communicate and form multihop networks using ultra-low-power Ultra-Wideband Impulse Radio (UWB-IR) transceivers, and dynamically adapt their communications and networking patterns to the energy harvesting and battery states. We describe a small-scale testbed that uniquely allows evaluating different algorithms with trace-based light energy inputs. Then, we experimentally evaluate the performance of different energy-harvesting adaptive policies with organic solar cells and UWB-IR transceivers. Finally, we discuss the lessons learned during the prototype and testbed design process.",
"title": ""
},
{
"docid": "c44ef4f4242147affdbe613c70ec4a85",
"text": "The physical and generalized sensor models are two widely used imaging geometry models in the photogrammetry and remote sensing. Utilizing the rational function model (RFM) to replace physical sensor models in photogrammetric mapping is becoming a standard way for economical and fast mapping from high-resolution images. The RFM is accepted for imagery exploitation since high accuracies have been achieved in all stages of the photogrammetric process just as performed by rigorous sensor models. Thus it is likely to become a passkey in complex sensor modeling. Nowadays, commercial off-the-shelf (COTS) digital photogrammetric workstations have incorporated the RFM and related techniques. Following the increasing number of RFM related publications in recent years, this paper reviews the methods and key applications reported mainly over the past five years, and summarizes the essential progresses and address the future research directions in this field. These methods include the RFM solution, the terrainindependent and terrain-dependent computational scenarios, the direct and indirect RFM refinement methods, the photogrammetric exploitation techniques, and photogrammetric interoperability for cross sensor/platform imagery integration. Finally, several open questions regarding some aspects worth of further study are addressed.",
"title": ""
},
{
"docid": "d5b004af32bd747c2b5ad175975f8c06",
"text": "This paper presents a design of a quasi-millimeter wave wideband antenna array consisting of a leaf-shaped bowtie antenna (LSBA) and series-parallel feed networks in which parallel strip and microstrip lines are employed. A 16-element LSBA array is designed such that the antenna array operates over the frequency band of 22-30GHz. In order to demonstrate the effective performance of the presented configuration, characteristics of the designed LSBA array are evaluated by the finite-difference time domain (FDTD) analysis and measurements. Over the frequency range from 22GHz to 30GHz, the simulated reflection coefficient is observed to be less than -8dB, and the actual gain of 12.3-19.4dBi is obtained.",
"title": ""
},
{
"docid": "c117a5fc0118f3ea6c576bb334759d59",
"text": "While neural networks have achieved high accuracy on standard image classification benchmarks, their accuracy drops to nearly zero in the presence of small adversarial perturbations to test inputs. Defenses based on regularization and adversarial training have been proposed, but often followed by new, stronger attacks that defeat these defenses. Can we somehow end this arms race? In this work, we study this problem for neural networks with one hidden layer. We first propose a method based on a semidefinite relaxation that outputs a certificate that for a given network and test input, no attack can force the error to exceed a certain value. Second, as this certificate is differentiable, we jointly optimize it with the network parameters, providing an adaptive regularizer that encourages robustness against all attacks. On MNIST, our approach produces a network and a certificate that no attack that perturbs each pixel by at most = 0.1 can cause more than 35% test error.",
"title": ""
}
] |
scidocsrr
|
a04245add9a1b1f59b8f46260db49621
|
Supplementary material for “ Masked Autoregressive Flow for Density Estimation ”
|
[
{
"docid": "b6a8f45bd10c30040ed476b9d11aa908",
"text": "PixelCNNs are a recently proposed class of powerful generative models with tractable likelihood. Here we discuss our implementation of PixelCNNs which we make available at https://github.com/openai/pixel-cnn. Our implementation contains a number of modifications to the original model that both simplify its structure and improve its performance. 1) We use a discretized logistic mixture likelihood on the pixels, rather than a 256-way softmax, which we find to speed up training. 2) We condition on whole pixels, rather than R/G/B sub-pixels, simplifying the model structure. 3) We use downsampling to efficiently capture structure at multiple resolutions. 4) We introduce additional short-cut connections to further speed up optimization. 5) We regularize the model using dropout. Finally, we present state-of-the-art log likelihood results on CIFAR-10 to demonstrate the usefulness of these modifications.",
"title": ""
},
{
"docid": "4c21ec3a600d773ea16ce6c45df8fe9d",
"text": "The efficacy of particle identification is compared using artificial neutral networks and boosted decision trees. The comparison is performed in the context of the MiniBooNE, an experiment at Fermilab searching for neutrino oscillations. Based on studies of Monte Carlo samples of simulated data, particle identification with boosting algorithms has better performance than that with artificial neural networks for the MiniBooNE experiment. Although the tests in this paper were for one experiment, it is expected that boosting algorithms will find wide application in physics. r 2005 Elsevier B.V. All rights reserved. PACS: 29.85.+c; 02.70.Uu; 07.05.Mh; 14.60.Pq",
"title": ""
},
{
"docid": "3cdab5427efd08edc4f73266b7ed9176",
"text": "Unsupervised learning of probabilistic models is a central yet challenging problem in machine learning. Specifically, designing models with tractable learning, sampling, inference and evaluation is crucial in solving this task. We extend the space of such models using real-valued non-volume preserving (real NVP) transformations, a set of powerful, stably invertible, and learnable transformations, resulting in an unsupervised learning algorithm with exact log-likelihood computation, exact and efficient sampling, exact and efficient inference of latent variables, and an interpretable latent space. We demonstrate its ability to model natural images on four datasets through sampling, log-likelihood evaluation, and latent variable manipulations.",
"title": ""
}
] |
[
{
"docid": "2a5710aeaba7e39c5e08c1a5310c89f6",
"text": "We present an augmented reality system that supports human workers in a rapidly changing production environment. By providing spatially registered information on the task directly in the user's field of view the system can guide the user through unfamiliar tasks (e.g. assembly of new products) and visualize information directly in the spatial context were it is relevant. In the first version we present the user with picking and assembly instructions in an assembly application. In this paper we present the initial experience with this system, which has already been used successfully by several hundred users who had no previous experience in the assembly task.",
"title": ""
},
{
"docid": "527c4c17aadb23a991d85511004a7c4f",
"text": "Accurate and robust recognition and prediction of traffic situation plays an important role in autonomous driving, which is a prerequisite for risk assessment and effective decision making. Although there exist a lot of works dealing with modeling driver behavior of a single object, it remains a challenge to make predictions for multiple highly interactive agents that react to each other simultaneously. In this work, we propose a generic probabilistic hierarchical recognition and prediction framework which employs a two-layer Hidden Markov Model (TLHMM) to obtain the distribution of potential situations and a learning-based dynamic scene evolution model to sample a group of future trajectories. Instead of predicting motions of a single entity, we propose to get the joint distribution by modeling multiple interactive agents as a whole system. Moreover, due to the decoupling property of the layered structure, our model is suitable for knowledge transfer from simulation to real world applications as well as among different traffic scenarios, which can reduce the computational efforts of training and the demand for a large data amount. A case study of highway ramp merging scenario is demonstrated to verify the effectiveness and accuracy of the proposed framework.",
"title": ""
},
{
"docid": "08c26a40328648cf6a6d0a7efc3917a5",
"text": "Person re-identification (ReID) is an important task in video surveillance and has various applications. It is non-trivial due to complex background clutters, varying illumination conditions, and uncontrollable camera settings. Moreover, the person body misalignment caused by detectors or pose variations is sometimes too severe for feature matching across images. In this study, we propose a novel Convolutional Neural Network (CNN), called Spindle Net, based on human body region guided multi-stage feature decomposition and tree-structured competitive feature fusion. It is the first time human body structure information is considered in a CNN framework to facilitate feature learning. The proposed Spindle Net brings unique advantages: 1) it separately captures semantic features from different body regions thus the macro-and micro-body features can be well aligned across images, 2) the learned region features from different semantic regions are merged with a competitive scheme and discriminative features can be well preserved. State of the art performance can be achieved on multiple datasets by large margins. We further demonstrate the robustness and effectiveness of the proposed Spindle Net on our proposed dataset SenseReID without fine-tuning.",
"title": ""
},
{
"docid": "b2f0b5ef76d9e98e93e6c5ed64642584",
"text": "The yeast and fungal prions determine heritable and infectious traits, and are thus genes composed of protein. Most prions are inactive forms of a normal protein as it forms a self-propagating filamentous β-sheet-rich polymer structure called amyloid. Remarkably, a single prion protein sequence can form two or more faithfully inherited prion variants, in effect alleles of these genes. What protein structure explains this protein-based inheritance? Using solid-state nuclear magnetic resonance, we showed that the infectious amyloids of the prion domains of Ure2p, Sup35p and Rnq1p have an in-register parallel architecture. This structure explains how the amyloid filament ends can template the structure of a new protein as it joins the filament. The yeast prions [PSI(+)] and [URE3] are not found in wild strains, indicating that they are a disadvantage to the cell. Moreover, the prion domains of Ure2p and Sup35p have functions unrelated to prion formation, indicating that these domains are not present for the purpose of forming prions. Indeed, prion-forming ability is not conserved, even within Saccharomyces cerevisiae, suggesting that the rare formation of prions is a disease. The prion domain sequences generally vary more rapidly in evolution than does the remainder of the molecule, producing a barrier to prion transmission, perhaps selected in evolution by this protection.",
"title": ""
},
{
"docid": "7b88e651bf87e3a780fd1cf31b997bc5",
"text": "While the use of the internet and social media as a tool for extremists and terrorists has been well documented, understanding the mechanisms at work has been much more elusive. This paper begins with a grounded theory approach guided by a new theoretical approach to power that utilizes both terrorism cases and extremist social media groups to develop an explanatory model of radicalization. Preliminary hypotheses are developed, explored and refined in order to develop a comprehensive model which is then presented. This model utilizes and applies concepts from social theorist Michel Foucault, including the use of discourse and networked power relations in order to normalize and modify thoughts and behaviors. The internet is conceptualized as a type of institution in which this framework of power operates and seeks to recruit and radicalize. Overall, findings suggest that the explanatory model presented is a well suited, yet still incomplete in explaining the process of online radicalization.",
"title": ""
},
{
"docid": "d1ebf47c1f0b1d8572d526e9260dbd32",
"text": "In this paper, mortality in the immediate aftermath of an earthquake is studied on a worldwide scale using multivariate analysis. A statistical method is presented that analyzes reported earthquake fatalities as a function of a heterogeneous set of parameters selected on the basis of their presumed influence on earthquake mortality. The ensemble was compiled from demographic, seismic, and reported fatality data culled from available records of past earthquakes organized in a geographic information system. The authors consider the statistical relation between earthquake mortality and the available data ensemble, analyze the validity of the results in view of the parametric uncertainties, and propose a multivariate mortality analysis prediction method. The analysis reveals that, although the highest mortality rates are expected in poorly developed rural areas, high fatality counts can result from a wide range of mortality ratios that depend on the effective population size.",
"title": ""
},
{
"docid": "2e812c0a44832721fcbd7272f9f6a465",
"text": "Previous research has shown that people differ in their implicit theories about the essential characteristics of intelligence and emotions. Some people believe these characteristics to be predetermined and immutable (entity theorists), whereas others believe that these characteristics can be changed through learning and behavior training (incremental theorists). The present study provides evidence that in healthy adults (N = 688), implicit beliefs about emotions and emotional intelligence (EI) may influence performance on the ability-based Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT). Adults in our sample with incremental theories about emotions and EI scored higher on the MSCEIT than entity theorists, with implicit theories about EI showing a stronger relationship to scores than theories about emotions. Although our participants perceived both emotion and EI as malleable, they viewed emotions as more malleable than EI. Women and young adults in general were more likely to be incremental theorists than men and older adults. Furthermore, we found that emotion and EI theories mediated the relationship of gender and age with ability EI. Our findings suggest that people's implicit theories about EI may influence their emotional abilities, which may have important consequences for personal and professional EI training.",
"title": ""
},
{
"docid": "2f4cfa040664d08b1540677c8d72f962",
"text": "We study the problem of modeling spatiotemporal trajectories over long time horizons using expert demonstrations. For instance, in sports, agents often choose action sequences with long-term goals in mind, such as achieving a certain strategic position. Conventional policy learning approaches, such as those based on Markov decision processes, generally fail at learning cohesive long-term behavior in such high-dimensional state spaces, and are only effective when fairly myopic decisionmaking yields the desired behavior. The key difficulty is that conventional models are “single-scale” and only learn a single state-action policy. We instead propose a hierarchical policy class that automatically reasons about both long-term and shortterm goals, which we instantiate as a hierarchical neural network. We showcase our approach in a case study on learning to imitate demonstrated basketball trajectories, and show that it generates significantly more realistic trajectories compared to non-hierarchical baselines as judged by professional sports analysts.",
"title": ""
},
{
"docid": "83413682f018ae5aec9ec415679de940",
"text": "An 18-year-old female patient arrived at the emergency department complaining of abdominal pain and fullness after a heavy meal. Physical examination revealed she was filthy and cover in feces, and she experienced severe abdominal distension. She died in ED and a diagnostic autopsy examination was requested. At external examination, the pathologist observed a significant dilation of the anal sphincter and suspected sexual assault, thus alerting the Judicial Authority who assigned the case to our department for a forensic autopsy. During the autopsy, we observed anal orifice expansion without signs of violence; food was found in the pleural cavity. The stomach was hyper-distended and perforated at three different points as well as the diaphragm. The patient was suffering from anorexia nervosa with episodes of overeating followed by manual voiding of her feces from the anal cavity (thus explaining the anal dilatation). The forensic pathologists closed the case as an accidental death.",
"title": ""
},
{
"docid": "b692e35c404da653d27dc33c01867b6e",
"text": "We demonstrate that it is possible to perform automatic sentiment classification in the very noisy domain of customer feedback data. We show that by using large feature vectors in combination with feature reduction, we can train linear support vector machines that achieve high classification accuracy on data that present classification challenges even for a human annotator. We also show that, surprisingly, the addition of deep linguistic analysis features to a set of surface level word n-gram features contributes consistently to classification accuracy in this domain.",
"title": ""
},
{
"docid": "1701da2aed094fdcbfaca6c2252d2e53",
"text": "Event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames. These cameras do not suffer from motion blur and have a very high dynamic range, which enables them to provide reliable visual information during high-speed motions or in scenes characterized by high dynamic range. These features, along with a very low power consumption, make event cameras an ideal complement to standard cameras for VR/AR and video game applications. With these applications in mind, this paper tackles the problem of accurate, low-latency tracking of an event camera from an existing photometric depth map (i.e., intensity plus depth information) built via classic dense reconstruction pipelines. Our approach tracks the 6-DOF pose of the event camera upon the arrival of each event, thus virtually eliminating latency. We successfully evaluate the method in both indoor and outdoor scenes and show that—because of the technological advantages of the event camera—our pipeline works in scenes characterized by high-speed motion, which are still inaccessible to standard cameras.",
"title": ""
},
{
"docid": "1c79bf1b4dcad01f9afc54f467d8067f",
"text": "With the rapid growth of network bandwidth, increases in CPU cores on a single machine, and application API models demanding more short-lived connections, a scalable TCP stack is performance-critical. Although many clean-state designs have been proposed, production environments still call for a bottom-up parallel TCP stack design that is backward-compatible with existing applications.\n We present Fastsocket, a BSD Socket-compatible and scalable kernel socket design, which achieves table-level connection partition in TCP stack and guarantees connection locality for both passive and active connections. Fastsocket architecture is a ground up partition design, from NIC interrupts all the way up to applications, which naturally eliminates various lock contentions in the entire stack. Moreover, Fastsocket maintains the full functionality of the kernel TCP stack and BSD-socket-compatible API, and thus applications need no modifications.\n Our evaluations show that Fastsocket achieves a speedup of 20.4x on a 24-core machine under a workload of short-lived connections, outperforming the state-of-the-art Linux kernel TCP implementations. When scaling up to 24 CPU cores, Fastsocket increases the throughput of Nginx and HAProxy by 267% and 621% respectively compared with the base Linux kernel. We also demonstrate that Fastsocket can achieve scalability and preserve BSD socket API at the same time. Fastsocket is already deployed in the production environment of Sina WeiBo, serving 50 million daily active users and billions of requests per day.",
"title": ""
},
{
"docid": "92ac3bfdcf5e554152c4ce2e26b77315",
"text": "How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Our contributions is two-fold. First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator. Theoretical advantages are reflected in experimental results.",
"title": ""
},
{
"docid": "caf88f7fd5ec7f3a3499f46f541b985b",
"text": "Photo-based question answering is a useful way of finding information about physical objects. Current question answering (QA) systems are text-based and can be difficult to use when a question involves an object with distinct visual features. A photo-based QA system allows direct use of a photo to refer to the object. We develop a three-layer system architecture for photo-based QA that brings together recent technical achievements in question answering and image matching. The first, template-based QA layer matches a query photo to online images and extracts structured data from multimedia databases to answer questions about the photo. To simplify image matching, it exploits the question text to filter images based on categories and keywords. The second, information retrieval QA layer searches an internal repository of resolved photo-based questions to retrieve relevant answers. The third, human-computation QA layer leverages community experts to handle the most difficult cases. A series of experiments performed on a pilot dataset of 30,000 images of books, movie DVD covers, grocery items, and landmarks demonstrate the technical feasibility of this architecture. We present three prototypes to show how photo-based QA can be built into an online album, a text-based QA, and a mobile application.",
"title": ""
},
{
"docid": "78a2bf1c2edec7ec9eb48f8b07dc9e04",
"text": "The performance of the most commonly used metal antennas close to the human body is one of the limiting factors of the performance of bio-sensors and wireless body area networks (WBAN). Due to the high dielectric and conductivity contrast with respect to most parts of the human body (blood, skin, …), the range of most of the wireless sensors operating in RF and microwave frequencies is limited to 1–2 cm when attached to the body. In this paper, we introduce the very novel idea of liquid antennas, that is based on engineering the properties of liquids. This approach allows for the improvement of the range by a factor of 5–10 in a very easy-to-realize way, just modifying the salinity of the aqueous solution of the antenna. A similar methodology can be extended to the development of liquid RF electronics for implantable devices and wearable real-time bio-signal monitoring, since it can potentially lead to very flexible antenna and electronic configurations.",
"title": ""
},
{
"docid": "e8055c37b0082cff57e02389949fb7ca",
"text": "Distributed SDN controllers have been proposed to address performance and resilience issues. While approaches for datacenters are built on strongly-consistent state sharing among controllers, others for WAN and constrained networks rely on a loosely-consistent distributed state. In this paper, we address the problem of failover for distributed SDN controllers by proposing two strategies for neighbor active controllers to take over the control of orphan OpenFlow switches: (1) a greedy incorporation and (2) a pre-partitioning among controllers. We built a prototype with distributed Floodlight controllers to evaluate these strategies. The results show that the failover duration with the greedy approach is proportional to the quantity of orphan switches while the pre-partitioning approach, introducing a very small additional control traffic, enables to react quicker in less than 200ms.",
"title": ""
},
{
"docid": "7eed5e11e47807a3ff0af21461e88385",
"text": "We propose Attentive Regularization (AR), a method to constrain the activation maps of kernels in Convolutional Neural Networks (CNNs) to specific regions of interest (ROIs). Each kernel learns a location of specialization along with its weights through standard backpropagation. A differentiable attention mechanism requiring no additional supervision is used to optimize the ROIs. Traditional CNNs of different types and structures can be modified with this idea into equivalent Targeted Kernel Networks (TKNs), while keeping the network size nearly identical. By restricting kernel ROIs, we reduce the number of sliding convolutional operations performed throughout the network in its forward pass, speeding up both training and inference. We evaluate our proposed architecture on both synthetic and natural tasks across multiple domains. TKNs obtain significant improvements over baselines, requiring less computation (around an order of magnitude) while achieving superior performance.",
"title": ""
},
{
"docid": "5e4ef99cd48e385984509613b3697e37",
"text": "RC4 has been the most popular stream cipher in the history of symmetric key cryptography. Its internal state contains a permutation over all possible bytes from 0 to 255, and it attempts to generate a pseudo-random sequence of bytes (called keystream) by extracting elements of this permutation. Over the last twenty years, numerous cryptanalytic results on RC4 stream cipher have been published, many of which are based on non-random (biased) events involving the secret key, the state variables, and the keystream of the cipher. Though biases based on the secret key are common in RC4 literature, none of the existing ones depends on the length of the secret key. In the first part of this paper, we investigate the effect of RC4 keylength on its keystream, and report significant biases involving the length of the secret key. In the process, we prove the two known empirical biases that were experimentally reported and used in recent attacks against WEP and WPA by Sepehrdad, Vaudenay and Vuagnoux in EUROCRYPT 2011. After our current work, there remains no bias in the literature of WEP and WPA attacks without a proof. In the second part of the paper, we present theoretical proofs of some significant initial-round empirical biases observed by Sepehrdad, Vaudenay and Vuagnoux in SAC 2010. In the third part, we present the derivation of the complete probability distribution of the first byte of RC4 keystream, a problem left open for a decade since the observation by Mironov in CRYPTO 2002. Further, the existence of positive biases towards zero for all the initial bytes 3 to 255 is proved and exploited towards a generalized broadcast attack on RC4. We also investigate for long-term non-randomness in the keystream, and prove a new long-term bias of RC4.",
"title": ""
},
{
"docid": "95a376ec68ac3c4bd6b0fd236dca5bcd",
"text": "Long-term suppression of postprandial glucose concentration is an important dietary strategy for the prevention and treatment of type 2 diabetes. Because previous reports have suggested that seaweed may exert anti-diabetic effects in animals, the effects of Wakame or Mekabu intake with 200 g white rice, 50 g boiled soybeans, 60 g potatoes, and 40 g broccoli on postprandial glucose, insulin and free fatty acid levels were investigated in healthy subjects. Plasma glucose levels at 30 min and glucose area under the curve (AUC) at 0-30 min after the Mekabu meal were significantly lower than that after the control meal. Plasma glucose and glucose AUC were not different between the Wakame and control meals. Postprandial serum insulin and its AUC and free fatty acid concentration were not different among the three meals. In addition, fullness, satisfaction, and wellness scores were not different among the three meals. Thus, consumption of 70 g Mekabu with a white rice-based breakfast reduces postprandial glucose concentration.",
"title": ""
},
{
"docid": "c18903fad6b70086de9be9bafffb2b65",
"text": "In this work we determine how well the common objective image quality measures (Mean Squared Error (MSE), local MSE, Signalto-Noise Ratio (SNR), Structural Similarity Index (SSIM), Visual Signalto-Noise Ratio (VSNR) and Visual Information Fidelity (VIF)) predict subjective radiologists’ assessments for brain and body computed tomography (CT) images. A subjective experiment was designed where radiologists were asked to rate the quality of compressed medical images in a setting similar to clinical. We propose a modified Receiver Operating Characteristic (ROC) analysis method for comparison of the image quality measures where the “ground truth” is considered to be given by subjective scores. The best performance was achieved by the SSIM index and VIF for brain and body CT images. The worst results were observed for VSNR. We have utilized a logistic curve model which can be used to predict the subjective assessments with an objective criteria. This is a practical tool that can be used to determine the quality of medical images.",
"title": ""
}
] |
scidocsrr
|
b36e98ef15c9b9e28e08bba5a7f06d42
|
A New Trust Reputation System for E-Commerce Applications
|
[
{
"docid": "7a180e503a0b159d545047443524a05a",
"text": "We present two methods for determining the sentiment expressed by a movie review. The semantic orientation of a review can be positive, negative, or neutral. We examine the effect of valence shifters on classifying the reviews. We examine three types of valence shifters: negations, intensifiers, and diminishers. Negations are used to reverse the semantic polarity of a particular term, while intensifiers and diminishers are used to increase and decrease, respectively, the degree to which a term is positive or negative. The first method classifies reviews based on the number of positive and negative terms they contain. We use the General Inquirer to identify positive and negative terms, as well as negation terms, intensifiers, and diminishers. We also use positive and negative terms from other sources, including a dictionary of synonym differences and a very large Web corpus. To compute corpus-based semantic orientation values of terms, we use their association scores with a small group of positive and negative terms. We show that extending the term-counting method with contextual valence shifters improves the accuracy of the classification. The second method uses a Machine Learning algorithm, Support Vector Machines. We start with unigram features and then add bigrams that consist of a valence shifter and another word. The accuracy of classification is very high, and the valence shifter bigrams slightly improve it. The features that contribute to the high accuracy are the words in the lists of positive and negative terms. Previous work focused on either the term-counting method or the Machine Learning method. We show that combining the two methods achieves better results than either method alone.",
"title": ""
},
{
"docid": "02ac566cb1b11c3a3fe0edfde7181c32",
"text": "During the last decade text mining has become a widely used discipline utilizing statistical and machine learning methods. We present the tm package which provides a framework for text mining applications within R. We give a survey on text mining facilities in R and explain how typical application tasks can be carried out using our framework. We present techniques for count-based analysis methods, text clustering, text classification and string kernels.",
"title": ""
},
{
"docid": "c1c3b9393dd375b241f69f3f3cbf5acd",
"text": "The purpose of trust and reputation systems is to strengthen the quality of markets and communities by providing an incentive for good behaviour and quality services, and by sanctioning bad behaviour and low quality services. However, trust and reputation systems will only be able to produce this effect when they are sufficiently robust against strategic manipulation or direct attacks. Currently, robustness analysis of TRSs is mostly done through simple simulated scenarios implemented by the TRS designers themselves, and this can not be considered as reliable evidence for how these systems would perform in a realistic environment. In order to set robustness requirements it is important to know how important robustness really is in a particular community or market. This paper discusses research challenges for trust and reputation systems, and proposes a research agenda for developing sound and reliable robustness principles and mechanisms for trust and reputation systems.",
"title": ""
}
] |
[
{
"docid": "b4622c9a168cd6e6f852bcc640afb4b3",
"text": "New developments in osteotomy techniques and methods of fixation have caused a revival of interest of osteotomies around the knee. The current consensus on the indications, patient selection and the factors influencing the outcome after high tibial osteotomy is presented. This paper highlights recent research aimed at joint pressure redistribution, fixation stability and bone healing that has led to improved surgical techniques and a decrease of post-operative time to full weight-bearing.",
"title": ""
},
{
"docid": "80e4748abbb22d2bfefa5e5cbd78fb86",
"text": "A reimplementation of the UNIX file system is described. The reimplementation provides substantially higher throughput rates by using more flexible allocation policies that allow better locality of reference and can be adapted to a wide range of peripheral and processor characteristics. The new file system clusters data that is sequentially accessed and provides tw o block sizes to allo w fast access to lar ge files while not wasting large amounts of space for small files. File access rates of up to ten times f aster than the traditional UNIX file system are e xperienced. Longneeded enhancements to the programmers’ interface are discussed. These include a mechanism to place advisory locks on files, extensions of the name space across file systems, the ability to use long file names, and provisions for administrati ve control of resource usage. Revised February 18, 1984 CR",
"title": ""
},
{
"docid": "2891ce3327617e9e957488ea21e9a20c",
"text": "Recently, remote healthcare systems have received increasing attention in the last decade, explaining why intelligent systems with physiology signal monitoring for e-health care are an emerging area of development. Therefore, this study adopts a system which includes continuous collection and evaluation of multiple vital signs, long-term healthcare, and a cellular connection to a medical center in emergency case and it transfers all acquired raw data by the internet in normal case. The proposed system can continuously acquire four different physiological signs, for example, ECG, SpO2, temperature, and blood pressure and further relayed them to an intelligent data analysis scheme to diagnose abnormal pulses for exploring potential chronic diseases. The proposed system also has a friendly web-based interface for medical staff to observe immediate pulse signals for remote treatment. Once abnormal event happened or the request to real-time display vital signs is confirmed, all physiological signs will be immediately transmitted to remote medical server through both cellular networks and internet. Also data can be transmitted to a family member's mobile phone or doctor's phone through GPRS. A prototype of such system has been successfully developed and implemented, which will offer high standard of healthcare with a major reduction in cost for our society.",
"title": ""
},
{
"docid": "d26ce319db7b1583347d34ff8251fbc0",
"text": "The study of metacognition can shed light on some fundamental issues about consciousness and its role in behavior. Metacognition research concerns the processes by which people self reflect on their own cognitive and memory processes (monitoring), and how they put their metaknowledge to use in regulating their information processing and behavior (control). Experimental research on metacognition has addressed the following questions: First, what are the bases of metacognitive judgments that people make in monitoring their learning, remembering, and performance? Second, how valid are such judgments and what are the factors that affect the correspondence between subjective and objective indexes of knowing? Third, what are the processes that underlie the accuracy and inaccuracy of metacognitive judgments? Fourth, how does the output of metacognitive monitoring contribute to the strategic regulation of learning and remembering? Finally, how do the metacognitive processes of monitoring and control affect actual performance? Research addressing these questions is reviewed, emphasizing its implication for issues concerning consciousness, in particular, the genesis of subjective experience, the function of self-reflective consciousness, and the cause-and-effect relation between subjective experience and behavior.",
"title": ""
},
{
"docid": "8b1bd5243d4512324e451a780c1ec7d3",
"text": "If you get the printed book in on-line book store, you may also find the same problem. So, you must move store to store and search for the available there. But, it will not happen here. The book that we will offer right here is the soft file concept. This is what make you can easily find and get this fundamentals of computer security by reading this site. We offer you the best product, always and always.",
"title": ""
},
{
"docid": "9e11598c99b0345525e9df897e108e9c",
"text": "A new shielding scheme, active shielding, is proposed for reducing delays on interconnects. As opposed to conventional (passive) shielding, the active shielding approach helps to speed up signal propagation on a wire by ensuring in-phase switching of adjacent nets. Results show that the active shielding scheme improves performance by up to 16% compared to passive shields and up to 29% compared to unshielded wires. When signal slopes at the end of the line are compared, savings of up to 38% and 27% can be achieved when compared to passive shields and unshielded wires, respectively.",
"title": ""
},
{
"docid": "2a811ac141a9c5fb0cea4b644b406234",
"text": "Leadership is a process influence between leaders and subordinates where a leader attempts to influence the behaviour of subordinates to achieve the organizational goals. Organizational success in achieving its goals and objectives depends on the leaders of the organization and their leadership styles. By adopting the appropriate leadership styles, leaders can affect employee job satisfaction, commitment and productivity. Two hundred Malaysian executives working in public sectors voluntarily participated in this study. Two types of leadership styles, namely, transactional and transformational were found to have direct relationships with employees’ job satisfaction. The results showed that transformational leadership style has a stronger relationship with job satisfaction. This implies that transformational leadership is deemed suitable for managing government organizations. Implications of the findings were discussed further.",
"title": ""
},
{
"docid": "808ee7b581a0b359e1e43346dddb3751",
"text": "In timeseries classification, shapelets are subsequences of timeseries with high discriminative power. Existing methods perform a combinatorial search for shapelet discovery. Even with speedup heuristics such as pruning, clustering, and dimensionality reduction, the search remains computationally expensive. In this paper, we take an entirely different approach and reformulate the shapelet discovery task as a numerical optimization problem. In particular, the shapelet positions are learned by combining the generalized eigenvector method and fused lasso regularizer to encourage a sparse and blocky solution. Extensive experimental results show that the proposed method is orders of magnitudes faster than the state-of-the-art shapelet-based methods, while achieving comparable or even better classification accuracy.",
"title": ""
},
{
"docid": "a219afda822413bbed34a21145807b47",
"text": "In this work, the author implemented a NOVEL technique of multiple input multiple output (MIMO) orthogonal frequency division multiplexing (OFDM) based on space frequency block coding (SF-BC). Where, the implemented code is designed based on the QOC using the techniques of the reconfigurable antennas. The proposed system is implemented using MATLAB program, and the results showing best performance of a wireless communications system of higher coding gain and diversity.",
"title": ""
},
{
"docid": "ecf8cd68405d9c1c9c741ebbbc374faa",
"text": "Rapid development has promoted the 3D land use in the urban environment with the increasing requirements of the complex interweaving, such as dwellings, commercial areas, public transportation and urban utilities and infrastructure.3D cadastral object are features which concern unban planning, land resource and real estate of municipalities, whether they are on, obove or below the earth’s surface. In essence, the 3D cadastral objects are content and partition of urban geographic space, and they aim at registering legal status and property rights associated with land and other real estates or properties (Guo, Ying, 2010). It has become a huge challenge for governments to manage 3D land space breaking through the traditional 2D information system, which needs the support of 3D techniques.",
"title": ""
},
{
"docid": "4e55d02fdd8ff4c5739cc433f4f15e9b",
"text": "muchine, \" a progrum f o r uutomuticully generating syntacticully correct progrums (test cusrs> f o r checking compiler front ends. The notion of \" clynumic grammur \" is introduced und is used in a syntax-defining notution thut procides f o r context-sensitiuity. Exurnples demonstrute use of the syntax machine. The \" syntax machine \" discussed here automatically generates random test cases for any suitably defined programming language.' The test cases it produces are syntactically valid programs. But they are not \" meaningful, \" and if an attempt is made to execute them, the results are unpredictable and uncheckable. For this reason, they are less valuable than handwritten test cases. However, as an inexhaustible source of new test material, the syntax machine has shown itself to be a valuable tool. In the following sections, we characterize the use of this tool in testing different types of language processors, introduce the concept of \" dynamic grammar \" of a programming language, outline the structure of the system, and show what the syntax machine does by means of some examples. Test cases Test cases for a language processor are programs written following the rules of the language, as documented. The test cases, when processed, should give known results. If this does not happen, then either the processor or its documentation is in error. We can distinguish three categories of language processors and assess the usefulness of the syntax machine for testing them. For an interpreter, the syntax machine test cases are virtually useless,",
"title": ""
},
{
"docid": "96c14e4c9082920edb835e85ce99dc21",
"text": "When filling out privacy-related forms in public places such as hospitals or clinics, people usually are not aware that the sound of their handwriting leaks personal information. In this paper, we explore the possibility of eavesdropping on handwriting via nearby mobile devices based on audio signal processing and machine learning. By presenting a proof-of-concept system, WritingHacker, we show the usage of mobile devices to collect the sound of victims' handwriting, and to extract handwriting-specific features for machine learning based analysis. WritingHacker focuses on the situation where the victim's handwriting follows certain print style. An attacker can keep a mobile device, such as a common smart-phone, touching the desk used by the victim to record the audio signals of handwriting. Then the system can provide a word-level estimate for the content of the handwriting. To reduce the impacts of various writing habits and writing locations, the system utilizes the methods of letter clustering and dictionary filtering. Our prototype system's experimental results show that the accuracy of word recognition reaches around 50% - 60% under certain conditions, which reveals the danger of privacy leakage through the sound of handwriting.",
"title": ""
},
{
"docid": "e6dd43c6e5143c519b40ab423b403193",
"text": "Tables and forms are a very common way to organize information in structured documents. Their recognition is fundamental for the recognition of the documents. Indeed, the physical organization of a table or a form gives a lot of information concerning the logical meaning of the content. This chapter presents the different tasks that are related to the recognition of tables and forms and the associated well-known methods and remaining B. Coüasnon ( ) IRISA/INSA de Rennes, Rennes Cedex, France e-mail: [email protected] A. Lemaitre IRISA/Université Rennes 2, Rennes Cedex, France e-mail:[email protected] D. Doermann, K. Tombre (eds.), Handbook of Document Image Processing and Recognition, DOI 10.1007/978-0-85729-859-1 20, © Springer-Verlag London 2014 647 648 B. Coüasnon and A. Lemaitre challenges. Three main tasks are pointed out: the detection of tables in heterogeneous documents; the classification of tables and forms, according to predefined models; and the recognition of table and form contents. The complexity of these three tasks is related to the kind of studied document: image-based document or digital-born documents. At last, this chapter will introduce some existing systems for table and form analysis.",
"title": ""
},
{
"docid": "ca9be32beeb516e8b62a35550d63a399",
"text": "This paper examines the complex relationship that exists between poverty and natural resource degradation in developing countries. The rural poor are often concentrated in fragile, or less favorable, environmental areas. Consequently, their livelihoods can be intimately dependent on natural resource use and ecosystem services. The relationship between poverty and natural resource degradation may depend on a complex range of choices and tradeoffs available to the poor, which in the absence of capital, labor, and land markets, is affected by their access to outside employment and any natural resource endowments. The paper develops a poverty–environment model to characterize some of these linkages, and concludes by discussing policy implications and avenues for further research.",
"title": ""
},
{
"docid": "17f719b2bfe2057141e367afe39d7b28",
"text": "Identification of cancer subtypes plays an important role in revealing useful insights into disease pathogenesis and advancing personalized therapy. The recent development of high-throughput sequencing technologies has enabled the rapid collection of multi-platform genomic data (e.g., gene expression, miRNA expression, and DNA methylation) for the same set of tumor samples. Although numerous integrative clustering approaches have been developed to analyze cancer data, few of them are particularly designed to exploit both deep intrinsic statistical properties of each input modality and complex cross-modality correlations among multi-platform input data. In this paper, we propose a new machine learning model, called multimodal deep belief network (DBN), to cluster cancer patients from multi-platform observation data. In our integrative clustering framework, relationships among inherent features of each single modality are first encoded into multiple layers of hidden variables, and then a joint latent model is employed to fuse common features derived from multiple input modalities. A practical learning algorithm, called contrastive divergence (CD), is applied to infer the parameters of our multimodal DBN model in an unsupervised manner. Tests on two available cancer datasets show that our integrative data analysis approach can effectively extract a unified representation of latent features to capture both intra- and cross-modality correlations, and identify meaningful disease subtypes from multi-platform cancer data. In addition, our approach can identify key genes and miRNAs that may play distinct roles in the pathogenesis of different cancer subtypes. Among those key miRNAs, we found that the expression level of miR-29a is highly correlated with survival time in ovarian cancer patients. These results indicate that our multimodal DBN based data analysis approach may have practical applications in cancer pathogenesis studies and provide useful guidelines for personalized cancer therapy.",
"title": ""
},
{
"docid": "a1b20560bbd6124db8fc8b418cd1342c",
"text": "Feature selection is often an essential data processing step prior to applying a learning algorithm The re moval of irrelevant and redundant information often improves the performance of machine learning algo rithms There are two common approaches a wrapper uses the intended learning algorithm itself to evaluate the usefulness of features while a lter evaluates fea tures according to heuristics based on general charac teristics of the data The wrapper approach is generally considered to produce better feature subsets but runs much more slowly than a lter This paper describes a new lter approach to feature selection that uses a correlation based heuristic to evaluate the worth of fea ture subsets When applied as a data preprocessing step for two common machine learning algorithms the new method compares favourably with the wrapper but re quires much less computation",
"title": ""
},
{
"docid": "63ce782345f9814e57b801db773e77ba",
"text": "Depression is a serious illness that affects millions of people globally. In recent years, the task of automatic depression detection from speech has gained popularity. However, several challenges remain, including which features provide the best discrimination between classes or depression levels. Thus far, most research has focused on extracting features from the speech signal. However, the speech production system is complex and depression has been shown to affect many linguistic properties, including phonetics, semantics, and syntax. Therefore, we argue that researchers should look beyond the acoustic properties of speech by building features that capture syntactic structure and semantic content. We provide a comparative analyses of various features for depression detection. Using the same corpus, we evaluate how a system built on text-based features compares to a speech-based system. We find that a combination of features drawn from both speech and text lead to the best system performance.",
"title": ""
},
{
"docid": "5b382b27257cdb333b7e709c8138580f",
"text": "Proton++ is a declarative multitouch framework that allows developers to describe multitouch gestures as regular expressions of touch event symbols. It builds on the Proton framework by allowing developers to incorporate custom touch attributes directly into the gesture description. These custom attributes increase the expressivity of the gestures, while preserving the benefits of Proton: automatic gesture matching, static analysis of conflict detection, and graphical gesture creation. We demonstrate Proton++'s flexibility with several examples: a direction attribute for describing trajectory, a pinch attribute for detecting when touches move towards one another, a touch area attribute for simulating pressure, an orientation attribute for selecting menu items, and a screen location attribute for simulating hand ID. We also use screen location to simulate user ID and enable simultaneous recognition of gestures by multiple users. In addition, we show how to incorporate timing into Proton++ gestures by reporting touch events at a regular time interval. Finally, we present a user study that suggests that users are roughly four times faster at interpreting gestures written using Proton++ than those written in procedural event-handling code commonly used today.",
"title": ""
},
{
"docid": "0909789d0f2ad990ec7f530546cf56b1",
"text": "The outlier detection problem has important applications in the field of fraud detection, network robustness analysis, and intrusion detection. Most such applications are high dimensional domains in which the data can contain hundreds of dimensions. Many recent algorithms use concepts of proximity in order to find outliers based on their relationship to the rest of the data. However, in high dimensional space, the data is sparse and the notion of proximity fails to retain its meaningfulness. In fact, the sparsity of high dimensional data implies that every point is an almost equally good outlier from the perspective of proximity-based definitions. Consequently, for high dimensional data, the notion of finding meaningful outliers becomes substantially more complex and non-obvious. In this paper, we discuss new techniques for outlier detection which find the outliers by studying the behavior of projections from the data set.",
"title": ""
},
{
"docid": "49f1d3ebaf3bb3e575ac3e40101494d9",
"text": "This paper discusses the current status of research on fraud detection undertaken a.s part of the European Commissionfunded ACTS ASPECT (Advanced Security for Personal Communications Technologies) project, by Royal Holloway University of London. Using a recurrent neural network technique, we uniformly distribute prototypes over Toll Tickets. sampled from the U.K. network operator, Vodafone. The prototypes, which continue to adapt to cater for seasonal or long term trends, are used to classify incoming Toll Tickets to form statistical behaviour proFdes covering both the short and long-term past. These behaviour profiles, maintained as probability distributions, comprise the input to a differential analysis utilising a measure known as the HeUinger distance[5] between them as an alarm criteria. Fine tuning the system to minimise the number of false alarms poses a significant ask due to the low fraudulent/non fraudulent activity ratio. We benefit from using unsupervised learning in that no fraudulent examples ate requited for training. This is very relevant considering the currently secure nature of GSM where fraud scenarios, other than Subscription Fraud, have yet to manifest themselves. It is the aim of ASPECT to be prepared for the would-be fraudster for both GSM and UMTS, Introduction When a mobile originated phone call is made or various inter-call criteria are met he cells or switches that a mobile phone is communicating with produce information pertaining to the call attempt. These data records, for billing purposes, are referred to as Toll Tickets. Toll Tickets contain a wealth of information about the call so that charges can be made to the subscriber. By considering well studied fraud indicators these records can also be used to detect fraudulent activity. By this we mean i terrogating a series of recent Toll Tickets and comparing a function of the various fields with fixed criteria, known as triggers. A trigger, if activated, raises an alert status which cumulatively would lead to investigation by the network operator. Some xample fraud indicators are that of a new subscriber making long back-to-back international calls being indicative of direct call selling or short back-to-back calls to a single land number indicating an attack on a PABX system. Sometimes geographical information deduced from the cell sites visited in a call can indicate cloning. This can be detected through setting a velocity trap. Fixed trigger criteria can be set to catch such extremes of activity, but these absolute usage criteria cannot trap all types of fraud. An alternative approach to the problem is to perform a differential analysis. Here we develop behaviour profiles relating to the mobile phone’s activity and compare its most recent activities with a longer history of its usage. Techniques can then be derived to determine when the mobile phone’s behaviour changes ignificantly. One of the most common indicators of fraud is a significant change in behaviour. The performance expectations of such a system must be of prime concern when developing any fraud detection strategy. To implement a real time fraud detection tool on the Vodafone network in the U.K, it was estimated that, on average, the system would need to be able to process around 38 Toll Tickets per second. This figure varied with peak and off-peak usage and also had seasonal trends. The distribution of the times that calls are made and the duration of each call is highly skewed. 
Considering all calls that are made in the U.K., including the use of supplementary services, we found the average call duration to be less than eight seconds, hardly time to order a pizza. In this paper we present one of the methods developed under ASPECT that tackles the problem of skewed distributions and seasonal trends using a recurrent neural network technique that is based around unsupervised learning. We envisage this technique would form part of a larger fraud detection suite that also comprises a rule based fraud detection tool and a neural network fraud detection tool that uses supervised learning on a multi-layer perceptron. Each of the systems has its strengths and weaknesses but we anticipate that the hybrid system will combine their strengths. 9 From: AAAI Technical Report WS-97-07. Compilation copyright © 1997, AAAI (www.aaai.org). All rights reserved.",
"title": ""
}
] |
scidocsrr
|
2cc2e925a6c9e27a96631a977fe00740
|
Modular Architecture for StarCraft II with Deep Reinforcement Learning
|
[
{
"docid": "a9dfddc3812be19de67fc4ffbc2cad77",
"text": "Many real-world problems, such as network packet routing and the coordination of autonomous vehicles, are naturally modelled as cooperative multi-agent systems. There is a great need for new reinforcement learning methods that can efficiently learn decentralised policies for such systems. To this end, we propose a new multi-agent actor-critic method called counterfactual multi-agent (COMA) policy gradients. COMA uses a centralised critic to estimate the Q-function and decentralised actors to optimise the agents’ policies. In addition, to address the challenges of multi-agent credit assignment, it uses a counterfactual baseline that marginalises out a single agent’s action, while keeping the other agents’ actions fixed. COMA also uses a critic representation that allows the counterfactual baseline to be computed efficiently in a single forward pass. We evaluate COMA in the testbed of StarCraft unit micromanagement, using a decentralised variant with significant partial observability. COMA significantly improves average performance over other multi-agent actorcritic methods in this setting, and the best performing agents are competitive with state-of-the-art centralised controllers that get access to the full state.",
"title": ""
},
{
"docid": "d4a0b5558045245a55efbf9b71a84bc3",
"text": "A long-standing goal of artificial intelligence is an algorithm that learns, tabula rasa, superhuman proficiency in challenging domains. Recently, AlphaGo became the first program to defeat a world champion in the game of Go. The tree search in AlphaGo evaluated positions and selected moves using deep neural networks. These neural networks were trained by supervised learning from human expert moves, and by reinforcement learning from self-play. Here we introduce an algorithm based solely on reinforcement learning, without human data, guidance or domain knowledge beyond game rules. AlphaGo becomes its own teacher: a neural network is trained to predict AlphaGo’s own move selections and also the winner of AlphaGo’s games. This neural network improves the strength of the tree search, resulting in higher quality move selection and stronger self-play in the next iteration. Starting tabula rasa, our new program AlphaGo Zero achieved superhuman performance, winning 100–0 against the previously published, champion-defeating AlphaGo.",
"title": ""
},
{
"docid": "e45e49fb299659e2e71f5c4eb825aff6",
"text": "We propose a lifelong learning system that has the ability to reuse and transfer knowledge from one task to another while efficiently retaining the previously learned knowledgebase. Knowledge is transferred by learning reusable skills to solve tasks in Minecraft, a popular video game which is an unsolved and high-dimensional lifelong learning problem. These reusable skills, which we refer to as Deep Skill Networks, are then incorporated into our novel Hierarchical Deep Reinforcement Learning Network (H-DRLN) architecture using two techniques: (1) a deep skill array and (2) skill distillation, our novel variation of policy distillation (Rusu et al. 2015) for learning skills. Skill distillation enables the HDRLN to efficiently retain knowledge and therefore scale in lifelong learning, by accumulating knowledge and encapsulating multiple reusable skills into a single distilled network. The H-DRLN exhibits superior performance and lower learning sample complexity compared to the regular Deep Q Network (Mnih et al. 2015) in sub-domains of Minecraft.",
"title": ""
}
] |
[
{
"docid": "0a7558a172509707b33fcdfaafe0b732",
"text": "Cloud computing has established itself as an alternative IT infrastructure and service model. However, as with all logically centralized resource and service provisioning infrastructures, cloud does not handle well local issues involving a large number of networked elements (IoTs) and it is not responsive enough for many applications that require immediate attention of a local controller. Fog computing preserves many benefits of cloud computing and it is also in a good position to address these local and performance issues because its resources and specific services are virtualized and located at the edge of the customer premise. However, data security is a critical challenge in fog computing especially when fog nodes and their data move frequently in its environment. This paper addresses the data protection and the performance issues by 1) proposing a Region-Based Trust-Aware (RBTA) model for trust translation among fog nodes of regions, 2) introducing a Fog-based Privacy-aware Role Based Access Control (FPRBAC) for access control at fog nodes, and 3) developing a mobility management service to handle changes of users and fog devices' locations. The implementation results demonstrate the feasibility and the efficiency of our proposed framework.",
"title": ""
},
{
"docid": "f941c1f5e5acd9865e210b738ff1745a",
"text": "We describe a convolutional neural network that learns feature representations for short textual posts using hashtags as a supervised signal. The proposed approach is trained on up to 5.5 billion words predicting 100,000 possible hashtags. As well as strong performance on the hashtag prediction task itself, we show that its learned representation of text (ignoring the hashtag labels) is useful for other tasks as well. To that end, we present results on a document recommendation task, where it also outperforms a number of baselines.",
"title": ""
},
{
"docid": "548f43f2193cffc6711d8a15c00e8c3d",
"text": "Dither signals provide an effective way to compensate for nonlinearities in control systems. The seminal works by Zames and Shneydor, and more recently, by Mossaheb, present rigorous tools for systematic design of dithered systems. Their results rely, however, on a Lipschitz assumption relating to nonlinearity, and thus, do not cover important applications with discontinuities. This paper presents initial results on how to analyze and design dither in nonsmooth systems. In particular, it is shown that a dithered relay feedback system can be approximated by a smoothed system. Guidelines are given for tuning the amplitude and the period time of the dither signal, in order to stabilize the nonsmooth system.",
"title": ""
},
{
"docid": "48b2d263a0f547c5c284c25a9e43828e",
"text": "This paper presents hierarchical topic models for integrating sentiment analysis with collaborative filtering. Our goal is to automatically predict future reviews to a given author from previous reviews. For this goal, we focus on differentiating author's preference, while previous sentiment analysis models process these review articles without this difference. Therefore, we propose a Latent Evaluation Topic model (LET) that infer each author's preference by introducing novel latent variables into author and his/her document layer. Because these variables distinguish the variety of words in each article by merging similar word distributions, LET incorporates the difference of writers' preferences into sentiment analysis. Consequently LET can determine the attitude of writers, and predict their reviews based on like-minded writers' reviews in the collaborative filtering approach. Experiments on review articles show that the proposed model can reduce the dimensionality of reviews to the low-dimensional set of these latent variables, and is a significant improvement over standard sentiment analysis models and collaborative filtering algorithms.",
"title": ""
},
{
"docid": "8a80b9306082f3cf373e2e638c0ecd0b",
"text": "We propose a maximal figure-of-merit (MFoM) learning framework to directly maximize mean average precision (MAP) which is a key performance metric in many multi-class classification tasks. Conventional classifiers based on support vector machines cannot be easily adopted to optimize the MAP metric. On the other hand, classifiers based on deep neural networks (DNNs) have recently been shown to deliver a great discrimination capability in automatic speech recognition and image classification as well. However, DNNs are usually optimized with the minimum cross entropy criterion. In contrast to most conventional classification methods, our proposed approach can be formulated to embed DNNs and MAP into the objective function to be optimized during training. The combination of the proposed maximum MAP (MMAP) technique and DNNs introduces nonlinearity to the linear discriminant function (LDF) in order to increase the flexibility and discriminant power of the original MFoM-trained LDF based classifiers. Tested on both automatic image annotation and audio event classification, the experimental results show consistent improvements of MAP on both datasets when compared with other state-of-the-art classifiers without using MMAP.",
"title": ""
},
{
"docid": "e7a86eeb576d4aca3b5e98dc53fcb52d",
"text": "Dictionary methods for cross-language information retrieval give performance below that for mono-lingual retrieval. Failure to translate multi-term phrases has km shown to be one of the factors responsible for the errors associated with dictionary methods. First, we study the importance of phrasaI translation for this approach. Second, we explore the role of phrases in query expansion via local context analysis and local feedback and show how they can be used to significantly reduce the error associated with automatic dictionary translation.",
"title": ""
},
{
"docid": "1530571213fb98e163cb3cf45cfe9cc6",
"text": "We explore the properties of byte-level recurrent language models. When given sufficient amounts of capacity, training data, and compute time, the representations learned by these models include disentangled features corresponding to high-level concepts. Specifically, we find a single unit which performs sentiment analysis. These representations, learned in an unsupervised manner, achieve state of the art on the binary subset of the Stanford Sentiment Treebank. They are also very data efficient. When using only a handful of labeled examples, our approach matches the performance of strong baselines trained on full datasets. We also demonstrate the sentiment unit has a direct influence on the generative process of the model. Simply fixing its value to be positive or negative generates samples with the corresponding positive or negative sentiment.",
"title": ""
},
{
"docid": "065417a0c2e82cbd33798de1be98042f",
"text": "Deep neural networks usually require large labeled datasets to construct accurate models; however, in many real-world scenarios, such as medical image segmentation, labeling data are a time-consuming and costly human (expert) intelligent task. Semi-supervised methods leverage this issue by making use of a small labeled dataset and a larger set of unlabeled data. In this paper, we present a flexible framework for semi-supervised learning that combines the power of supervised methods that learn feature representations using state-of-the-art deep convolutional neural networks with the deeply embedded clustering algorithm that assigns data points to clusters based on their probability distributions and feature representations learned by the networks. Our proposed semi-supervised learning algorithm based on deeply embedded clustering (SSLDEC) learns feature representations via iterations by alternatively using labeled and unlabeled data points and computing target distributions from predictions. During this iterative procedure, the algorithm uses labeled samples to keep the model consistent and tuned with labeling, as it simultaneously learns to improve feature representation and predictions. The SSLDEC requires a few hyper-parameters and thus does not need large labeled validation sets, which addresses one of the main limitations of many semi-supervised learning algorithms. It is also flexible and can be used with many state-of-the-art deep neural network configurations for image classification and segmentation tasks. To this end, we implemented and tested our approach on benchmark image classification tasks as well as in a challenging medical image segmentation scenario. In benchmark classification tasks, the SSLDEC outperformed several state-of-the-art semi-supervised learning methods, achieving 0.46% error on MNIST with 1000 labeled points and 4.43% error on SVHN with 500 labeled points. In the iso-intense infant brain MRI tissue segmentation task, we implemented SSLDEC on a 3D densely connected fully convolutional neural network where we achieved significant improvement over supervised-only training as well as a semi-supervised method based on pseudo-labeling. Our results show that the SSLDEC can be effectively used to reduce the need for costly expert annotations, enhancing applications, such as automatic medical image segmentation.",
"title": ""
},
{
"docid": "63a548ee4f8857823e4bcc7ccbc31d36",
"text": "The growing amounts of textual data require automatic methods for structuring relevant information so that it can be further processed by computers and systematically accessed by humans. The scenario dealt with in this dissertation is known as Knowledge Base Population (KBP), where relational information about entities is retrieved from a large text collection and stored in a database, structured according to a prespecified schema. Most of the research in this dissertation is placed in the context of the KBP benchmark of the Text Analysis Conference (TAC KBP), which provides a test-bed to examine all steps in a complex end-to-end relation extraction setting. In this dissertation a new state of the art for the TAC KBP benchmark was achieved by focussing on the following research problems: (1) The KBP task was broken down into a modular pipeline of sub-problems, and the most pressing issues were identified and quantified at all steps. (2) The quality of semi-automatically generated training data was increased by developing noise-reduction methods, decreasing the influence of false-positive training examples. (3) A focus was laid on fine-grained entity type modelling, entity expansion, entity matching and tagging, to maintain as much recall as possible on the relational argument level. (4) A new set of effective methods for generating training data, encoding features and training relational classifiers was developed and compared with previous state-of-the-art methods.",
"title": ""
},
{
"docid": "3b26f9c91ee0eb76768403fcb9579003",
"text": "The major task of network embedding is to learn low-dimensional vector representations of social-network nodes. It facilitates many analytical tasks such as link prediction and node clustering and thus has attracted increasing attention. The majority of existing embedding algorithms are designed for unsigned social networks. However, many social media networks have both positive and negative links, for which unsigned algorithms have little utility. Recent findings in signed network analysis suggest that negative links have distinct properties and added value over positive links. This brings about both challenges and opportunities for signed network embedding. In addition, user attributes, which encode properties and interests of users, provide complementary information to network structures and have the potential to improve signed network embedding. Therefore, in this paper, we study the novel problem of signed social network embedding with attributes. We propose a novel framework SNEA, which exploits the network structure and user attributes simultaneously for network representation learning. Experimental results on link prediction and node clustering with real-world datasets demonstrate the effectiveness of SNEA.",
"title": ""
},
{
"docid": "ea5357c6a936ae63f1660d1d3a9501e7",
"text": "DESCARTES’ REDUCTIONIST PRINCIPLE HAS HAD A PROfound influence on medicine. Similar to repairing a clock in which each broken part is fixed in order, investigators have attempted to discover causal relationships among key components of an individual and to treat those components accordingly. For example, if most of the morbidity in patients with diabetes is caused by high blood glucose levels, then control of those levels should return the system to normal and the patient’s health problems should disappear. However, in one recent study this strategy of more intensive glucose control resulted in increased risk of death. Likewise, chemotherapy often initially reduces tumor size but also produces severe adverse effects leading to other complications, including the promotion of secondary tumors. Most important, little evidence exists that more aggressive chemotherapies prolong life for many patients. In fact, chemotherapies may have overall negative effects for some patients. Most medical treatments make sense based on research of specific molecular pathways, so why do unexpected consequences occur after years of treatment? More simply, does the treatment that addresses a specific disease-related component harm the individual as a whole? To address these questions, the conflict between reductionism and complex systems must be analyzed. With increasing technological capabilities, these systems can be examined in continuously smaller components, from organs to cells, cells to chromosomes, and from chromosomes to genes. Paradoxically, the success of science also leads to blind spots in thinking as scientists become increasingly reductionist and determinist. The expectation is that as the resolution of the analysis increases, so too will the quantity and quality of information. High-resolution studies focusing on the building blocks of a biological system provide specific targets on which molecular cures can be based. While the DNA sequence of the human gene set is known, the functions of these genes are not understood in the context ofadynamicnetworkandtheresultant functional relationship tohumandiseases.Mutations inmanygenesareknowntocontribute to cancers in experimental systems, but the common mutationsthatactuallycausecancercannotyetbedetermined. Many therapies such as antibiotics, pacemakers, blood transfusions, and organ transplantation have worked well using classic approaches. In these cases, interventions were successful in treating a specific part of a complex system without triggering system chaos in many patients. However, even for these relatively safe interventions, unpredictable risk factors still exist. For every intervention that works well there are many others that do not, most of which involve complicated pathways and multiple levels of interaction. Even apparent major successes of the past have developed problems, such as the emergence and potential spread of super pathogens resistant to available antibiotic arrays. One common feature of a complex system is its emergent properties—thecollectiveresultofdistinctandinteractiveproperties generated by the interaction of individual components. 
When parts change, the behavior of a system can sometimes be predicted—but often cannot be if the system exists on the “edge of chaos.” For example, a disconnect exists between the status of the parts (such as tumor response) and the systems behavior(suchasoverall survivalof thepatient).Furthermore, nonlinear responsesof a complexsystemcanundergosudden massive and stochastic changes in response to what may seem minor perturbations. This may occur despite the same system displaying regular and predictable behavior under other conditions. For example, patients can be harmed by an uncommonadverseeffectofacommonlyusedtreatmentwhenthesystemdisplayschaoticbehaviorundersomecircumstances.This stochastic effect is what causes surprise. Given that any medical intervention is a stress to the system and that multiple system levels can respond differently, researchers must consider the stochastic response of the entire human system to drug therapyrather thanfocusingsolelyonthetargetedorganorcell oroneparticularmolecularpathwayorspecificgene.Thesame approachisnecessaryformonitoringtheclinicalsafetyofadrug. Other challenging questions await consideration. Once an entire systemisalteredbydiseaseprogression,howshould the system be restored following replacement of a defective part? If a system is altered, should it be brought back to the previous status, or is there a new standard defining a new stable system?Thedevelopmentofmanydiseasescantakeyears,during which time the system has adapted to function in the altered environment. These changes are not restricted to a few clinicallymonitored factorsbut can involve thewhole system, which now has adapted a new homeostasis with new dynamic interactions. Restoring only a few factors without considering the entire system can often result in further stress to the system, which might trigger a decline in system chaos. For many disease conditions resulting from years of adaptation, gradual",
"title": ""
},
{
"docid": "d7d808e948467a1bb241143233bf8ee2",
"text": "We discuss and predict the evolution of Simultaneous Localisation and Mapping (SLAM) into a general geometric and semantic ‘Spatial AI’ perception capability for intelligent embodied devices. A big gap remains between the visual perception performance that devices such as augmented reality eyewear or comsumer robots will require and what is possible within the constraints imposed by real products. Co-design of algorithms, processors and sensors will be needed. We explore the computational structure of current and future Spatial AI algorithms and consider this within the landscape of ongoing hardware developments.",
"title": ""
},
{
"docid": "a6959cc988542a077058e57a5d2c2eff",
"text": "A green and reliable method using supercritical fluid extraction (SFE) and molecular distillation (MD) was optimized for the separation and purification of standardized typical volatile components fraction (STVCF) from turmeric to solve the shortage of reference compounds in quality control (QC) of volatile components. A high quality essential oil with 76.0% typical components of turmeric was extracted by SFE. A sequential distillation strategy was performed by MD. The total recovery and purity of prepared STVCF were 97.3% and 90.3%, respectively. Additionally, a strategy, i.e., STVCF-based qualification and quantitative evaluation of major bioactive analytes by multiple calibrated components, was proposed to easily and effectively control the quality of turmeric. Compared with the individual calibration curve method, the STVCF-based quantification method was demonstrated to be credible and was effectively adapted for solving the shortage of reference volatile compounds and improving the QC of typical volatile components in turmeric, especially its functional products.",
"title": ""
},
{
"docid": "f4617250b5654a673219d779952db35f",
"text": "Convolutional neural network (CNN) models have achieved tremendous success in many visual detection and recognition tasks. Unfortunately, visual tracking, a fundamental computer vision problem, is not handled well using the existing CNN models, because most object trackers implemented with CNN do not effectively leverage temporal and contextual information among consecutive frames. Recurrent neural network (RNN) models, on the other hand, are often used to process text and voice data due to their ability to learn intrinsic representations of sequential and temporal data. Here, we propose a novel neural network tracking model that is capable of integrating information over time and tracking a selected target in video. It comprises three components: a CNN extracting best tracking features in each video frame, an RNN constructing video memory state, and a reinforcement learning (RL) agent making target location decisions. The tracking problem is formulated as a decision-making process, and our model can be trained with RL algorithms to learn good tracking policies that pay attention to continuous, inter-frame correlation and maximize tracking performance in the long run. We compare our model with an existing neural-network based tracking method and show that the proposed tracking approach works well in various scenarios by performing rigorous validation experiments on artificial video sequences with ground truth. To the best of our knowledge, our tracker is the first neural-network tracker that combines convolutional and recurrent networks with RL algorithms.",
"title": ""
},
{
"docid": "01a4b2be52e379db6ace7fa8ed501805",
"text": "The goal of our work is to complete the depth channel of an RGB-D image. Commodity-grade depth cameras often fail to sense depth for shiny, bright, transparent, and distant surfaces. To address this problem, we train a deep network that takes an RGB image as input and predicts dense surface normals and occlusion boundaries. Those predictions are then combined with raw depth observations provided by the RGB-D camera to solve for depths for all pixels, including those missing in the original observation. This method was chosen over others (e.g., inpainting depths directly) as the result of extensive experiments with a new depth completion benchmark dataset, where holes are filled in training data through the rendering of surface reconstructions created from multiview RGB-D scans. Experiments with different network inputs, depth representations, loss functions, optimization methods, inpainting methods, and deep depth estimation networks show that our proposed approach provides better depth completions than these alternatives.",
"title": ""
},
{
"docid": "71022e2197bfb99bd081928cf162f58a",
"text": "Ophthalmology and visual health research have received relatively limited attention from the personalized medicine community, but this trend is rapidly changing. Postgenomics technologies such as proteomics are being utilized to establish a baseline biological variation map of the human eye and related tissues. In this context, the choroid is the vascular layer situated between the outer sclera and the inner retina. The choroidal circulation serves the photoreceptors and retinal pigment epithelium (RPE). The RPE is a layer of cuboidal epithelial cells adjacent to the neurosensory retina and maintains the outer limit of the blood-retina barrier. Abnormal changes in choroid-RPE layers have been associated with age-related macular degeneration. We report here the proteome of the healthy human choroid-RPE complex, using reverse phase liquid chromatography and mass spectrometry-based proteomics. A total of 5309 nonredundant proteins were identified. Functional analysis of the identified proteins further pointed to molecular targets related to protein metabolism, regulation of nucleic acid metabolism, transport, cell growth, and/or maintenance and immune response. The top canonical pathways in which the choroid proteins participated were integrin signaling, mitochondrial dysfunction, regulation of eIF4 and p70S6K signaling, and clathrin-mediated endocytosis signaling. This study illustrates the largest number of proteins identified in human choroid-RPE complex to date and might serve as a valuable resource for future investigations and biomarker discovery in support of postgenomics ophthalmology and precision medicine.",
"title": ""
},
{
"docid": "86fdb9b60508f87c0210623879185c8c",
"text": "This paper proposes a novel Hierarchical Parsing Net (HPN) for semantic scene parsing. Unlike previous methods, which separately classify each object, HPN leverages global scene semantic information and the context among multiple objects to enhance scene parsing. On the one hand, HPN uses the global scene category to constrain the semantic consistency between the scene and each object. On the other hand, the context among all objects is also modeled to avoid incompatible object predictions. Specifically, HPN consists of four steps. In the first step, we extract scene and local appearance features. Based on these appearance features, the second step is to encode a contextual feature for each object, which models both the scene-object context (the context between the scene and each object) and the interobject context (the context among different objects). In the third step, we classify the global scene and then use the scene classification loss and a backpropagation algorithm to constrain the scene feature encoding. In the fourth step, a label map for scene parsing is generated from the local appearance and contextual features. Our model outperforms many state-of-the-art deep scene parsing networks on five scene parsing databases.",
"title": ""
},
{
"docid": "e685a22b6f7b20fb1289923e86e467c5",
"text": "Nowadays, with the growth in the use of search engines, the extension of spying programs and anti -terrorism prevention, several researches focused on text analysis. In this sense, lemmatization and stemming are two common requirements of these researches. They include reducing different grammatical forms of a word and bring them to a common base form. In what follows, we will discuss these treatment methods on arabic text, especially the Khoja Stemmer, show their limits and provide new tools to improve it.",
"title": ""
},
{
"docid": "8a73a42bed30751cbb6798398b81571d",
"text": "In this paper, we study the problem of learning image classification models with label noise. Existing approaches depending on human supervision are generally not scalable as manually identifying correct or incorrect labels is time-consuming, whereas approaches not relying on human supervision are scalable but less effective. To reduce the amount of human supervision for label noise cleaning, we introduce CleanNet, a joint neural embedding network, which only requires a fraction of the classes being manually verified to provide the knowledge of label noise that can be transferred to other classes. We further integrate CleanNet and conventional convolutional neural network classifier into one framework for image classification learning. We demonstrate the effectiveness of the proposed algorithm on both of the label noise detection task and the image classification on noisy data task on several large-scale datasets. Experimental results show that CleanNet can reduce label noise detection error rate on held-out classes where no human supervision available by 41.5% compared to current weakly supervised methods. It also achieves 47% of the performance gain of verifying all images with only 3.2% images verified on an image classification task. Source code and dataset will be available at kuanghuei.github.io/CleanNetProject.",
"title": ""
}
] |
scidocsrr
|
3a89727104542a4a01df5ffb1bb1fc17
|
Graph-Preserving Sparse Nonnegative Matrix Factorization With Application to Facial Expression Recognition
|
[
{
"docid": "1c846a6fc89142f96707e8fd71e86818",
"text": "In this paper, we report our experiments on feature-based facial expression recognition within an architecture based on a two-layer perceptron. We investigate the use of two types of features extracted from face images: the geometric positions of a set of fiducial points on a face, and a set of multi-scale and multi-orientation Gabor wavelet coefficients at these points. They can be used either independently or jointly. The recognition performance with different types of features has been compared, which shows that Gabor wavelet coefficients are much more powerful than geometric positions. Furthermore, since the first layer of the perceptron actually performs a nonlinear reduction of the dimensionality of the feature space, we have also studied the desired number of hidden units, i.e., the appropriate dimension to represent a facial expression in order to achieve a good recognition rate. It turns out that five to seven hidden units are probably enough to represent the space of feature expressions. Then, we have investigated the importance of each individual fiducial point to facial expression recognition. Sensitivity analysis reveals that points on cheeks and on forehead carry little useful information. After discarding them, not only the computational efficiency increases, but also the generalization performance slightly improves. Finally, we have studied the significance of image scales. Experiments show that facial expression recognition is mainly a low frequency process, and a spatial resolution of 64 pixels 64 pixels is probably enough.",
"title": ""
},
{
"docid": "c30fe4a7563090638a3bcc943c1cb328",
"text": "In order to investigate the role of facial movement in the recognition of emotions, faces were covered with black makeup and white spots. Video recordings of such faces were played back so that only the white spots were visible. The results demonstrated that moving displays of happiness, sadness, fear, surprise, anger and disgust were recognized more accurately than static displays of the white spots at the apex of the expressions. This indicated that facial motion, in the absence of information about the shape and position of facial features, is informative about these basic emotions. Normally illuminated dynamic displays of these expressions, however, were recognized more accurately than displays of moving spots. The relative effectiveness of upper and lower facial areas for the recognition of these six emotions was also investigated using normally illuminated and spots-only displays. In both instances the results indicated that different facial regions are more informative for different emitions. The movement patterns characterizing the various emotional expressions as well as common confusions between emotions are also discussed.",
"title": ""
},
{
"docid": "da168a94f6642ee92454f2ea5380c7f3",
"text": "One of the central problems in machine learning and pattern recognition is to develop appropriate representations for complex data. We consider the problem of constructing a representation for data lying on a low-dimensional manifold embedded in a high-dimensional space. Drawing on the correspondence between the graph Laplacian, the Laplace Beltrami operator on the manifold, and the connections to the heat equation, we propose a geometrically motivated algorithm for representing the high-dimensional data. The algorithm provides a computationally efficient approach to nonlinear dimensionality reduction that has locality-preserving properties and a natural connection to clustering. Some potential applications and illustrative examples are discussed.",
"title": ""
},
{
"docid": "1e2768be2148ff1fd102c6621e8da14d",
"text": "Example-based learning for computer vision can be difficult when a large number of examples to represent each pattern or object class is not available. In such situations, learning from a small number of samples is of practical value. To study this issue, the task of face expression recognition with a small number of training images of each expression is considered. A new technique based on linear programming for both feature selection and classifier training is introduced. A pairwise framework for feature selection, instead of using all classes simultaneously, is presented. Experimental results compare the method with three others: a simplified Bayes classifier, support vector machine, and AdaBoost. Finally, each algorithm is analyzed and a new categorization of these algorithms is given, especially for learning from examples in the small sample case.",
"title": ""
},
{
"docid": "570bc6b72db11c32292f705378042089",
"text": "In this paper, we propose a novel method, called local nonnegative matrix factorization (LNMF), for learning spatially localized, parts-based subspace representation of visual patterns. An objective function is defined to impose localization constraint, in addition to the non-negativity constraint in the standard NMF [1]. This gives a set of bases which not only allows a non-subtractive (part-based) representation of images but also manifests localized features. An algorithm is presented for the learning of such basis components. Experimental results are presented to compare LNMF with the NMF and PCA methods for face representation and recognition, which demonstrates advantages of LNMF.",
"title": ""
}
] |
[
{
"docid": "3e4bc9dd99d094292b493687ccccab09",
"text": "With an increasing number of technologies supporting transactions over distance and replacing traditional forms of interaction, designing for trust in mediated interactions has become a key concern for researchers in HCI. While much of this research focuses on increasing users’ trust, we present a framework that shifts the perspective towards factors that support trustworthy behavior. In a second step, we analyze how the presence of these factors can be signaled. We argue that it is essential to take a systemic perspective for enabling well-placed trust and trustworthy behavior in the long term. For our analysis we draw on relevant research from sociology, economics, and psychology, as well as HCI. We identify contextual properties (motivation based on temporal, social, and institutional embeddedness) and the actor’s intrinsic properties (ability, and motivation based on internalized norms and benevolence) that form the basis of trustworthy behavior. Our analysis provides a frame of reference for the design of studies on trust in technologymediated interactions, as well as a guide for identifying trust requirements in design processes. We demonstrate the application of the framework in three scenarios: call centre interactions, B2C e-commerce, and voice-enabled online gaming.",
"title": ""
},
{
"docid": "d704917077795fbe16e52ea2385e19ef",
"text": "The objectives of this review were to summarize the evidence from randomized controlled trials (RCTs) on the effects of animal-assisted therapy (AAT). Studies were eligible if they were RCTs. Studies included one treatment group in which AAT was applied. We searched the following databases from 1990 up to October 31, 2012: MEDLINE via PubMed, CINAHL, Web of Science, Ichushi Web, GHL, WPRIM, and PsycINFO. We also searched all Cochrane Database up to October 31, 2012. Eleven RCTs were identified, and seven studies were about \"Mental and behavioral disorders\". Types of animal intervention were dog, cat, dolphin, bird, cow, rabbit, ferret, and guinea pig. The RCTs conducted have been of relatively low quality. We could not perform meta-analysis because of heterogeneity. In a study environment limited to the people who like animals, AAT may be an effective treatment for mental and behavioral disorders such as depression, schizophrenia, and alcohol/drug addictions, and is based on a holistic approach through interaction with animals in nature. To most effectively assess the potential benefits for AAT, it will be important for further research to utilize and describe (1) RCT methodology when appropriate, (2) reasons for non-participation, (3) intervention dose, (4) adverse effects and withdrawals, and (5) cost.",
"title": ""
},
{
"docid": "d9a87325efbd29520c37ec46531c6062",
"text": "Predicting the risk of potential diseases from Electronic Health Records (EHR) has attracted considerable attention in recent years, especially with the development of deep learning techniques. Compared with traditional machine learning models, deep learning based approaches achieve superior performance on risk prediction task. However, none of existing work explicitly takes prior medical knowledge (such as the relationships between diseases and corresponding risk factors) into account. In medical domain, knowledge is usually represented by discrete and arbitrary rules. Thus, how to integrate such medical rules into existing risk prediction models to improve the performance is a challenge. To tackle this challenge, we propose a novel and general framework called PRIME for risk prediction task, which can successfully incorporate discrete prior medical knowledge into all of the state-of-the-art predictive models using posterior regularization technique. Different from traditional posterior regularization, we do not need to manually set a bound for each piece of prior medical knowledge when modeling desired distribution of the target disease on patients. Moreover, the proposed PRIME can automatically learn the importance of different prior knowledge with a log-linear model.Experimental results on three real medical datasets demonstrate the effectiveness of the proposed framework for the task of risk prediction",
"title": ""
},
{
"docid": "2f6c2a4e83bf86b29fcff77d7937eded",
"text": "0957-4174/$ see front matter 2008 Elsevier Ltd. A doi:10.1016/j.eswa.2008.10.027 * Corresponding author. E-mail addresses: [email protected] (C (B. Diri). This paper provides a systematic review of previous software fault prediction studies with a specific focus on metrics, methods, and datasets. The review uses 74 software fault prediction papers in 11 journals and several conference proceedings. According to the review results, the usage percentage of public datasets increased significantly and the usage percentage of machine learning algorithms increased slightly since 2005. In addition, method-level metrics are still the most dominant metrics in fault prediction research area and machine learning algorithms are still the most popular methods for fault prediction. Researchers working on software fault prediction area should continue to use public datasets and machine learning algorithms to build better fault predictors. The usage percentage of class-level is beyond acceptable levels and they should be used much more than they are now in order to predict the faults earlier in design phase of software life cycle. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "f873e55f76905f465e17778f25ba2a79",
"text": "PURPOSE\nThe purpose of this study is to develop an automatic human movement classification system for the elderly using single waist-mounted tri-axial accelerometer.\n\n\nMETHODS\nReal-time movement classification algorithm was developed using a hierarchical binary tree, which can classify activities of daily living into four general states: (1) resting state such as sitting, lying, and standing; (2) locomotion state such as walking and running; (3) emergency state such as fall and (4) transition state such as sit to stand, stand to sit, stand to lie, lie to stand, sit to lie, and lie to sit. To evaluate the proposed algorithm, experiments were performed on five healthy young subjects with several activities, such as falls, walking, running, etc.\n\n\nRESULTS\nThe results of experiment showed that successful detection rate of the system for all activities were about 96%. To evaluate long-term monitoring, 3 h experiment in home environment was performed on one healthy subject and 98% of the movement was successfully classified.\n\n\nCONCLUSIONS\nThe results of experiment showed a possible use of this system which can monitor and classify the activities of daily living. For further improvement of the system, it is necessary to include more detailed classification algorithm to distinguish several daily activities.",
"title": ""
},
{
"docid": "f45d6d572325e20bad1eaffe5330f077",
"text": "Ongoing brain activity can be recorded as electroen-cephalograph (EEG) to discover the links between emotional states and brain activity. This study applied machine-learning algorithms to categorize EEG dynamics according to subject self-reported emotional states during music listening. A framework was proposed to optimize EEG-based emotion recognition by systematically 1) seeking emotion-specific EEG features and 2) exploring the efficacy of the classifiers. Support vector machine was employed to classify four emotional states (joy, anger, sadness, and pleasure) and obtained an averaged classification accuracy of 82.29% ± 3.06% across 26 subjects. Further, this study identified 30 subject-independent features that were most relevant to emotional processing across subjects and explored the feasibility of using fewer electrodes to characterize the EEG dynamics during music listening. The identified features were primarily derived from electrodes placed near the frontal and the parietal lobes, consistent with many of the findings in the literature. This study might lead to a practical system for noninvasive assessment of the emotional states in practical or clinical applications.",
"title": ""
},
{
"docid": "8813c7c18f0629680f537bdd0afcb1ba",
"text": "A fault-tolerant (FT) control approach for four-wheel independently-driven (4WID) electric vehicles is presented. An adaptive control based passive fault-tolerant controller is designed to ensure the system stability when an in-wheel motor/motor driver fault happens. As an over-actuated system, it is challenging to isolate the faulty wheel and accurately estimate the control gain of the faulty in-wheel motor for 4WID electric vehicles. An active fault diagnosis approach is thus proposed to isolate and evaluate the fault. Based on the estimated control gain of the faulty in-wheel motor, the control efforts of all the four wheels are redistributed to relieve the torque demand on the faulty wheel. Simulations using a high-fidelity, CarSim, full-vehicle model show the effectiveness of the proposed in-wheel motor/motor driver fault diagnosis and fault-tolerant control approach.",
"title": ""
},
{
"docid": "5a9c448b9ea2ee797250bbd156805f50",
"text": "Clothing segmentation is a challenging field of research which is rapidly gaining attention. This paper presents a system for semantic segmentation of primarily monochromatic clothing and printed/stitched textures in single images or live video. This is especially appealing to emerging augmented reality applications such as retexturing sports players’ shirts with localized adverts or statistics in TV/internet broadcasting. We initialise points on the upper body clothing by body fiducials rather than by applying distance metrics to a detected face. This helps prevent segmentation of the skin rather than clothing. We take advantage of hue and intensity histograms incorporating spatial priors to develop an efficient segmentation method. Evaluated against ground truth on a dataset of 100 people, mostly in groups, the accuracy has an average F-score of 0.97 with an approach which can be over 88% more efficient than the state of the art.",
"title": ""
},
{
"docid": "8ef6a44e42dbc3ba2418a5b72243cdd4",
"text": "This study aims to contribute empirical computational results to the understanding of tonality and harmonic structure. It analyses aspects of tonal harmony and harmonic patterns based on a statistical, computational corpus analysis of Bach’s chorales. This is carried out using a novel heuristic method of segmentation developed specifically for that purpose. Analyses of distributions of single pc sets, chord classes and pc set transitions reveal very different structural patterns in both modes, many, but not all of which accord with standard music theory. In addition, most frequent chord transitions are found to exhibit a large degree of asymmetry, or, directedness, in way that for two pc sets A,B the transition frequencies f(A→B) and f(B→A) may differ to a large extent. Distributions of unigrams and bigrams are found to follow a Zipf distribution, i.e. decay in frequency roughly according to 1/x which implies that the majority of the musical structure is governed by a few frequent elements. The findings provide evidence for an underlying harmonic syntax which results in distinct statistical patterns. A subsequent hierarchical cluster analysis of pc sets based on respective antecedent and consequent patterns finds that this information suffices to group chords into meaningful functional groups solely on intrinsic statistical grounds without regard to pitch",
"title": ""
},
{
"docid": "efc63a7feba2dad141177fcd0160f7e2",
"text": "Recently, very high resolution (VHR) panchromatic and multispectral (MS) remote-sensing images can be acquired easily. However, it is still a challenging task to fuse and classify these VHR images. Generally, there are two ways for the fusion and classification of panchromatic and MS images. One way is to use a panchromatic image to sharpen an MS image, and then classify a pan-sharpened MS image. Another way is to extract features from panchromatic and MS images, respectively, and then combine these features for classification. In this paper, we propose a superpixel-based multiple local convolution neural network (SML-CNN) model for panchromatic and MS images classification. In order to reduce the amount of input data for the CNN, we extend simple linear iterative clustering algorithm for segmenting MS images and generating superpixels. Superpixels are taken as the basic analysis unit instead of pixels. To make full advantage of the spatial-spectral and environment information of superpixels, a superpixel-based multiple local regions joint representation method is proposed. Then, an SML-CNN model is established to extract an efficient joint feature representation. A softmax layer is used to classify these features learned by multiple local CNN into different categories. Finally, in order to eliminate the adverse effects on the classification results within and between superpixels, we propose a multi-information modification strategy that combines the detailed information and semantic information to improve the classification performance. Experiments on the classification of Vancouver and Xi’an panchromatic and MS image data sets have demonstrated the effectiveness of the proposed approach.",
"title": ""
},
{
"docid": "22ddd01d6658567ef5417829ecfe1104",
"text": "Recently electrocorticography (ECoG) has emerged as a potential tool for Brain Computer Interfacing applications. In this paper, a continuous wavelet transform (CWT) based method is proposed for classifying ECoG motor imagery signals corresponding to left pinky and tongue movement. The total experiment is carried out with the publicly available benchmark BCI competition III, data set I. The L2 norms of the CWT coefficients obtained from ECoG signals are shown to be separable for the two classes of motor imagery signals. Then the L2 norm based features are subjected to principal component analysis, yielding a feature set with lower dimension. Among various types of classifiers used, support vector machine based classifiers have been shown to provide a good accuracy of 92% which is shown to be better than several existing techniques. In addition, unlike most of the existing methods, our proposed method involves no pre-processing and thus can have better potential for practical implementation while requiring much lower computational time in extracting the features.",
"title": ""
},
{
"docid": "c582e3c1f3896e5f86b0d322184582fd",
"text": "The interest for data mining techniques has increased tremendously during the past decades, and numerous classification techniques have been applied in a wide range of business applications. Hence, the need for adequate performance measures has become more important than ever. In this paper, a cost-benefit analysis framework is formalized in order to define performance measures which are aligned with the main objectives of the end users, i.e., profit maximization. A new performance measure is defined, the expected maximum profit criterion. This general framework is then applied to the customer churn problem with its particular cost-benefit structure. The advantage of this approach is that it assists companies with selecting the classifier which maximizes the profit. Moreover, it aids with the practical implementation in the sense that it provides guidance about the fraction of the customer base to be included in the retention campaign.",
"title": ""
},
{
"docid": "4088b1148b5631f91f012ddc700cc136",
"text": "BACKGROUND\nAny standard skin flap of the body including a detectable or identified perforator at its axis can be safely designed and harvested in a free-style fashion.\n\n\nMETHODS\nFifty-six local free-style perforator flaps in the head and neck region, 33 primary and 23 recycle flaps, were performed in 53 patients. The authors introduced the term \"recycle\" to describe a perforator flap harvested within the borders of a previously transferred flap. A Doppler device was routinely used preoperatively for locating perforators in the area adjacent to a given defect. The final flap design and degree of mobilization were decided intraoperatively, depending on the location of the most suitable perforator and the ability to achieve primary closure of the donor site. Based on clinical experience, the authors suggest a useful classification of local free-style perforator flaps.\n\n\nRESULTS\nAll primary and 20 of 23 recycle free-style perforator flaps survived completely, providing tension-free coverage and a pleasing final contour for patients. In the remaining three recycle cases, the skeletonization of the pedicle resulted in pedicle damage, because of surrounding postradiotherapy scarring and flap failure. All donor sites except one were closed primarily, and all of them healed without any complications.\n\n\nCONCLUSIONS\nThe free-style concept has significantly increased the potential and versatility of the standard local and recycled head and neck flap alternatives for moderate to large defects, providing a more robust, custom-made, tissue-sparing, and cosmetically superior outcome in a one-stage procedure, with minimal donor-site morbidity.\n\n\nCLINICAL QUESTION/LEVEL OF EVIDENCE\nTherapeutic, V.",
"title": ""
},
{
"docid": "f584b2d89bacacf31158496460d6f546",
"text": "Significant advances in clinical practice as well as basic and translational science were presented at the American Transplant Congress this year. Topics included innovative clinical trials to recent advances in our basic understanding of the scientific underpinnings of transplant immunology. Key areas of interest included the following: clinical trials utilizing hepatitis C virus-positive (HCV+ ) donors for HCV- recipients, the impact of the new allocation policies, normothermic perfusion, novel treatments for desensitization, attempts at precision medicine, advances in xenotransplantation, the role of mitochondria and exosomes in rejection, nanomedicine, and the impact of the microbiota on transplant outcomes. This review highlights some of the most interesting and noteworthy presentations from the meeting.",
"title": ""
},
{
"docid": "2dd3ca2e8e9bc9b6d9ab6d4e8c9c3974",
"text": "With the advancement of data acquisition techniques, tensor (multidimensional data) objects are increasingly accumulated and generated, for example, multichannel electroencephalographies, multiview images, and videos. In these applications, the tensor objects are usually nonnegative, since the physical signals are recorded. As the dimensionality of tensor objects is often very high, a dimension reduction technique becomes an important research topic of tensor data. From the perspective of geometry, high-dimensional objects often reside in a low-dimensional submanifold of the ambient space. In this paper, we propose a new approach to perform the dimension reduction for nonnegative tensor objects. Our idea is to use nonnegative Tucker decomposition (NTD) to obtain a set of core tensors of smaller sizes by finding a common set of projection matrices for tensor objects. To preserve geometric information in tensor data, we employ a manifold regularization term for the core tensors constructed in the Tucker decomposition. An algorithm called manifold regularization NTD (MR-NTD) is developed to solve the common projection matrices and core tensors in an alternating least squares manner. The convergence of the proposed algorithm is shown, and the computational complexity of the proposed method scales linearly with respect to the number of tensor objects and the size of the tensor objects, respectively. These theoretical results show that the proposed algorithm can be efficient. Extensive experimental results have been provided to further demonstrate the effectiveness and efficiency of the proposed MR-NTD algorithm.",
"title": ""
},
{
"docid": "8f79360872a095e634e04d34f0c3baea",
"text": "T paper formalizes and adapts the well-known concept of Pareto efficiency in the context of the popular robust optimization (RO) methodology for linear optimization problems. We argue that the classical RO paradigm need not produce solutions that possess the associated property of Pareto optimality, and we illustrate via examples how this could lead to inefficiencies and suboptimal performance in practice. We provide a basic theoretical characterization of Pareto robustly optimal (PRO) solutions and extend the RO framework by proposing practical methods that verify Pareto optimality and generate solutions that are PRO. Critically important, our methodology involves solving optimization problems that are of the same complexity as the underlying robust problems; hence, the potential improvements from our framework come at essentially limited extra computational cost. We perform numerical experiments drawn from three different application areas (portfolio optimization, inventory management, and project management), which demonstrate that PRO solutions have a significant potential upside compared with solutions obtained using classical RO methods.",
"title": ""
},
{
"docid": "fbcd8871bc3d8509687698073b97d5de",
"text": "A longstanding debate concerns the use of concrete versus abstract instructional materials, particularly in domains such as mathematics and science. Although decades of research have focused on the advantages and disadvantages of concrete and abstract materials considered independently, we argue for an approach that moves beyond this dichotomy and combines their advantages. Specifically, we recommend beginning with concrete materials and then explicitly and gradually fading to the more abstract. Theoretical benefits of this “concreteness fading” technique for mathematics and science instruction include (1) helping learners interpret ambiguous or opaque abstract symbols in terms of well-understood concrete objects, (2) providing embodied perceptual and physical experiences that can ground abstract thinking, (3) enabling learners to build up a store of memorable images that can be used when abstract symbols lose meaning, and (4) guiding learners to strip away extraneous concrete properties and distill the generic, generalizable properties. In these ways, concreteness fading provides advantages that go beyond the sum of the benefits of concrete and abstract materials.",
"title": ""
},
{
"docid": "9b8819832849177b5d7db29386936368",
"text": "Artemisinin resistance in Plasmodium falciparum threatens the remarkable efficacy of artemisinin-based combination therapies worldwide. Thus, greater insight into the resistance mechanism using monitoring tools is essential. The ring-stage survival assay is used for phenotyping artemisinin-resistance or decreased artemisinin sensitivity. Here, we review the progress of this measurement assay and explore its limitations and potential applications.",
"title": ""
},
{
"docid": "ee4c6084527c6099ea5394aec66ce171",
"text": "Gualzru’s path to the Advertisement World . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 Fernando Fernández, Moisés Mart́ınez, Ismael Garćıa-Varea, Jesús Mart́ınez-Gómez, Jose Pérez-Lorenzo, Raquel Viciana, Pablo Bustos, Luis J. Manso, Luis Calderita, Marco Antonio Gutiérrez Giraldo, Pedro Núñez, Antonio Bandera, Adrián Romero-Garcés, Juan Bandera and Rebeca Marfil",
"title": ""
},
{
"docid": "2f9e93892a013452df2cce84374ab7d7",
"text": "Minimum cut/maximum flow algorithms on graphs have emerged as an increasingly useful tool for exactor approximate energy minimization in low-level vision. The combinatorial optimization literature provides many min-cut/max-flow algorithms with different polynomial time complexity. Their practical efficiency, however, has to date been studied mainly outside the scope of computer vision. The goal of this paper is to provide an experimental comparison of the efficiency of min-cut/max flow algorithms for applications in vision. We compare the running times of several standard algorithms, as well as a new algorithm that we have recently developed. The algorithms we study include both Goldberg-Tarjan style \"push -relabel\" methods and algorithms based on Ford-Fulkerson style \"augmenting paths.\" We benchmark these algorithms on a number of typical graphs in the contexts of image restoration, stereo, and segmentation. In many cases, our new algorithm works several times faster than any of the other methods, making near real-time performance possible. An implementation of our max-flow/min-cut algorithm is available upon request for research purposes.",
"title": ""
}
] |
scidocsrr
|
c89c3b365965fb17e44173b4861bd31f
|
Flexible Spatio-Temporal Networks for Video Prediction
|
[
{
"docid": "3c3ae987e018322ca45b280c3d01eba8",
"text": "Boundary prediction in images as well as video has been a very active topic of research and organizing visual information into boundaries and segments is believed to be a corner stone of visual perception. While prior work has focused on predicting boundaries for observed frames, our work aims at predicting boundaries of future unobserved frames. This requires our model to learn about the fate of boundaries and extrapolate motion patterns. We experiment on established realworld video segmentation dataset, which provides a testbed for this new task. We show for the first time spatio-temporal boundary extrapolation in this challenging scenario. Furthermore, we show long-term prediction of boundaries in situations where the motion is governed by the laws of physics. We successfully predict boundaries in a billiard scenario without any assumptions of a strong parametric model or any object notion. We argue that our model has with minimalistic model assumptions derived a notion of “intuitive physics” that can be applied to novel scenes.",
"title": ""
},
{
"docid": "53f7958f77563b9dfaeedf38099cedf2",
"text": "The availability of new techniques and tools for Video Surveillance and the capability of storing huge amounts of visual data acquired by hundreds of cameras every day call for a convergence between pattern recognition, computer vision and multimedia paradigms. A clear need for this convergence is shown by new research projects which attempt to exploit both ontology-based retrieval and video analysis techniques also in the field of surveillance. This paper presents the ViSOR (Video Surveillance Online Repository) framework, designed with the aim of establishing an open platform for collecting, annotating, retrieving, and sharing surveillance videos, as well as evaluating the performance of automatic surveillance systems. Annotations are based on a reference ontology which has been defined integrating hundreds of concepts, some of them coming from the LSCOM and MediaMill ontologies. A new annotation classification schema is also provided, which is aimed at identifying the spatial, temporal and domain detail level used. The ViSOR web interface allows video browsing, querying by annotated concepts or by keywords, compressed video previewing, media downloading and uploading. Finally, ViSOR includes a performance evaluation desk which can be used to compare different annotations.",
"title": ""
}
] |
[
{
"docid": "36c4b2ab451c24d2d0d6abcbec491116",
"text": "A key advantage of scientific workflow systems over traditional scripting approaches is their ability to automatically record data and process dependencies introduced during workflow runs. This information is often represented through provenance graphs, which can be used by scientists to better understand, reproduce, and verify scientific results. However, while most systems record and store data and process dependencies, few provide easy-to-use and efficient approaches for accessing and querying provenance information. Instead, users formulate provenance graph queries directly against physical data representations (e.g., relational, XML, or RDF), leading to queries that are difficult to express and expensive to evaluate. We address these problems through a high-level query language tailored for expressing provenance graph queries. The language is based on a general model of provenance supporting scientific workflows that process XML data and employ update semantics. Query constructs are provided for querying both structure and lineage information. Unlike other languages that return sets of nodes as answers, our query language is closed, i.e., answers to lineage queries are sets of lineage dependencies (edges) allowing answers to be further queried. We provide a formal semantics for the language and present novel techniques for efficiently evaluating lineage queries. Experimental results on real and synthetic provenance traces demonstrate that our lineage based optimizations outperform an in-memory and standard database implementation by orders of magnitude. We also show that our strategies are feasible and can significantly reduce both provenance storage size and query execution time when compared with standard approaches.",
"title": ""
},
{
"docid": "ee0e4dda5654896a27fa6525c23199cc",
"text": "This paper addresses the task of designing a modular neural network architecture that jointly solves different tasks. As an example we use the tasks of depth estimation and semantic segmentation given a single RGB image. The main focus of this work is to analyze the cross-modality influence between depth and semantic prediction maps on their joint refinement. While most of the previous works solely focus on measuring improvements in accuracy, we propose a way to quantify the cross-modality influence. We show that there is a relationship between final accuracy and cross-modality influence, although not a simple linear one. Hence a larger cross-modality influence does not necessarily translate into an improved accuracy. We find that a beneficial balance between the cross-modality influences can be achieved by network architecture and conjecture that this relationship can be utilized to understand different network design choices. Towards this end we propose a Convolutional Neural Network (CNN) architecture that fuses the state-of-the-art results for depth estimation and semantic labeling. By balancing the cross-modality influences between depth and semantic prediction, we achieve improved results for both tasks using the NYU-Depth v2 benchmark.",
"title": ""
},
{
"docid": "28bbcecc552dfb78fa434005ae06bf40",
"text": "Prominent semantic annotations take an inclusive approach to argument span annotation, marking arguments as full constituency subtrees. Some works, however, showed that identifying a reduced argument span can be beneficial for various semantic tasks. While certain practical methods do extract reduced argument spans, such as in Open-IE , these solutions are often ad-hoc and system-dependent, with no commonly accepted standards. In this paper we propose a generic argument reduction criterion, along with an annotation procedure, and show that it can be consistently and intuitively annotated using the recent QA-SRL paradigm.",
"title": ""
},
{
"docid": "a56a95db6d9d0f0ccf26192b7e2322ff",
"text": "CRISPR-Cas9 is a versatile genome editing technology for studying the functions of genetic elements. To broadly enable the application of Cas9 in vivo, we established a Cre-dependent Cas9 knockin mouse. We demonstrated in vivo as well as ex vivo genome editing using adeno-associated virus (AAV)-, lentivirus-, or particle-mediated delivery of guide RNA in neurons, immune cells, and endothelial cells. Using these mice, we simultaneously modeled the dynamics of KRAS, p53, and LKB1, the top three significantly mutated genes in lung adenocarcinoma. Delivery of a single AAV vector in the lung generated loss-of-function mutations in p53 and Lkb1, as well as homology-directed repair-mediated Kras(G12D) mutations, leading to macroscopic tumors of adenocarcinoma pathology. Together, these results suggest that Cas9 mice empower a wide range of biological and disease modeling applications.",
"title": ""
},
{
"docid": "fa691b72e61685d0fa89bf7a821373da",
"text": "BACKGROUND\nStabilization of a pelvic discontinuity with a posterior column plate with or without an associated acetabular cage sometimes results in persistent micromotion across the discontinuity with late fatigue failure and component loosening. Acetabular distraction offers an alternative technique for reconstruction in cases of severe bone loss with an associated pelvic discontinuity.\n\n\nQUESTIONS/PURPOSES\nWe describe the acetabular distraction technique with porous tantalum components and evaluate its survival, function, and complication rate in patients undergoing revision for chronic pelvic discontinuity.\n\n\nMETHODS\nBetween 2002 and 2006, we treated 28 patients with a chronic pelvic discontinuity with acetabular reconstruction using acetabular distraction. A porous tantalum elliptical acetabular component was used alone or with an associated modular porous tantalum augment in all patients. Three patients died and five were lost to followup before 2 years. The remaining 20 patients were followed semiannually for a minimum of 2 years (average, 4.5 years; range, 2-7 years) with clinical (Merle d'Aubigné-Postel score) and radiographic (loosening, migration, failure) evaluation.\n\n\nRESULTS\nOne of the 20 patients required rerevision for aseptic loosening. Fifteen patients remained radiographically stable at last followup. Four patients had early migration of their acetabular component but thereafter remained clinically asymptomatic and radiographically stable. At latest followup, the average improvement in the patients not requiring rerevision using the modified Merle d'Aubigné-Postel score was 6.6 (range, 3.3-9.6). There were no postoperative dislocations; however, one patient had an infection, one a vascular injury, and one a bowel injury.\n\n\nCONCLUSIONS\nAcetabular distraction with porous tantalum components provides predictable pain relief and durability at 2- to 7-year followup when reconstructing severe acetabular defects with an associated pelvic discontinuity.\n\n\nLEVEL OF EVIDENCE\nLevel IV, therapeutic study. See Instructions for Authors for a complete description of levels of evidence.",
"title": ""
},
{
"docid": "51c4dd282e85db5741b65ae4386f6c48",
"text": "In this paper, we present an end-to-end approach to simultaneously learn spatio-temporal features and corresponding similarity metric for video-based person re-identification. Given the video sequence of a person, features from each frame that are extracted from all levels of a deep convolutional network can preserve a higher spatial resolution from which we can model finer motion patterns. These lowlevel visual percepts are leveraged into a variant of recurrent model to characterize the temporal variation between time-steps. Features from all time-steps are then summarized using temporal pooling to produce an overall feature representation for the complete sequence. The deep convolutional network, recurrent layer, and the temporal pooling are jointly trained to extract comparable hidden-unit representations from input pair of time series to compute their corresponding similarity value. The proposed framework combines time series modeling and metric learning to jointly learn relevant features and a good similarity measure between time sequences of person. Experiments demonstrate that our approach achieves the state-of-the-art performance for video-based person re-identification on iLIDS-VID and PRID 2011, the two primary public datasets for this purpose.",
"title": ""
},
{
"docid": "776688e1b33a5f5b11e0609d8b1b46d2",
"text": "Entity resolution (ER) is a process to identify records in information systems, which refer to the same real-world entity. Because in the two recent decades the data volume has grown so large, parallel techniques are called upon to satisfy the ER requirements of high performance and scalability. The development of parallel ER has reached a relatively prosperous stage, and has found its way into several applications. In this work, we first comprehensively survey the state of the art of parallel ER approaches. From the comprehensive overview, we then extract the classification criteria of parallel ER, classify and compare these approaches based on these criteria. Finally, we identify open research questions and challenges and discuss potential solutions and further research potentials in this field. TYPE OF PAPER AND",
"title": ""
},
{
"docid": "d5771929cdaf41ce059e00b35825adf2",
"text": "We develop a new collaborative filtering (CF) method that combines both previously known users’ preferences, i.e. standard CF, as well as product/user attributes, i.e. classical function approximation, to predict a given user’s interest in a particular product. Our method is a generalized low rank matrix completion problem, where we learn a function whose inputs are pairs of vectors – the standard low rank matrix completion problem being a special case where the inputs to the function are the row and column indices of the matrix. We solve this generalized matrix completion problem using tensor product kernels for which we also formally generalize standard kernel properties. Benchmark experiments on movie ratings show the advantages of our generalized matrix completion method over the standard matrix completion one with no information about movies or people, as well as over standard multi-task or single task learning methods.",
"title": ""
},
{
"docid": "d52a933abc629237853b41deb63a2022",
"text": "BACKGROUND\nTrastuzumab--a humanised monoclonal antibody against HER2--has been shown to improve disease-free survival after chemotherapy in women with HER2-positive early breast cancer. We investigated the drug's effect on overall survival after a median follow-up of 2 years in the Herceptin Adjuvant (HERA) study.\n\n\nMETHODS\nHERA is an international multicentre randomised trial that compared 1 or 2 years of trastuzumab treatment with observation alone after standard neoadjuvant or adjuvant chemotherapy in women with HER2-positive node positive or high-risk node negative breast cancer. 5102 women participated in the trial; we analysed data from 1703 women who had been randomised for treatment with trastuzumab for 1 year and 1698 women from the control group, with median follow-up of 23.5 months (range 0-48 months). The primary endpoint of the trial was disease-free survival. Here, we assess overall survival, a secondary endpoint. Analyses were done on an intent-to-treat basis. This trial is registered with the European Clinical Trials Database, number 2005-002385-11.\n\n\nFINDINGS\n97 (5.7%) patients randomised to observation alone and 58 (3.4%) patients randomised to 1 year of treatment with trastuzumab were lost to follow-up. 172 women stopped trastuzumab prematurely. 59 deaths were reported for trastuzumab and 90 in the control group. The unadjusted hazard ratio (HR) for the risk of death with trastuzumab compared with observation alone was 0.66 (95% CI 0.47-0.91; p=0.0115). 218 disease-free survival events were reported with trastuzumab compared with 321 in the control group. The unadjusted HR for the risk of an event with trastuzumab compared with observation alone was 0.64 (0.54-0.76; p<0.0001).\n\n\nINTERPRETATION\nOur results show that 1 year of treatment with trastuzumab after adjuvant chemotherapy has a significant overall survival benefit after a median follow-up of 2 years. The emergence of this benefit after only 2 years reinforces the importance of trastuzumab in the treatment of women with HER2-positive early breast cancer.",
"title": ""
},
{
"docid": "896edd4e7b3db05d67035a7159b927d6",
"text": "Chronic rhinosinusitis (CRS) is a heterogeneous disease characterized by local inflammation of the upper airways and sinuses which persists for at least 12 weeks. CRS can be divided into two phenotypes dependent on the presence of nasal polyps (NPs); CRS with NPs (CRSwNP) and CRS without NPs (CRSsNP). Immunological patterns in the two diseases are known to be different. Inflammation in CRSsNP is rarely investigated and limited studies show that CRSsNP is characterized by type 1 inflammation. Inflammation in CRSwNP is well investigated and CRSwNP in Western countries shows type 2 inflammation and eosinophilia in NPs. In contrast, mixed inflammatory patterns are found in CRSwNP in Asia and the ratio of eosinophilic NPs and non-eosinophilic NPs is almost 50:50 in these countries. Inflammation in eosinophilic NPs is mainly controlled by type 2 cytokines, IL-5 and IL-13, which can be produced from several immune cells including Th2 cells, mast cells and group 2 innate lymphoid cells (ILC2s) that are all elevated in eosinophilic NPs. IL-5 strongly induces eosinophilia. IL-13 activates macrophages, B cells and epithelial cells to induce recruitment of eosinophils and Th2 cells, IgE mediated reactions and remodeling. Epithelial derived cytokines, TSLP, IL-33 and IL-1 can directly and indirectly control type 2 cytokine production from these cells in eosinophilic NPs. Recent clinical trials showed the beneficial effect on eosinophilic NPs and/or asthma by monoclonal antibodies against IL-5, IL-4Rα, IgE and TSLP suggesting that they can be therapeutic targets for eosinophilic CRSwNP.",
"title": ""
},
{
"docid": "4d0522eed482d894b59cc7c3fed23d81",
"text": "Extracting knowledge by performing computations on graphs is becoming increasingly challenging as graphs grow in size. A standard approach distributes the graph over a cluster of nodes, but performing computations on a distributed graph is expensive if large amount of data have to be moved. Without partitioning the graph, communication quickly becomes a limiting factor in scaling the system up. Existing graph partitioning heuristics incur high computation and communication cost on large graphs, sometimes as high as the future computation itself. Observing that the graph has to be loaded into the cluster, we ask if the partitioning can be done at the same time with a lightweight streaming algorithm.\n We propose natural, simple heuristics and compare their performance to hashing and METIS, a fast, offline heuristic. We show on a large collection of graph datasets that our heuristics are a significant improvement, with the best obtaining an average gain of 76%. The heuristics are scalable in the size of the graphs and the number of partitions. Using our streaming partitioning methods, we are able to speed up PageRank computations on Spark, a distributed computation system, by 18% to 39% for large social networks.",
"title": ""
},
{
"docid": "73b4cceb1546a94260c75ae8bed8edd8",
"text": "We address the problem of distance metric learning (DML), defined as learning a distance consistent with a notion of semantic similarity. Traditionally, for this problem supervision is expressed in the form of sets of points that follow an ordinal relationship – an anchor point x is similar to a set of positive points Y , and dissimilar to a set of negative points Z, and a loss defined over these distances is minimized. While the specifics of the optimization differ, in this work we collectively call this type of supervision Triplets and all methods that follow this pattern Triplet-Based methods. These methods are challenging to optimize. A main issue is the need for finding informative triplets, which is usually achieved by a variety of tricks such as increasing the batch size, hard or semi-hard triplet mining, etc. Even with these tricks, the convergence rate of such methods is slow. In this paper we propose to optimize the triplet loss on a different space of triplets, consisting of an anchor data point and similar and dissimilar proxy points which are learned as well. These proxies approximate the original data points, so that a triplet loss over the proxies is a tight upper bound of the original loss. This proxy-based loss is empirically better behaved. As a result, the proxy-loss improves on state-of-art results for three standard zero-shot learning datasets, by up to 15% points, while converging three times as fast as other triplet-based losses.",
"title": ""
},
{
"docid": "73fdbdbff06b57195cde51ab5135ccbe",
"text": "1 Abstract This paper describes five widely-applicable business strategy patterns. The initiate patterns where inspired Michael Porter's work on competitive strategy (1980). By applying the pattern form we are able to explore the strategies and consequences in a fresh light. The patterns form part of a larger endeavour to apply pattern thinking to the business domain. This endeavour seeks to map the business domain in patterns, this involves develop patterns, possibly based on existing literature, and mapping existing patterns into a coherent model of the business domain. If you find the paper interesting you might be interested in some more patterns that are currently (May 2005) in development. These describe in more detail how these strategies can be implemented: This paper is one of the most downloaded pieces on my website. I'd be interested to know more about who is downloading the paper, what use your making of it and any comments you have on [email protected]. Cost Leadership Build an organization that can produce your chosen product more cheaply than anyone else. You can then choose to undercut the opposition (and sell more) or sell at the same price (and make more profit per unit.) Differentiated Product Build a product that fulfils the same functions as your competitors but is clearly different, e.g. it is better quality, novel design, or carries a brand name. Customer will be prepared to pay more for your product than the competition. Market Focus You can't compete directly on cost or differentiation with the market leader; so, focus on a niche in the market. The niche will be smaller than the overall market (so sales will be lower) but the customer requirements will be different, serve these customers requirements better then the mass market and they will buy from you again and again. Sweet Spot Customers don't always want the best or the cheapest, so, produce a product that combines elements of differentiation with reasonable cost so you offer superior value. However, be careful, customer tastes",
"title": ""
},
{
"docid": "d6379e449f1b7c6d845a004c59c1023c",
"text": "Phase-shifted ZVS PWM full-bridge converter realizes ZVS and eliminates the voltage oscillation caused by the reverse recovery of the rectifier diodes by introducing a resonant inductance and two clamping diodes. This paper improves the converter just by exchanging the position of the resonant inductance and the transformer such that the transformer is connected with the lagging leg. The improved converter has several advantages over the original counterpart, e.g., the clamping diodes conduct only once in a switching cycle, and the resonant inductance current is smaller in zero state, leading to a higher efficiency and reduced duty cycle loss. A blocking capacitor is usually introduced to the primary side to prevent the transformer from saturating, this paper analyzes the effects of the blocking capacitor in different positions, and a best scheme is determined. A 2850 W prototype converter is built to verify the effectiveness of the improved converter and the best scheme for the blocking capacitor.",
"title": ""
},
{
"docid": "cdb295a5a98da527a244d9b9f490407e",
"text": "The Toggle-based <italic>X</italic>-masking method requires a single toggle at a given cycle, there is a chance that non-<italic>X</italic> values are also masked. Hence, the non-<italic>X</italic> value over-masking problem may cause a fault coverage degradation. In this paper, a scan chain partitioning scheme is described to alleviate non-<italic>X </italic> bit over-masking problem arising from Toggle-based <italic>X</italic>-Masking method. The scan chain partitioning method finds a scan chain combination that gives the least toggling conflicts. The experimental results show that the amount of over-masked bits is significantly reduced, and it is further reduced when the proposed method is incorporated with <italic>X</italic>-canceling method. However, as the number of scan chain partitions increases, the control data for decoder increases. To reduce a control data overhead, this paper exploits a Huffman coding based data compression. Assuming two partitions, the size of control bits is even smaller than the conventional <italic>X </italic>-toggling method that uses only one decoder. In addition, selection rules of <italic>X</italic>-bits delivered to <italic>X</italic>-Canceling MISR are also proposed. With the selection rules, a significant test time increase can be prevented.",
"title": ""
},
{
"docid": "b33e896a23f27a81f04aaeaff2f2350c",
"text": "Nowadays it has become increasingly common for family members to be distributed in different time zones. These time differences pose specific challenges for communication within the family and result in different communication practices to cope with them. To gain an understanding of current challenges and practices, we interviewed people who regularly communicate with immediate family members living in other time zones. We report primary findings from the interviews, and identify design opportunities for improving the experience of cross time zone family communication.",
"title": ""
},
{
"docid": "cfcae9b30fda24358e79e4e664ed747d",
"text": "Automated driving is predicted to enhance traffic safety, transport efficiency, and driver comfort. To extend the capability of current advanced driver assistance systems, and eventually realize fully automated driving, the intelligent vehicle system must have the ability to plan different maneuvers while adapting to the surrounding traffic environment. This paper presents an algorithm for longitudinal and lateral trajectory planning for automated driving maneuvers where the vehicle does not have right of way, i.e., yielding maneuvers. Such maneuvers include, e.g., lane change, roundabout entry, and intersection crossing. In the proposed approach, the traffic environment which the vehicle must traverse is incorporated as constraints on its longitudinal and lateral positions. The trajectory planning problem can thereby be formulated as two loosely coupled low-complexity model predictive control problems for longitudinal and lateral motion. Simulation results demonstrate the ability of the proposed trajectory planning algorithm to generate smooth collision-free maneuvers which are appropriate for various traffic situations.",
"title": ""
},
{
"docid": "5ea59255b0ffd15285477fe5b997d48d",
"text": "Gastric cancer in humans arises in the setting of oxyntic atrophy (parietal cell loss) and attendant hyperplastic and metaplastic lineage changes within the gastric mucosa. Helicobacter infection in mice and humans leads to spasmolytic polypeptide-expressing metaplasia (SPEM). In a number of mouse models, SPEM arises after oxyntic atrophy. In mice treated with the parietal cell toxic protonophore DMP-777, SPEM appears to arise from the transdifferentiation of chief cells. These results support the concept that intrinsic mucosal influences regulate and modulate the appearance of gastric metaplasia even in the absence of significant inflammation, whereas chronic inflammation is required for the further neoplastic transition.",
"title": ""
},
{
"docid": "cc15583675d6b19fbd9a10f06876a61e",
"text": "Matrix factorization approaches to relation extraction provide several attractive features: they support distant supervision, handle open schemas, and leverage unlabeled data. Unfortunately, these methods share a shortcoming with all other distantly supervised approaches: they cannot learn to extract target relations without existing data in the knowledge base, and likewise, these models are inaccurate for relations with sparse data. Rule-based extractors, on the other hand, can be easily extended to novel relations and improved for existing but inaccurate relations, through first-order formulae that capture auxiliary domain knowledge. However, usually a large set of such formulae is necessary to achieve generalization. In this paper, we introduce a paradigm for learning low-dimensional embeddings of entity-pairs and relations that combine the advantages of matrix factorization with first-order logic domain knowledge. We introduce simple approaches for estimating such embeddings, as well as a novel training algorithm to jointly optimize over factual and first-order logic information. Our results show that this method is able to learn accurate extractors with little or no distant supervision alignments, while at the same time generalizing to textual patterns that do not appear in the formulae.",
"title": ""
},
{
"docid": "cf95d41dc5a2bcc31b691c04e3fb8b96",
"text": "Resection of pancreas, in particular pancreaticoduodenectomy, is a complex procedure, commonly performed in appropriately selected patients with benign and malignant disease of the pancreas and periampullary region. Despite significant improvements in the safety and efficacy of pancreatic surgery, pancreaticoenteric anastomosis continues to be the \"Achilles heel\" of pancreaticoduodenectomy, due to its association with a measurable risk of leakage or failure of healing, leading to pancreatic fistula. The morbidity rate after pancreaticoduodenectomy remains high in the range of 30% to 65%, although the mortality has significantly dropped to below 5%. Most of these complications are related to pancreatic fistula, with serious complications of intra-abdominal abscess, postoperative bleeding, and multiorgan failure. Several pharmacological and technical interventions have been suggested to decrease the pancreatic fistula rate, but the results have been controversial. This paper considers definition and classification of pancreatic fistula, risk factors, and preventive approach and offers management strategy when they do occur.",
"title": ""
}
] |
scidocsrr
|
f8a56babbb0a788a5a5846259882844d
|
Improving Security Level through Obfuscation Technique for Source Code Protection using AES Algorithm
|
[
{
"docid": "fe944f1845eca3b0c252ada2c0306d61",
"text": "Now a days sharing the information over internet is becoming a critical issue due to security problems. Hence more techniques are needed to protect the shared data in an unsecured channel. The present work focus on combination of cryptography and steganography to secure the data while transmitting in the network. Firstly the data which is to be transmitted from sender to receiver in the network must be encrypted using the encrypted algorithm in cryptography .Secondly the encrypted data must be hidden in an image or video or an audio file with help of steganographic algorithm. Thirdly by using decryption technique the receiver can view the original data from the hidden image or video or audio file. Transmitting data or document can be done through these ways will be secured. In this paper we implemented three encrypt techniques like DES, AES and RSA algorithm along with steganographic algorithm like LSB substitution technique and compared their performance of encrypt techniques based on the analysis of its stimulated time at the time of encryption and decryption process and also its buffer size experimentally. The entire process has done in C#.",
"title": ""
},
{
"docid": "24fc1997724932c6ddc3311a529d7505",
"text": "In these days securing a network is an important issue. Many techniques are provided to secure network. Cryptographic is a technique of transforming a message into such form which is unreadable, and then retransforming that message back to its original form. Cryptography works in two techniques: symmetric key also known as secret-key cryptography algorithms and asymmetric key also known as public-key cryptography algorithms. In this paper we are reviewing different symmetric and asymmetric algorithms.",
"title": ""
},
{
"docid": "395dcc7c09562f358c07af9c999fbdc7",
"text": "Protecting source code against reverse engineering and theft is an important problem. The goal is to carry out computations using confidential algorithms on an untrusted party while ensuring confidentiality of algorithms. This problem has been addressed for Boolean circuits known as ‘circuit privacy’. Circuits corresponding to real-world programs are impractical. Well-known obfuscation techniques are highly practicable, but provide only limited security, e.g., no piracy protection. In this work, we modify source code yielding programs with adjustable performance and security guarantees ranging from indistinguishability obfuscators to (non-secure) ordinary obfuscation. The idea is to artificially generate ‘misleading’ statements. Their results are combined with the outcome of a confidential statement using encrypted selector variables. Thus, an attacker must ‘guess’ the encrypted selector variables to disguise the confidential source code. We evaluated our method using more than ten programmers as well as pattern mining across open source code repositories to gain insights of (micro-)coding patterns that are relevant for generating misleading statements. The evaluation reveals that our approach is effective in that it successfully preserves source code confidentiality.",
"title": ""
}
] |
[
{
"docid": "806088642828d5064e0b52f3c08f6ce9",
"text": "We introduce a simple new regularizer for auto-encoders whose hidden-unit activation functions contain at least one zero-gradient (saturated) region. This regularizer explicitly encourages activations in the saturated region(s) of the corresponding activation function. We call these Saturating Auto-Encoders (SATAE). We show that the saturation regularizer explicitly limits the SATAE’s ability to reconstruct inputs which are not near the data manifold. Furthermore, we show that a wide variety of features can be learned when different activation functions are used. Finally, connections are established with the Contractive and Sparse Auto-Encoders.",
"title": ""
},
{
"docid": "d6e76bfeeb127addcbe2eb77b1b0ad7e",
"text": "The choice of modeling units is critical to automatic speech recognition (ASR) tasks. Conventional ASR systems typically choose context-dependent states (CD-states) or contextdependent phonemes (CD-phonemes) as their modeling units. However, it has been challenged by sequence-to-sequence attention-based models, which integrate an acoustic, pronunciation and language model into a single neural network. On English ASR tasks, previous attempts have already shown that the modeling unit of graphemes can outperform that of phonemes by sequence-to-sequence attention-based model. In this paper, we are concerned with modeling units on Mandarin Chinese ASR tasks using sequence-to-sequence attention-based models with the Transformer. Five modeling units are explored including context-independent phonemes (CI-phonemes), syllables, words, sub-words and characters. Experiments on HKUST datasets demonstrate that the lexicon free modeling units can outperform lexicon related modeling units in terms of character error rate (CER). Among five modeling units, character based model performs best and establishes a new state-of-the-art CER of 26.64% on HKUST datasets without a hand-designed lexicon and an extra language model integration, which corresponds to a 4.8% relative improvement over the existing best CER of 28.0% by the joint CTC-attention based encoder-decoder network.",
"title": ""
},
{
"docid": "032db9c2dba42ca376e87b28ecb812fa",
"text": "This paper tries to put various ways in which Natural Language Processing (NLP) and Software Engineering (SE) can be seen as inter-disciplinary research areas. We survey the current literature, with the aim of assessing use of Software Engineering and Natural Language Processing tools in the researches undertaken. An assessment of how various phases of SDLC can employ NLP techniques is presented. The paper also provides the justification of the use of text for automating or combining both these areas. A short research direction while undertaking multidisciplinary research is also provided.",
"title": ""
},
{
"docid": "25176cef55afd54f06b7127d10729f5e",
"text": "Senescent cells (SCs) accumulate with age and after genotoxic stress, such as total-body irradiation (TBI). Clearance of SCs in a progeroid mouse model using a transgenic approach delays several age-associated disorders, suggesting that SCs play a causative role in certain age-related pathologies. Thus, a 'senolytic' pharmacological agent that can selectively kill SCs holds promise for rejuvenating tissue stem cells and extending health span. To test this idea, we screened a collection of compounds and identified ABT263 (a specific inhibitor of the anti-apoptotic proteins BCL-2 and BCL-xL) as a potent senolytic drug. We show that ABT263 selectively kills SCs in culture in a cell type– and species-independent manner by inducing apoptosis. Oral administration of ABT263 to either sublethally irradiated or normally aged mice effectively depleted SCs, including senescent bone marrow hematopoietic stem cells (HSCs) and senescent muscle stem cells (MuSCs). Notably, this depletion mitigated TBI-induced premature aging of the hematopoietic system and rejuvenated the aged HSCs and MuSCs in normally aged mice. Our results demonstrate that selective clearance of SCs by a pharmacological agent is beneficial in part through its rejuvenation of aged tissue stem cells. Thus, senolytic drugs may represent a new class of radiation mitigators and anti-aging agents.",
"title": ""
},
{
"docid": "a17241732ee8e9a8bc34caea2f08545d",
"text": "Text line segmentation is an essential pre-processing stage for off-line handwriting recognition in many Optical Character Recognition (OCR) systems. It is an important step because inaccurately segmented text lines will cause errors in the recognition stage. Text line segmentation of the handwritten documents is still one of the most complicated problems in developing a reliable OCR. The nature of handwriting makes the process of text line segmentation very challenging. Several techniques to segment handwriting text line have been proposed in the past. This paper seeks to provide a comprehensive review of the methods of off-line handwriting text line segmentation proposed by researchers.",
"title": ""
},
{
"docid": "e872173252bf7b516183d3e733c36f6c",
"text": "Nonlinear autoregressive moving average with exogenous inputs (NARMAX) models have been successfully demonstrated for modeling the input-output behavior of many complex systems. This paper deals with the proposition of a scheme to provide time series prediction. The approach is based on a recurrent NARX model obtained by linear combination of a recurrent neural network (RNN) output and the real data output. Some prediction metrics are also proposed to assess the quality of predictions. This metrics enable to compare different prediction schemes and provide an objective way to measure how changes in training or prediction model (Neural network architecture) affect the quality of predictions. Results show that the proposed NARX approach consistently outperforms the prediction obtained by the RNN neural network.",
"title": ""
},
{
"docid": "33a9140fb57200a489b9150d39f0ab65",
"text": "In this paper, a double-quadrant state-of-charge (SoC)-based droop control method for distributed energy storage system is proposed to reach the proper power distribution in autonomous dc microgrids. In order to prolong the lifetime of the energy storage units (ESUs) and avoid the overuse of a certain unit, the SoC of each unit should be balanced and the injected/output power should be gradually equalized. Droop control as a decentralized approach is used as the basis of the power sharing method for distributed energy storage units. In the charging process, the droop coefficient is set to be proportional to the nth order of SoC, while in the discharging process, the droop coefficient is set to be inversely proportional to the nth order of SoC. Since the injected/output power is inversely proportional to the droop coefficient, it is obtained that in the charging process the ESU with higher SoC absorbs less power, while the one with lower SoC absorbs more power. Meanwhile, in the discharging process, the ESU with higher SoC delivers more power and the one with lower SoC delivers less power. Hence, SoC balancing and injected/output power equalization can be gradually realized. The exponent n of SoC is employed in the control diagram to regulate the speed of SoC balancing. It is found that with larger exponent n, the balancing speed is higher. MATLAB/simulink model comprised of three ESUs is implemented and the simulation results are shown to verify the proposed approach.",
"title": ""
},
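The SoC-based droop law in the abstract above is easy to illustrate numerically. The sketch below is a minimal illustration, not the paper's model: it assumes a simple dc droop relation V_bus = V_ref − R_d·P, and the exponent, base coefficient, and voltage values are placeholders.

```python
def droop_coefficient(soc, n, base_rd, charging):
    """Double-quadrant SoC-based droop coefficient.

    Charging:    R_d proportional to SoC^n   (higher SoC -> larger R_d -> absorbs less power)
    Discharging: R_d proportional to 1/SoC^n (higher SoC -> smaller R_d -> delivers more power)
    """
    soc = max(min(soc, 1.0), 1e-3)          # clamp to avoid division by zero
    return base_rd * soc**n if charging else base_rd / soc**n

def shared_power(v_bus, v_ref, soc, n=2, base_rd=0.1, charging=False):
    """Injected/output power implied by the assumed droop law V_bus = V_ref - R_d * P."""
    rd = droop_coefficient(soc, n, base_rd, charging)
    return (v_ref - v_bus) / rd

# Example: three discharging ESUs at different SoCs see the same bus voltage drop;
# the unit with the highest SoC delivers the most power, as described in the abstract.
for soc in (0.9, 0.6, 0.3):
    print(f"SoC={soc:.1f}  P={shared_power(47.5, 48.0, soc):.2f} (arbitrary units)")
```

A larger exponent n widens the spread of the droop coefficients, which is consistent with the abstract's observation that larger n speeds up SoC balancing.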
{
"docid": "4d52865efa6c359d68125c7013647c86",
"text": "In recent years, we have witnessed an unprecedented proliferation of large document collections. This development has spawned the need for appropriate analytical means. In particular, to seize the thematic composition of large document collections, researchers increasingly draw on quantitative topic models. Among their most prominent representatives is the Latent Dirichlet Allocation (LDA). Yet, these models have significant drawbacks, e.g. the generated topics lack context and thus meaningfulness. Prior research has rarely addressed this limitation through the lens of mixed-methods research. We position our paper towards this gap by proposing a structured mixedmethods approach to the meaningful analysis of large document collections. Particularly, we draw on qualitative coding and quantitative hierarchical clustering to validate and enhance topic models through re-contextualization. To illustrate the proposed approach, we conduct a case study of the thematic composition of the AIS Senior Scholars' Basket of Journals.",
"title": ""
},
{
"docid": "d6bd475e9929748bbb71ac0d82e4f067",
"text": "We present an approach for answering questions that span multiple sentences and exhibit sophisticated cross-sentence anaphoric phenomena, evaluating on a rich source of such questions – the math portion of the Scholastic Aptitude Test (SAT). By using a tree transducer cascade as its basic architecture, our system (called EUCLID) propagates uncertainty from multiple sources (e.g. coreference resolution or verb interpretation) until it can be confidently resolved. Experiments show the first-ever results (43% recall and 91% precision) on SAT algebra word problems. We also apply EUCLID to the public Dolphin algebra question set, and improve the state-of-the-art F1-score from 73.9% to 77.0%.",
"title": ""
},
{
"docid": "a0ebe19188abab323122a5effc3c4173",
"text": "In this paper, we present LOADED, an algorithm for outlier detection in evolving data sets containing both continuous and categorical attributes. LOADED is a tunable algorithm, wherein one can trade off computation for accuracy so that domain-specific response times are achieved. Experimental results show that LOADED provides very good detection and false positive rates, which are several times better than those of existing distance-based schemes.",
"title": ""
},
{
"docid": "119a4b04bc042b68f4b32480a069f6d4",
"text": "Preserving the availability and integrity of the power grid critical infrastructures in the face of fast-spreading intrusions requires advances in detection techniques specialized for such large-scale cyber-physical systems. In this paper, we present a security-oriented cyber-physical state estimation (SCPSE) system, which, at each time instant, identifies the compromised set of hosts in the cyber network and the maliciously modified set of measurements obtained from power system sensors. SCPSE fuses uncertain information from different types of distributed sensors, such as power system meters and cyber-side intrusion detectors, to detect the malicious activities within the cyber-physical system. We implemented a working prototype of SCPSE and evaluated it using the IEEE 24-bus benchmark system. The experimental results show that SCPSE significantly improves on the scalability of traditional intrusion detection techniques by using information from both cyber and power sensors. Furthermore, SCPSE was able to detect all the attacks against the control network in our experiments.",
"title": ""
},
{
"docid": "759bf80a33903899cb7f684aa277eddd",
"text": "Effective patient similarity assessment is important for clinical decision support. It enables the capture of past experience as manifested in the collective longitudinal medical records of patients to help clinicians assess the likely outcomes resulting from their decisions and actions. However, it is challenging to devise a patient similarity metric that is clinically relevant and semantically sound. Patient similarity is highly context sensitive: it depends on factors such as the disease, the particular stage of the disease, and co-morbidities. One way to discern the semantics in a particular context is to take advantage of physicians’ expert knowledge as reflected in labels assigned to some patients. In this paper we present a method that leverages localized supervised metric learning to effectively incorporate such expert knowledge to arrive at semantically sound patient similarity measures. Experiments using data obtained from the MIMIC II database demonstrate the effectiveness of this approach.",
"title": ""
},
{
"docid": "902e6d047605a426ae9bebc3f9ddf139",
"text": "Learning based approaches have not yet achieved their full potential in optical flow estimation, where their performance still trails heuristic approaches. In this paper, we present a CNN based patch matching approach for optical flow estimation. An important contribution of our approach is a novel thresholded loss for Siamese networks. We demonstrate that our loss performs clearly better than existing losses. It also allows to speed up training by a factor of 2 in our tests. Furthermore, we present a novel way for calculating CNN based features for different image scales, which performs better than existing methods. We also discuss new ways of evaluating the robustness of trained features for the application of patch matching for optical flow. An interesting discovery in our paper is that low-pass filtering of feature maps can increase the robustness of features created by CNNs. We proved the competitive performance of our approach by submitting it to the KITTI 2012, KITTI 2015 and MPI-Sintel evaluation portals where we obtained state-of-the-art results on all three datasets.",
"title": ""
},
{
"docid": "ffe6edef11daef1db0c4aac77bed7a23",
"text": "MPI is a well-established technology that is used widely in high-performance computing environment. However, setting up an MPI cluster can be challenging and time-consuming. This paper tackles this challenge by using modern containerization technology, which is Docker, and container orchestration technology, which is Docker Swarm mode, to automate the MPI cluster setup and deployment. We created a ready-to-use solution for developing and deploying MPI programs in a cluster of Docker containers running on multiple machines, orchestrated with Docker Swarm mode, to perform high computation tasks. We explain the considerations when creating Docker image that will be instantiated as MPI nodes, and we describe the steps needed to set up a fully connected MPI cluster as Docker containers running in a Docker Swarm mode. Our goal is to give the rationale behind our solution so that others can adapt to different system requirements. All pre-built Docker images, source code, documentation, and screencasts are publicly available.",
"title": ""
},
{
"docid": "11962ec2381422cfac77ad543b519545",
"text": "In high dimensions, most machine learning methods are brittle to even a small fraction of structured outliers. To address this, we introduce a new meta-algorithm that can take in a base learner such as least squares or stochastic gradient descent, and harden the learner to be resistant to outliers. Our method, Sever, possesses strong theoretical guarantees yet is also highly scalable—beyond running the base learner itself, it only requires computing the top singular vector of a certain n×d matrix. We apply Sever on a drug design dataset and a spam classification dataset, and find that in both cases it has substantially greater robustness than several baselines. On the spam dataset, with 1% corruptions, we achieved 7.4% test error, compared to 13.4%− 20.5% for the baselines, and 3% error on the uncorrupted dataset. Similarly, on the drug design dataset, with 10% corruptions, we achieved 1.42 mean-squared error test error, compared to 1.51-2.33 for the baselines, and 1.23 error on the uncorrupted dataset.",
"title": ""
},
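The abstract describes Sever only at a high level, so the following is a hedged sketch of the filtering idea rather than the authors' reference implementation: fit the base learner, center the per-point gradients, score each point by its projection onto the top singular direction of the gradient matrix, drop the highest-scoring points, and refit. The callables `fit` and `gradients`, the removal fraction, and the number of rounds are illustrative assumptions.

```python
import numpy as np

def sever(X, y, fit, gradients, frac_remove=0.05, rounds=4):
    """Illustrative Sever-style filtering loop (not the paper's exact procedure).

    fit(X, y) -> params; gradients(params, X, y) -> (n, d) per-point gradient matrix.
    Points whose centered gradients align strongly with the top singular vector
    are treated as suspected outliers and removed before refitting.
    """
    active = np.arange(len(y))
    params = fit(X[active], y[active])
    for _ in range(rounds):
        G = gradients(params, X[active], y[active])
        G = G - G.mean(axis=0, keepdims=True)
        # top right singular vector of the centered n x d gradient matrix
        _, _, Vt = np.linalg.svd(G, full_matrices=False)
        scores = (G @ Vt[0]) ** 2
        keep = scores <= np.quantile(scores, 1.0 - frac_remove)
        active = active[keep]
        params = fit(X[active], y[active])
    return params
```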
{
"docid": "448b1a9645216cedc89feac0afd70d0c",
"text": "Voluminous amounts of data have been produced, since the past decade as the miniaturization of Internet of things (IoT) devices increases. However, such data are not useful without analytic power. Numerous big data, IoT, and analytics solutions have enabled people to obtain valuable insight into large data generated by IoT devices. However, these solutions are still in their infancy, and the domain lacks a comprehensive survey. This paper investigates the state-of-the-art research efforts directed toward big IoT data analytics. The relationship between big data analytics and IoT is explained. Moreover, this paper adds value by proposing a new architecture for big IoT data analytics. Furthermore, big IoT data analytic types, methods, and technologies for big data mining are discussed. Numerous notable use cases are also presented. Several opportunities brought by data analytics in IoT paradigm are then discussed. Finally, open research challenges, such as privacy, big data mining, visualization, and integration, are presented as future research directions.",
"title": ""
},
{
"docid": "4e5c9901da9ee977d995dd4fd6b9b6bd",
"text": "kmlonolgpbqJrtsHu qNvwlyxzl{vw|~}ololyp | xolyxoqNv
J lgxgOnyc}g pAqNvwl lgrc p|HqJbxz|r rc|pb4|HYl xzHnzl}o}gpb |p'w|rmlypnoHpb0rpb }zqJOn pyxg |HqJOp c}&olypb%nov4|rrclgpbYlo%ys{|Xq|~qlo noxX}ozz|~}lz rlo|xgp4pb0|~} |3 loqNvwH J xzOpb0| p|HqJbxz|rr|pbw|~lmxzHnolo}o}gpb;}gsH}oqly ¡cqOv rpb }zqJOnm¢~p TrloHYly¤£;r¥qOv4XHv&noxX}ozz|~}lz |YxzH|Ynvwl}]vw|~l zlolyp¦}4nonolo}o}gbrp2 |p4s o lyxzlypbq |xzlo|~}^]p|~q§bxz|r4r|pbw|~lmxzHnolo}o}gpbHu ̈cq©c} Joqhlyp qNvwl]no|~}yl^qNvw|~qaqNvwl}llqOv4~} no|o4qJbxzl qNvwl&rtpbbc}oq§Nn pgHxg |HqJOp#qNvwlys%|xol Xlgrrpb«pxzlonoqJrts¦p r|xJYl2w|X¬g4l&q|Xgrclo}2J }oqh|HqJc}o qJOn};®v }&no|p |~¢l¦cq3 ̄=nybr°q]qh%|p|rsH±ylu bpXlgx}zqh|p|p%]xzl qNvwl«|XgrcqJsLJ&qOv4lo}l |Yxo|Xnov4lo}q HYlyr pYlyxgrtspw0rtpw~bc}oqJOn;zlvw|Nxg 2gp¦qNv c} 4|o4lyxou 3l rr Yl}ngxgNzl;| }g rlxgbrlzo|H}lo |oYxzH|Ynv q |Xq|~qlo rlo|xgp4pb0 rpbbc}oq§On^¢p TrcloHYlgT®v } |oYxzH|Ynv vw|~} ololgp}ovw ¡p2xL| ́p4bolyxLJ&q|~}o¢} qhno|4qJ xol¦pgxg|~q§Np p |nyrlo|xolgx2|p# xzl«xzlonq |~}ov Op cqNvwXq]|%noxo c}l p«wlyxxg |pnoly3μLl¶xzl}lgpwq¶|«Ylq|rlo«no|H}l }oqJ4s%J qOvbc} rclz|xgp4pw0lqNvwHL|YrtOlo qh4|xoq]J;}J4lolznv2qh|HHpb",
"title": ""
},
{
"docid": "3d2e47ed90e8ff4dec54e85e4996c961",
"text": "Open source software encourages innovation by allowing users to extend the functionality of existing applications. Treeview is a popular application for the visualization of microarray data, but is closed-source and platform-specific, which limits both its current utility and suitability as a platform for further development. Java Treeview is an open-source, cross-platform rewrite that handles very large datasets well, and supports extensions to the file format that allow the results of additional analysis to be visualized and compared. The combination of a general file format and open source makes Java Treeview an attractive choice for solving a class of visualization problems. An applet version is also available that can be used on any website with no special server-side setup.",
"title": ""
},
{
"docid": "6a51aba04d0af9351e86b8a61b4529cb",
"text": "Cloud computing is a newly emerged technology, and the rapidly growing field of IT. It is used extensively to deliver Computing, data Storage services and other resources remotely over internet on a pay per usage model. Nowadays, it is the preferred choice of every IT organization because it extends its ability to meet the computing demands of its everyday operations, while providing scalability, mobility and flexibility with a low cost. However, the security and privacy is a major hurdle in its success and its wide adoption by organizations, and the reason that Chief Information Officers (CIOs) hesitate to move the data and applications from premises of organizations to the cloud. In fact, due to the distributed and open nature of the cloud, resources, applications, and data are vulnerable to intruders. Intrusion Detection System (IDS) has become the most commonly used component of computer system security and compliance practices that defends network accessible Cloud resources and services from various kinds of threats and attacks. This paper presents an overview of different intrusions in cloud, various detection techniques used by IDS and the types of Cloud Computing based IDS. Then, we analyze some pertinent existing cloud based intrusion detection systems with respect to their various types, positioning, detection time and data source. The analysis also gives strengths of each system, and limitations, in order to evaluate whether they carry out the security requirements of cloud computing environment or not. We highlight the deployment of IDS that uses multiple detection approaches to deal with security challenges in cloud.",
"title": ""
},
{
"docid": "2e2cffc777e534ad1ab7a5c638e0574e",
"text": "BACKGROUND\nPoly(ADP-ribose)polymerase-1 (PARP-1) is a highly promising novel target in breast cancer. However, the expression of PARP-1 protein in breast cancer and its associations with outcome are yet poorly characterized.\n\n\nPATIENTS AND METHODS\nQuantitative expression of PARP-1 protein was assayed by a specific immunohistochemical signal intensity scanning assay in a range of normal to malignant breast lesions, including a series of patients (N = 330) with operable breast cancer to correlate with clinicopathological factors and long-term outcome.\n\n\nRESULTS\nPARP-1 was overexpressed in about a third of ductal carcinoma in situ and infiltrating breast carcinomas. PARP-1 protein overexpression was associated to higher tumor grade (P = 0.01), estrogen-negative tumors (P < 0.001) and triple-negative phenotype (P < 0.001). The hazard ratio (HR) for death in patients with PARP-1 overexpressing tumors was 7.24 (95% CI; 3.56-14.75). In a multivariate analysis, PARP-1 overexpression was an independent prognostic factor for both disease-free (HR 10.05; 95% CI 5.42-10.66) and overall survival (HR 1.82; 95% CI 1.32-2.52).\n\n\nCONCLUSIONS\nNuclear PARP-1 is overexpressed during the malignant transformation of the breast, particularly in triple-negative tumors, and independently predicts poor prognosis in operable invasive breast cancer.",
"title": ""
}
] |
scidocsrr
|
9cf359b6932bcf61505b74ba3c1c5d7b
|
Lessons from the Amazon Picking Challenge: Four Aspects of Building Robotic Systems
|
[
{
"docid": "f670b91f8874c2c2db442bc869889dbd",
"text": "This paper summarizes lessons learned from the first Amazon Picking Challenge in which 26 international teams designed robotic systems that competed to retrieve items from warehouse shelves. This task is currently performed by human workers, and there is hope that robots can someday help increase efficiency and throughput while lowering cost. We report on a 28-question survey posed to the teams to learn about each team’s background, mechanism design, perception apparatus, planning and control approach. We identify trends in this data, correlate it with each team’s success in the competition, and discuss observations and lessons learned. Note to Practitioners: Abstract—Perception, motion planning, grasping, and robotic system engineering has reached a level of maturity that makes it possible to explore automating simple warehouse tasks in semi-structured environments that involve high-mix, low-volume picking applications. This survey summarizes lessons learned from the first Amazon Picking Challenge, highlighting mechanism design, perception, and motion planning algorithms, as well as software engineering practices that were most successful in solving a simplified order fulfillment task. While the choice of mechanism mostly affects execution speed, the competition demonstrated the systems challenges of robotics and illustrated the importance of combining reactive control with deliberative planning.",
"title": ""
},
{
"docid": "50d22974ef09d0f02ee05d345e434055",
"text": "We present the exploring/exploiting tree (EET) algorithm for motion planning. The EET planner deliberately trades probabilistic completeness for computational efficiency. This tradeoff enables the EET planner to outperform state-of-the-art sampling-based planners by up to three orders of magnitude. We show that these considerable speedups apply for a variety of challenging real-world motion planning problems. The performance improvements are achieved by leveraging work space information to continuously adjust the sampling behavior of the planner. When the available information captures the planning problem's inherent structure, the planner's sampler becomes increasingly exploitative. When the available information is less accurate, the planner automatically compensates by increasing local configuration space exploration. We show that active balancing of exploration and exploitation based on workspace information can be a key ingredient to enabling highly efficient motion planning in practical scenarios.",
"title": ""
}
] |
[
{
"docid": "35cbd0797156630e7b3edf7cf76868c1",
"text": "Given a bipartite graph of users and the products that they review, or followers and followees, how can we detect fake reviews or follows? Existing fraud detection methods (spectral, etc.) try to identify dense subgraphs of nodes that are sparsely connected to the remaining graph. Fraudsters can evade these methods using camouflage, by adding reviews or follows with honest targets so that they look “normal.” Even worse, some fraudsters use hijacked accounts from honest users, and then the camouflage is indeed organic.\n Our focus is to spot fraudsters in the presence of camouflage or hijacked accounts. We propose FRAUDAR, an algorithm that (a) is camouflage resistant, (b) provides upper bounds on the effectiveness of fraudsters, and (c) is effective in real-world data. Experimental results under various attacks show that FRAUDAR outperforms the top competitor in accuracy of detecting both camouflaged and non-camouflaged fraud. Additionally, in real-world experiments with a Twitter follower--followee graph of 1.47 billion edges, FRAUDAR successfully detected a subgraph of more than 4, 000 detected accounts, of which a majority had tweets showing that they used follower-buying services.",
"title": ""
},
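As background for the density objective FRAUDAR builds on, here is a plain greedy-peeling sketch (repeatedly remove the minimum-degree node and keep the node set with the best average density seen). It deliberately omits the paper's camouflage-resistant column weighting, so it should be read as an illustration of the general approach, not as FRAUDAR itself.

```python
import heapq

def greedy_densest_subset(edges, num_nodes):
    """Greedy peeling over a bipartite review graph given as unique (u, v) pairs
    with node ids 0..num_nodes-1; returns the node subset with the best
    edges-per-node density seen during peeling."""
    adj = [set() for _ in range(num_nodes)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    alive = set(range(num_nodes))
    n_edges = len(edges)
    heap = [(len(adj[v]), v) for v in alive]
    heapq.heapify(heap)

    best_set, best_density = set(alive), n_edges / max(len(alive), 1)
    while alive:
        deg, v = heapq.heappop(heap)
        if v not in alive or deg != len(adj[v]):
            continue                          # stale heap entry
        alive.remove(v)
        n_edges -= deg
        for u in adj[v]:
            adj[u].discard(v)
            heapq.heappush(heap, (len(adj[u]), u))
        adj[v].clear()
        if alive:
            density = n_edges / len(alive)
            if density > best_density:
                best_set, best_density = set(alive), density
    return best_set, best_density
```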
{
"docid": "4071b0a0f3887a5ad210509e6ad5498a",
"text": "Nowadays, the IoT is largely dependent on sensors. The IoT devices are embedded with sensors and have the ability to communicate. A variety of sensors play a key role in networked devices in IoT. In order to facilitate the management of such sensors, this paper investigates how to use SNMP protocol, which is widely used in network device management, to implement sensors information management of IoT system. The principles and implement details to setup the MIB file, agent and manager application are discussed. A prototype system is setup to validate our methods. The test results show that because of its easy use and strong expansibility, SNMP is suitable and a bright way for sensors information management of IoT system.",
"title": ""
},
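To make the manager-side setup concrete, a single SNMP GET could look roughly like the sketch below. It assumes the pysnmp library's synchronous high-level API (pysnmp 4.x `hlapi`); the host address, community string, and OID are placeholders, and a real IoT deployment would typically define its own enterprise MIB for sensor values.

```python
# Hedged sketch of a manager-side SNMP GET using pysnmp's high-level API;
# host, community string, and OID are placeholders.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

error_indication, error_status, error_index, var_binds = next(
    getCmd(SnmpEngine(),
           CommunityData('public', mpModel=1),               # SNMPv2c
           UdpTransportTarget(('192.0.2.10', 161)),
           ContextData(),
           ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0))))

if error_indication:
    print(error_indication)
else:
    for name, value in var_binds:
        print(f'{name.prettyPrint()} = {value.prettyPrint()}')
```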
{
"docid": "2f901bcc774a104db449e38fd8ebb3c4",
"text": "Web service composition concerns the building of new value added services by integrating the sets of existing web services. Due to the seamless proliferation of web services, it becomes difficult to find a suitable web service that satisfies the requirements of users during web service composition. This paper systematically reviews existing research on QoS-aware web service composition using computational intelligence techniques (published between 2005 and 2015). This paper develops a classification of research approaches on computational intelligence based QoS-aware web service composition and describes future research directions in this area. In particular, the results of this study confirms that new meta-heuristic algorithms have not yet been applied for solving QoS-aware web services composition.",
"title": ""
},
{
"docid": "dfb3a6fea5c2b12e7865f8b6664246fb",
"text": "We develop a new version of prospect theory that employs cumulative rather than separable decision weights and extends the theory in several respects. This version, called cumulative prospect theory, applies to uncertain as well as to risky prospects with any number of outcomes, and it allows different weighting functions for gains and for losses. Two principles, diminishing sensitivity and loss aversion, are invoked to explain the characteristic curvature of the value function and the weighting functions. A review of the experimental evidence and the results of a new experiment confirm a distinctive fourfold pattern of risk attitudes: risk aversion for gains and risk seeking for losses of high probability; risk seeking for gains and risk aversion for losses of low probability. Expected utility theory reigned for several decades as the dominant normative and descriptive model of decision making under uncertainty, but it has come under serious question in recent years. There is now general agreement that the theory does not provide an adequate description of individual choice: a substantial body of evidence shows that decision makers systematically violate its basic tenets. Many alternative models have been proposed in response to this empirical challenge (for reviews, see Camerer, 1989; Fishburn, 1988; Machina, 1987). Some time ago we presented a model of choice, called prospect theory, which explained the major violations of expected utility theory in choices between risky prospects with a small number of outcomes (Kahneman and Tversky, 1979; Tversky and Kahneman, 1986). The key elements of this theory are 1) a value function that is concave for gains, convex for losses, and steeper for losses than for gains, *An earlier version of this article was entitled \"Cumulative Prospect Theory: An Analysis of Decision under Uncertainty.\" This article has benefited from discussions with Colin Camerer, Chew Soo-Hong, David Freedman, and David H. Krantz. We are especially grateful to Peter P. Wakker for his invaluable input and contribution to the axiomatic analysis. We are indebted to Richard Gonzalez and Amy Hayes for running the experiment and analyzing the data. This work was supported by Grants 89-0064 and 88-0206 from the Air Force Office of Scientific Research, by Grant SES-9109535 from the National Science Foundation, and by the Sloan Foundation. 298 AMOS TVERSKY/DANIEL KAHNEMAN and 2) a nonlinear transformation of the probability scale, which overweights small probabilities and underweights moderate and high probabilities. In an important later development, several authors (Quiggin, 1982; Schmeidler, 1989; Yaari, 1987; Weymark, 1981) have advanced a new representation, called the rank-dependent or the cumulative functional, that transforms cumulative rather than individual probabilities. This article presents a new version of prospect theory that incorporates the cumulative functional and extends the theory to uncertain as well to risky prospects with any number of outcomes. The resulting model, called cumulative prospect theory, combines some of the attractive features of both developments (see also Luce and Fishburn, 1991). It gives rise to different evaluations of gains and losses, which are not distinguished in the standard cumulative model, and it provides a unified treatment of both risk and uncertainty. 
To set the stage for the present development, we first list five major phenomena of choice, which violate the standard model and set a minimal challenge that must be met by any adequate descriptive theory of choice. All these findings have been confirmed in a number of experiments, with both real and hypothetical payoffs. Framing effects. The rational theory of choice assumes description invariance: equivalent formulations of a choice problem should give rise to the same preference order (Arrow, 1982). Contrary to this assumption, there is much evidence that variations in the framing of options (e.g., in terms of gains or losses) yield systematically different preferences (Tversky and Kahneman, 1986). Nonlinear preferences. According to the expectation principle, the utility of a risky prospect is linear in outcome probabilities. Allais's (1953) famous example challenged this principle by showing that the difference between probabilities of .99 and 1.00 has more impact on preferences than the difference between 0.10 and 0.11. More recent studies observed nonlinear preferences in choices that do not involve sure things (Camerer and Ho, 1991). Source dependence. People's willingness to bet on an uncertain event depends not only on the degree of uncertainty but also on its source. Ellsberg (1961) observed that people prefer to bet on an urn containing equal numbers of red and green balls, rather than on an urn that contains red and green balls in unknown proportions. More recent evidence indicates that people often prefer a bet on an event in their area of competence over a bet on a matched chance event, although the former probability is vague and the latter is clear (Heath and Tversky, 1991). Risk seeking. Risk aversion is generally assumed in economic analyses of decision under uncertainty. However, risk-seeking choices are consistently observed in two classes of decision problems. First, people often prefer a small probability of winning a large prize over the expected value of that prospect. Second, risk seeking is prevalent when people must choose between a sure loss and a substantial probability of a larger loss. Loss' aversion. One of the basic phenomena of choice under both risk and uncertainty is that losses loom larger than gains (Kahneman and Tversky, 1984; Tversky and Kahneman, 1991). The observed asymmetry between gains and losses is far too extreme to be explained by income effects or by decreasing risk aversion. ADVANCES IN PROSPECT THEORY 299 The present development explains loss aversion, risk seeking, and nonlinear preferences in terms of the value and the weighting functions. It incorporates a framing process, and it can accommodate source preferences. Additional phenomena that lie beyond the scope of the theory--and of its alternatives--are discussed later. The present article is organized as follows. Section 1.1 introduces the (two-part) cumulative functional; section 1.2 discusses relations to previous work; and section 1.3 describes the qualitative properties of the value and the weighting functions. These properties are tested in an extensive study of individual choice, described in section 2, which also addresses the question of monetary incentives. Implications and limitations of the theory are discussed in section 3. An axiomatic analysis of cumulative prospect theory is presented in the appendix.",
"title": ""
},
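The value and weighting functions referred to in the abstract have standard parametric forms in cumulative prospect theory; the sketch below uses the commonly cited parameter estimates (α = β = 0.88, λ = 2.25, γ = 0.61 for gains), which are illustrative defaults rather than values asserted by this passage.

```python
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """CPT value function: concave for gains, convex and steeper for losses (loss aversion)."""
    return x**alpha if x >= 0 else -lam * (-x)**beta

def weight(p, gamma=0.61):
    """Inverse-S probability weighting: overweights small p, underweights moderate/large p."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def cpt_simple_gain_prospect(outcome, p):
    """Value of a two-outcome prospect (gain `outcome` with probability p, else 0)."""
    return weight(p) * value(outcome)

# Fourfold-pattern illustration: a 1% chance of 100 is valued above a sure gain of 1
# (its expected value), i.e. risk seeking for low-probability gains.
print(cpt_simple_gain_prospect(100, 0.01), value(1))
```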
{
"docid": "ed5185ea36f61a9216c6f0183b81d276",
"text": "Blockchain technology enables the creation of a decentralized environment where transactions and data are not under the control of any third party organization. Any transaction ever completed is recorded in a public ledger in a verifiable and permanent way. Based on blockchain technology, we propose a global higher education credit platform, named EduCTX. This platform is based on the concept of the European Credit Transfer and Accumulation System (ECTS). It constitutes a globally trusted, decentralized higher education credit and grading system that can offer a globally unified viewpoint for students and higher education institutions (HEIs), as well as for other potential stakeholders such as companies, institutions and organizations. As a proof of concept, we present a prototype implementation of the environment, based on the open-source Ark Blockchain Platform. Based on a globally distributed peer-to-peer network, EduCTX will process, manage and control ECTX tokens, which represent credits that students gain for completed courses such as ECTS. HEIs are the peers of the blockchain network. The platform is a first step towards a more transparent and technologically advanced form of higher education systems. The EduCTX platform represents the basis of the EduCTX initiative which anticipates that various HEIs would join forces in order to create a globally efficient, simplified and ubiquitous environment in order to avoid language and administrative barriers. Therefore we invite and encourage HEIs to join the EduCTX initiative and the EduCTX blockchain network.",
"title": ""
},
{
"docid": "9a58dc3eada29c2b929c4442ce0ac025",
"text": "Gamification is the application of game elements and game design techniques in non-game contexts to engage and motivate people to achieve their goals. Motivation is an essential requirement for effective and efficient collaboration, which is particularly challenging when people work distributedly. In this paper, we discuss the topics of collaboration, motivation, and gamification in the context of software engineering. We then introduce our long-term research goal—building a theoretical framework that defines how gamification can be used as a collaboration motivator for virtual software teams. We also highlight the roles that social and cultural issues might play in understanding the phenomenon. Finally, we give an overview of our proposed research method to foster discussion during the workshop on how to best investigate the topic. Author",
"title": ""
},
{
"docid": "3cc6d54cb7a8507473f623a149c3c64b",
"text": "The measurement of loyalty is a topic of great interest for the marketing academic literature. The relation that loyalty has with the results of organizations has been tested by numerous studies and the search to retain profitable customers has become a maxim in firm management. Tourist destinations have not remained oblivious to this trend. However, the difficulty involved in measuring the loyalty of a tourist destination is a brake on its adoption by those in charge of destination management. The usefulness of measuring loyalty lies in being able to apply strategies which enable improving it, but that also impact on the enhancement of the organization’s results. The study of tourists’ loyalty to a destination is considered relevant for the literature and from the point of view of the management of the multiple actors involved in the tourist activity. Based on these considerations, this work proposes a synthetic indictor that allows the simple measurement of the tourist’s loyalty. To do so, we used as a starting point Best’s (2007) customer loyalty index adapted to the case of tourist destinations. We also employed a variable of results – the tourist’s overnight stays in the destination – to create a typology of customers according to their levels of loyalty and the number of their overnight stays. The data were obtained from a survey carried out with 2373 tourists of the city of Seville. In accordance with the results attained, the proposal of the synthetic indicator to measure tourist loyalty is viable, as it is a question of a simple index constructed from easily obtainable data. Furthermore, four groups of tourists have been identified, according to their degree of loyalty and profitability, using the number of overnight stays of the tourists in their visit to the destination. The study’s main contribution stems from the possibility of simply measuring loyalty and from establishing four profiles of tourists for which marketing strategies of differentiated relations can be put into practice and that contribute to the improvement of the destination’s results. © 2018 Journal of Innovation & Knowledge. Published by Elsevier España, S.L.U. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/",
"title": ""
},
{
"docid": "83651ca357b0f978400de4184be96443",
"text": "The most common temporomandibular joint (TMJ) pathologic disease is anterior-medial displacement of the articular disk, which can lead to TMJ-related symptoms.The indication for disk repositioning surgery is irreversible TMJ damage associated with temporomandibular pain. We describe a surgical technique using a preauricular approach with a high condylectomy to reshape the condylar head. The disk is anchored with a bioabsorbable microanchor (Mitek Microfix QuickAnchor Plus 1.3) to the lateral aspect of the condylar head. The anchor is linked with a 3.0 Ethibond absorbable suture to fix the posterolateral side of the disk above the condyle.The aims of this surgery were to alleviate temporomandibular pain, headaches, and neck pain and to restore good jaw mobility. In the long term, we achieved these objectives through restoration of the physiological position and function of the disk and the lower articular compartment.In our opinion, the bioabsorbable anchor is the best choice for this type of surgery because it ensures the stability of the restored disk position and leaves no artifacts in the long term that might impede follow-up with magnetic resonance imaging.",
"title": ""
},
{
"docid": "307d92caf4ff7e64db7d5f23035a7440",
"text": "In this paper, the effective use of flight-time constrained unmanned aerial vehicles (UAVs) as flying base stations that provide wireless service to ground users is investigated. In particular, a novel framework for optimizing the performance of such UAV-based wireless systems in terms of the average number of bits (data service) transmitted to users as well as the UAVs’ hover duration (i.e. flight time) is proposed. In the considered model, UAVs hover over a given geographical area to serve ground users that are distributed within the area based on an arbitrary spatial distribution function. In this case, two practical scenarios are considered. In the first scenario, based on the maximum possible hover times of UAVs, the average data service delivered to the users under a fair resource allocation scheme is maximized by finding the optimal cell partitions associated to the UAVs. Using the powerful mathematical framework of optimal transport theory, this cell partitioning problem is proved to be equivalent to a convex optimization problem. Subsequently, a gradient-based algorithm is proposed for optimally partitioning the geographical area based on the users’ distribution, hover times, and locations of the UAVs. In the second scenario, given the load requirements of ground users, the minimum average hover time that the UAVs need for completely servicing their ground users is derived. To this end, first, an optimal bandwidth allocation scheme for serving the users is proposed. Then, given this optimal bandwidth allocation, optimal cell partitions associated with the UAVs are derived by exploiting the optimal transport theory. Simulation results show that our proposed cell partitioning approach leads to a significantly higher fairness among the users compared with the classical weighted Voronoi diagram. Furthermore, the results demonstrate that the average hover time of the UAVs can be reduced by 64% by adopting the proposed optimal bandwidth allocation scheme as well as the optimal cell partitioning approach. In addition, our results reveal an inherent tradeoff between the hover time of UAVs and bandwidth efficiency while serving the ground users.",
"title": ""
},
{
"docid": "6bf2280158dca2d69501255d47322246",
"text": "Distal deletion of the long arm of chromosome 10 is associated with a dysmorphic craniofacial appearance, microcephaly, behavioral issues, developmental delay, intellectual disability, and ocular, urogenital, and limb abnormalities. Herein, we present clinical, molecular, and cytogenetic investigations of four patients, including two siblings, with nearly identical terminal deletions of 10q26.3, all of whom have an atypical presentation of this syndrome. Their prominent features include ataxia, mild-to-moderate intellectual disability, and hyperemia of the hands and feet, and they do not display many of the other features commonly associated with deletions of this region. These results point to a novel gene locus associated with ataxia and highlight the variability of the clinical presentation of patients with deletions of this region.",
"title": ""
},
{
"docid": "d611a165b088d7087415aa2c8843b619",
"text": "Type synthesis of 1-DOF remote center of motion (RCM) mechanisms is the preliminary for research on many multiDOF RCM mechanisms. Since types of existing RCM mechanisms are few, it is necessary to find an efficient way to create more new RCM mechanisms. In this paper, existing 1-DOF RCM mechanisms are first classified, then base on a proposed concept of the planar virtual center (VC) mechanism, which is a more generalized concept than a RCM mechanism, two approaches of type synthesis for 1-DOF RCM mechanisms are addressed. One case is that a 1-DOF parallel or serial–parallel RCM mechanism can be constructed by assembling two planar VC mechanisms; the other case, a VC mechanism can be expanded to a serial–parallel RCM mechanism. Concrete samples are provided accordingly, some of which are new types. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "fe2870a3f36b9a042ec9cece5a64dafd",
"text": "This paper provides a methodology to study the PHY layer vulnerability of wireless protocols in hostile radio environments. Our approach is based on testing the vulnerabilities of a system by analyzing the individual subsystems. By targeting an individual subsystem or a combination of subsystems at a time, we can infer the weakest part and revise it to improve the overall system performance. We apply our methodology to 4G LTE downlink by considering each control channel as a subsystem. We also develop open-source software enabling research and education using software-defined radios. We present experimental results with open-source LTE systems and shows how the different subsystems behave under targeted interference. The analysis for the LTE downlink shows that the synchronization signals (PSS/SSS) are very resilient to interference, whereas the downlink pilots or Cell-Specific Reference signals (CRS) are the most susceptible to a synchronized protocol-aware interferer. We also analyze the severity of control channel attacks for different LTE configurations. Our methodology and tools allow rapid evaluation of the PHY layer reliability in harsh signaling environments, which is an asset to improve current standards and develop new and robust wireless protocols.",
"title": ""
},
{
"docid": "2d66994a185ee4d57c87ac8b012c86ac",
"text": "The majority of projects dealing with monitoring and diagnosis of Cyber Physical Systems (CPSs) relies on models created by human experts. But these models are rarely available, are hard to verify and to maintain and are often incomplete. Data-driven approaches are a promising alternative: They leverage on the large amount of data which is collected nowadays in CPSs, this data is then used to learn the necessary models automatically. For this, several challenges have to be tackled, such as real-time data acquisition and storage solutions, data analysis and machine learning algorithms, task specific human-machine-interfaces (HMI) and feedback/control mechanisms. In this paper, we propose a cognitive reference architecture which addresses these challenges. This reference architecture should both ease the reuse of algorithms and support scientific discussions by providing a comparison schema. Use cases from different industries are outlined and support the correctness of the architecture.",
"title": ""
},
{
"docid": "313b4f6832d45a428fe264cc16e6ff9f",
"text": "This theme issue provides a comprehensive collection of original research articles on the creation of diverse types of theranostic upconversion nanoparticles, their fundamental interactions in biology, as well as their biophotonic applications in noninvasive diagnostics and therapy.",
"title": ""
},
{
"docid": "ebc6b9c213fd20397aaabe1a15a36591",
"text": "In this paper, we propose an Arabic Question-Answering (Q-A) system called QASAL «Question -Answering system for Arabic Language». QASAL accepts as an input a natural language question written in Modern Standard Arabic (MSA) and generates as an output the most efficient and appropriate answer. The proposed system is composed of three modules: A question analysis module, a passage retrieval module and an answer extraction module. To process these three modules we use the NooJ Platform which represents a linguistic development environment.",
"title": ""
},
{
"docid": "96af91aed1c131f1c8c9d8076ed5835d",
"text": "Hedge funds are unique among investment vehicles in that they are relatively unconstrained in their use of derivative investments, short-selling, and leverage. This flexibility allows investment managers to span a broad spectrum of distinct risks, such as momentum and option-like investments. Taking a revealed preference approach, we find that Capital Asset Pricing Model (CAPM) alpha explains hedge fund flows better than alphas from more sophisticated models. This result suggests that investors pool together sophisticated model alpha with returns from exposures to traditional and exotic risks. We decompose performance into traditional and exotic risk components and find that while investors chase both components, they place greater relative emphasis on returns associated with exotic risk exposures that can only be obtained through hedge funds. However, we find little evidence of persistence in performance from traditional or exotic risks, which cautions against investors’ practice of seeking out risk exposures following periods of recent success.",
"title": ""
},
{
"docid": "9b49a4673456ab8e9f14a0fe5fb8bcc7",
"text": "Legged robots offer the potential to navigate a wide variety of terrains that are inaccessible to wheeled vehicles. In this paper we consider the planning and control tasks of navigating a quadruped robot over a wide variety of challenging terrain, including terrain which it has not seen until run-time. We present a software architecture that makes use of both static and dynamic gaits, as well as specialized dynamic maneuvers, to accomplish this task. Throughout the paper we highlight two themes that have been central to our approach: 1) the prevalent use of learning algorithms, and 2) a focus on rapid recovery and replanning techniques; we present several novel methods and algorithms that we developed for the quadruped and that illustrate these two themes. We evaluate the performance of these different methods, and also present and discuss the performance of our system on the official Learning Locomotion tests.",
"title": ""
},
{
"docid": "5f49c93d7007f0f14f1410ce7805b29a",
"text": "Die Psychoedukation im Sinne eines biopsychosozialen Schmerzmodells zielt auf das Erkennen und Verändern individueller schmerzauslösender und -aufrechterhaltender Faktoren ab. Der Einfluss kognitiver Bewertungen, emotionaler Verarbeitungsprozesse und schmerzbezogener Verhaltensweisen steht dabei im Mittelpunkt. Die Anregung und Anleitung zu einer verbesserten Selbstbeobachtung stellt die Voraussetzung zum Einsatz aktiver Selbstkontrollstrategien und zur Erhöhung der Selbstwirksamkeitserwartung dar. Dazu zählt die Entwicklung und Erarbeitung von Schmerzbewältigungsstrategien wie z. B. Aufmerksamkeitslenkung und Genusstraining. Eine besondere Bedeutung kommt dem Aufbau einer Aktivitätenregulation zur Strukturierung eines angemessenen Verhältnisses von Erholungs- und Anforderungsphasen zu. Interventionsmöglichkeiten stellen hier die Vermittlung von Entspannungstechniken, Problemlösetraining, spezifisches Kompetenztraining sowie Elemente der kognitiven Therapie dar. Der Aufbau alternativer kognitiver und handlungsbezogener Lösungsansätze dient einer verbesserten Bewältigung internaler und externaler Stressoren. Genutzt werden die förderlichen Bedingungen gruppendynamischer Prozesse. Einzeltherapeutische Interventionen dienen der Bearbeitung spezifischer psychischer Komorbiditäten und der individuellen Unterstützung bei der beruflichen und sozialen Wiedereingliederung. Providing the patient with a pain model based on the biopsychosocial approach is one of the most important issues in psychological intervention. Illness behaviour is influenced by pain-eliciting and pain-aggravating thoughts. Identification and modification of these thoughts is essential and aims to change cognitive evaluations, emotional processing, and pain-referred behaviour. Improved self-monitoring concerning maladaptive thoughts, feelings, and behaviour enables functional coping strategies (e.g. attention diversion and learning to enjoy things) and enhances self-efficacy expectancies. Of special importance is the establishment of an appropriate balance between stress and recreation. Intervention options include teaching relaxation techniques, problem-solving strategies, and specific skills as well as applying appropriate elements of cognitive therapy. The development of alternative cognitive and action-based strategies improves the patient’s ability to cope with internal and external stressors. All of the psychological elements are carried out in a group setting. Additionally, individual therapy is offered to treat comorbidities or to support reintegration into the patient’s job.",
"title": ""
},
{
"docid": "fef4ab4bebb16560135cbf4d49c63b4d",
"text": "The two-fold aim of the paper is to unify and generalize on the one hand the double integrals of Beukers for ζ(2) and ζ(3), and of the second author for Euler’s constant γ and its alternating analog ln(4/π), and on the other hand the infinite products of the first author for e, of the second author for π, and of Ser for e . We obtain new double integral and infinite product representations of many classical constants, as well as a generalization to Lerch’s transcendent of Hadjicostas’s double integral formula for the Riemann zeta function, and logarithmic series for the digamma and Euler beta functions. The main tools are analytic continuations of Lerch’s function, including Hasse’s series. We also use Ramanujan’s polylogarithm formula for the sum of a particular series involving harmonic numbers, and his relations between certain dilogarithm values.",
"title": ""
}
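For readers unfamiliar with the Beukers integrals mentioned above, the two classical identities being generalized are (standard results quoted from memory, not reproduced from the paper):

```latex
\zeta(2) \;=\; \int_0^1\!\!\int_0^1 \frac{dx\,dy}{1-xy},
\qquad
\zeta(3) \;=\; -\tfrac{1}{2}\int_0^1\!\!\int_0^1 \frac{\ln(xy)}{1-xy}\,dx\,dy .
```

Both follow by expanding 1/(1 − xy) as a geometric series and integrating term by term, giving the sums of 1/(n+1)^2 and 2/(n+1)^3 respectively.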
] |
scidocsrr
|
09cf479f99dc361449129fcdf6d174b7
|
Low-Power Low-Noise CTIA Readout Integrated Circuit Design for Thermal Imaging Applications
|
[
{
"docid": "e5c4870acea1c7315cce0561f583626c",
"text": "A discussion of CMOS readout technologies for infrared (IR) imaging systems is presented. First, the description of various types of IR detector materials and structures is given. The advances of detector fabrication technology and microelectronics process technology have led to the development of large format array of IR imaging detectors. For such large IR FPA’s which is the critical component of the advanced infrared imaging system, general requirement and specifications are described. To support a good interface between FPA and downstream signal processing stage, both conventional and recently developed CMOS readout techniques are presented and discussed. Finally, future development directions including the smart focal plane concept are also introduced.",
"title": ""
}
] |
[
{
"docid": "70fbeaa603b37230d37d593a9b87f56e",
"text": "Umbilical venous catheterization is a common procedure performed in neonatal intensive care units. Hepatic collections due to inadvertent extravasation of parenteral nutrition into the liver have been described previously in literature. To recognize the clinicoradiologic features and treatment options of hepatic collections due to inadvertent extravasation of parenteral nutrition fluids caused by malpositioning of umbilical venous catheter (UVC) in the portal venous system. This is a case series describing five neonates during a 6-year period at a single tertiary care referral center, with extravasation of parenteral nutrition into the liver parenchyma causing hepatic collections. All five neonates receiving parenteral nutrition presented with abdominal distension in the second week of life. Two out of five (40%) had anemia requiring blood transfusion and 3/5 (60%) had hemodynamic instability at presentation. Ultrasound of the liver confirmed the diagnosis in all the cases. Three of the five (60%) cases underwent US-guided aspiration of the collections, one case underwent conservative management and one case required emergent laparotomy due to abdominal compartment syndrome. US used in follow-up of these cases revealed decrease in size of the lesions and/or development of calcifications. Early recognition of this complication, prompt diagnosis with US of liver and timely treatment can lead to better outcome in newborns with hepatic collections secondary to inadvertent parenteral nutrition infusion via malposition of UVC.",
"title": ""
},
{
"docid": "70242cb6aee415682c03da6bfd033845",
"text": "This paper presents a class of linear predictors for nonlinear controlled dynamical systems. The basic idea is to lift (or embed) the nonlinear dynamics into a higher dimensional space where its evolution is approximately linear. In an uncontrolled setting, this procedure amounts to numerical approximations of the Koopman operator associated to the nonlinear dynamics. In this work, we extend the Koopman operator to controlled dynamical systems and apply the Extended Dynamic Mode Decomposition (EDMD) to compute a finite-dimensional approximation of the operator in such a way that this approximation has the form of a linear controlled dynamical system. In numerical examples, the linear predictors obtained in this way exhibit a performance superior to existing linear predictors such as those based on local linearization or the so called Carleman linearization. Importantly, the procedure to construct these linear predictors is completely data-driven and extremely simple – it boils down to a nonlinear transformation of the data (the lifting) and a linear least squares problem in the lifted space that can be readily solved for large data sets. These linear predictors can be readily used to design controllers for the nonlinear dynamical system using linear controller design methodologies. We focus in particular on model predictive control (MPC) and show that MPC controllers designed in this way enjoy computational complexity of the underlying optimization problem comparable to that of MPC for a linear dynamical system with the same number of control inputs and the same dimension of the state-space. Importantly, linear inequality constraints on the state and control inputs as well as nonlinear constraints on the state can be imposed in a linear fashion in the proposed MPC scheme. Similarly, cost functions nonlinear in the state variable can be handled in a linear fashion. We treat both the full-state measurement case and the input-output case, as well as systems with disturbances / noise. Numerical examples (including a high-dimensional nonlinear PDE control) demonstrate the approach with the source code available online2.",
"title": ""
},
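A minimal sketch of the lifting-plus-least-squares step described above: lift state snapshots through a dictionary of observables, then solve regularized least-squares problems for matrices (A, B, C) so that z⁺ ≈ Az + Bu and x ≈ Cz. The RBF dictionary, regularization value, and function names are illustrative choices, not the paper's.

```python
import numpy as np

def lift(X, centers, sigma=1.0):
    """Lift state snapshots X (n, T) into observables [state; RBF features] of shape (n + k, T).
    `centers` is an (n, k) array of RBF centers (an assumed dictionary choice)."""
    d2 = ((X[:, None, :] - centers[:, :, None]) ** 2).sum(axis=0)      # (k, T)
    return np.vstack([X, np.exp(-d2 / (2 * sigma**2))])

def edmd_with_control(X, Xp, U, centers, reg=1e-6):
    """Fit z+ ~ A z + B u and x ~ C z by regularized least squares.
    X, Xp: state snapshots x_k, x_{k+1} as (n, T); U: inputs as (m, T)."""
    Z, Zp = lift(X, centers), lift(Xp, centers)
    W = np.vstack([Z, U])                                              # (nz + m, T)
    G = W @ W.T + reg * np.eye(W.shape[0])
    AB = (Zp @ W.T) @ np.linalg.inv(G)
    A, B = AB[:, :Z.shape[0]], AB[:, Z.shape[0]:]
    C = (X @ Z.T) @ np.linalg.inv(Z @ Z.T + reg * np.eye(Z.shape[0]))
    return A, B, C

def predict(A, B, C, x0, U, centers):
    """Roll the linear predictor forward from x0 under an input sequence U (m, T)."""
    z = lift(x0.reshape(-1, 1), centers)[:, 0]
    traj = []
    for u in U.T:
        z = A @ z + B @ u
        traj.append(C @ z)
    return np.array(traj).T
```

Because the predictor is linear in the lifted state, standard linear MPC machinery can be applied directly to (A, B, C), which is the point the abstract emphasizes.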
{
"docid": "bf36c139b531fb738bff0cabf04ef006",
"text": "A new capacitive type of MEMS microphone is presented. In contrast to existing technologies which are highly specialized for this particular type of application, our approach is based on a standard process and layer system which has been in use for more than a decade now for the manufacturing of inertial sensors. For signal conversion, a mixed-signal ASIC with digital sampling of the microphone capacitance is used. The MEMS microphone yields high signal-to-noise performance (58 dB) after mounting it in a standard LGA-type package. It is well-suited for a wide range of potential applications and demonstrates the universal scope of the used process technology.",
"title": ""
},
{
"docid": "945f129f81e9b7a69a6ba9dc982ed7c6",
"text": "Geographic location of a person is important contextual information that can be used in a variety of scenarios like disaster relief, directional assistance, context-based advertisements, etc. GPS provides accurate localization outdoors but is not useful inside buildings. We propose an coarse indoor localization approach that exploits the ubiquity of smart phones with embedded sensors. GPS is used to find the building in which the user is present. The Accelerometers are used to recognize the user’s dynamic activities (going up or down stairs or an elevator) to determine his/her location within the building. We demonstrate the ability to estimate the floor-level of a user. We compare two techniques for activity classification, one is naive Bayes classifier and the other is based on dynamic time warping. The design and implementation of a localization application on the HTC G1 platform running Google Android is also presented.",
"title": ""
},
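The dynamic-time-warping comparison mentioned above can be illustrated with the classic quadratic-time DTW distance plus nearest-template classification over accelerometer-magnitude windows; the template labels below are placeholders, not necessarily the activity set used in the paper.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D sequences
    (e.g. accelerometer-magnitude windows)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify_activity(window, templates):
    """Nearest-template classification: `templates` maps an activity label
    (e.g. 'stairs_up', 'elevator') to a recorded reference window."""
    return min(templates, key=lambda label: dtw_distance(window, templates[label]))
```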
{
"docid": "aa4d12547a6b85a34ee818f1cc71d1da",
"text": "OBJECTIVE\nDevelopment of a new framework for the National Institute on Aging (NIA) to assess progress and opportunities toward stimulating and supporting rigorous research to address health disparities.\n\n\nDESIGN\nPortfolio review of NIA's health disparities research portfolio to evaluate NIA's progress in addressing priority health disparities areas.\n\n\nRESULTS\nThe NIA Health Disparities Research Framework highlights important factors for health disparities research related to aging, provides an organizing structure for tracking progress, stimulates opportunities to better delineate causal pathways and broadens the scope for malleable targets for intervention, aiding in our efforts to address health disparities in the aging population.\n\n\nCONCLUSIONS\nThe promise of health disparities research depends largely on scientific rigor that builds on past findings and aggressively pursues new approaches. The NIA Health Disparities Framework provides a landscape for stimulating interdisciplinary approaches, evaluating research productivity and identifying opportunities for innovative health disparities research related to aging.",
"title": ""
},
{
"docid": "a7607444b58f0e86000c7f2d09551fcc",
"text": "Background modeling is a critical component for various vision-based applications. Most traditional methods tend to be inefficient when solving large-scale problems. In this paper, we introduce sparse representation into the task of large-scale stable-background modeling, and reduce the video size by exploring its discriminative frames. A cyclic iteration process is then proposed to extract the background from the discriminative frame set. The two parts combine to form our sparse outlier iterative removal (SOIR) algorithm. The algorithm operates in tensor space to obey the natural data structure of videos. Experimental results show that a few discriminative frames determine the performance of the background extraction. Furthermore, SOIR can achieve high accuracy and high speed simultaneously when dealing with real video sequences. Thus, SOIR has an advantage in solving large-scale tasks.",
"title": ""
},
{
"docid": "677f5e0ca482bf7ea7bf929ae3adbf76",
"text": "Multilevel modulation formats, such as PAM-4, have been introduced in recent years for next generation wireline communication systems for more efficient use of the available link bandwidth. High-speed ADCs with digital signal processing (DSP) can provide robust performance for such systems to compensate for the severe channel impairment as the data rate continues to increase.",
"title": ""
},
{
"docid": "af22932b48a2ea64ecf3e5ba1482564d",
"text": "Collaborative embedded systems (CES) heavily rely on information models to understand the contextual situations they are exposed to. These information models serve different purposes. First, during development time it is necessary to model the context for eliciting and documenting the requirements that a CES is supposed to achieve. Second, information models provide information to simulate different contextual situations and CES ́s behavior in these situations. Finally, CESs need information models about their context during runtime in order to react to different contextual situations and exchange context information with other CESs. Heavyweight ontologies, based on Ontology Web Language (OWL), have already proven suitable for representing knowledge about contextual situations during runtime. Furthermore, lightweight ontologies (e.g. class diagrams) have proven their practicality for creating domain specific languages for requirements documentation. However, building an ontology (lightor heavyweight) is a non-trivial task that needs to be integrated into development methods for CESs such that it serves the above stated purposes in a seamless way. This paper introduces the requirements for the building of ontologies and proposes a method that is integrated into the engineering of CESs.",
"title": ""
},
{
"docid": "e8b2498c4a81c36f1e7816c84a5074da",
"text": "Corresponding author: Magdalena Magnowska MD, PhD Department of Gynecology, Obstetrics and Gynecologic Oncology Division of Gynecologic Oncology Poznan University of Medical Sciences 33 Polna St 60-535 Poznan, Poland Phone: +48 618 419 330 Fax: +48 616 599 645 E-mail: [email protected] 1 Department of Gynecology, Obstetrics and Gynecologic Oncology, Division of Gynecologic Oncology, Poznan University of Medical Sciences, Poznan, Poland 2 Department of Biochemistry and Pathomorphology, Chair of Gynecology, Obstetrics and Gynecologic Oncology, Poznan University of Medical Sciences, Poznan, Poland",
"title": ""
},
{
"docid": "056c5033e71eecb8a683fded0dd149bb",
"text": "There is a severe lack of knowledge regarding the brain regions involved in human sexual performance in general, and female orgasm in particular. We used [15O]-H2O positron emission tomography to measure regional cerebral blood flow (rCBF) in 12 healthy women during a nonsexual resting state, clitorally induced orgasm, sexual clitoral stimulation (sexual arousal control) and imitation of orgasm (motor output control). Extracerebral markers of sexual performance and orgasm were rectal pressure variability (RPstd) and perceived level of sexual arousal (PSA). Sexual stimulation of the clitoris (compared to rest) significantly increased rCBF in the left secondary and right dorsal primary somatosensory cortex, providing the first account of neocortical processing of sexual clitoral information. In contrast, orgasm was mainly associated with profound rCBF decreases in the neocortex when compared with the control conditions (clitoral stimulation and imitation of orgasm), particularly in the left lateral orbitofrontal cortex, inferior temporal gyrus and anterior temporal pole. Significant positive correlations were found between RPstd and rCBF in the left deep cerebellar nuclei, and between PSA and rCBF in the ventral midbrain and right caudate nucleus. We propose that decreased blood flow in the left lateral orbitofrontal cortex signifies behavioural disinhibition during orgasm in women, and that deactivation of the temporal lobe is directly related to high sexual arousal. In addition, the deep cerebellar nuclei may be involved in orgasm-specific muscle contractions while the involvement of the ventral midbrain and right caudate nucleus suggests a role for dopamine in female sexual arousal and orgasm.",
"title": ""
},
{
"docid": "5f7c0161f910f0288c86349613a9b08b",
"text": "The problem of joint feature selection across a group of related tasks has applications in many areas including biomedical informatics and computer vision. We consider the 2,1-norm regularized regression model for joint feature selection from multiple tasks, which can be derived in the probabilistic framework by assuming a suitable prior from the exponential family. One appealing feature of the 2,1-norm regularization is that it encourages multiple predictors to share similar sparsity patterns. However, the resulting optimization problem is challenging to solve due to the non-smoothness of the 2,1-norm regularization. In this paper, we propose to accelerate the computation by reformulating it as two equivalent smooth convex optimization problems which are then solved via the Nesterov’s method—an optimal first-order black-box method for smooth convex optimization. A key building block in solving the reformulations is the Euclidean projection. We show that the Euclidean projection for the first reformulation can be analytically computed, while the Euclidean projection for the second one can be computed in linear time. Empirical evaluations on several data sets verify the efficiency of the proposed algorithms.",
"title": ""
},
{
"docid": "60245551fe055b67e94def9fcff15bca",
"text": "Redundancy can, in general, improve the ability and performance of parallel manipulators by implementing the redundant degrees of freedom to optimize a secondary objective function. Almost all published researches in the area of parallel manipulators redundancy were focused on the design and analysis of redundant parallel manipulators with rigid (nonconfigurable) platforms and on grasping hands to be attached to the platforms. Conventional grippers usually are not appropriate to grasp irregular or large objects. Very few studies focused on the idea of using a configurable platform as a grasping device. This paper highlights the idea of using configurable platforms in both planar and spatial redundant parallel manipulators, and generalizes their analysis. The configurable platform is actually a closed kinematic chain of mobility equal to the degree of redundancy of the manipulator. The additional redundant degrees of freedom are used in reconfiguring the shape of the platform itself. Several designs of kinematically redundant planar and spatial parallel manipulators with configurable platform are presented. Such designs can be used as a grasping device especially for irregular or large objects or even as a micro-positioning device after grasping the object. Screw algebra is used to develop a general framework that can be adapted to analyze the kinematics of any general-geometry planar or spatial kinematically redundant parallel manipulator with configurable platform.",
"title": ""
},
{
"docid": "b7b2f1c59dfc00ab6776c6178aff929c",
"text": "Over the past four years, the Big Data and Exascale Computing (BDEC) project organized a series of five international workshops that aimed to explore the ways in which the new forms of data-centric discovery introduced by the ongoing revolution in high-end data analysis (HDA) might be integrated with the established, simulation-centric paradigm of the high-performance computing (HPC) community. Based on those meetings, we argue that the rapid proliferation of digital data generators, the unprecedented growth in the volume and diversity of the data they generate, and the intense evolution of the methods for analyzing and using that data are radically reshaping the landscape of scientific computing. The most critical problems involve the logistics of wide-area, multistage workflows that will move back and forth across the computing continuum, between the multitude of distributed sensors, instruments and other devices at the networks edge, and the centralized resources of commercial clouds and HPC centers. We suggest that the prospects for the future integration of technological infrastructures and research ecosystems need to be considered at three different levels. First, we discuss the convergence of research applications and workflows that establish a research paradigm that combines both HPC and HDA, where ongoing progress is already motivating efforts at the other two levels. Second, we offer an account of some of the problems involved with creating a converged infrastructure for peripheral environments, that is, a shared infrastructure that can be deployed throughout the network in a scalable manner to meet the highly diverse requirements for processing, communication, and buffering/storage of massive data workflows of many different scientific domains. Third, we focus on some opportunities for software ecosystem convergence in big, logically centralized facilities that execute large-scale simulations and models and/or perform large-scale data analytics. We close by offering some conclusions and recommendations for future investment and policy review.",
"title": ""
},
{
"docid": "94a5e443ff4d6a6decdf1aeeb1460788",
"text": "Teaching the computer to understand language is the major goal in the field of natural language processing. In this thesis we introduce computational methods that aim to extract language structure— e.g. grammar, semantics or syntax— from text, which provides the computer with information in order to understand language. During the last decades, scientific efforts and the increase of computational resources made it possible to come closer to the goal of understanding language. In order to extract language structure, many approaches train the computer on manually created resources. Most of these so-called supervised methods show high performance when applied to similar textual data. However, they perform inferior when operating on textual data, which are different to the one they are trained on. Whereas training the computer is essential to obtain reasonable structure from natural language, we want to avoid training the computer using manually created resources. In this thesis, we present so-called unsupervisedmethods, which are suited to learn patterns in order to extract structure from textual data directly. These patterns are learned with methods that extract the semantics (meanings) of words and phrases. In comparison to manually built knowledge bases, unsupervised methods are more flexible: they can extract structure from text of different languages or text domains (e.g. finance or medical texts), without requiring manually annotated structure. However, learning structure from text often faces sparsity issues. The reason for these phenomena is that in language many words occur only few times. If a word is seen only few times no precise information can be extracted from the text it occurs. Whereas sparsity issues cannot be solved completely, information about most words can be gained by using large amounts of data. In the first chapter, we briefly describe how computers can learn to understand language. Afterwards, we present the main contributions, list the publications this thesis is based on and give an overview of this thesis. Chapter 2 introduces the terminology used in this thesis and gives a background about natural language processing. Then, we characterize the linguistic theory on how humans understand language. Afterwards, we show how the underlying linguistic intuition can be",
"title": ""
},
{
"docid": "f60bf27f4f557ba4705b1f75b743e932",
"text": "Intelligent fashion outfit composition becomes more and more popular in these years. Some deep learning based approaches reveal competitive composition recently. However, the unexplainable characteristic makes such deep learning based approach cannot meet the the designer, businesses and consumers’ urge to comprehend the importance of different attributes in an outfit composition. To realize interpretable and customized fashion outfit compositions, we propose a partitioned embedding network to learn interpretable representations from clothing items. The overall network architecture consists of three components: an auto-encoder module, a supervised attributes module and a multi-independent module. The auto-encoder module serves to encode all useful information into the embedding. In the supervised attributes module, multiple attributes labels are adopted to ensure that different parts of the overall embedding correspond to different attributes. In the multi-independent module, adversarial operation are adopted to fulfill the mutually independent constraint. With the interpretable and partitioned embedding, we then construct an outfit composition graph and an attribute matching map. Given specified attributes description, our model can recommend a ranked list of outfit composition with interpretable matching scores. Extensive experiments demonstrate that 1) the partitioned embedding have unmingled parts which corresponding to different attributes and 2) outfits recommended by our model are more desirable in comparison with the existing methods.",
"title": ""
},
{
"docid": "a0e7712da82a338fda01e1fd0bb4a44e",
"text": "Compliance specifications concisely describe selected aspects of what a business operation should adhere to. To enable automated techniques for compliance checking, it is important that these requirements are specified correctly and precisely, describing exactly the behavior intended. Although there are rigorous mathematical formalisms for representing compliance rules, these are often perceived to be difficult to use for business users. Regardless of notation, however, there are often subtle but important details in compliance requirements that need to be considered. The main challenge in compliance checking is to bridge the gap between informal description and a precise specification of all requirements. In this paper, we present an approach which aims to facilitate creating and understanding formal compliance requirements by providing configurable templates that capture these details as options for commonly-required compliance requirements. These options are configured interactively with end-users, using question trees and natural language. The approach is implemented in the Process Mining Toolkit ProM.",
"title": ""
},
{
"docid": "b4c5337997d33fce8553709a6d727d75",
"text": "Helicopters are often used to transport supplies and equipment to hard-to-reach areas. When a load is carried via suspension cables below a helicopter, the load oscillates in response to helicopter motion and external disturbances, such as wind. This oscillation is dangerous and adversely affects control of the helicopter, especially when carrying heavy loads. To provide better control over the helicopter, one approach is to suppress the load swing dynamics using a command-filtering method called input shaping. This approach does not require real-time measurement or estimation of the load states. A simple model of a helicopter carrying a suspended load is developed and experimentally verified on a micro coaxial radio-controlled helicopter. In addition, the effectiveness of input shaping at eliminating suspended load oscillation is demonstrated on the helicopter. The proposed model may assist with the design of input-shaping controllers for a wide array of helicopters carrying suspended loads.",
"title": ""
},
{
"docid": "c5e56d3ff1fbc7ebbdb691d1db66cdf9",
"text": "Most data mining research is concerned with building high-quality classification models in isolation. In massive production systems, however, the ability to monitor and maintain performance over time while growing in size and scope is equally important. Many external factors may degrade classification performance including changes in data distribution, noise or bias in the source data, and the evolution of the system itself. A well-functioning system must gracefully handle all of these. This paper lays out a set of design principles for large-scale autonomous data mining systems and then demonstrates our application of these principles within the m6d automated ad targeting system. We demonstrate a comprehensive set of quality control processes that allow us monitor and maintain thousands of distinct classification models automatically, and to add new models, take on new data, and correct poorly-performing models without manual intervention or system disruption.",
"title": ""
},
{
"docid": "83c184c457e35e80ce7ff8012b5dcd06",
"text": "The goal of this paper is to enable a 3D “virtual-tour” of an apartment given a small set of monocular images of different rooms, as well as a 2D floor plan. We frame the problem as inference in a Markov Random Field which reasons about the layout of each room and its relative pose (3D rotation and translation) within the full apartment. This gives us accurate camera pose in the apartment for each image. What sets us apart from past work in layout estimation is the use of floor plans as a source of prior knowledge, as well as localization of each image within a bigger space (apartment). In particular, we exploit the floor plan to impose aspect ratio constraints across the layouts of different rooms, as well as to extract semantic information, e.g., the location of windows which are marked in floor plans. We show that this information can significantly help in resolving the challenging room-apartment alignment problem. We also derive an efficient exact inference algorithm which takes only a few ms per apartment. This is due to the fact that we exploit integral geometry as well as our new bounds on the aspect ratio of rooms which allow us to carve the space, significantly reducing the number of physically possible configurations. We demonstrate the effectiveness of our approach on a new dataset which contains over 200 apartments.",
"title": ""
},
{
"docid": "6882f244253e0367b85c76bd4884ddaa",
"text": "Publishers of news information are keen to amplify the reach of their content by making it as re-sharable as possible on social media. In this work we study the relationship between the concept of social deviance and the re-sharing of news headlines by network gatekeepers on Twitter. Do network gatekeepers have the same predilection for selecting socially deviant news items as professionals? Through a study of 8,000 news items across 8 major news outlets in the U.S. we predominately find that network gatekeepers re-share news items more often when they reference socially deviant events. At the same time we find and discuss exceptions for two outlets, suggesting a more complex picture where newsworthiness for networked gatekeepers may be moderated by other effects such as topicality or varying motivations and relationships with their audience.",
"title": ""
}
] |
scidocsrr
|
e8eb31adf90de8289ff64570a8df9286
|
To See and Be Seen : Celebrity Practice on Twitter
|
[
{
"docid": "53477003e3c57381201a69e7cc54cfc9",
"text": "Twitter - a microblogging service that enables users to post messages (\"tweets\") of up to 140 characters - supports a variety of communicative practices; participants use Twitter to converse with individuals, groups, and the public at large, so when conversations emerge, they are often experienced by broader audiences than just the interlocutors. This paper examines the practice of retweeting as a way by which participants can be \"in a conversation.\" While retweeting has become a convention inside Twitter, participants retweet using different styles and for diverse reasons. We highlight how authorship, attribution, and communicative fidelity are negotiated in diverse ways. Using a series of case studies and empirical data, this paper maps out retweeting as a conversational practice.",
"title": ""
}
] |
[
{
"docid": "a7f81672b718b7f5990330e3a77663a9",
"text": "CHARLES A. THIGPEN: Effects Of Forward Head And Rounded Shoulder Posture On Scapular Kinematics, Muscle Activity, And Shoulder Coordination (Under the direction of Dr. Darin A. Padua) Forward head and rounded shoulder posture (FHRSP) has been identified as a potential risk factor for the development of shoulder pain. The mechanism through which forward head and rounded shoulder can facilitate shoulder injury is not well understood. Altered scapular kinematics, muscle activity, and shoulder joint coordination due to FHRSP may lead to the development of shoulder pain. However, there is little evidence to support the influence of FHRSP on scapular kinematics, muscle activity, and shoulder joint coordination. Therefore, the purpose of this study was to compare scapular kinematics, muscle activity, and shoulder joint coordination in individuals with and without FHRSP. Eighty volunteers without shoulder pain were classified as having FHRSP or ideal posture. An electromagnetic tracking system together with hard-wired surface electromyography was used to collect three-dimensional scapular kinematics concurrently with muscle activity of the upper and lower trapezius as well as the serratus anterior during",
"title": ""
},
{
"docid": "0090413bf614e3dbeb97cfe0725446bc",
"text": "Imitation learning has proven to be useful for many real-world problems, but approaches such as behavioral cloning suffer from data mismatch and compounding error issues. One attempt to address these limitations is the DAGGER algorithm, which uses the state distribution induced by the novice to sample corrective actions from the expert. Such sampling schemes, however, require the expert to provide action labels without being fully in control of the system. This can decrease safety and, when using humans as experts, is likely to degrade the quality of the collected labels due to perceived actuator lag. In this work, we propose HG-DAGGER, a variant of DAGGER that is more suitable for interactive imitation learning from human experts in real-world systems. In addition to training a novice policy, HG-DAGGER also learns a safety threshold for a model-uncertainty-based risk metric that can be used to predict the performance of the fully trained novice in different regions of the state space. We evaluate our method on both a simulated and real-world autonomous driving task, and demonstrate improved performance over both DAGGER and behavioral cloning.",
"title": ""
},
{
"docid": "af7479706cd15bb91fc84fba4e194eec",
"text": "Wireless positioning has attracted much research attention and has become increasingly important in recent years. Wireless positioning has been found very useful for other applications besides E911 service, ranging from vehicle navigation and network optimization to resource management and automated billing. Although many positioning devices and services are currently available, it is necessary to develop an integrated and seamless positioning platform to provide a uniform solution for different network configurations. This article surveys the state-of-the-art positioning designs, focusing specifically on signal processing techniques in network-aided positioning. It serves as a tutorial for researchers and engineers interested in this rapidly growing field. It also provides new directions for future research for those who have been working in this field for many years.",
"title": ""
},
{
"docid": "ce29ddfd7b3d3a28ddcecb7a5bb3ac8e",
"text": "Steganography consist of concealing secret information in a cover object to be sent over a public communication channel. It allows two parties to share hidden information in a way that no intruder can detect the presence of hidden information. This paper presents a novel steganography approach based on pixel location matching of the same cover image. Here the information is not directly embedded within the cover image but a sequence of 4 bits of secret data is compared to the 4 most significant bits (4MSB) of the cover image pixels. The locations of the matching pixels are taken to substitute the 2 least significant bits (2LSB) of the cover image pixels. Since the data are not directly hidden in cover image, the proposed approach is more secure and difficult to break. Intruders cannot intercept it by using common LSB techniques.",
"title": ""
},
{
"docid": "14838947ee3b95c24daba5a293067730",
"text": "In this paper we address the issue of learning to rank for document retrieval. In the task, a model is automatically created with some training data and then is utilized for ranking of documents. The goodness of a model is usually evaluated with performance measures such as MAP (Mean Average Precision) and NDCG (Normalized Discounted Cumulative Gain). Ideally a learning algorithm would train a ranking model that could directly optimize the performance measures with respect to the training data. Existing methods, however, are only able to train ranking models by minimizing loss functions loosely related to the performance measures. For example, Ranking SVM and RankBoost train ranking models by minimizing classification errors on instance pairs. To deal with the problem, we propose a novel learning algorithm within the framework of boosting, which can minimize a loss function directly defined on the performance measures. Our algorithm, referred to as AdaRank, repeatedly constructs 'weak rankers' on the basis of reweighted training data and finally linearly combines the weak rankers for making ranking predictions. We prove that the training process of AdaRank is exactly that of enhancing the performance measure used. Experimental results on four benchmark datasets show that AdaRank significantly outperforms the baseline methods of BM25, Ranking SVM, and RankBoost.",
"title": ""
},
{
"docid": "b0f752f3886de8e5d4fe0f186a495c68",
"text": "Granular materials composed of a mixture of grain sizes are notoriously prone to segregation during shaking or transport. In this paper, a binary mixture theory is used to formulate a model for kinetic sieving of large and small particles in thin, rapidly flowing avalanches, which occur in many industrial and geophysical free-surface flows. The model is based on a simple percolation idea, in which the small particles preferentially fall into underlying void space and lever large particles upwards. Exact steady-state solutions have been constructed for general steady uniform velocity fields, as well as time-dependent solutions for plug-flow, that exploit the decoupling of material columns in the avalanche. All the solutions indicate the development of concentration shocks, which are frequently observed in experiments. A shock-capturing numerical algorithm is formulated to solve general problems and is used to investigate segregation in flows with weak shear.",
"title": ""
},
{
"docid": "1f28ca58aabd0e2523492308c4da3929",
"text": "Sepsis is a leading cause of in-hospital death over the world and septic shock, the most severe complication of sepsis, reaches a mortality rate as high as 50%. Early diagnosis and treatment can prevent most morbidity and mortality. In this work, Recent Temporal Patterns (RTPs) are used in conjunction with SVM classifier to build a robust yet interpretable model for early diagnosis of septic shock. This model is applied to two different prediction tasks: visit-level early diagnosis and event-level early prediction. For each setting, this model is compared against several strong baselines including atemporal method called Last-Value, six classic machine learning algorithms, and lastly, a state-of-the-art deep learning model: Long Short-Term Memory (LSTM). Our results suggest that RTP-based model can outperform all aforementioned baseline models for both diagnosis tasks. More importantly, the extracted interpretative RTPs can shed lights for the clinicians to discover progression behavior and latent patterns among septic shock patients.",
"title": ""
},
{
"docid": "dbd6f56d4337ee35c7c375b6d31e7f38",
"text": "Augmented Reality (AR) is becoming mobile. Mobile devices have many constraints but also rich new features that traditional desktop computers do not have. There are several survey papers on AR, but none is dedicated to Mobile Augmented Reality (MAR). Our work serves the purpose of closing this gap. The contents are organized with a bottom-up approach. We first present the state-of-the-art in system components including hardware platforms, software frameworks and display devices, follows with enabling technologies such as tracking and data management. We then survey the latest technologies and methods to improve run-time performance and energy efficiency for practical implementation. On top of these, we further introduce the application fields and several typical MAR applications. Finally we conclude the survey with several challenge problems, which are under exploration and require great research efforts in the future.",
"title": ""
},
{
"docid": "9f3b749ee0035fa749efa6d16528e325",
"text": "There's growing interest in developing applications for the Internet of Things. Such applications' main objective is to integrate technology into people's everyday lives, to be of service to them en masse. The form in which this integration is implemented, however, still leaves much room for improvement. Usually, the user must set parameters within the application. When the person's context changes, they have to manually reconfigure the parameters. What was meant to be a commodity in an unforeseen situation then becomes extra noise. This article describes a reference architecture that improves how people are integrated with the IoT, with smartphones doing the connecting. The resulting integration opens the way to new IoT scenarios supporting evolution towards the Internet of People.",
"title": ""
},
{
"docid": "077f9a831b5adbff1e809df01428a197",
"text": "In this paper, we simulated and analyzed about through-via's signal integrity, (SI)/power integrity, and (PI)/electromagnetic interference (EMI) that goes through the power/ground plane which was caused by the high dielectric material that supports the embedded high value capacitors. In order to evaluate through-via's effectiveness, the simulation condition was operated on the LTCC module for mixed signal system. For the circumstance SI, delay time of signal line and signal quality significantly decrease because of higher parasitic capacitance between through-via's and anti-pads. However, in a situation where the dielectric material is chosen, the EMI's characteristic power/ground plan with embedded high dielectric material shows a better characteristic than when the low dielectric material was chosen. As a result, if the high dielectric material is applied on LTCC module, the mixed module packaging that is made with the digital IC and RF component will be realized as the optimistic design. The simulation structure takes the LTCC process designer guidebook as a basic structure and uses the HFSS/designer tool. When the dielectric constant uses 7.8 and 500, the through-via's that pass through the LTCC module are delay time of 41.4 psec and 56, respectively. When the dielectric constant of 500 is compared with 7.8, the power/ground plane impedance shows a trait lower than several GHz range and effectiveness in the rejection of the resonance mode. When uses the dielectric constant is 500, the EMI level is 7.8 and it is prove that the EMI level improves at maximum 20 dB V/m.",
"title": ""
},
{
"docid": "58b4320c2cf52c658275eaa4748dede5",
"text": "Backing-out and heading-out maneuvers in perpendicular or angle parking lots are one of the most dangerous maneuvers, especially in cases where side parked cars block the driver view of the potential traffic flow. In this paper, a new vision-based Advanced Driver Assistance System (ADAS) is proposed to automatically warn the driver in such scenarios. A monocular grayscale camera was installed at the back-right side of a vehicle. A Finite State Machine (FSM) defined according to three CAN Bus variables and a manual signal provided by the user is used to handle the activation/deactivation of the detection module. The proposed oncoming traffic detection module computes spatio-temporal images from a set of predefined scan-lines which are related to the position of the road. A novel spatio-temporal motion descriptor is proposed (STHOL) accounting for the number of lines, their orientation and length of the spatio-temporal images. Some parameters of the proposed descriptor are adapted for nighttime conditions. A Bayesian framework is then used to trigger the warning signal using multivariate normal density functions. Experiments are conducted on image data captured from a vehicle parked at different location of an urban environment, including both daytime and nighttime lighting conditions. We demonstrate that the proposed approach provides robust results maintaining processing rates close to real time.",
"title": ""
},
{
"docid": "faa43d139c23620e47aef56308f8a0fb",
"text": "In the present research was to investigate the levels and the prevalence of academic procrastination on high school, undergraduate and graduate students. In this respect, Procrastination Assessment Scale-Student (PASS) was administered to a total of 448 students who were 149 (83 female; 66 male) high-school, 150 (80 female; 70 male) undergraduate and 148 (84 female; 64 male) graduate students. The average age was 15.5 years old (SD = .56) for High-school, 20.4 years old (SD = 1.71) for Undergraduate, and 25.5 years old (SD = 2.32) for Graduate students. Results showed a significant difference among the academic levels of the students. Specifically, undergraduate students claimed to procrastinate more than graduate and high school students. High school and undergraduate students claimed to be nearly always or always procrastinator on studying for exams, while graduate students procrastinate more on writing term papers.",
"title": ""
},
{
"docid": "db480610388d64d8f5ad556b6f5651bd",
"text": "An inter-band carrier aggregation (CA) quadplexer allows doubling the download rates and is therefore an important element in upcoming mobile phone systems. In a duplexer design, both Tx and Rx filters should behave like an open-circuit for respective counter-bands at the antenna port. Additionally, they have to attain high suppression in their counter-band to get good in-band isolation. The situation gets more difficult and challenging for a CA quadplexer. Besides an open-circuit requirement for now 3 bands simultaneously, each Tx filter should achieve high suppression in the frequency range of both Rx filters in order to get high in-band and cross-isolation. The same applies for each Rx filter. Generally, it is not sufficient to simply expand the well-known topology concepts of duplexers rather a different design methodology is required. In this paper we will discuss several approaches to realize a CA quadplexer and present an implementation of a B25-B4 CA quadplexer sized 3.6×2.0×0.8mm3 utilizing the company's advanced acoustic wave filtering technologies: bulk acoustic wave (BAW) and surface acoustic wave (SAW).",
"title": ""
},
{
"docid": "f321e510630a9997ad2759d7789b3fc7",
"text": "Dental ceramics are presented within a simplifying framework allowing for understanding of their composition and development. The meaning of strength and details of the fracture process are explored, and recommendations are given regarding making structural comparisons among ceramics. Assessment of clinical survival data is dealt with, and literature is reviewed on the clinical behavior of metal-ceramic and all-ceramic systems. Practical aspects are presented regarding the choice and use of dental ceramics.",
"title": ""
},
{
"docid": "21edf22bbe51ce6a6d429fee59985fc5",
"text": "This paper details filtering subsystem for a tetra-vision based pedestrian detection system. The complete system is based on the use of both visible and far infrared cameras; in an initial phase it produces a list of areas of attention in the images which can contain pedestrians. This list is furtherly refined using symmetry-based assumptions. Then, this results is fed to a number of independent validators that evaluate the presence of human shapes inside the areas of attention. Histogram of oriented gradients and Support Vector Machines are used as a filter and demonstrated to be able to successfully classify up to 91% of pedestrians in the areas of attention.",
"title": ""
},
{
"docid": "0f0a869bfbfaf3c00f6ea4db9c5eda1d",
"text": "Robotic Autonomy is a seven-week, hands-on introduction to robotics designed for high school students. The course presents a broad survey of robotics, beginning with mechanism and electronics and ending with robot behavior, navigation and remote teleoperation. During the summer of 2002, Robotic Autonomy was taught to twenty eight students at Carnegie Mellon West in cooperation with NASA/Ames (Moffett Field, CA). The educational robot and course curriculum were the result of a ground-up design effort chartered to develop an effective and low-cost robot for secondary level education and home use. Cooperation between Carnegie Mellon’s Robotics Institute, Gogoco, LLC. and Acroname Inc. yielded notable innovations including a fast-build robot construction kit, indoor/outdoor terrainability, CMOS vision-centered sensing, back-EMF motor speed control and a Java-based robot programming interface. In conjunction with robot and curriculum design, the authors at the Robotics Institute and the University of Pittsburgh’s Learning Research and Development Center planned a methodology for evaluating the educational efficacy of Robotic Autonomy, implementing both formative and summative evaluations of progress as well as an indepth, one week ethnography to identify micro-genetic mechanisms of learning that would inform the broader evaluation. This article describes the robot and curriculum design processes and then the educational analysis methodology and statistically significant results, demonstrating the positive impact of Robotic Autonomy on student learning well beyond the boundaries of specific technical concepts in robotics.",
"title": ""
},
{
"docid": "00d76380bcc967a5b7eee4c8903cedf1",
"text": "This paper demonstrates models that were designed and implemented to simulate slotted ALOHA multiple access computer network protocol. The models are spreadsheet-based simulating e-forms that were designed for students use in college level data communication and networking courses. Specifically, three models for simulating this protocol are fully implemented using spreadsheets. The features of these models are simplicity and quickness of implementation compared with other implementation techniques. These models assisted instructors to develop educational objects that in turn will help students for better understanding and exploring of the scientific concepts related to computer protocols by the aid of visual and interactive spreadsheet-based e-forms. Moreover, advanced spreadsheet techniques such as imagery integration, hyperlinks, conditional structures, conditional formats, and charts insetting, to simulate scientific notions that are taught to undergraduate students were exploited in these models. The models designing technique is characterized by simplicity, flexibility, and affordability. The technique can be applied and used in many disciplines of education, business, science, and technology. Generally, the developed computational e-forms can be used by instructors to illustrate topics in attractive fashions. In addition, students and learners can use the developed educational objects without instructor supervision in self-education or e-learning environments.",
"title": ""
},
{
"docid": "27a4c7681dba525859bc2f153675b190",
"text": "Ladder diagram (LD) and instruction list (IL) have been widely used in industries as programming languages for PLC (programmable logic controller). The LD is similar to the electrical schematic diagram, and can represent control logic explicitly. However, LD programs can not be executed directly by PLC. On the other hand the IL which is similar to the assemble language can be processed directly by PLC. Thus, it is necessary to study transformation algorithm from the LD to the IL. This paper proposes a transformation algorithm used to transform the LD into the IL for PLC systems. The transformation algorithm uses an AOV digraph to represent the LD program, and then realizes the transformation by postorder traversing binary trees built form the AOV digraph. In this paper, first some basic concepts and data structures in the algorithm are presented. Second, this paper describes main ideas and detailed steps of the transformation algorithm. A transformation example in this paper shows that the proposed algorithm is correct and has practicability",
"title": ""
},
{
"docid": "a234950dce1d69ee1cfb1a9fa4231c82",
"text": "RDF is the format of choice for representing Semantic Web data. RDF graphs may be large and their structure is heterogeneous and complex, making them very hard to explore and understand. To help users discover valuable insights from RDF graph, we have developed Dagger, a tool which automatically recommends interesting aggregation queries over the RDF graphs; Dagger evaluates the queries and graphically shows their results to the user, in the ranked order of their interestingness. We propose to demonstrate Dagger to the ISWC audience, based on popular real and synthetic RDF graphs.",
"title": ""
},
{
"docid": "ab662b1dd07a7ae868f70784408e1ce1",
"text": "We use autoencoders to create low-dimensional embeddings of underlying patient phenotypes that we hypothesize are a governing factor in determining how different patients will react to different interventions. We compare the performance of autoencoders that take fixed length sequences of concatenated timesteps as input with a recurrent sequence-to-sequence autoencoder. We evaluate our methods on around 35,500 patients from the latest MIMIC III dataset from Beth Israel Deaconess Hospital.",
"title": ""
}
] |
scidocsrr
|
bedd771bc6d2a805c72aa585df3d7340
|
Reviewing CS1 exam question content
|
[
{
"docid": "05c82f9599b431baa584dd1e6d7dfc3e",
"text": "It is a common conception that CS1 is a very difficult course and that failure rates are high. However, until now there has only been anecdotal evidence for this claim. This article reports on a survey among institutions around the world regarding failure rates in introductory programming courses. The article describes the design of the survey and the results. The number of institutions answering the call for data was unfortunately rather low, so it is difficult to make firm conclusions. It is our hope that this article can be the starting point for a systematic collection of data in order to find solid proof of the actual failure and pass rates of CS1.",
"title": ""
}
] |
[
{
"docid": "9e1c3d4a8bbe211b85b19b38e39db28e",
"text": "This paper presents a novel context-based scene recognition method that enables mobile robots to recognize previously observed topological places in known environments or categorize previously unseen places in new environments. We achieve this by introducing the Histogram of Oriented Uniform Patterns (HOUP), which provides strong discriminative power for place recognition, while offering a significant level of generalization for place categorization. HOUP descriptors are used for image representation within a subdivision framework, where the size and location of sub-regions are determined using an informative feature selection method based on kernel alignment. Further improvement is achieved by developing a similarity measure that accounts for perceptual aliasing to eliminate the effect of indistinctive but visually similar regions that are frequently present in outdoor and indoor scenes. An extensive set of experiments reveals the excellent performance of our method on challenging categorization and recognition tasks. Specifically, our proposed method outperforms the current state of the art on two place categorization datasets with 15 and 5 place categories, and two topological place recognition datasets, with 5 and 27 places.",
"title": ""
},
{
"docid": "853edc6c6564920d0d2b69e0e2a63ad0",
"text": "This study evaluates the environmental performance and discounted costs of the incineration and landfilling of municipal solid waste that is ready for the final disposal while accounting for existing waste diversion initiatives, using the life cycle assessment (LCA) methodology. Parameters such as changing waste generation quantities, diversion rates and waste composition were also considered. Two scenarios were assessed in this study on how to treat the waste that remains after diversion. The first scenario is the status quo, where the entire residual waste was landfilled whereas in the second scenario approximately 50% of the residual waste was incinerated while the remainder is landfilled. Electricity was produced in each scenario. Data from the City of Toronto was used to undertake this study. Results showed that the waste diversion initiatives were more effective in reducing the organic portion of the waste, in turn, reducing the net electricity production of the landfill while increasing the net electricity production of the incinerator. Therefore, the scenario that incorporated incineration performed better environmentally and contributed overall to a significant reduction in greenhouse gas emissions because of the displacement of power plant emissions; however, at a noticeably higher cost. Although landfilling proves to be the better financial option, it is for the shorter term. The landfill option would require the need of a replacement landfill much sooner. The financial and environmental effects of this expenditure have yet to be considered.",
"title": ""
},
{
"docid": "fa855a3d92bf863c33b269383ddde081",
"text": "A network supporting deep unsupervised learning is present d. The network is an autoencoder with lateral shortcut connections from the enc oder to decoder at each level of the hierarchy. The lateral shortcut connections al low the higher levels of the hierarchy to focus on abstract invariant features. Wher eas autoencoders are analogous to latent variable models with a single layer of st ochastic variables, the proposed network is analogous to hierarchical latent varia bles models. Learning combines denoising autoencoder and denoising sou rces separation frameworks. Each layer of the network contributes to the cos t function a term which measures the distance of the representations produce d by the encoder and the decoder. Since training signals originate from all leve ls of the network, all layers can learn efficiently even in deep networks. The speedup offered by cost terms from higher levels of the hi erarchy and the ability to learn invariant features are demonstrated in exp eriments.",
"title": ""
},
{
"docid": "d42aaf5c7c4f7982c1630e7b95b0377a",
"text": "In this paper we analyze our recent research on the use of document analysis techniques for metadata extraction from PDF papers. We describe a package that is designed to extract basic metadata from these documents. The package is used in combination with a digital library software suite to easily build personal digital libraries. The proposed software is based on a suitable combination of several techniques that include PDF parsing, low level document image processing, and layout analysis. In addition, we use the information gathered from a widely known citation database (DBLP) to assist the tool in the difficult task of author identification. The system is tested on some paper collections selected from recent conference proceedings.",
"title": ""
},
{
"docid": "6c81b1fe36a591b3b86a5e912a8792c1",
"text": "Mobile phones, sensors, patients, hospitals, researchers, providers and organizations are nowadays, generating huge amounts of healthcare data. The real challenge in healthcare systems is how to find, collect, analyze and manage information to make people's lives healthier and easier, by contributing not only to understand new diseases and therapies but also to predict outcomes at earlier stages and make real-time decisions. In this paper, we explain the potential benefits of big data to healthcare and explore how it improves treatment and empowers patients, providers and researchers. We also describe the ability of reality mining in collecting large amounts of data to understand people's habits, detect and predict outcomes, and illustrate the benefits of big data analytics through five effective new pathways that could be adopted to promote patients' health, enhance medicine, reduce cost and improve healthcare value and quality. We cover some big data solutions in healthcare and we shed light on implementations, such as Electronic Healthcare Record (HER) and Electronic Healthcare Predictive Analytics (e-HPA) in US hospitals. Furthermore, we complete the picture by highlighting some challenges that big data analytics faces in healthcare.",
"title": ""
},
{
"docid": "073f129a34957b19c6d9af96c869b9ab",
"text": "The stability of dc microgrids (MGs) depends on the control strategy adopted for each mode of operation. In an islanded operation mode, droop control is the basic method for bus voltage stabilization when there is no communication among the sources. In this paper, it is shown the consequences of droop implementation on the voltage stability of dc power systems, whose loads are active and nonlinear, e.g., constant power loads. The set of parallel sources and their corresponding transmission lines are modeled by an ideal voltage source in series with an equivalent resistance and inductance. This approximate model allows performing a nonlinear stability analysis to predict the system qualitative behavior due to the reduced number of differential equations. Additionally, nonlinear analysis provides analytical stability conditions as a function of the model parameters and it leads to a design guideline to build reliable (MGs) based on safe operating regions.",
"title": ""
},
{
"docid": "f086fef6b9026a67e73cd6f892aa1c37",
"text": "Shoulder girdle movement is critical for stabilizing and orientating the arm during daily activities. During robotic arm rehabilitation with stroke patients, the robot must assist movements of the shoulder girdle. Shoulder girdle movement is characterized by a highly nonlinear function of the humeral orientation, which is different for each person. Hence it is improper to use pre-calculated shoulder girdle movement. If an exoskeleton robot cannot mimic the patient's shoulder girdle movement well, the robot axes will not coincide with the patient's, which brings reduced range of motion (ROM) and discomfort to the patients. A number of exoskeleton robots have been developed to assist shoulder girdle movement. The shoulder mechanism of these robots, along with the advantages and disadvantages, are introduced. In this paper, a novel shoulder mechanism design of exoskeleton robot is proposed, which can fully mimic the patient's shoulder girdle movement in real time.",
"title": ""
},
{
"docid": "fab33f2e32f4113c87e956e31674be58",
"text": "We consider the problem of decomposing the total mutual information conveyed by a pair of predictor random variables about a target random variable into redundant, uniqueand synergistic contributions. We focus on the relationship be tween “redundant information” and the more familiar information theoretic notions of “common information.” Our main contri bution is an impossibility result. We show that for independent predictor random variables, any common information based measure of redundancy cannot induce a nonnegative decompositi on of the total mutual information. Interestingly, this entai ls that any reasonable measure of redundant information cannot be deri ved by optimization over a single random variable. Keywords—common and private information, synergy, redundancy, information lattice, sufficient statistic, partial information decomposition",
"title": ""
},
{
"docid": "f128c1903831e9310d0ed179838d11d1",
"text": "A partially corporate feeding waveguide located below the radiating waveguide is introduced to a waveguide slot array to enhance the bandwidth of gain. A PMC termination associated with the symmetry of the feeding waveguide as well as uniform excitation is newly proposed for realizing dense and uniform slot arrangement free of high sidelobes. To exploit the bandwidth of the feeding circuit, the 4 × 4-element subarray is also developed for wider bandwidth by using standing-wave excitation. A 16 × 16-element array with uniform excitation is fabricated in the E-band by diffusion bonding of laminated thin copper plates which has the advantages of high precision and high mass-productivity. The antenna gain of 32.4 dBi and the antenna efficiency of 83.0% are measured at the center frequency. The 1 dB-down gain bandwidth is no less than 9.0% and a wideband characteristic is achieved.",
"title": ""
},
{
"docid": "71da7722f6ce892261134bd60ca93ab7",
"text": "Semantically annotated data, using markup languages like RDFa and Microdata, has become more and more publicly available in the Web, especially in the area of e-commerce. Thus, a large amount of structured product descriptions are freely available and can be used for various applications, such as product search or recommendation. However, little efforts have been made to analyze the categories of the available product descriptions. Although some products have an explicit category assigned, the categorization schemes vary a lot, as the products originate from thousands of different sites. This heterogeneity makes the use of supervised methods, which have been proposed by most previous works, hard to apply. Therefore, in this paper, we explain how distantly supervised approaches can be used to exploit the heterogeneous category information in order to map the products to set of target categories from an existing product catalogue. Our results show that, even though this task is by far not trivial, we can reach almost 56% accuracy for classifying products into 37 categories.",
"title": ""
},
{
"docid": "6afe0360f074304e9da9c100e28e9528",
"text": "Unikernels are a promising alternative for application deployment in cloud platforms. They comprise a very small footprint, providing better deployment agility and portability among virtualization platforms. Similar to Linux containers, they are a lightweight alternative for deploying distributed applications based on microservices. However, the comparison of unikernels with other virtualization options regarding the concurrent provisioning of instances, as in the case of microservices-based applications, is still lacking. This paper provides an evaluation of KVM (Virtual Machines), Docker (Containers), and OSv (Unikernel), when provisioning multiple instances concurrently in an OpenStack cloud platform. We confirmed that OSv outperforms the other options and also identified opportunities for optimization.",
"title": ""
},
{
"docid": "6ed5198b9b0364f41675b938ec86456f",
"text": "Artificial intelligence (AI) will have many profound societal effects It promises potential benefits (and may also pose risks) in education, defense, business, law, and science In this article we explore how AI is likely to affect employment and the distribution of income. We argue that AI will indeed reduce drastically the need fol human toil We also note that some people fear the automation of work hy machines and the resulting unemployment Yet, since the majority of us probably would rather use our time for activities other than our present jobs, we ought thus to greet the work-eliminating consequences of AI enthusiastically The paper discusses two reasons, one economic and one psychological, for this paradoxical apprehension We conclude with a discussion of problems of moving toward the kind of economy that will he enahled by developments in AI ARTIFICIAL INTELLIGENCE [Al] and other developments in computer science are giving birth to a dramatically different class of machinesPmachines that can perform tasks requiring reasoning, judgment, and perception that previously could be done only by humans. Will these I am grateful for the helpful comments provided by many people Specifically I would like to acknowledge the advice teceived from Sandra Cook and Victor Walling of SRI, Wassily Leontief and Faye Duchin of the New York University Institute for Economic Analysis, Margaret Boden of The University of Sussex, Henry Levin and Charles Holloway of Stanford University, James Albus of the National Bureau of Standards, and Peter Hart of Syntelligence Herbert Simon, of CarnegieMellon Univetsity, wrote me extensive criticisms and rebuttals of my arguments Robert Solow of MIT was quite skeptical of my premises, but conceded nevertheless that my conclusions could possibly follow from them if certain other economic conditions were satisfied. Save1 Kliachko of SRI improved my composition and also referred me to a prescient article by Keynes (Keynes, 1933) who, a half-century ago, predicted an end to toil within one hundred years machines reduce the need for human toil and thus cause unemployment? There are two opposing views in response to this question Some claim that AI is not really very different from other technologies that have supported automation and increased productivitytechnologies such as mechanical engineering, ele&onics, control engineering, and operations rcsearch. Like them, AI may also lead ultimately to an expanding economy with a concomitant expansion of employment opportunities. At worst, according to this view, thcrc will be some, perhaps even substantial shifts in the types of jobs, but certainly no overall reduction in the total number of jobs. In my opinion, however, such an out,come is based on an overly conservative appraisal of the real potential of artificial intelligence. Others accept a rather strong hypothesis with regard to AI-one that sets AI far apart from previous labor-saving technologies. Quite simply, this hypothesis affirms that anything people can do, AI can do as well. Cert,ainly AI has not yet achieved human-level performance in many important functions, but many AI scientists believe that artificial intelligence inevitably will equal and surpass human mental abilities-if not in twenty years, then surely in fifty. 
The main conclusion of this view of AI is that, even if AI does create more work, this work can also be performed by AI devices without necessarily implying more jobs for humans Of course, the mcrc fact that some work can be performed automatically does not make it inevitable that it, will be. Automation depends on many factorsPeconomic, political, and social. The major economic parameter would seem to be the relative cost of having either people or machines execute a given task (at a specified rate and level of quality) In THE AI MAGAZINE Summer 1984 5 AI Magazine Volume 5 Number 2 (1984) (© AAAI)",
"title": ""
},
{
"docid": "9b628f47102a0eee67e469e223ece837",
"text": "We present a method for automatically extracting from a running system an indexable signature that distills the essential characteristic from a system state and that can be subjected to automated clustering and similarity-based retrieval to identify when an observed system state is similar to a previously-observed state. This allows operators to identify and quantify the frequency of recurrent problems, to leverage previous diagnostic efforts, and to establish whether problems seen at different installations of the same site are similar or distinct. We show that the naive approach to constructing these signatures based on simply recording the actual ``raw'' values of collected measurements is ineffective, leading us to a more sophisticated approach based on statistical modeling and inference. Our method requires only that the system's metric of merit (such as average transaction response time) as well as a collection of lower-level operational metrics be collected, as is done by existing commercial monitoring tools. Even if the traces have no annotations of prior diagnoses of observed incidents (as is typical), our technique successfully clusters system states corresponding to similar problems, allowing diagnosticians to identify recurring problems and to characterize the ``syndrome'' of a group of problems. We validate our approach on both synthetic traces and several weeks of production traces from a customer-facing geoplexed 24 x 7 system; in the latter case, our approach identified a recurring problem that had required extensive manual diagnosis, and also aided the operators in correcting a previous misdiagnosis of a different problem.",
"title": ""
},
{
"docid": "7121d534b758bab829e1db31d0ce2e43",
"text": "With the increased complexity of modern computer attacks, there is a need for defenders not only to detect malicious activity as it happens, but also to predict the specific steps that will be taken by an adversary when performing an attack. However this is still an open research problem, and previous research in predicting malicious events only looked at binary outcomes (eg. whether an attack would happen or not), but not at the specific steps that an attacker would undertake. To fill this gap we present Tiresias xspace, a system that leverages Recurrent Neural Networks (RNNs) to predict future events on a machine, based on previous observations. We test Tiresias xspace on a dataset of 3.4 billion security events collected from a commercial intrusion prevention system, and show that our approach is effective in predicting the next event that will occur on a machine with a precision of up to 0.93. We also show that the models learned by Tiresias xspace are reasonably stable over time, and provide a mechanism that can identify sudden drops in precision and trigger a retraining of the system. Finally, we show that the long-term memory typical of RNNs is key in performing event prediction, rendering simpler methods not up to the task.",
"title": ""
},
{
"docid": "aef76a8375b12f4c38391093640a704a",
"text": "Storytelling plays an important role in human life, from everyday communication to entertainment. Interactive storytelling (IS) offers its audience an opportunity to actively participate in the story being told, particularly in video games. Managing the narrative experience of the player is a complex process that involves choices, authorial goals and constraints of a given story setting (e.g., a fairy tale). Over the last several decades, a number of experience managers using artificial intelligence (AI) methods such as planning and constraint satisfaction have been developed. In this paper, we extend existing work and propose a new AI experience manager called player-specific automated storytelling (PAST), which uses automated planning to satisfy the story setting and authorial constraints in response to the player's actions. Out of the possible stories algorithmically generated by the planner in response, the one that is expected to suit the player's style best is selected. To do so, we employ automated player modeling. We evaluate PAST within a video-game domain with user studies and discuss the effects of combining planning and player modeling on the player's perception of agency.",
"title": ""
},
{
"docid": "9086d8f1d9a0978df0bd93cff4bce20a",
"text": "Australian government enterprises have shown a significant interest in the cloud technology-enabled enterprise transformation. Australian government suggests the whole-of-a-government strategy to cloud adoption. The challenge is how best to realise this cloud adoption strategy for the cloud technology-enabled enterprise transformation? The cloud adoption strategy realisation requires concrete guidelines and a comprehensive practical framework. This paper proposes the use of an agile enterprise architecture framework to developing and implementing the adaptive cloud technology-enabled enterprise architecture in the Australian government context. The results of this paper indicate that a holistic strategic agile enterprise architecture approach seems appropriate to support the strategic whole-of-a-government approach to cloud technology-enabled government enterprise transformation.",
"title": ""
},
{
"docid": "e0a314eb1fe221791bc08094d0c04862",
"text": "The present study was undertaken with the objective to explore the influence of the five personality dimensions on the information seeking behaviour of the students in higher educational institutions. Information seeking behaviour is defined as the sum total of all those activities that are usually undertaken by the students of higher education to collect, utilize and process any kind of information needed for their studies. Data has been collected from 600 university students of the three broad disciplines of studies from the Universities of Eastern part of India (West Bengal). The tools used for the study were General Information schedule (GIS), Information Seeking Behaviour Inventory (ISBI) and NEO-FFI Personality Inventory. Product moment correlation has been worked out between the scores in ISBI and those in NEO-FFI Personality Inventory. The findings indicated that the five personality traits are significantly correlated to all the dimensions of information seeking behaviour of the university students.",
"title": ""
},
{
"docid": "a4ed5c4f87e4faa357f0dec0f5c0e354",
"text": "In today's information age, information sharing and transfer has increased exponentially. The threat of an intruder accessing secret information has been an ever existing concern for the data communication experts. Cryptography and steganography are the most widely used techniques to overcome this threat. Cryptography involves converting a message text into an unreadable cipher. On the other hand, steganography embeds message into a cover media and hides its existence. Both these techniques provide some security of data neither of them alone is secure enough for sharing information over an unsecure communication channel and are vulnerable to intruder attacks. Although these techniques are often combined together to achieve higher levels of security but still there is a need of a highly secure system to transfer information over any communication media minimizing the threat of intrusion. In this paper we propose an advanced system of encrypting data that combines the features of cryptography, steganography along with multimedia data hiding. This system will be more secure than any other these techniques alone and also as compared to steganography and cryptography combined systems Visual steganography is one of the most secure forms of steganography available today. It is most commonly implemented in image files. However embedding data into image changes its color frequencies in a predictable way. To overcome this predictability, we propose the concept of multiple cryptography where the data will be encrypted into a cipher and the cipher will be hidden into a multimedia image file in encrypted format. We shall use traditional cryptographic techniques to achieve data encryption and visual steganography algorithms will be used to hide the encrypted data.",
"title": ""
},
{
"docid": "7ba3f13f58c4b25cc425b706022c1f2b",
"text": "Detecting pedestrian has been arguably addressed as a special topic beyond general object detection. Although recent deep learning object detectors such as Fast/Faster R-CNN [1,2] have shown excellent performance for general object detection, they have limited success for detecting pedestrian, and previous leading pedestrian detectors were in general hybrid methods combining hand-crafted and deep convolutional features. In this paper, we investigate issues involving Faster R-CNN [2] for pedestrian detection. We discover that the Region Proposal Network (RPN) in Faster R-CNN indeed performs well as a stand-alone pedestrian detector, but surprisingly, the downstream classifier degrades the results. We argue that two reasons account for the unsatisfactory accuracy: (i) insufficient resolution of feature maps for handling small instances, and (ii) lack of any bootstrapping strategy for mining hard negative examples. Driven by these observations, we propose a very simple but effective baseline for pedestrian detection, using an RPN followed by boosted forests on shared, high-resolution convolutional feature maps. We comprehensively evaluate this method on several benchmarks (Caltech, INRIA, ETH, and KITTI), presenting competitive accuracy and good speed. Code will be made publicly available.",
"title": ""
},
{
"docid": "953d1b368a4a6fb09e6b34e3131d7804",
"text": "The activation of the Deep Convolutional Neural Networks hidden layers can be successfully used as features, often referred as Deep Features, in generic visual similarity search tasks. Recently scientists have shown that permutation-based methods offer very good performance in indexing and supporting approximate similarity search on large database of objects. Permutation-based approaches represent metric objects as sequences (permutations) of reference objects, chosen from a predefined set of data. However, associating objects with permutations might have a high cost due to the distance calculation between the data objects and the reference objects. In this work, we propose a new approach to generate permutations at a very low computational cost, when objects to be indexed are Deep Features. We show that the permutations generated using the proposed method are more effective than those obtained using pivot selection criteria specifically developed for permutation-based methods.",
"title": ""
}
] |
scidocsrr
|
9e5015fdd74d1cc798e4ddae4dd3f0e1
|
RRA: Recurrent Residual Attention for Sequence Learning
|
[
{
"docid": "dadd12e17ce1772f48eaae29453bc610",
"text": "Publications Learning Word Vectors for Sentiment Analysis. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. The 49 th Annual Meeting of the Association for Computational Linguistics (ACL 2011). Spectral Chinese Restaurant Processes: Nonparametric Clustering Based on Similarities. Richard Socher, Andrew Maas, and Christopher D. Manning. The 15 th International Conference on Artificial Intelligence and Statistics (AISTATS 2010). A Probabilistic Model for Semantic Word Vectors. Andrew L. Maas and Andrew Y. Ng. NIPS 2010 Workshop on Deep Learning and Unsupervised Feature Learning. One-Shot Learning with Bayesian Networks. Andrew L. Maas and Charles Kemp. Proceedings of the 31 st",
"title": ""
}
] |
[
{
"docid": "97e4facde730c97a080ed160682f5dd0",
"text": "The application of deep learning to symbolic domains remains an active research endeavour. Graph neural networks (GNN), consisting of trained neural modules which can be arranged in different topologies at run time, are sound alternatives to tackle relational problems which lend themselves to graph representations. In this paper, we show that GNNs are capable of multitask learning, which can be naturally enforced by training the model to refine a single set of multidimensional embeddings ∈ R and decode them into multiple outputs by connecting MLPs at the end of the pipeline. We demonstrate the multitask learning capability of the model in the relevant relational problem of estimating network centrality measures, i.e. is vertex v1 more central than vertex v2 given centrality c?. We then show that a GNN can be trained to develop a lingua franca of vertex embeddings from which all relevant information about any of the trained centrality measures can be decoded. The proposed model achieves 89% accuracy on a test dataset of random instances with up to 128 vertices and is shown to generalise to larger problem sizes. The model is also shown to obtain reasonable accuracy on a dataset of real world instances with up to 4k vertices, vastly surpassing the sizes of the largest instances with which the model was trained (n = 128). Finally, we believe that our contributions attest to the potential of GNNs in symbolic domains in general and in relational learning in particular.",
"title": ""
},
{
"docid": "c6d3f20e9d535faab83fb34cec0fdb5b",
"text": "Over the past two decades several attempts have been made to address the problem of face recognition and a voluminous literature has been produced. Current face recognition systems are able to perform very well in controlled environments e.g. frontal face recognition, where face images are acquired under frontal pose with strict constraints as defined in related face recognition standards. However, in unconstrained situations where a face may be captured in outdoor environments, under arbitrary illumination and large pose variations these systems fail to work. With the current focus of research to deal with these problems, much attention has been devoted in the facial feature extraction stage. Facial feature extraction is the most important step in face recognition. Several studies have been made to answer the questions like what features to use, how to describe them and several feature extraction techniques have been proposed. While many comprehensive literature reviews exist for face recognition a complete reference for different feature extraction techniques and their advantages/disadvantages with regards to a typical face recognition task in unconstrained scenarios is much needed. In this chapter we present a comprehensive review of the most relevant feature extraction techniques used in 2D face recognition and introduce a new feature extraction technique termed as Face-GLOH-signature to be used in face recognition for the first time (Sarfraz and Hellwich, 2008), which has a number of advantages over the commonly used feature descriptions in the context of unconstrained face recognition. The goal of feature extraction is to find a specific representation of the data that can highlight relevant information. This representation can be found by maximizing a criterion or can be a pre-defined representation. Usually, a face image is represented by a high dimensional vector containing pixel values (holistic representation) or a set of vectors where each vector summarizes the underlying content of a local region by using a high level 1",
"title": ""
},
{
"docid": "2c15bef67e6bdbfaf66e1164f8dddf52",
"text": "Social behavior is ordinarily treated as being under conscious (if not always thoughtful) control. However, considerable evidence now supports the view that social behavior often operates in an implicit or unconscious fashion. The identifying feature of implicit cognition is that past experience influences judgment in a fashion not introspectively known by the actor. The present conclusion--that attitudes, self-esteem, and stereotypes have important implicit modes of operation--extends both the construct validity and predictive usefulness of these major theoretical constructs of social psychology. Methodologically, this review calls for increased use of indirect measures--which are imperative in studies of implicit cognition. The theorized ordinariness of implicit stereotyping is consistent with recent findings of discrimination by people who explicitly disavow prejudice. The finding that implicit cognitive effects are often reduced by focusing judges' attention on their judgment task provides a basis for evaluating applications (such as affirmative action) aimed at reducing such unintended discrimination.",
"title": ""
},
{
"docid": "0ea6d4a02a4013a0f9d5aa7d27b5a674",
"text": "Recently, there has been growing interest in social network analysis. Graph models for social network analysis are usually assumed to be a deterministic graph with fixed weights for its edges or nodes. As activities of users in online social networks are changed with time, however, this assumption is too restrictive because of uncertainty, unpredictability and the time-varying nature of such real networks. The existing network measures and network sampling algorithms for complex social networks are designed basically for deterministic binary graphs with fixed weights. This results in loss of much of the information about the behavior of the network contained in its time-varying edge weights of network, such that is not an appropriate measure or sample for unveiling the important natural properties of the original network embedded in the varying edge weights. In this paper, we suggest that using stochastic graphs, in which weights associated with the edges are random variables, can be a suitable model for complex social network. Once the network model is chosen to be stochastic graphs, every aspect of the network such as path, clique, spanning tree, network measures and sampling algorithms should be treated stochastically. In particular, the network measures should be reformulated and new network sampling algorithms must be designed to reflect the stochastic nature of the network. In this paper, we first define some network measures for stochastic graphs, and then we propose four sampling algorithms based on learning automata for stochastic graphs. In order to study the performance of the proposed sampling algorithms, several experiments are conducted on real and synthetic stochastic graphs. The performances of these algorithms are studied in terms of Kolmogorov-Smirnov D statistics, relative error, Kendall’s rank correlation coefficient and relative cost.",
"title": ""
},
{
"docid": "746895b98974415f71912ed5dcd6ed61",
"text": "In the present study, Hu-Mikβ1, a humanized mAb directed at the shared IL-2/IL-15Rβ subunit (CD122) was evaluated in patients with T-cell large granular lymphocytic (T-LGL) leukemia. Hu-Mikβ1 blocked the trans presentation of IL-15 to T cells expressing IL-2/IL-15Rβ and the common γ-chain (CD132), but did not block IL-15 action in cells that expressed the heterotrimeric IL-15 receptor in cis. There was no significant toxicity associated with Hu-Mikβ1 administration in patients with T-LGL leukemia, but no major clinical responses were observed. One patient who had previously received murine Mikβ1 developed a measurable Ab response to the infused Ab. Nevertheless, the safety profile of this first in-human study of the humanized mAb to IL-2/IL-15Rβ (CD122) supports its evaluation in disorders such as refractory celiac disease, in which IL-15 and its receptor have been proposed to play a critical role in the pathogenesis and maintenance of disease activity.",
"title": ""
},
{
"docid": "215bb5273dbf5c301ae4170b5da39a34",
"text": "We describe a simple but effective method for cross-lingual syntactic transfer of dependency parsers, in the scenario where a large amount of translation data is not available. This method makes use of three steps: 1) a method for deriving cross-lingual word clusters, which can then be used in a multilingual parser; 2) a method for transferring lexical information from a target language to source language treebanks; 3) a method for integrating these steps with the density-driven annotation projection method of Rasooli and Collins (2015). Experiments show improvements over the state-of-the-art in several languages used in previous work, in a setting where the only source of translation data is the Bible, a considerably smaller corpus than the Europarl corpus used in previous work. Results using the Europarl corpus as a source of translation data show additional improvements over the results of Rasooli and Collins (2015). We conclude with results on 38 datasets from the Universal Dependencies corpora.",
"title": ""
},
{
"docid": "08ecf17772853fe198c96837d43cf572",
"text": "Long-lasting insecticidal nets (LLINs) and indoor residual spraying (IRS) interventions can reduce malaria transmission by targeting mosquitoes when they feed upon sleeping humans and/or when they rest inside houses, livestock shelters or other man-made structures. However, many malaria vector species can maintain robust transmission, despite high coverage of LLINs/IRS containing insecticides to which they are physiologically fully susceptible, because they exhibit one or more behaviours that define the biological limits of achievable impact with these interventions: (1) natural or insecticide-induced avoidance of contact with treated surfaces within houses and early exit from them, minimizing exposure hazard of vectors which feed indoors upon humans, (2) feeding upon humans when they are active and unprotected outdoors, attenuating personal protection and any consequent community-wide suppression of transmission, (3) feeding upon animals, minimizing contact with insecticides targeted at humans or houses, (4) resting outdoors, away from insecticide-treated surfaces of nets, walls and roofs. Residual malaria transmission is therefore defined as all forms of transmission that can persist after achieving full population-wide coverage with effective LLIN and/or IRS containing active ingredients to which local vector populations are fully susceptible. Residual transmission is sufficiently intense across most of the tropics to render malaria elimination infeasible without new or improved vector control methods. Many novel or improved vector control strategies to address residual transmission are emerging that either (1) enhance control of adult vectors that enter houses to feed and/or rest by killing, repelling or excluding them, (2) kill or repel adult mosquitoes when they attack people outdoors, (3) kill adult mosquitoes when they attack livestock, (4) kill adult mosquitoes when they feed upon sugar, or (5) kill immature mosquitoes at aquatic habitats. However, none of these options has sufficient supporting evidence to justify full-scale programmatic implementation so concerted investment in their rigorous selection, development and evaluation is required over the coming decade to enable control and, ultimately, elimination of residual malaria transmission. In the meantime, national programmes may assess options for addressing residual transmission under programmatic conditions through exploratory pilot studies with strong monitoring, evaluation and operational research components, similarly to the Onchocerciasis Control Programme.",
"title": ""
},
{
"docid": "8c31d750a503929a0776ae3b1e1d9f41",
"text": "Topic segmentation and labeling is often considered a prerequisite for higher-level conversation analysis and has been shown to be useful in many Natural Language Processing (NLP) applications. We present two new corpora of email and blog conversations annotated with topics, and evaluate annotator reliability for the segmentation and labeling tasks in these asynchronous conversations. We propose a complete computational framework for topic segmentation and labeling in asynchronous conversations. Our approach extends state-of-the-art methods by considering a fine-grained structure of an asynchronous conversation, along with other conversational features by applying recent graph-based methods for NLP. For topic segmentation, we propose two novel unsupervised models that exploit the fine-grained conversational structure, and a novel graph-theoretic supervised model that combines lexical, conversational and topic features. For topic labeling, we propose two novel (unsupervised) random walk models that respectively capture conversation specific clues from two different sources: the leading sentences and the fine-grained conversational structure. Empirical evaluation shows that the segmentation and the labeling performed by our best models beat the state-of-the-art, and are highly correlated with human annotations.",
"title": ""
},
{
"docid": "b2589260e4e8d26df598bb873646b7ec",
"text": "In this paper, the performance of a topological-metric visual-path-following framework is investigated in different environments. The framework relies on a monocular camera as the only sensing modality. The path is represented as a series of reference images such that each neighboring pair contains a number of common landmarks. Local 3-D geometries are reconstructed between the neighboring reference images to achieve fast feature prediction. This condition allows recovery from tracking failures. During navigation, the robot is controlled using image-based visual servoing. The focus of this paper is on the results from a number of experiments that were conducted in different environments, lighting conditions, and seasons. The experiments with a robot car show that the framework is robust to moving objects and moderate illumination changes. It is also shown that the system is capable of online path learning.",
"title": ""
},
{
"docid": "14a3e0f52760802ae74a21cd0cb66507",
"text": "Credit scoring has been regarded as a core appraisal tool of different institutions during the last few decades, and has been widely investigated in different areas, such as finance and accounting. Different scoring techniques are being used in areas of classification and prediction, where statistical techniques have conventionally been used. Both sophisticated and traditional techniques, as well as performance evaluation criteria are investigated in the literature. The principal aim of this paper is to carry out a comprehensive review of 214 articles/books/theses that involve credit scoring applications in various areas, in general, but primarily in finance and banking, in particular. This paper also aims to investigate how credit scoring has developed in importance, and to identify the key determinants in the construction of a scoring model, by means of a widespread review of different statistical techniques and performance evaluation criteria. Our review of literature revealed that there is no overall best statistical technique used in building scoring models and the best technique for all circumstances does not yet exist. Also, the applications of the scoring methodologies have been widely extended to include different areas, and this subsequently can help decision makers, particularly in banking, to predict their clients‟ behaviour. Finally, this paper also suggests a number of directions for future research.",
"title": ""
},
{
"docid": "e3bbd0ccc00cd545f11d05ab1421ed01",
"text": "The expectation-confirmation model (ECM) of IT continuance is a model for investigating continued information technology (IT) usage behavior. This paper reports on a study that attempts to expand the set of post-adoption beliefs in the ECM, in order to extend the application of the ECM beyond an instrumental focus. The expanded ECM, incorporating the post-adoption beliefs of perceived usefulness, perceived enjoyment and perceived ease of use, was empirically validated with data collected from an on-line survey of 811 existing users of mobile Internet services. The data analysis showed that the expanded ECM has good explanatory power (R 1⁄4 57:6% of continued IT usage intention and R 1⁄4 67:8% of satisfaction), with all paths supported. Hence, the expanded ECM can provide supplementary information that is relevant for understanding continued IT usage. The significant effects of post-adoption perceived ease of use and perceived enjoyment signify that the nature of the IT can be an important boundary condition in understanding the continued IT usage behavior. At a practical level, the expanded ECM presents IT product/service providers with deeper insights into how to address IT users’ satisfaction and continued patronage. r 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "b8dae71335b9c6caa95bed38d32f102a",
"text": "Mining frequent closed itemsets provides complete and non-redundant results for frequent pattern analysis. Extensive studies have proposed various strategies for efficient frequent closed itemset mining, such as depth-first search vs. breadthfirst search, vertical formats vs. horizontal formats, tree-structure vs. other data structures, top-down vs. bottom-up traversal, pseudo projection vs. physical projection of conditional database, etc. It is the right time to ask \"what are the pros and cons of the strategies?\" and \"what and how can we pick and integrate the best strategies to achieve higher performance in general cases?\"In this study, we answer the above questions by a systematic study of the search strategies and develop a winning algorithm CLOSET+. CLOSET+ integrates the advantages of the previously proposed effective strategies as well as some ones newly developed here. A thorough performance study on synthetic and real data sets has shown the advantages of the strategies and the improvement of CLOSET+ over existing mining algorithms, including CLOSET, CHARM and OP, in terms of runtime, memory usage and scalability.",
"title": ""
},
{
"docid": "0dfd5345c2dc3fe047dcc635760ffedd",
"text": "This paper presents a fast, joint spatial- and Doppler velocity-based, probabilistic approach for ego-motion estimation for single and multiple radar-equipped robots. The normal distribution transform is used for the fast and accurate position matching of consecutive radar detections. This registration technique is successfully applied to laser-based scan matching. To overcome discontinuities of the original normal distribution approach, an appropriate clustering technique provides a globally smooth mixed-Gaussian representation. It is shown how this matching approach can be significantly improved by taking the Doppler information into account. The Doppler information is used in a density-based approach to extend the position matching to a joint likelihood optimization function. Then, the estimated ego-motion maximizes this function. Large-scale real world experiments in an urban environment using a 77 GHz radar show the robust and accurate ego-motion estimation of the proposed algorithm. In the experiments, comparisons are made to state-of-the-art algorithms, the vehicle odometry, and a high-precision inertial measurement unit.",
"title": ""
},
{
"docid": "c8f97cc28c124f08c161898f1c1023ad",
"text": "Nonnegative matrix factorization (NMF) is a widely-used method for low-rank approximation (LRA) of a nonnegative matrix (matrix with only nonnegative entries), where nonnegativity constraints are imposed on factor matrices in the decomposition. A large body of past work on NMF has focused on the case where the data matrix is complete. In practice, however, we often encounter with an incomplete data matrix where some entries are missing (e.g., a user-rating matrix). Weighted low-rank approximation (WLRA) has been studied to handle incomplete data matrix. However, there is only few work on weighted nonnegative matrix factorization (WNMF) that is WLRA with nonnegativity constraints. Existing WNMF methods are limited to a direct extension of NMF multiplicative updates, which suffer from slow convergence while the implementation is easy. In this paper we develop relatively fast and scalable algorithms for WNMF, borrowed from well-studied optimization techniques: (1) alternating nonnegative least squares; (2) generalized expectation maximization. Numerical experiments on MovieLens and Netflix prize datasets confirm the useful behavior of our methods, in a task of collaborative prediction.",
"title": ""
},
{
"docid": "f83017ad2454c465d19f70f8ba995e95",
"text": "The origins of life on Earth required the establishment of self-replicating chemical systems capable of maintaining and evolving biological information. In an RNA world, single self-replicating RNAs would have faced the extreme challenge of possessing a mutation rate low enough both to sustain their own information and to compete successfully against molecular parasites with limited evolvability. Thus theoretical analyses suggest that networks of interacting molecules were more likely to develop and sustain life-like behaviour. Here we show that mixtures of RNA fragments that self-assemble into self-replicating ribozymes spontaneously form cooperative catalytic cycles and networks. We find that a specific three-membered network has highly cooperative growth dynamics. When such cooperative networks are competed directly against selfish autocatalytic cycles, the former grow faster, indicating an intrinsic ability of RNA populations to evolve greater complexity through cooperation. We can observe the evolvability of networks through in vitro selection. Our experiments highlight the advantages of cooperative behaviour even at the molecular stages of nascent life.",
"title": ""
},
{
"docid": "87037d2da4c9fcf346023562a46773eb",
"text": "From the perspective of kinematics, dual-arm manipulation in robots differs from single-arm manipulation in that it requires high dexterity in a specific region of the manipulator’s workspace. This feature has motivated research on the specialized design of manipulators for dualarm robots. These recently introduced robots often utilize a shoulder structure with a tilted angle of some magnitude. The tilted shoulder yields better kinematic performance for dual-arm manipulation, such as a wider common workspace for each arm. However, this method tends to reduce total workspace volume, which results in lower kinematic performance for single-arm tasks in the outer region of the workspace. To overcome this trade-off, the authors of this study propose a design for a dual-arm robot with a biologically inspired four degree-of-freedom shoulder mechanism. This study analyzes the kinematic performance of the proposed design and compares it with that of a conventional dual-arm robot from the perspective of workspace and single-/dual-arm manipulability. The comparative analysis Electronic supplementary material The online version of this article (doi:10.1007/s11370-017-0215-z) contains supplementary material, which is available to authorized users. B Ji-Hun Bae [email protected] Dong-Hyuk Lee [email protected] Hyeonjun Park [email protected] Jae-Han Park [email protected] Moon-Hong Baeg [email protected] 1 Robot Control and Cognition Lab., Robot R&D Group, Korea Institute of Industrial Technology (KITECH), Ansan, Korea revealed that the proposed structure can significantly enhance singleand dual-arm kinematic performance in comparison with conventional dual-arm structures. This superior kinematic performance was verified through experiments, which showed that the proposed method required shorter settling time and trajectory-following performance than the conventional dual-arm robot.",
"title": ""
},
{
"docid": "3b2a3fc20a03d829e4c019fbdbc0f2ae",
"text": "First cars equipped with 24 GHz short range radar (SRR) systems in combination with 77 GHz long range radar (LRR) system enter the market in autumn 2005 enabling new safety and comfort functions. In Europe the 24 GHz ultra wideband (UWB) frequency band is temporally allowed only till end of June 2013 with a limitation of the car pare penetration of 7%. From middle of 2013 new cars have to be equipped with SRR sensors which operate in the frequency band of 79 GHz (77 GHz to 81 GHz). The development of the 79 GHz SRR technology within the German government (BMBF) funded project KOKON is described",
"title": ""
},
{
"docid": "c2da0c999b00aa25753dee4e5d4521b7",
"text": "Quality degradation and computational complexity are the major challenges for image interpolation algorithms. Advanced interpolation techniques achieve to preserve fine image details but typically suffer from lower computational efficiency, while simpler interpolation techniques lead to lower quality images. In this paper, we propose an edge preserving technique based on inverse gradient weights as well as pixel locations for interpolation. Experimental results confirm that the proposed algorithm exhibits better image quality compared to conventional algorithms. At the same time, our approach is shown to be faster than several advanced edge preserving interpolation algorithms.",
"title": ""
},
{
"docid": "d658b95cc9dc81d0dbb3918795ccab50",
"text": "A brain–computer interface (BCI) is a communication channel which does not depend on the brain’s normal output pathways of peripheral nerves and muscles [1–3]. It supplies paralyzed patients with a new approach to communicate with the environment. Among various brain monitoring methods employed in current BCI research, electroencephalogram (EEG) is the main interest due to its advantages of low cost, convenient operation and non-invasiveness. In present-day EEG-based BCIs, the following signals have been paid much attention: visual evoked potential (VEP), sensorimotor mu/beta rhythms, P300 evoked potential, slow cortical potential (SCP), and movement-related cortical potential (MRCP). Details about these signals can be found in chapter “Brain Signals for Brain–Computer Interfaces”. These systems offer some practical solutions (e.g., cursor movement and word processing) for patients with motor disabilities. In this chapter, practical designs of several BCIs developed in Tsinghua University will be introduced. First of all, we will propose the paradigm of BCIs based on the modulation of EEG rhythms and challenges confronting practical system designs. In Sect. 2, modulation and demodulation methods of EEG rhythms will be further explained. Furthermore, practical designs of a VEP-based BCI and a motor imagery based BCI will be described in Sect. 3. Finally, Sect. 4 will present some real-life application demos using these practical BCI systems.",
"title": ""
},
{
"docid": "5d17ff397a09da24945bb549a8bfd3ec",
"text": "For applications of 5G (5th generation mobile networks) communication systems, dual-polarized patch array antenna operating at 28.5 GHz is designed on the package substrate. To verify the radiation performance of designed antenna itself, a test package including two patch antennas is also design and its scattering parameters were measured. Using a large height of dielectric materials, 1.5 ∼ 2.0 GHz of antenna bandwidth is achieved which is wide enough. Besides, the dielectric constants are reduced to reflect variances of material properties in the higher frequency region. Measured results of the test package show a good performance at the operating frequency, indicating that the fabricated antenna package will perform well, either. In the future work, manufacturing variances will be investigated further.",
"title": ""
}
] |
scidocsrr
|
8bcf0a9eed2179d9bb6d3fa3a3e7f29e
|
Linear classifier design under heteroscedasticity in Linear Discriminant Analysis
|
[
{
"docid": "5d0a77058d6b184cb3c77c05363c02e0",
"text": "For two-class discrimination, Ref. [1] claimed that, when covariance matrices of the two classes were unequal, a (class) unbalanced dataset had a negative effect on the performance of linear discriminant analysis (LDA). Through re-balancing 10 realworld datasets, Ref. [1] provided empirical evidence to support the claim using AUC (Area Under the receiver operating characteristic Curve) as the performance metric. We suggest that such a claim is vague if not misleading, there is no solid theoretical analysis presented in [1], and AUC can lead to a quite different conclusion from that led to by misclassification error rate (ER) on the discrimination performance of LDA for unbalanced datasets. Our empirical and simulation studies suggest that, for LDA, the increase of the median of AUC (and thus the improvement of performance of LDA) from re-balancing is relatively small, while, in contrast, the increase of the median of ER (and thus the decline in performance of LDA) from re-balancing is relatively large. Therefore, from our study, there is no reliable empirical evidence to support the claim that a (class) unbalanced data set has a negative effect on the performance of LDA. In addition, re-balancing affects the performance of LDA for datasets with either equal or unequal covariance matrices, indicating that having unequal covariance matrices is not a key reason for the difference in performance between original and re-balanced data.",
"title": ""
}
] |
[
{
"docid": "3c22c94c9ab99727840c2ca00c66c0f3",
"text": "The impact of numerous distributed generators (DGs) coupled with the implementation of virtual inertia on the transient stability of power systems has been studied extensively. Time-domain simulation is the most accurate and reliable approach to evaluate the dynamic behavior of power systems. However, the computational efficiency is restricted by their multi-time-scale property due to the combination of various DGs and synchronous generators. This paper presents a novel projective integration method (PIM) for the efficient transient stability simulation of power systems with high DG penetration. One procedure of the proposed PIM is decomposed into two stages, which adopt mixed explicit-implicit integration methods to achieve both efficiency and numerical stability. Moreover, the stability of the PIM is not affected by its parameter, which is related to the step size. Based on this property, an adaptive parameter scheme is developed based on error estimation to fit the time constants of the system dynamics and further increase the simulation speed. The presented approach is several times faster than the conventional integration methods with a similar level of accuracy. The proposed method is demonstrated using test systems with DGs and virtual synchronous generators, and the performance is verified against MATLAB/Simulink and DIgSILENT PowerFactory.",
"title": ""
},
{
"docid": "89297a4aef0d3251e8d947ccc2acacc7",
"text": "We propose a novel probabilistic framework for learning visual models of 3D object categories by combining appearance information and geometric constraints. Objects are represented as a coherent ensemble of parts that are consistent under 3D viewpoint transformations. Each part is a collection of salient image features. A generative framework is used for learning a model that captures the relative position of parts within each of the discretized viewpoints. Contrary to most of the existing mixture of viewpoints models, our model establishes explicit correspondences of parts across different viewpoints of the object class. Given a new image, detection and classification are achieved by determining the position and viewpoint of the model that maximize recognition scores of the candidate objects. Our approach is among the first to propose a generative probabilistic framework for 3D object categorization. We test our algorithm on the detection task and the viewpoint classification task by using “car” category from both the Savarese et al. 2007 and PASCAL VOC 2006 datasets. We show promising results in both the detection and viewpoint classification tasks on these two challenging datasets.",
"title": ""
},
{
"docid": "e2b74db574db8001dace37cbecb8c4eb",
"text": "Distributed key-value stores are now a standard component of high-performance web services and cloud computing applications. While key-value stores offer significant performance and scalability advantages compared to traditional databases, they achieve these properties through a restricted API that limits object retrieval---an object can only be retrieved by the (primary and only) key under which it was inserted. This paper presents HyperDex, a novel distributed key-value store that provides a unique search primitive that enables queries on secondary attributes. The key insight behind HyperDex is the concept of hyperspace hashing in which objects with multiple attributes are mapped into a multidimensional hyperspace. This mapping leads to efficient implementations not only for retrieval by primary key, but also for partially-specified secondary attribute searches and range queries. A novel chaining protocol enables the system to achieve strong consistency, maintain availability and guarantee fault tolerance. An evaluation of the full system shows that HyperDex is 12-13x faster than Cassandra and MongoDB for finding partially specified objects. Additionally, HyperDex achieves 2-4x higher throughput for get/put operations.",
"title": ""
},
{
"docid": "e599659ec215993598b98d26384ce6ac",
"text": "The Computer aided modeling and optimization analysis of crankshaft is to study was to evaluate and compare the fatigue performance of two competing manufacturing technologies for automotive crankshafts, namely forged steel and ductile cast iron. In this study a dynamic simulation was conducted on two crankshafts, cast iron and forged steel, from similar single cylinder four stroke engines.Finite element analyses was performed to obtain the variation of stress magnitude at critical locations. The dynamic analysis was done analytically and was verified by simulations in ANSYS.Results achived from aforementioned analysis were used in optimization of the forged steel crankshaft.Geometry,material and manufacturing processes were optimized considering different constraints,manufacturing feasibility and cost.The optimization process included geometry changes compatible with the current engine,fillet rolling and result in increased fatigue strength and reduced cost of the crankshaft, without changing connecting rod and engine block.",
"title": ""
},
{
"docid": "d2c202e120fecf444e77b08bd929e296",
"text": "Deep Learning has been applied successfully to speech processing. In this paper we propose an architecture for speech synthesis using multiple speakers. Some hidden layers are shared by all the speakers, while there is a specific output layer for each speaker. Objective and perceptual experiments prove that this scheme produces much better results in comparison with single speaker model. Moreover, we also tackle the problem of speaker interpolation by adding a new output layer (α-layer) on top of the multi-output branches. An identifying code is injected into the layer together with acoustic features of many speakers. Experiments show that the α-layer can effectively learn to interpolate the acoustic features between speakers.",
"title": ""
},
{
"docid": "49680e94843e070a5ed0179798f66f33",
"text": "Routing protocols for Wireless Sensor Networks (WSN) are designed to select parent nodes so that data packets can reach their destination in a timely and efficient manner. Typically neighboring nodes with strongest connectivity are more selected as parents. This Greedy Routing approach can lead to unbalanced routing loads in the network. Consequently, the network experiences the early death of overloaded nodes causing permanent network partition. Herein, we propose a framework for load balancing of routing in WSN. In-network path tagging is used to monitor network traffic load of nodes. Based on this, nodes are identified as being relatively overloaded, balanced or underloaded. A mitigation algorithm finds suitable new parents for switching from overloaded nodes. The routing engine of the child of the overloaded node is then instructed to switch parent. A key future of the proposed framework is that it is primarily implemented at the Sink and so requires few changes to existing routing protocols. The framework was implemented in TinyOS on TelosB motes and its performance was assessed in a testbed network and in TOSSIM simulation. The algorithm increased the lifetime of the network by 41 % as recorded in the testbed experiment. The Packet Delivery Ratio was also improved from 85.97 to 99.47 %. Finally a comparative study was performed using the proposed framework with various existing routing protocols.",
"title": ""
},
{
"docid": "f055f5f02b264b47c6218ea6683bcc7b",
"text": "Prepositions are very common and very ambiguous, and understanding their sense is critical for understanding the meaning of the sentence. Supervised corpora for the preposition-sense disambiguation task are small, suggesting a semi-supervised approach to the task. We show that signals from unannotated multilingual data can be used to improve supervised prepositionsense disambiguation. Our approach pre-trains an LSTM encoder for predicting the translation of a preposition, and then incorporates the pre-trained encoder as a component in a supervised classification system, and fine-tunes it for the task. The multilingual signals consistently improve results on two preposition-sense datasets.",
"title": ""
},
{
"docid": "41a74c1664143f602bdde3be9e26312f",
"text": "This paper presents a new class of gradient methods for distributed machine learning that adaptively skip the gradient calculations to learn with reduced communication and computation. Simple rules are designed to detect slowly-varying gradients and, therefore, trigger the reuse of outdated gradients. The resultant gradient-based algorithms are termed Lazily Aggregated Gradient — justifying our acronym LAG used henceforth. Theoretically, the merits of this contribution are: i) the convergence rate is the same as batch gradient descent in strongly-convex, convex, and nonconvex smooth cases; and, ii) if the distributed datasets are heterogeneous (quantified by certain measurable constants), the communication rounds needed to achieve a targeted accuracy are reduced thanks to the adaptive reuse of lagged gradients. Numerical experiments on both synthetic and real data corroborate a significant communication reduction compared to alternatives.",
"title": ""
},
{
"docid": "f499ea5160d1e787a51b456ee01c3814",
"text": "In this paper, a tri band compact octagonal fractal monopole MIMO antenna is presented. The proposed antenna is microstrip line fed and its structure is based on fractal geometry where the resonance frequency of antenna is lowered by applying iteration techniques. The simulated bandwidth of the antenna are 2.3706GHz to 2.45GHz, 3.398GHz to 3.677GHz and 4.9352GHz to 5.8988GHz (S11 <; -10 dB), covering the bands of WLAN and WiMAX. The characteristics of small size, nearly omnidirectional radiation pattern and moderate gain make the proposed MIMO antenna entirely applicable to WLAN and WiMAX applications. The proposed antenna has compact size of 50 mm × 50 mm. Details of the proposed antenna design and performance are presented and discussed.",
"title": ""
},
{
"docid": "3ddf6fab70092eade9845b04dd8344a0",
"text": "Fractional Fourier transform (FRFT) is a generalization of the Fourier transform, rediscovered many times over the past 100 years. In this paper, we provide an overview of recent contributions pertaining to the FRFT. Specifically, the paper is geared toward signal processing practitioners by emphasizing the practical digital realizations and applications of the FRFT. It discusses three major topics. First, the manuscripts relates the FRFT to other mathematical transforms. Second, it discusses various approaches for practical realizations of the FRFT. Third, we overview the practical applications of the FRFT. From these discussions, we can clearly state that the FRFT is closely related to other mathematical transforms, such as time–frequency and linear canonical transforms. Nevertheless, we still feel that major contributions are expected in the field of the digital realizations and its applications, especially, since many digital realizations of a b Purchase Export Previous article Next article Check if you have access through your login credentials or your institution.",
"title": ""
},
{
"docid": "843a56ac5a061e8131bd4ce3ff7238a5",
"text": "OBJECTIVE\nTo compare the efficacy of 2 atypical anti-psychotic drugs, olanzapine and risperidone, in the treatment of paradoxical insomnia.\n\n\nMETHODS\nIn this cross-sectional study over a 2-year period (September 2008 to September 2010), 29 patients with paradoxical insomnia, diagnosed in Kermanshah, Iran by both psychiatric interview and actigraphy, were randomly assigned to 2 groups. For 8 weeks, the first group (n=14) was treated with 10 mg olanzapine daily, and the second group (n=15) was treated with 4 mg risperidone daily. All participants completed the Pittsburgh Sleep Quality Inventory (PSQI) at baseline and at the end of the study.\n\n\nRESULTS\nAs expected, a baseline actigraphy analysis showed that total sleep time was not significantly different between the 2 treatment groups (p<0.3). In both groups, sleep quality was improved (p<0.001) with treatment. When comparing the 2 treatments directions, a significant difference emerged (9.21+/-2.35, 6.07+/-4.46) among the 2 treatment groups based on data from the PSQI. Patients who were treated with olanzapine showed greater improvement than patients who were treated by risperidone (p<0.04).\n\n\nCONCLUSION\nAtypical anti-psychotic drugs such as olanzapine and risperidone may be beneficial options for treatment of paradoxical insomnia. Larger clinical trials with longer periods of follow-up are needed for further investigation.",
"title": ""
},
{
"docid": "f78b6308d5fc78ec6440433af45925bb",
"text": "Recognizing the potentially ruinous effect of negative reviews on the reputation of the hosts as well as a subjective nature of the travel experience judgements, peer-to-peer accommodation sharing platforms, like Airbnb, have readily embraced the “response” option, empowering hosts with the voice to challenge, deny or at least apologize for the subject of critique. However, the effects of different response strategies on trusting beliefs towards the host remain unclear. To fill this gap, this study focuses on understanding the impact of different response strategies and review negativity on trusting beliefs towards the host in peer-to-peer accommodation sharing setting utilizing experimental methods. Examination of two different contexts, varying in the controllability of the subject of complaint, reveals that when the subject of complaint is controllable by a host, such strategies as confession / apology and denial can improve trusting beliefs towards the host. However, when the subject of criticism is beyond the control of the host, denial of the issue does not yield guest’s confidence in the host, whereas confession and excuse have positive influence on trusting beliefs.",
"title": ""
},
{
"docid": "b722f2fbdf20448e3a7c28fc6cab026f",
"text": "Alternative Mechanisms Rationale/Arguments/ Assumptions Connected Literature/Theory Resulting (Possible) Effect Support for/Against A1. Based on WTP and Exposure Theory A1a Light user segments (who are likely to have low WTP) are more likely to reduce (or even discontinue in extreme cases) their consumption of NYT content after the paywall implementation. Utility theory — WTP (Danaher 2002) Juxtaposing A1a and A1b leads to long tail effect due to the disproportionate reduction of popular content consumption (as a results of reduction of content consumption by light users). A1a. Supported (see the descriptive statistics in Table 11). A1b. Supported (see results from the postestimation of finite mixture model in Table 9) Since the resulting effects as well as both the assumptions (A1a and A1b) are supported, we suggest that there is support for this mechanism. A1b Light user segments are more likely to consume popular articles whereas the heavy user segment is more likely to consume a mix of niche articles and popular content. Exposure theory (McPhee 1963)",
"title": ""
},
{
"docid": "c5cc7fc9651ff11d27e08e1910a3bd20",
"text": "An omnidirectional circularly polarized (OCP) antenna operating at 28 GHz is reported and has been found to be a promising candidate for device-to-device (D2D) communications in the next generation (5G) wireless systems. The OCP radiation is realized by systematically integrating electric and magnetic dipole elements into a compact disc-shaped configuration (9.23 mm $^{3} =0.008~\\lambda _{0}^{3}$ at 28 GHz) in such a manner that they are oriented in parallel and radiate with the proper phase difference. The entire antenna structure was printed on a single piece of dielectric substrate using standard PCB manufacturing technologies and, hence, is amenable to mass production. A prototype OCP antenna was fabricated on Rogers 5880 substrate and was tested. The measured results are in good agreement with their simulated values and confirm the reported design concepts. Good OCP radiation patterns were produced with a measured peak realized RHCP gain of 2.2 dBic. The measured OCP overlapped impedance and axial ratio bandwidth was 2.2 GHz, from 26.5 to 28.7 GHz, an 8 % fractional bandwidth, which completely covers the 27.5 to 28.35 GHz band proposed for 5G cellular systems.",
"title": ""
},
{
"docid": "929f294583267ca8cb8616e803687f1e",
"text": "Recent systems for natural language understanding are strong at overcoming linguistic variability for lookup style reasoning. Yet, their accuracy drops dramatically as the number of reasoning steps increases. We present the first formal framework to study such empirical observations, addressing the ambiguity, redundancy, incompleteness, and inaccuracy that the use of language introduces when representing a hidden conceptual space. Our formal model uses two interrelated spaces: a conceptual meaning space that is unambiguous and complete but hidden, and a linguistic symbol space that captures a noisy grounding of the meaning space in the symbols or words of a language. We apply this framework to study the connectivity problem in undirected graphs---a core reasoning problem that forms the basis for more complex multi-hop reasoning. We show that it is indeed possible to construct a high-quality algorithm for detecting connectivity in the (latent) meaning graph, based on an observed noisy symbol graph, as long as the noise is below our quantified noise level and only a few hops are needed. On the other hand, we also prove an impossibility result: if a query requires a large number (specifically, logarithmic in the size of the meaning graph) of hops, no reasoning system operating over the symbol graph is likely to recover any useful property of the meaning graph. This highlights a fundamental barrier for a class of reasoning problems and systems, and suggests the need to limit the distance between the two spaces, rather than investing in multi-hop reasoning with\"many\"hops.",
"title": ""
},
{
"docid": "ea9bafe86af4418fa51abe27a2c2180b",
"text": "In this work, we propose a novel phenomenological model of the EEG signal based on the dynamics of a coupled Duffing-van der Pol oscillator network. An optimization scheme is adopted to match data generated from the model with clinically obtained EEG data from subjects under resting eyes-open (EO) and eyes-closed (EC) conditions. It is shown that a coupled system of two Duffing-van der Pol oscillators with optimized parameters yields signals with characteristics that match those of the EEG in both the EO and EC cases. The results, which are reinforced using statistical analysis, show that the EEG recordings under EC and EO resting conditions are clearly distinct realizations of the same underlying model occurring due to parameter variations with qualitatively different nonlinear dynamic characteristics. In addition, the interplay between noise and nonlinearity is addressed and it is shown that, for appropriately chosen values of noise intensity in the model, very good agreement exists between the model output and the EEG in terms of the power spectrum as well as Shannon entropy. In summary, the results establish that an appropriately tuned stochastic coupled nonlinear oscillator network such as the Duffing-van der Pol system could provide a useful framework for modeling and analysis of the EEG signal. In turn, design of algorithms based on the framework has the potential to positively impact the development of novel diagnostic strategies for brain injuries and disorders. © 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "21c7cbcf02141c60443f912ae5f1208b",
"text": "A novel driving scheme based on simultaneous emission is reported for 2D/3D AMOLED TVs. The new method reduces leftright crosstalk without sacrificing luminance. The new scheme greatly simplifies the pixel circuit as the number of transistors for Vth compensation is reduced from 6 to 3. The capacitive load of scan lines is reduced by 48%, enabling very high refresh rate (240 Hz).",
"title": ""
},
{
"docid": "afd378cf5e492a9627e746254586763b",
"text": "Gradient-based optimization has enabled dramatic advances in computational imaging through techniques like deep learning and nonlinear optimization. These methods require gradients not just of simple mathematical functions, but of general programs which encode complex transformations of images and graphical data. Unfortunately, practitioners have traditionally been limited to either hand-deriving gradients of complex computations, or composing programs from a limited set of coarse-grained operators in deep learning frameworks. At the same time, writing programs with the level of performance needed for imaging and deep learning is prohibitively difficult for most programmers.\n We extend the image processing language Halide with general reverse-mode automatic differentiation (AD), and the ability to automatically optimize the implementation of gradient computations. This enables automatic computation of the gradients of arbitrary Halide programs, at high performance, with little programmer effort. A key challenge is to structure the gradient code to retain parallelism. We define a simple algorithm to automatically schedule these pipelines, and show how Halide's existing scheduling primitives can express and extend the key AD optimization of \"checkpointing.\"\n Using this new tool, we show how to easily define new neural network layers which automatically compile to high-performance GPU implementations, and how to solve nonlinear inverse problems from computational imaging. Finally, we show how differentiable programming enables dramatically improving the quality of even traditional, feed-forward image processing algorithms, blurring the distinction between classical and deep methods.",
"title": ""
},
{
"docid": "debb2bc6845eb2355c54c2599b40e102",
"text": "Graphs are used to model many real objects such as social networks and web graphs. Many real applications in various fields require efficient and effective management of large-scale, graph-structured data. Although distributed graph engines such as GBase and Pregel handle billion-scale graphs, users need to be skilled at managing and tuning a distributed system in a cluster, which is a non-trivial job for ordinary users. Furthermore, these distributed systems need many machines in a cluster in order to provide reasonable performance. Several recent works proposed non-distributed graph processing platforms as complements to distributed platforms. In fact, efficient non-distributed platforms require less hardware resource and can achieve better energy efficiency than distributed ones. GraphChi is a representative non-distributed platform that is disk-based and can process billions of edges on CPUs in a single PC. However, the design drawbacks of GraphChi on I/O and computation model have limited its parallelism and performance. In this paper, we propose a general, disk-based graph engine called gGraph to process billion-scale graphs efficiently by utilizing both CPUs and GPUs in a single PC. GGraph exploits full parallelism and full overlap of computation and I/O processing as much as possible. Experiment results show that gGraph outperforms GraphChi and PowerGraph. In addition, gGraph achieves the best energy efficiency among all evaluated platforms.",
"title": ""
},
{
"docid": "f49bb940c12e2eac57112862a564c95f",
"text": "Hydrogels in which cells are encapsulated are of great potential interest for tissue engineering applications. These gels provide a structure inside which cells can spread and proliferate. Such structures benefit from controlled microarchitectures that can affect the behavior of the enclosed cells. Microfabrication-based techniques are emerging as powerful approaches to generate such cell-encapsulating hydrogel structures. In this paper we introduce common hydrogels and their crosslinking methods and review the latest microscale approaches for generation of cell containing gel particles. We specifically focus on microfluidics-based methods and on techniques such as micromolding and electrospinning.",
"title": ""
}
] |
scidocsrr
|
ec8ca1843aede3eba3652535c2ba7e56
|
Arithmetic Coding for Data Compression
|
[
{
"docid": "bbf581230ec60c2402651d51e3a37211",
"text": "The state of the art in data compression is arithmetic coding, not the better-known Huffman method. Arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.",
"title": ""
}
] |
[
{
"docid": "eb761eb499b2dc82f7f2a8a8a5ff64a7",
"text": "We consider the situation in which digital data is to be reliably transmitted over a discrete, memoryless channel (dmc) that is subjected to a wire-tap at the receiver. We assume that the wire-tapper views the channel output via a second dmc). Encoding by the transmitter and decoding by the receiver are permitted. However, the code books used in these operations are assumed to be known by the wire-tapper. The designer attempts to build the encoder-decoder in such a way as to maximize the transmission rate R, and the equivocation d of the data as seen by the wire-tapper. In this paper, we find the trade-off curve between R and d, assuming essentially perfect (“error-free”) transmission. In particular, if d is equal to Hs, the entropy of the data source, then we consider that the transmission is accomplished in perfect secrecy. Our results imply that there exists a Cs > 0, such that reliable transmission at rates up to Cs is possible in approximately perfect secrecy.",
"title": ""
},
{
"docid": "69d3c943755734903b9266ca2bd2fad1",
"text": "This paper describes experiments in Machine Learning for text classification using a new representation of text based on WordNet hypernyms. Six binary classification tasks of varying diff iculty are defined, and the Ripper system is used to produce discrimination rules for each task using the new hypernym density representation. Rules are also produced with the commonly used bag-of-words representation, incorporating no knowledge from WordNet. Experiments show that for some of the more diff icult tasks the hypernym density representation leads to significantly more accurate and more comprehensible rules.",
"title": ""
},
{
"docid": "c3f25271d25590bf76b36fee4043d227",
"text": "Over the past few decades, application of artificial neural networks (ANN) to time-series forecasting (TSF) has been growing rapidly due to several unique features of ANN models. However, to date, a consistent ANN performance over different studies has not been achieved. Many factors contribute to the inconsistency in the performance of neural network models. One such factor is that ANN modeling involves determining a large number of design parameters, and the current design practice is essentially heuristic and ad hoc, this does not exploit the full potential of neural networks. Systematic ANN modeling processes and strategies for TSF are, therefore, greatly needed. Motivated by this need, this paper attempts to develop an automatic ANN modeling scheme. It is based on the generalized regression neural network (GRNN), a special type of neural network. By taking advantage of several GRNN properties (i.e., a single design parameter and fast learning) and by incorporating several design strategies (e.g., fusing multiple GRNNs), we have been able to make the proposed modeling scheme to be effective for modeling large-scale business time series. The initial model was entered into the NN3 time-series competition. It was awarded the best prediction on the reduced dataset among approximately 60 different models submitted by scholars worldwide.",
"title": ""
},
{
"docid": "a09cb533a0a90a056857d597213efdf2",
"text": "一 引言 图像的边缘是图像的重要的特征,它给出了图像场景中物体的轮廓特征信息。当要对图 像中的某一个物体进行识别时,边缘信息是重要的可以利用的信息,例如在很多系统中采用 的模板匹配的识别算法。基于此,我们设计了一套基于 PCI Bus和 Vision Bus的可重构的机 器人视觉系统[3]。此系统能够实时的对图像进行采集,并可以通过系统实时的对图像进行 边缘的提取。 对于图像的边缘提取,采用二阶的边缘检测算子处理后要进行过零点检测,计算量很大 而且用硬件实现资源占用大且速度慢,所以在我们的视觉系统中,卷积器中选择的是一阶的 边缘检测算子。采用一阶的边缘检测算子进行卷积运算之后,仅仅需要对卷积得到的图像进 行阈值处理就可以得到图像的边缘,而阈值处理的操作用硬件实现占用资源少且速度快。由 于本视觉系统要求与应用环境下的精密装配机器人配合使用,系统的实时性要求非常高。因 此,如何对实时采集图像进行快速实时的边缘提取阈值的自动选取,是我们必须要考虑的问 题。 遗传算法是一种仿生物系统的基因进化的迭代搜索算法,其基本思想是由美国Michigan 大学的 J.Holland 教授提出的。由于遗传算法的整体寻优策略以及优化计算时不依赖梯度信 息,所以它具有很强的全局搜索能力,即对于解空间中的全局最优解有着很强的逼近能力。 它适用于问题结构不是十分清楚,总体很大,环境复杂的场合,而对于实时采集的图像进行 边缘检测阈值的选取就是此类问题。本文在对传统的遗传算法进行改进的基础上,提出了一 种对于实时采集图像进行边缘检测的阈值的自动选取方法。",
"title": ""
},
{
"docid": "a8d3a75cdc3bb43217a0120edf5025ff",
"text": "An important approach to text mining involves the use of natural-language information extraction. Information extraction (IE) distills structured data or knowledge from unstructured text by identifying references to named entities as well as stated relationships between such entities. IE systems can be used to directly extricate abstract knowledge from a text corpus, or to extract concrete data from a set of documents which can then be further analyzed with traditional data-mining techniques to discover more general patterns. We discuss methods and implemented systems for both of these approaches and summarize results on mining real text corpora of biomedical abstracts, job announcements, and product descriptions. We also discuss challenges that arise when employing current information extraction technology to discover knowledge in text.",
"title": ""
},
{
"docid": "71dd012b54ae081933bddaa60612240e",
"text": "This paper analyzes & compares four adders with different logic styles (Conventional, transmission gate, 14 transistors & GDI based technique) for transistor count, power dissipation, delay and power delay product. It is performed in virtuoso platform, using Cadence tool with available GPDK - 90nm kit. The width of NMOS and PMOS is set at 120nm and 240nm respectively. Transmission gate full adder has sheer advantage of high speed but consumes more power. GDI full adder gives reduced voltage swing not being able to pass logic 1 and logic 0 completely showing degraded output. Transmission gate full adder shows better performance in terms of delay (0.417530 ns), whereas 14T full adder shows better performance in terms of all three aspects.",
"title": ""
},
{
"docid": "79a20b9a059a2b4cc73120812c010495",
"text": "The present article summarizes the state of the art algorithms to compute the discrete Moreau envelope, and presents a new linear-time algorithm, named NEP for NonExpansive Proximal mapping. Numerical comparisons between the NEP and two existing algorithms: The Linear-time Legendre Transform (LLT) and the Parabolic Envelope (PE) algorithms are performed. Worst-case time complexity, convergence results, and examples are included. The fast Moreau envelope algorithms first factor the Moreau envelope as several one-dimensional transforms and then reduce the brute force quadratic worst-case time complexity to linear time by using either the equivalence with Fast Legendre Transform algorithms, the computation of a lower envelope of parabolas, or, in the convex case, the non expansiveness of the proximal mapping.",
"title": ""
},
{
"docid": "efe70da1a3118e26acf10aa480ad778d",
"text": "Background: Facebook (FB) is becoming an increasingly salient feature in peoples’ lives and has grown into a bastion in our current society with over 1 billion users worldwide –the majority of which are college students. However, recent studies conducted suggest that the use of Facebook may impacts individuals’ well being. Thus, this paper aimed to explore the effects of Facebook usage on adolescents’ emotional states of depression, anxiety, and stress. Method and Material: A cross sectional design was utilized in this investigation. The study population included 76 students enrolled in the Bachelor of Science in Nursing program from a government university in Samar, Philippines. Facebook Intensity Scale (FIS) and the Depression Anxiety and Stress Scale (DASS) were the primary instruments used in this study. Results: Findings indicated correlation coefficients of 0.11 (p=0.336), 0.07 (p=0.536), and 0.10 (p=0.377) between Facebook Intensity Scale (FIS) and Depression, Anxiety, and Stress scales in the DASS. Time spent on FBcorrelated significantly with depression (r=0.233, p=0.041) and anxiety (r=0.259, p=0.023). Similarly, the three emotional states (depression, anxiety, and stress) correlated significantly. Conclusions: Intensity of Facebook use is not directly related to negative emotional states. However, time spent on Facebooking increases depression and anxiety scores. Implications of the findings to the fields of counseling and psychology are discussed.",
"title": ""
},
{
"docid": "c487af41ead3ee0bc8fe6c95b356a80b",
"text": "With such a large volume of material accessible from the World Wide Web, there is an urgent need to increase our knowledge of factors in#uencing reading from screen. We investigate the e!ects of two reading speeds (normal and fast) and di!erent line lengths on comprehension, reading rate and scrolling patterns. Scrolling patterns are de\"ned as the way in which readers proceed through the text, pausing and scrolling. Comprehension and reading rate are also examined in relation to scrolling patterns to attempt to identify some characteristics of e!ective readers. We found a reduction in overall comprehension when reading fast, but the type of information recalled was not dependent on speed. A medium line length (55 characters per line) appears to support e!ective reading at normal and fast speeds. This produced the highest level of comprehension and was also read faster than short lines. Scrolling patterns associated with better comprehension (more time in pauses and more individual scrolling movements) contrast with scrolling patterns used by faster readers (less time in pauses between scrolling). Consequently, e!ective readers can only be de\"ned in relation to the aims of the reading task, which may favour either speed or accuracy. ( 2001 Academic Press",
"title": ""
},
{
"docid": "34623fb38c81af8efaf8e7073e4c43bc",
"text": "The k-means problem consists of finding k centers in R that minimize the sum of the squared distances of all points in an input set P from R to their closest respective center. Awasthi et. al. recently showed that there exists a constant ε′ > 0 such that it is NP-hard to approximate the k-means objective within a factor of 1 + ε′. We establish that the constant ε′ is at least 0.0013. For a given set of points P ⊂ R, the k-means problem consists of finding a partition of P into k clusters (C1, . . . , Ck) with corresponding centers (c1, . . . , ck) that minimize the sum of the squared distances of all points in P to their corresponding center, i.e. the quantity arg min (C1,...,Ck),(c1,...,ck) k ∑",
"title": ""
},
{
"docid": "455e3f0c6f755d78ecafcdff14c46014",
"text": "BACKGROUND\nIn neonatal and early childhood surgeries such as meningomyelocele repairs, closing deep wounds and oncological treatment, tensor fasciae lata (TFL) flaps are used. However, there are not enough data about structural properties of TFL in foetuses, which can be considered as the closest to neonates in terms of sampling. This study's main objective is to gather data about morphological structures of TFL in human foetuses to be used in newborn surgery.\n\n\nMATERIALS AND METHODS\nFifty formalin-fixed foetuses (24 male, 26 female) with gestational age ranging from 18 to 30 weeks (mean 22.94 ± 3.23 weeks) were included in the study. TFL samples were obtained by bilateral dissection and then surface area, width and length parameters were recorded. Digital callipers were used for length and width measurements whereas surface area was calculated using digital image analysis software.\n\n\nRESULTS\nNo statistically significant differences were found in terms of numerical value of parameters between sides and sexes (p > 0.05). Linear functions for TFL surface area, width, anterior and posterior margin lengths were calculated as y = -225.652 + 14.417 × age (weeks), y = -5.571 + 0.595 × age (weeks), y = -4.276 + 0.909 × age (weeks), and y = -4.468 + 0.779 × age (weeks), respectively.\n\n\nCONCLUSIONS\nLinear functions for TFL surface area, width and lengths can be used in designing TFL flap dimensions in newborn surgery. In addition, using those described linear functions can also be beneficial in prediction of TFL flap dimensions in autopsy studies.",
"title": ""
},
{
"docid": "89322e0d2b3566aeb85eeee9f505d5b2",
"text": "Parkinson's disease is a neurological disorder with evolving layers of complexity. It has long been characterised by the classical motor features of parkinsonism associated with Lewy bodies and loss of dopaminergic neurons in the substantia nigra. However, the symptomatology of Parkinson's disease is now recognised as heterogeneous, with clinically significant non-motor features. Similarly, its pathology involves extensive regions of the nervous system, various neurotransmitters, and protein aggregates other than just Lewy bodies. The cause of Parkinson's disease remains unknown, but risk of developing Parkinson's disease is no longer viewed as primarily due to environmental factors. Instead, Parkinson's disease seems to result from a complicated interplay of genetic and environmental factors affecting numerous fundamental cellular processes. The complexity of Parkinson's disease is accompanied by clinical challenges, including an inability to make a definitive diagnosis at the earliest stages of the disease and difficulties in the management of symptoms at later stages. Furthermore, there are no treatments that slow the neurodegenerative process. In this Seminar, we review these complexities and challenges of Parkinson's disease.",
"title": ""
},
{
"docid": "6033f644fb18ce848922a51d3b0000ab",
"text": "This paper tests two of the simplest and most popular trading rules moving average and trading range break, by utilitizing a very long data series, the Dow Jones index from 1897 to 1986. Standard statistical analysis is extended through the use .of bootstrap techniques. Overall our results provide strong support for the technical strategies that are explored. The returns obtained from buy (sell) signals are not consistent with the three popular null models: the random walk, the AR(I) and the GARCH-M. Consistently, buy signals generate higher returns than sell signals. Moreover, returns following sell signals are negative which is not easily explained by any of the currently existing equilibrium models. Furthermore the returns following buy signals are less volatile than returns following sell signals. The term, \"technical analysis,\" is a general heading for a myriad of trading techniques. Technical analysts attempt to forecast prices by the study of past prices and a few other related summary statistics about security trading. They believe that shifts in supply and demand can be detected in charts of market action. Technical analysis is considered by many to be the original form of investment analysis, dating back to the 1800's. It came into widespread use before the period of extensive and fully disclosed financial information, which in turn enabled the practice of fnndamental analysis to develop. In the U.S., the use of trading rules to detect patterns in stock prices is probably as old as the stock market itself. The oldest technique is attributed to Charles Dow and is traced to the late 1800's. Many of the techniques used today have been utilized for over 60 years. These techniques for discovering hidden relations in stock returns can range from extremely simple to quite elaborate. The attitude of academics towards technical analysis, until recently, is well described by Malkiel(1981): \"Obviously, I am biased against the chartist. This is not only a personal predilection, but a professional one as well. Technical analysis is anathema to, the academic world. We love to pick onit. Our bullying tactics' are prompted by two considerations: (1) the method is patently false; and (2) it's easy to pick on. And while it may seem a bit unfair to pick on such a sorry target, just remember': His your money we are trying to save.\" , Nonetheless, technical analysis has been enjoying a renaissance on Wall Street. All major brokerage firms publish technical commentary on the market and individual securities\" and many of the newsletters published by various \"experts\" are based on technical analysis. In recent years the efficient market hypothesis has come under serious siege. Various papers suggested that stock returns are not fully explained by common risk measures. A significant relationship between expected return and fundamental variables such as price-earnings ratio, market-to, book ratio and size was documented. Another group ofpapers has uncovered systematic patterns in stock returns related to various calendar periods such as the weekend effect, the tnrn-of-the-month effect, the holiday effect and the, January effect. A line of research directly related to this work provides evidence of predictability of equity returns from past returns. De Bandt and Thaler(1985), Fama and French(1986), and Poterba and Summers(1988) find negative serial correlation in returns of individual stocks aid various portfolios over three to ten year intervals. 
Rosenberg, Reid, and Lanstein(1985) provide evidence for the presence of predictable return reversals on a monthly basis",
"title": ""
},
{
"docid": "f4b0a7e2ab8728b682b8d399a887c3df",
"text": "This paper presents a framework for localization or grounding of phrases in images using a large collection of linguistic and visual cues.1 We model the appearance, size, and position of entity bounding boxes, adjectives that contain attribute information, and spatial relationships between pairs of entities connected by verbs or prepositions. We pay special attention to relationships between people and clothing or body part mentions, as they are useful for distinguishing individuals. We automatically learn weights for combining these cues and at test time, perform joint inference over all phrases in a caption. The resulting system produces a 4% improvement in accuracy over the state of the art on phrase localization on the Flickr30k Entities dataset [25] and a 4-10% improvement for visual relationship detection on the Stanford VRD dataset [20].",
"title": ""
},
{
"docid": "90d1d78d3d624d3cb1ecc07e8acaefd4",
"text": "Wheat straw is an abundant agricultural residue with low commercial value. An attractive alternative is utilization of wheat straw for bioethanol production. However, production costs based on the current technology are still too high, preventing commercialization of the process. In recent years, progress has been made in developing more effective pretreatment and hydrolysis processes leading to higher yield of sugars. The focus of this paper is to review the most recent advances in pretreatment, hydrolysis and fermentation of wheat straw. Based on the type of pretreatment method applied, a sugar yield of 74-99.6% of maximum theoretical was achieved after enzymatic hydrolysis of wheat straw. Various bacteria, yeasts and fungi have been investigated with the ethanol yield ranging from 65% to 99% of theoretical value. So far, the best results with respect to ethanol yield, final ethanol concentration and productivity were obtained with the native non-adapted Saccharomyses cerevisiae. Some recombinant bacteria and yeasts have shown promising results and are being considered for commercial scale-up. Wheat straw biorefinery could be the near-term solution for clean, efficient and economically-feasible production of bioethanol as well as high value-added products.",
"title": ""
},
{
"docid": "8646bc8ddeadf17e443e5ddcf705e492",
"text": "This paper proposes a model predictive control (MPC) scheme for the interleaved dc-dc boost converter with coupled inductors. The main control objectives are the regulation of the output voltage to its reference value, despite changes in the input voltage and the load, and the equal sharing of the load current by the two circuit inductors. An inner control loop, using MPC, regulates the input current to its reference that is provided by the outer loop, which is based on a load observer. Simulation results are provided to highlight the performance of the proposed control scheme.",
"title": ""
},
{
"docid": "2113655d3467fbdbf7769e36952d2a6f",
"text": "The goal of privacy metrics is to measure the degree of privacy enjoyed by users in a system and the amount of protection offered by privacy-enhancing technologies. In this way, privacy metrics contribute to improving user privacy in the digital world. The diversity and complexity of privacy metrics in the literature make an informed choice of metrics challenging. As a result, instead of using existing metrics, new metrics are proposed frequently, and privacy studies are often incomparable. In this survey, we alleviate these problems by structuring the landscape of privacy metrics. To this end, we explain and discuss a selection of over 80 privacy metrics and introduce categorizations based on the aspect of privacy they measure, their required inputs, and the type of data that needs protection. In addition, we present a method on how to choose privacy metrics based on nine questions that help identify the right privacy metrics for a given scenario, and highlight topics where additional work on privacy metrics is needed. Our survey spans multiple privacy domains and can be understood as a general framework for privacy measurement.",
"title": ""
},
{
"docid": "b0901a572ecaaeb1233b92d5653c2f12",
"text": "This qualitative study offers a novel exploration of the links between social media, virtual intergroup contact, and empathy by examining how empathy is expressed through interactions on a popular social media blog. Global leaders are encouraging individuals to engage in behaviors and support policies that provide basic social foundations. It is difficult to motivate people to undertake such actions. However, research shows that empathy intensifies motivation to help others. It can cause individuals to see the world from the perspective of stigmatized group members and increase positive feelings. Social media offers a new pathway for virtual intergroup contact, providing opportunities to increase conversation about disadvantaged others and empathy. We examined expressions of empathy within a popular blog, Humans of New York (HONY), and engaged in purposeful case selection by focusing on (1) events where specific prosocial action was taken corresponding to interactions on the HONY blog and (2) presentation of people in countries other than the United States. Nine overarching themes; (1) perspective taking, (2) fantasy, (3) empathic concern, (4) personal distress, (5) relatability, (6) prosocial action, (7) community appreciation, (8) anti-empathy, and (9) rejection of anti-empathy, exemplify how the HONY community expresses and shares empathic thoughts and feelings.",
"title": ""
},
{
"docid": "976aee37c264dbf53b7b1fbbf0d583c4",
"text": "This paper applies Halliday's (1994) theory of the interpersonal, ideational and textual meta-functions of language to conceptual metaphor. Starting from the observation that metaphoric expressions tend to be organized in chains across texts, the question is raised what functions those expressions serve in different parts of a text as well as in relation to each other. The empirical part of the article consists of the sample analysis of a business magazine text on marketing. This analysis is two-fold, integrating computer-assisted quantitative investigation with qualitative research into the organization and multifunctionality of metaphoric chains as well as the cognitive scenarios evolving from those chains. The paper closes by summarizing the main insights along the lines of the three Hallidayan meta-functions of conceptual metaphor and suggesting functional analysis of metaphor at levels beyond that of text. Im vorliegenden Artikel wird Hallidays (1994) Theorie der interpersonellen, ideellen und textuellen Metafunktion von Sprache auf das Gebiet der konzeptuellen Metapher angewandt. Ausgehend von der Beobachtung, dass metaphorische Ausdrücke oft in textumspannenden Ketten angeordnet sind, wird der Frage nachgegangen, welche Funktionen diese Ausdrücke in verschiedenen Teilen eines Textes und in Bezug aufeinander erfüllen. Der empirische Teil der Arbeit besteht aus der exemplarischen Analyse eines Artikels aus einem Wirtschaftsmagazin zum Thema Marketing. Diese Analysis gliedert sich in zwei Teile und verbindet computergestütze quantitative Forschung mit einer qualitativen Untersuchung der Anordnung und Multifunktionalität von Metaphernketten sowie der kognitiven Szenarien, die aus diesen Ketten entstehen. Der Aufsatz schließt mit einer Zusammenfassung der wesentlichen Ergebnisse im Licht der Hallidayschen Metafunktionen konzeptueller Metaphern und gibt einen Ausblick auf eine funktionale Metaphernanalyse, die über die rein textuelle Ebene hinausgeht.",
"title": ""
},
{
"docid": "9cea5720bdba8af6783d9e9f8bc7b7d1",
"text": "BACKGROUND\nFeasible, cost-effective instruments are required for the surveillance of moderate-to-vigorous physical activity (MVPA) and sedentary behaviour (SB) and to assess the effects of interventions. However, the evidence base for the validity and reliability of the World Health Organisation-endorsed Global Physical Activity Questionnaire (GPAQ) is limited. We aimed to assess the validity of the GPAQ, compared to accelerometer data in measuring and assessing change in MVPA and SB.\n\n\nMETHODS\nParticipants (n = 101) were selected randomly from an on-going research study, stratified by level of physical activity (low, moderate or highly active, based on the GPAQ) and sex. Participants wore an accelerometer (Actigraph GT3X) for seven days and completed a GPAQ on Day 7. This protocol was repeated for a random sub-sample at a second time point, 3-6 months later. Analysis involved Wilcoxon-signed rank tests for differences in measures, Bland-Altman analysis for the agreement between measures for median MVPA and SB mins/day, and Spearman's rho coefficient for criterion validity and extent of change.\n\n\nRESULTS\n95 participants completed baseline measurements (44 females, 51 males; mean age 44 years, (SD 14); measurements of change were calculated for 41 (21 females, 20 males; mean age 46 years, (SD 14). There was moderate agreement between GPAQ and accelerometer for MVPA mins/day (r = 0.48) and poor agreement for SB (r = 0.19). The absolute mean difference (self-report minus accelerometer) for MVPA was -0.8 mins/day and 348.7 mins/day for SB; and negative bias was found to exist, with those people who were more physically active over-reporting their level of MVPA: those who were more sedentary were less likely to under-report their level of SB. Results for agreement in change over time showed moderate correlation (r = 0.52, p = 0.12) for MVPA and poor correlation for SB (r = -0.024, p = 0.916).\n\n\nCONCLUSIONS\nLevels of agreement with objective measurements indicate the GPAQ is a valid measure of MVPA and change in MVPA but is a less valid measure of current levels and change in SB. Thus, GPAQ appears to be an appropriate measure for assessing the effectiveness of interventions to promote MVPA.",
"title": ""
}
] |
scidocsrr
|
255658b6d0b767c989cb50d2bd0b6bd9
|
Single Image Super-resolution Using Deformable Patches
|
[
{
"docid": "7cb6582bf81aea75818eef2637c95c79",
"text": "Although multi-frame super resolution has been extensively studied in past decades, super resolving real-world video sequences still remains challenging. In existing systems, either the motion models are oversimplified, or important factors such as blur kernel and noise level are assumed to be known. Such models cannot deal with the scene and imaging conditions that vary from one sequence to another. In this paper, we propose a Bayesian approach to adaptive video super resolution via simultaneously estimating underlying motion, blur kernel and noise level while reconstructing the original high-res frames. As a result, our system not only produces very promising super resolution results that outperform the state of the art, but also adapts to a variety of noise levels and blur kernels. Theoretical analysis of the relationship between blur kernel, noise level and frequency-wise reconstruction rate is also provided, consistent with our experimental results.",
"title": ""
},
{
"docid": "d4c7493c755a3fde5da02e3f3c873d92",
"text": "Edge-directed image super resolution (SR) focuses on ways to remove edge artifacts in upsampled images. Under large magnification, however, textured regions become blurred and appear homogenous, resulting in a super-resolution image that looks unnatural. Alternatively, learning-based SR approaches use a large database of exemplar images for “hallucinating” detail. The quality of the upsampled image, especially about edges, is dependent on the suitability of the training images. This paper aims to combine the benefits of edge-directed SR with those of learning-based SR. In particular, we propose an approach to extend edge-directed super-resolution to include detail from an image/texture example provided by the user (e.g., from the Internet). A significant benefit of our approach is that only a single exemplar image is required to supply the missing detail – strong edges are obtained in the SR image even if they are not present in the example image due to the combination of the edge-directed approach. In addition, we can achieve quality results at very large magnification, which is often problematic for both edge-directed and learning-based approaches.",
"title": ""
},
{
"docid": "b5453d9e4385d5a5ff77997ad7e3f4f0",
"text": "We propose a new measure, the method noise, to evaluate and compare the performance of digital image denoising methods. We first compute and analyze this method noise for a wide class of denoising algorithms, namely the local smoothing filters. Second, we propose a new algorithm, the nonlocal means (NL-means), based on a nonlocal averaging of all pixels in the image. Finally, we present some experiments comparing the NL-means algorithm and the local smoothing filters.",
"title": ""
}
] |
[
{
"docid": "60120375949f36157d73748af5c3231a",
"text": "This paper describes REVIEW, a new retinal vessel reference dataset. This dataset includes 16 images with 193 vessel segments, demonstrating a variety of pathologies and vessel types. The vessel edges are marked by three observers using a special drawing tool. The paper also describes the algorithm used to process these segments to produce vessel profiles, against which vessel width measurement algorithms can be assessed. Recommendations are given for use of the dataset in performance assessment. REVIEW can be downloaded from http://ReviewDB.lincoln.ac.uk.",
"title": ""
},
{
"docid": "660bc85f84d37a98e78a34ccf1c8b1ab",
"text": "In this paper, we evaluate the performance and experience differences between direct touch and mouse input on horizontal and vertical surfaces using a simple application and several validated scales. We find that, not only are both speed and accuracy improved when using the multi-touch display over a mouse, but that participants were happier and more engaged. They also felt more competent, in control, related to other people, and immersed. Surprisingly, these results cannot be explained by the intuitiveness of the controller, and the benefits of touch did not come at the expense of perceived workload. Our work shows the added value of considering experience in addition to traditional measures of performance, and demonstrates an effective and efficient method for gathering experience during inter-action with surface applications. We conclude by discussing how an understanding of this experience can help in designing touch applications.",
"title": ""
},
{
"docid": "a32a359ad54d69466d267cad6e182ae9",
"text": "The Sign System for Indonesian Language (SIBI) is a rather complex sign language. It has four components that distinguish the meaning of the sign language and it follows the syntax and the grammar of the Indonesian language. This paper proposes a model for recognizing the SIBI words by using Microsoft Kinect as the input sensor. This model is a part of automatic translation from SIBI to text. The features for each word are extracted from skeleton and color-depth data produced by Kinect. Skeleton data features indicate the angle between human joints and Cartesian axes. Color images are transformed to gray-scale and their features are extracted by using Discrete Cosine Transform (DCT) with Cross Correlation (CC) operation. The image's depth features are extracted by running MATLAB regionprops function to get its region properties. The Generalized Learning Vector Quantization (GLVQ) and Random Forest (RF) training algorithm from WEKA data mining tools are used as the classifier of the model. Several experiments with different scenarios have shown that the highest accuracy (96,67%) is obtained by using 30 frames for skeleton combined with 20 frames for region properties image classified by Random Forest.",
"title": ""
},
{
"docid": "a212a2969c0c72894dcde880bbf29fa7",
"text": "Machine learning is useful for building robust learning models, and it is based on a set of features that identify a state of an object. Unfortunately, some data sets may contain a large number of features making, in some cases, the learning process time consuming and the generalization capability of machine learning poor. To make a data set easy to learn and understand, it is typically recommended to remove the most irrelevant features from the set. However, choosing what data should be kept or eliminated may be performed by complex selection algorithms, and optimal feature selection may require an exhaustive search of all possible subsets of features which is computationally expensive. This paper proposes a simple method to perform feature selection using artificial neural networks. It is shown experimentally that genetic algorithms in combination with artificial neural networks can easily be used to extract those features that are required to produce a desired result. Experimental results show that very few hidden neurons are required for feature selection as artificial neural networks are only used to assess the quality of an individual, which is a chosen subset of features.",
"title": ""
},
{
"docid": "4b354edbd555b6072ae04fb9befc48eb",
"text": "We present a generative method for the creation of geometrically complex andmaterially heterogeneous objects. By combining generative design and additive manufacturing, we demonstrate a unique formfinding approach and method for multi-material 3D printing. The method offers a fast, automated and controllable way to explore an expressive set of symmetrical, complex and colored objects, which makes it a useful tool for design exploration andprototyping.Wedescribe a recursive grammar for the generation of solid boundary surfacemodels suitable for a variety of design domains.We demonstrate the generation and digital fabrication ofwatertight 2-manifold polygonalmeshes, with feature-aligned topology that can be produced on a wide variety of 3D printers, as well as post-processed with traditional 3D modeling tools. To date, objects with intricate spatial patterns and complex heterogeneous material compositions generated by this method can only be produced through 3D printing. © 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "c174b7f1f6267ec75b1a9cac4bcaf2f7",
"text": "Issue tracking systems such as Bugzilla, Mantis and JIRA are Process Aware Information Systems to support business process of issue (defect and feature enhancement) reporting and resolution. The process of issue reporting to resolution consists of several steps or activities performed by various roles (bug reporter, bug triager, bug fixer, developers, and quality assurance manager) within the software maintenance team. Project teams define a workflow or a business process (design time process model and guidelines) to streamline and structure the issue management activities. However, the runtime process (reality) may not conform to the design time model and can have imperfections or inefficiencies. We apply business process mining tools and techniques to analyze the event log data (bug report history) generated by an issue tracking system with the objective of discovering runtime process maps, inefficiencies and inconsistencies. We conduct a case-study on data extracted from Bugzilla issue tracking system of the popular open-source Firefox browser project. We present and implement a process mining framework, Nirikshan, consisting of various steps: data extraction, data transformation, process discovery, performance analysis and conformance checking. We conduct a series of process mining experiments to study self-loops, back-and-forth, issue reopen, unique traces, event frequency, activity frequency, bottlenecks and present an algorithm and metrics to compute the degree of conformance between the design time and the runtime process.",
"title": ""
},
{
"docid": "8ea0ac6401d648e359fc06efa59658e6",
"text": "Different neural networks have exhibited excellent performance on various speech processing tasks, and they usually have specific advantages and disadvantages. We propose to use a recently developed deep learning model, recurrent convolutional neural network (RCNN), for speech processing, which inherits some merits of recurrent neural network (RNN) and convolutional neural network (CNN). The core module can be viewed as a convolutional layer embedded with an RNN, which enables the model to capture both temporal and frequency dependance in the spectrogram of the speech in an efficient way. The model is tested on speech corpus TIMIT for phoneme recognition and IEMOCAP for emotion recognition. Experimental results show that the model is competitive with previous methods in terms of accuracy and efficiency.",
"title": ""
},
{
"docid": "03356f32b78ae68603a59c23e8f4a01c",
"text": "1. Introduction The problem of estimating the dimensionality of a model occurs in various forms in applied statistics. There is estimating the number of factor in factor analysis, estimating the degree of a polynomial describing the data, selecting the variables to be introduced in a multiple regression equation, estimating the order of an AR or MA time series model, and so on. In factor analysis this problem was traditionally solved by eyeballing residual eigen-values, or by applying some other kind of heuristic procedure. When maximum likelihood factor analysis became computationally feasible the likelihoods for diierent dimensionalities could be compared. Most statisticians were aware of the fact that comparison of successive chi squares was not optimal in any well deened decision theoretic sense. With the advent of the electronic computer the forward and backward stepwise selection procedures in multiple regression also became quite popular, but again there were plenty of examples around showing that the procedures were not optimal and could easily lead one astray. When even more computational power became available one could solve the best subset selection problem for up to 20 or 30 variables, but choosing an appropriate criterion on the basis of which to compare the many models remains a problem. But exactly because of these advances in computation, nding a solution of the problem became more and more urgent. In the linear regression situation the C p criterion of Mallows (1973), which had already been around much longer, and the PRESS criterion of Allen (1971) were suggested. Although they seemed to work quite well, they were too limited in scope. The structural covariance models of Joreskog and others, and the log linear models of Goodman and others, made search over a much more complicated set of models necessary, and the model choice problems in those contexts could not be attacked by inherently linear methods. Three major closely related developments occurred around 1974. Akaike (1973) introduced the information criterion for model selection, generalizing his earlier work on time series analysis and factor analysis. Stone (1974) reintroduced and systematized cross",
"title": ""
},
{
"docid": "5179662c841302180848dc566a114f10",
"text": "Hyperspectral image (HSI) unmixing has attracted increasing research interests in recent decades. The major difficulty of it lies in that the endmembers and the associated abundances need to be separated from highly mixed observation data with few a priori information. Recently, sparsity-constrained nonnegative matrix factorization (NMF) algorithms have been proved effective for hyperspectral unmixing (HU) since they can sufficiently utilize the sparsity property of HSIs. In order to improve the performance of NMF-based unmixing approaches, spectral and spatial constrains have been added into the unmixing model, but spectral-spatial joint structure is required to be more accurately estimated. To exploit the property that similar pixels within a small spatial neighborhood have higher possibility to share similar abundances, hypergraph structure is employed to capture the similarity relationship among the spatial nearby pixels. In the construction of a hypergraph, each pixel is taken as a vertex of the hypergraph, and each vertex with its k nearest spatial neighboring pixels form a hyperedge. Using the hypergraph, the pixels with similar abundances can be accurately found, which enables the unmixing algorithm to obtain promising results. Experiments on synthetic data and real HSIs are conducted to investigate the performance of the proposed algorithm. The superiority of the proposed algorithm is demonstrated by comparing it with some state-of-the-art methods.",
"title": ""
},
{
"docid": "5cc7f7aae87d95ea38c2e5a0421e0050",
"text": "Scrum is a structured framework to support complex product development. However, Scrum methodology faces a challenge of managing large teams. To address this challenge, in this paper we propose a solution called Scrum of Scrums. In Scrum of Scrums, we divide the Scrum team into teams of the right size, and then organize them hierarchically into a Scrum of Scrums. The main goals of the proposed solution are to optimize communication between teams in Scrum of Scrums; to make the system work after integration of all parts; to reduce the dependencies between the parts of system; and to prevent the duplication of parts in the system. [Qurashi SA, Qureshi MRJ. Scrum of Scrums Solution for Large Size Teams Using Scrum Methodology. Life Sci J 2014;11(8):443-449]. (ISSN:1097-8135). http://www.lifesciencesite.com. 58",
"title": ""
},
{
"docid": "db9f6e58adc2a3ce423eed3223d88b19",
"text": "The self-organizing map (SOM) is an excellent tool in exploratory phase of data mining. It projects input space on prototypes of a low-dimensional regular grid that can be effectively utilized to visualize and explore properties of the data. When the number of SOM units is large, to facilitate quantitative analysis of the map and the data, similar units need to be grouped, i.e., clustered. In this paper, different approaches to clustering of the SOM are considered. In particular, the use of hierarchical agglomerative clustering and partitive clustering using k-means are investigated. The two-stage procedure--first using SOM to produce the prototypes that are then clustered in the second stage--is found to perform well when compared with direct clustering of the data and to reduce the computation time.",
"title": ""
},
{
"docid": "5adaee6e03fdd73ebed40804b9cad326",
"text": "Quantum circuits exhibit several features of large-scale distributed systems. They have a concise design formalism but behavior that is challenging to represent let alone predict. Issues of scalability—both in the yet-to-be-engineered quantum hardware and in classical simulators—are paramount. They require sparse representations for efficient modeling. Whereas simulators represent both the system’s current state and its operations directly, emulators manipulate the images of system states under a mapping to a different formalism. We describe three such formalisms for quantum circuits. The first two extend the polynomial construction of Dawson et al. [1] to (i) work for any set of quantum gates obeying a certain “balance” condition and (ii) produce a single polynomial over any sufficiently structured field or ring. The third appears novel and employs only simple Boolean formulas, optionally limited to a form we call “parity-of-AND” equations. Especially the third can combine with off-the-shelf state-of-the-art third-party software, namely model counters and #SAT solvers, that we show capable of vast improvements in the emulation time in natural instances. We have programmed all three constructions to proof-of-concept level and report some preliminary tests and applications. These include algebraic analysis of special quantum circuits and the possibility of a new classical attack on the factoring problem. Preliminary comparisons are made with the libquantum simulator[2–4]. 1 A Brief But Full QC Introduction A quantum circuit is a compact representation of a computational system. It consists of some number m of qubits represented by lines resembling a musical staff, and some number s of gates arrayed like musical notes and chords. Here is an example created using the popular visual simulator [5]: Fig. 1. A five-qubit quantum circuit that computes a Fourier transform on the first four qubits. The circuit C operates on m = 5 qubits. The input is the binary string x = 10010. The first n = 4 qubits see most of the action and hold the nominal input x0 = 1001 of length n = 4, while the fifth qubit is an ancilla initialized to 0 whose purpose here is to hold the nominal output bit. The circuit has thirteen gates. Six of them have a single control represented by a black dot; they activate if and only if the control receives a 1 signal. The last gate has two controls and a target represented by the parity symbol ⊕ rather than a labeled box. Called a Toffoli gate, it will set the output bit if and only if both controls receive a 1 signal. The two gates before it merely swap the qubits 2 and 3 and 1 and 4, respectively. They have no effect on the output and are included here only to say that the first twelve gates combine to compute the quantum Fourier transform QFT4. This is just the ordinary discrete Fourier transform F16 on 2 4 = 16 coordinates. The actual output C(x) of the circuit is a quantum state Z that belongs to the complex vector space C. Nine of its entries in the standard basis are shown in Figure 1; seven more were cropped from the screenshot. Sixteen of the components are absent, meaning Z has 0 in the corresponding coordinates. Despite the diversity of the nine complex entries ZL shown, each has magnitude |ZL| = 0.0625. In general, |ZL| represents the probability that a measurement—of all qubits—will yield the binary string z ∈ { 0, 1 } corresponding to the coordinate L under the standard ordered enumeration of { 0, 1 }. 
Here we are interested in those z whose final entry z5 is a 1. Two of them are shown; two others (11101 and 11111) are possible and also have probability 1 16 each, making a total of 1 4 probability for getting z5 = 1. Owing to the “cylindrical” nature of the set B of strings ending in 1, a measurement of just the fifth qubit yields 1 with probability 1 4 . Where does the probability come from? The physical answer is that it is an indelible aspect of nature as expressed by quantum mechanics. For our purposes the computational answer is that it comes from the four gates labeled H, for Hadamard gate. Each supplies one bit of nondeterminism, giving four bits in all, which govern the sixteen possible outcomes of this particular example. It is a mistake to think that the probabilities must be equally spread out and must be multiples of 1/2 where h is the number of Hadamard gates. Appending just one more Hadamard gate at the right end of the third qubit line creates nonzero probabilities as low as 0.0183058 . . . and as high as 0.106694 . . . , each appearing for four outcomes of 24 nonzero possibilities. This happens because the component values follow wave equations that can amplify some values while reducing or zeroing the amplitude of others via interference. Indeed, the goal of quantum computing is to marshal most of the amplitude onto a small set of desired outcomes, so that measurements— that is to say, quantum sampling—will reveal one of them with high probability. All of this indicates the burgeoning complexity of quantum systems. Our original circuit has 5 qubits, 4 nondeterministic gates, and 9 other gates, yet there are 2 = 32 components of the vectors representing states, 32 basic inputs and outputs, and 2 = 16 branchings to consider. Adding the fifth Hadamard gate creates a new fork in every path through the system, giving 32 branchings. The whole circuit C defines a 32× 32 matrix UC in which the I-th row encodes the quantum state ΦI resulting from computation on the standard basis vector x = eI . The matrix is unitary, meaning that UC multiplied by its conjugate transpose U∗ C gives the 32× 32 identity matrix. Indeed, UC is the product of thirteen simpler matrices U` representing the respective gates (` = 1, . . . , s with s = 13). Here each gate engages only a subset of the qubits of arity r < m, so that U` decomposes into its 2 r × 2 unitary gate matrix and the identity action (represented by the 2× 2 identity matrix I) on the other m− r lines. Here are some single-qubit gate matrices: H = 1 √ 2 [ 1 1 1 −1 ]",
"title": ""
},
{
"docid": "f17b3a6c31daeee0ae0a8ebc7a14e16c",
"text": "In full-duplex (FD) radios, phase noise leads to random phase mismatch between the self-interference (SI) and the reconstructed cancellation signal, resulting in possible performance degradation during SI cancellation. To explicitly analyze its impacts on the digital SI cancellation, an orthogonal frequency division multiplexing (OFDM)-modulated FD radio is considered with phase noises at both the transmitter and receiver. The closed-form expressions for both the digital cancellation capability and its limit for the large interference-to-noise ratio (INR) case are derived in terms of the power of the common phase error, INR, desired signal-to-noise ratio (SNR), channel estimation error and transmission delay. Based on the obtained digital cancellation capability, the achievable rate region of a two-way FD OFDM system with phase noise is characterized. Then, with a limited SI cancellation capability, the maximum outer bound of the rate region is proved to exist for sufficiently large transmission power. Furthermore, a minimum transmission power is obtained to achieve $\\beta$ -portion of the cancellation capability limit and to ensure that the outer bound of the rate region is close to its maximum.",
"title": ""
},
{
"docid": "a86bc0970dba249e1e53f9edbad3de43",
"text": "Periodic inspection of a hanger rope is needed for the effective maintenance of suspension bridge. However, it is dangerous for human workers to access the hanger rope and not easy to check the exact state of the hanger rope. In this work we have developed a wheel-based robot that can approach the hanger rope instead of the human worker and carry the inspection device which is able to examine the inside status of the hanger rope. Meanwhile, a wheel-based cable climbing robot may be badly affected by the vibration that is generated while the robot moves on the bumpy surface of the hanger rope. The caterpillar is able to safely drive with the wide contact face on the rough terrain. Accordingly, we developed the caterpillar that can be combined with the developed cable climbing robot. In this paper, the caterpillar is introduced and its performance is compared with the wheel-based cable climbing robot.",
"title": ""
},
{
"docid": "12ba0cd3db135168b48e062cca1d1d32",
"text": "We consider the problem of localizing a novel image in a large 3D model, given that the gravitational vector is known. In principle, this is just an instance of camera pose estimation, but the scale of the problem introduces some interesting challenges. Most importantly, it makes the correspondence problem very difficult so there will often be a significant number of outliers to handle. To tackle this problem, we use recent theoretical as well as technical advances. Many modern cameras and phones have gravitational sensors that allow us to reduce the search space. Further, there are new techniques to efficiently and reliably deal with extreme rates of outliers. We extend these methods to camera pose estimation by using accurate approximations and fast polynomial solvers. Experimental results are given demonstrating that it is possible to reliably estimate the camera pose despite cases with more than 99 percent outlier correspondences in city-scale models with several millions of 3D points.",
"title": ""
},
{
"docid": "1277b7b45f5a54eec80eb8ab47ee3fbb",
"text": "Latent variable models, and probabilistic graphical models more generally, provide a declarative language for specifying prior knowledge and structural relationships in complex datasets. They have a long and rich history in natural language processing, having contributed to fundamental advances such as statistical alignment for translation (Brown et al., 1993), topic modeling (Blei et al., 2003), unsupervised part-of-speech tagging (Brown et al., 1992), and grammar induction (Klein and Manning, 2004), among others. Deep learning, broadly construed, is a toolbox for learning rich representations (i.e., features) of data through numerical optimization. Deep learning is the current dominant paradigm in natural language processing, and some of the major successes include language modeling (Bengio et al., 2003; Mikolov et al., 2010; Zaremba et al., 2014), machine translation (Sutskever et al., 2014; Cho et al., 2014; Bahdanau et al., 2015; Vaswani et al., 2017), and natural language understanding tasks such as question answering and natural language inference.",
"title": ""
},
{
"docid": "c8ebf32413410a5d91defbb19a73b6f3",
"text": "BACKGROUND\nAudit and feedback is widely used as a strategy to improve professional practice either on its own or as a component of multifaceted quality improvement interventions. This is based on the belief that healthcare professionals are prompted to modify their practice when given performance feedback showing that their clinical practice is inconsistent with a desirable target. Despite its prevalence as a quality improvement strategy, there remains uncertainty regarding both the effectiveness of audit and feedback in improving healthcare practice and the characteristics of audit and feedback that lead to greater impact.\n\n\nOBJECTIVES\nTo assess the effects of audit and feedback on the practice of healthcare professionals and patient outcomes and to examine factors that may explain variation in the effectiveness of audit and feedback.\n\n\nSEARCH METHODS\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) 2010, Issue 4, part of The Cochrane Library. www.thecochranelibrary.com, including the Cochrane Effective Practice and Organisation of Care (EPOC) Group Specialised Register (searched 10 December 2010); MEDLINE, Ovid (1950 to November Week 3 2010) (searched 09 December 2010); EMBASE, Ovid (1980 to 2010 Week 48) (searched 09 December 2010); CINAHL, Ebsco (1981 to present) (searched 10 December 2010); Science Citation Index and Social Sciences Citation Index, ISI Web of Science (1975 to present) (searched 12-15 September 2011).\n\n\nSELECTION CRITERIA\nRandomised trials of audit and feedback (defined as a summary of clinical performance over a specified period of time) that reported objectively measured health professional practice or patient outcomes. In the case of multifaceted interventions, only trials in which audit and feedback was considered the core, essential aspect of at least one intervention arm were included.\n\n\nDATA COLLECTION AND ANALYSIS\nAll data were abstracted by two independent review authors. For the primary outcome(s) in each study, we calculated the median absolute risk difference (RD) (adjusted for baseline performance) of compliance with desired practice compliance for dichotomous outcomes and the median percent change relative to the control group for continuous outcomes. Across studies the median effect size was weighted by number of health professionals involved in each study. We investigated the following factors as possible explanations for the variation in the effectiveness of interventions across comparisons: format of feedback, source of feedback, frequency of feedback, instructions for improvement, direction of change required, baseline performance, profession of recipient, and risk of bias within the trial itself. We also conducted exploratory analyses to assess the role of context and the targeted clinical behaviour. Quantitative (meta-regression), visual, and qualitative analyses were undertaken to examine variation in effect size related to these factors.\n\n\nMAIN RESULTS\nWe included and analysed 140 studies for this review. In the main analyses, a total of 108 comparisons from 70 studies compared any intervention in which audit and feedback was a core, essential component to usual care and evaluated effects on professional practice. 
After excluding studies at high risk of bias, there were 82 comparisons from 49 studies featuring dichotomous outcomes, and the weighted median adjusted RD was a 4.3% (interquartile range (IQR) 0.5% to 16%) absolute increase in healthcare professionals' compliance with desired practice. Across 26 comparisons from 21 studies with continuous outcomes, the weighted median adjusted percent change relative to control was 1.3% (IQR = 1.3% to 28.9%). For patient outcomes, the weighted median RD was -0.4% (IQR -1.3% to 1.6%) for 12 comparisons from six studies reporting dichotomous outcomes and the weighted median percentage change was 17% (IQR 1.5% to 17%) for eight comparisons from five studies reporting continuous outcomes. Multivariable meta-regression indicated that feedback may be more effective when baseline performance is low, the source is a supervisor or colleague, it is provided more than once, it is delivered in both verbal and written formats, and when it includes both explicit targets and an action plan. In addition, the effect size varied based on the clinical behaviour targeted by the intervention.\n\n\nAUTHORS' CONCLUSIONS\nAudit and feedback generally leads to small but potentially important improvements in professional practice. The effectiveness of audit and feedback seems to depend on baseline performance and how the feedback is provided. Future studies of audit and feedback should directly compare different ways of providing feedback.",
"title": ""
},
{
"docid": "8222f8eae81c954e8e923cbd883f8322",
"text": "Work stealing is a promising approach to constructing multithreaded program runtimes of parallel programming languages. This paper presents HERMES, an energy-efficient work-stealing language runtime. The key insight is that threads in a work-stealing environment -- thieves and victims - have varying impacts on the overall program running time, and a coordination of their execution \"tempo\" can lead to energy efficiency with minimal performance loss. The centerpiece of HERMES is two complementary algorithms to coordinate thread tempo: the workpath-sensitive algorithm determines tempo for each thread based on thief-victim relationships on the execution path, whereas the workload-sensitive algorithm selects appropriate tempo based on the size of work-stealing deques. We construct HERMES on top of Intel Cilk Plus's runtime, and implement tempo adjustment through standard Dynamic Voltage and Frequency Scaling (DVFS). Benchmarks running on HERMES demonstrate an average of 11-12% energy savings with an average of 3-4% performance loss through meter-based measurements over commercial CPUs.",
"title": ""
},
{
"docid": "0c832dde1c268ec32e7fca64158abb31",
"text": "For many years, the clinical laboratory's focus on analytical quality has resulted in an error rate of 4-5 sigma, which surpasses most other areas in healthcare. However, greater appreciation of the prevalence of errors in the pre- and post-analytical phases and their potential for patient harm has led to increasing requirements for laboratories to take greater responsibility for activities outside their immediate control. Accreditation bodies such as the Joint Commission International (JCI) and the College of American Pathologists (CAP) now require clear and effective procedures for patient/sample identification and communication of critical results. There are a variety of free on-line resources available to aid in managing the extra-analytical phase and the recent publication of quality indicators and proposed performance levels by the International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) working group on laboratory errors and patient safety provides particularly useful benchmarking data. Managing the extra-laboratory phase of the total testing cycle is the next challenge for laboratory medicine. By building on its existing quality management expertise, quantitative scientific background and familiarity with information technology, the clinical laboratory is well suited to play a greater role in reducing errors and improving patient safety outside the confines of the laboratory.",
"title": ""
},
{
"docid": "3e63c8a5499966f30bd3e6b73494ff82",
"text": "Events can be understood in terms of their temporal structure. The authors first draw on several bodies of research to construct an analysis of how people use event structure in perception, understanding, planning, and action. Philosophy provides a grounding for the basic units of events and actions. Perceptual psychology provides an analogy to object perception: Like objects, events belong to categories, and, like objects, events have parts. These relationships generate 2 hierarchical organizations for events: taxonomies and partonomies. Event partonomies have been studied by looking at how people segment activity as it happens. Structured representations of events can relate partonomy to goal relationships and causal structure; such representations have been shown to drive narrative comprehension, memory, and planning. Computational models provide insight into how mental representations might be organized and transformed. These different approaches to event structure converge on an explanation of how multiple sources of information interact in event perception and conception.",
"title": ""
}
] |
scidocsrr
|
914d66c51630092e0ec3babd3a9a99d2
|
Improving network security monitoring for industrial control systems
|
[
{
"docid": "77302cf6a07ee1b6ffa27f8c12ab6ecf",
"text": "The increasing interconnectivity of SCADA (Supervisory Control and Data Acquisition) networks has exposed them to a wide range of network security problems. This paper provides an overview of all the crucial research issues that are involved in strengthening the cyber security of SCADA networks. The paper describes the general architecture of SCADA networks and the properties of some of the commonly used SCADA communication protocols. The general security threats and vulnerabilities in these networks are discussed followed by a survey of the research challenges facing SCADA networks. The paper discusses the ongoing work in several SCADA security areas such as improving access control, firewalls and intrusion detection systems, SCADA protocol analyses, cryptography and key management, device and operating system security. Many trade and research organizations are involved in trying to standardize SCADA security technologies. The paper concludes with an overview of these standardization efforts. a 2006 Elsevier Ltd. All rights reserved. Modern industrial facilities, such as oil refineries, chemical factories, electric power generation plants, and manufacturing facilities are large, distributed complexes. Plant operators must continuously monitor and control many different sections of the plant to ensure its proper operation. The development of networking technology has made this remote command and control feasible. The earliest control networks were simple point-to-point networks connecting a monitoring or command device to a remote sensor or actuator. These have since evolved into complex networks that support communication between a central control unit and multiple remote units on a common communication bus. The nodes on these networks are usually special purpose embedded computing devices such as sensors, actuators, and PLCs. These industrial command and control networks are commonly called SCADA (Supervisory Control and Data Acquisition) networks. In today’s competitive markets, it is essential for industries to modernize their digital SCADA networks to reduce costs and increase efficiency. Many of the current SCADA networks * Corresponding author. E-mail addresses: [email protected] (V.M. Igure), sal4t@virginia 0167-4048/$ – see front matter a 2006 Elsevier Ltd. All rights reserve doi:10.1016/j.cose.2006.03.001 are also connected to the company’s corporate network and to the Internet. This improved connectivity can help to optimize manufacturing and distribution processes, but it also exposes the safety-critical industrial network to the myriad security problems of the Internet. If processes are monitored and controlled by devices connected over the SCADA network then a malicious attack over the SCADA network has the potential to cause significant damage to the plant. Apart from causing physical and economic loss to the company, an attack against a SCADA network might also adversely affect the environment and endanger public safety. Therefore, security of SCADA networks has become a prime concern. 1. SCADA network architecture A SCADA network provides an interconnection for field devices on the plant floor. These field devices, such as sensors and actuators, are monitored and controlled over the SCADA network by either a PC or a Programmable Logic Controller .edu (S.A. Laughter), [email protected] (R.D. Williams). d. c o m p u t e r s & s e c u r i t y 2 5 ( 2 0 0 6 ) 4 9 8 – 5 0 6 499 (PLC). 
In many cases, the plants also have a dedicated control center to screen the entire plant. The control center is usually located in a separate physical part of the factory and typically has advanced computation and communication facilities. Modern control centers have data servers, Human–Machine Interface (HMI) stations and other servers to aid the operators in the overall management of the factory network. This SCADA network is usually connected to the outside corporate network and/or the Internet through specialized gateways (Sauter and Schwaiger, 2002; Schwaiger and Treytl, 2003). The gateways provide the interface between IP-based networks on the outside and the fieldbus protocol-based SCADA networks on the factory floor. The gateway provides the protocol conversion mechanisms to enable communication between the two different networks. It also provides cache mechanisms for data objects that are exchanged between the networks in order to improve the gateway performance (Sauter and Schwaiger, 2002). A typical example of SCADA network is shown in Fig. 1. Apart from performance considerations, the design requirements for a SCADA network are also shaped by the operating conditions of the network (Decotignie, 1996). These conditions influence the topology of the network and the network protocol. The resulting SCADA networks have certain unique characteristics. For example, most of the terminal devices in fieldbus networks are special purpose embedded computing systems with limited computing capability and functionality. Unlike highly populated corporate office networks, many utility industry applications of SCADA networks, such as electric power distribution, are usually sparse, yet geographically extensive. Similarly, the physical conditions of a factory floor are vastly different from that of a corporate office environment. Both the large utility and factory floor networks are often subjected to wide variations in temperature, electro-magnetic radiation, and even simple accumulation of large quantities of dust. All of these conditions increase the noise on the network and also reduce the lifetime of the wires. The specifications for the physical layer of the network must be able to withstand such harsh conditions and manage the noise on the network. Typical communications on a SCADA network include control messages exchanged between master and slave devices. A master device is one which can control the operation of another device. A PC or a PLC is an example of a master device. A slave device is usually a simple sensor or actuator which can send messages to the command device and carry out actions at the command of a master device. However, the network protocol should also provide features for communication between fieldbus devices that want to communicate as peers. To accommodate these requirements, protocols such as PROFIBUS have a hybrid communication model, which includes a peer-to-peer communication model between master devices and a client–server communication model between masters and slaves. The communication between devices can also be asymmetric (Carlson, 2002; Risley et al., 2003). For example, messages sent from the slave to the master are typically much larger than the messages sent from the master to the slave. Some devices may also communicate only through alarms and status messages. Since many devices share a common bus, the protocol must have features for assigning priorities to messages. This helps distinguish between critical and non-critical messages. 
For example, an alarm message about a possible safety violation should take precedence over a regular data update message. SCADA network protocols must also provide some degree of delivery assurance and stability. Many factory processes require realtime communication between field devices. The network protocol should have features that not only ensure that the critical messages are delivered but that they are delivered within the time constraints.",
"title": ""
},
{
"docid": "11ed7e0742ddb579efe6e1da258b0d3c",
"text": "Supervisory Control and Data Acquisition(SCADA) systems are deeply ingrained in the fabric of critical infrastructure sectors. These computerized real-time process control systems, over geographically dispersed continuous distribution operations, are increasingly subject to serious damage and disruption by cyber means due to their standardization and connectivity to other networks. However, SCADA systems generally have little protection from the escalating cyber threats. In order to understand the potential danger and to protect SCADA systems, in this paper, we highlight their difference from standard IT systems and present a set of security property goals. Furthermore, we focus on systematically identifying and classifying likely cyber attacks including cyber-induced cyber-physical attack son SCADA systems. Determined by the impact on control performance of SCADA systems, the attack categorization criteria highlights commonalities and important features of such attacks that define unique challenges posed to securing SCADA systems versus traditional Information Technology(IT) systems.",
"title": ""
},
{
"docid": "9f5024623c1366b4e3c997bcfb909707",
"text": "We needed data to help ourselves and our clients to decide when to expend the extra effort to use a real-time extension such as Xenomai; when it is sufficient to use mainline Linux with the PREEMPT RT patches applied; and when unpatched mainline Linux is sufficient. To gather this data, we set out to compare the performance of three kernels: a baseline Linux kernel; the same kernel with the PREEMPT RT patches; and the same kernel with the Xenomai patches. Xenomai is a set of patches to Linux that integrates real-time capabilities from the hardware interrupt level on up. The PREEMPT RT patches make sections of the Linux kernel preemptible that are ordinarily blocking. We measure the timing for performing two tasks. The first task is to toggle a General Purpose IO (GPIO) output at a fixed period. The second task is to respond to a changing input GPIO pin by causing an output GPIO pin’s value to follow it. For this task, rather than polling, we rely on an interrupt to notify us when the GPIO input changes. For each task, we have four distinct experiments: a Linux user-space process with real-time priority; a Linux kernel module; a Xenomai user-space process; and a Xenomai kernel module. The Linux experiments are run on both a stock Linux kernel and a PREEMPT RT-patched Linux kernel. The Xenomai experiments are run on a Xenomai-patched Linux kernel. To provide an objective metric, all timing measurements are taken with an external piece of hardware, running a small C program on bare metal. This paper documents our results. In particular, we begin with a detailed description of the set of tools we developed to test the kernel configurations. We then present details of a a specific hardware test platform, the BeagleBoard C4, an OMAP3 (Arm architecture) system, and the specific kernel configurations we built to test on that platform. We provide extensive numerical results from testing the BeagleBoard. For instance, the approximate highest external-stimulus frequency for which at least 95% of the time the latency does not exceed 1/2 the period is 31kHz. This frequency is achieved with a kernel module on stock Linux; the best that can be achieved with a userspace module is 8.4kHz, using a Xenomai userspace process. If the latency must not exceed 1/2 the frequency 100% of the time, then Xenomai is the best option for both kernelspace and userspace; a Xenomai kernel module can run at 13.5kHz, while a userspace process can hit 5.9kHz. In addition to the numerical results, we discuss the qualitative difficulties we experienced in trying to test these configurations on the BeagleBoard. Finally, we offer our recommendations for deciding when to use stock Linux vs. PREEMPT RTpatched Linux vs. Xenomai for real-time applications.",
"title": ""
}
] |
[
{
"docid": "e603b32746560887bdd6dbcfdc2e1c28",
"text": "A systematic review of self-report family assessment measures was conducted with reference to their psychometric properties, clinical utility and theoretical underpinnings. Eight instruments were reviewed: The McMaster Family Assessment Device (FAD); Circumplex Model Family Adaptability and Cohesion Evaluation Scales (FACES); Beavers Systems Model Self-Report Family Inventory (SFI); Family Assessment Measure III (FAM III); Family Environment Scale (FES); Family Relations Scale (FRS); and Systemic Therapy Inventory of Change (STIC); and the Systemic Clinical Outcome Routine Evaluation (SCORE). Results indicated that five family assessment measures are suitable for clinical use (FAD, FACES-IV, SFI, FAM III, SCORE), two are not (FES, FRS), and one is a new system currently under-going validation (STIC).",
"title": ""
},
{
"docid": "7f8ca7d8d2978bfc08ab259fba60148e",
"text": "Over the last few years, much online volunteered geographic information (VGI) has emerged and has been increasingly analyzed to understand places and cities, as well as human mobility and activity. However, there are concerns about the quality and usability of such VGI. In this study, we demonstrate a complete process that comprises the collection, unification, classification and validation of a type of VGI—online point-of-interest (POI) data—and develop methods to utilize such POI data to estimate disaggregated land use (i.e., employment size by category) at a very high spatial resolution (census block level) using part of the Boston metropolitan area as an example. With recent advances in activity-based land use, transportation, and environment (LUTE) models, such disaggregated land use data become important to allow LUTE models to analyze and simulate a person’s choices of work location and activity destinations and to understand policy impacts on future cities. These data can also be used as alternatives to explore economic activities at the local level, especially as government-published census-based disaggregated employment data have become less available in the recent decade. Our new approach provides opportunities for cities to estimate land use at high resolution with low cost by utilizing VGI while ensuring its quality with a certain accuracy threshold. The automatic classification of POI can also be utilized for other types of analyses on cities. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "9377e5de9d7a440aa5e73db10aa630f4",
"text": ". Micro-finance programmes targeting women became a major plank of donor poverty alleviation and gender strategies in the 1990s. Increasing evidence of the centrality of gender equality to poverty reduction and women’s higher credit repayment rates led to a general consensus on the desirability of targeting women. Not only ‘reaching’ but also ‘empowering’ women became the second official goal of the Micro-credit Summit Campaign.",
"title": ""
},
{
"docid": "3ba9e91a4d2ff8cb1fe479f5dddc86c1",
"text": "Researchers have shown that program analyses that drive software development and maintenance tools supporting search, traceability and other tasks can benefit from leveraging the natural language information found in identifiers and comments. Accurate natural language information depends on correctly splitting the identifiers into their component words and abbreviations. While conventions such as camel-casing can ease this task, conventions are not well-defined in certain situations and may be modified to improve readability, thus making automatic splitting more challenging. This paper describes an empirical study of state-of-the-art identifier splitting techniques and the construction of a publicly available oracle to evaluate identifier splitting algorithms. In addition to comparing current approaches, the results help to guide future development and evaluation of improved identifier splitting approaches.",
"title": ""
},
{
"docid": "6247c827c6fdbc976b900e69a9eb275c",
"text": "Despite the fact that commercial computer systems have been in existence for almost three decades, many systems in the process of being implemented may be classed as failures. One of the factors frequently cited as important to successful system development is involving users in the design and implementation process. This paper reports the results of a field study, conducted on data from forty-two systems, that investigates the role of user involvement and factors affecting the employment of user involvement on the success of system development. Path analysis was used to investigate both the direct effects of the contingent variables on system success and the effect of user involvement as a mediating variable between the contingent variables and system success. The results show that high system complexity and constraints on the resources available for system development are associated with less successful systems.",
"title": ""
},
{
"docid": "93fcbdfe59015b67955246927d67a620",
"text": "The Emotion Recognition in the Wild (EmotiW) Challenge has been held for three years. Previous winner teams primarily focus on designing specific deep neural networks or fusing diverse hand-crafted and deep convolutional features. They all neglect to explore the significance of the latent relations among changing features resulted from facial muscle motions. In this paper, we study this recognition challenge from the perspective of analyzing the relations among expression-specific facial features in an explicit manner. Our method has three key components. First, we propose a pair-wise learning strategy to automatically seek a set of facial image patches which are important for discriminating two particular emotion categories. We found these learnt local patches are in part consistent with the locations of expression-specific Action Units (AUs), thus the features extracted from such kind of facial patches are named AU-aware facial features. Second, in each pair-wise task, we use an undirected graph structure, which takes learnt facial patches as individual vertices, to encode feature relations between any two learnt facial patches. Finally, a robust emotion representation is constructed by concatenating all task-specific graph-structured facial feature relations sequentially. Extensive experiments on the EmotiW 2015 Challenge testify the efficacy of the proposed approach. Without using additional data, our final submissions achieved competitive results on both sub-challenges including the image based static facial expression recognition (we got 55.38% recognition accuracy outperforming the baseline 39.13% with a margin of 16.25%) and the audio-video based emotion recognition (we got 53.80% recognition accuracy outperforming the baseline 39.33% and the 2014 winner team's final result 50.37% with the margins of 14.47% and 3.43%, respectively).",
"title": ""
},
{
"docid": "e5ec3cf10b6664642db6a27d7c76987c",
"text": "We present a protocol for payments across payment systems. It enables secure transfers between ledgers and allows anyone with accounts on two ledgers to create a connection between them. Ledger-provided escrow removes the need to trust these connectors. Connections can be composed to enable payments between any ledgers, creating a global graph of liquidity or Interledger. Unlike previous approaches, this protocol requires no global coordinating system or blockchain. Transfers are escrowed in series from the sender to the recipient and executed using one of two modes. In the Atomic mode, transfers are coordinated using an ad-hoc group of notaries selected by the participants. In the Universal mode, there is no external coordination. Instead, bounded execution windows, participant incentives and a “reverse” execution order enable secure payments between parties without shared trust in any system or institution.",
"title": ""
},
{
"docid": "da287113f7cdcb8abb709f1611c8d457",
"text": "The paper describes a completely new topology for a low-speed, high-torque permanent brushless magnet machine. Despite being naturally air-cooled, it has a significantly higher torque density than a liquid-cooled transverse-flux machine, whilst its power factor is similar to that of a conventional permanent magnet brushless machine. The high torque capability and low loss density are achieved by combining the actions of a speed reducing magnetic gear and a high speed PM brushless machine within a highly integrated magnetic circuit. In this way, the magnetic limit of the machine is reached before its thermal limit. The principle of operation of such a dasiapseudopsila direct-drive machine is described, and measured results from a prototype machine are presented.",
"title": ""
},
{
"docid": "f114e788557e8d734bd2a04a5b789208",
"text": "Adaptive content delivery is the state of the art in real-time multimedia streaming. Leading streaming approaches, e.g., MPEG-DASH and Apple HTTP Live Streaming (HLS), have been developed for classical IP-based networks, providing effective streaming by means of pure client-based control and adaptation. However, the research activities of the Future Internet community adopt a new course that is different from today's host-based communication model. So-called information-centric networks are of considerable interest and are advertised as enablers for intelligent networks, where effective content delivery is to be provided as an inherent network feature. This paper investigates the performance gap between pure client-driven adaptation and the theoretical optimum in the promising Future Internet architecture named data networking (NDN). The theoretical optimum is derived by modeling multimedia streaming in NDN as a fractional multi-commodity flow problem and by extending it taking caching into account. We investigate the multimedia streaming performance under different forwarding strategies, exposing the interplay of forwarding strategies and adaptation mechanisms. Furthermore, we examine the influence of network inherent caching on the streaming performance by varying the caching polices and the cache sizes.",
"title": ""
},
{
"docid": "48518bad41b1b422f698a1f09997960f",
"text": "Knowledge graph is powerful tool for knowledge based engineering. In this paper, a vertical knowledge graph is proposed for the non-traditional machining. Firstly, the definition and classification of the knowledge graph are proposed. Then, the construct flow and key techniques are discussed in details for the construction of vertical knowledge graph. Finally, a vertical knowledge graph of EDM (electrical discharge matching) is proposed as a case study to illustrate the feasibility of this method.",
"title": ""
},
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "defc7f4420ad99d410fa18c24b46ab24",
"text": "To determine a reference range of fetal transverse cerebellar diameter in Brazilian population. This was a retrospective cross-sectional study with 3772 normal singleton pregnancies between 18 and 24 weeks of pregnancy. The transverse cerebellar diameter was measured on the axial plane of the fetal head at the level of the lateral ventricles, including the thalamus, cavum septum pellucidum, and third ventricle. To assess the correlation between transverse cerebellar diameter and gestational age, polynomial equations were calculated, with adjustments by the determination coefficient (R2). The mean of fetal transverse cerebellar diameter ranged from 18.49 ± 1.24 mm at 18 weeks to 25.86 ± 1.66 mm at 24 weeks of pregnancy. We observed a good correlation between transverse cerebellar diameter and gestational age, which was best represented by a linear equation: transverse cerebellar diameter: -6.21 + 1.307*gestational age (R2 = 0.707). We determined a reference range of fetal transverse cerebellar diameter for the second trimester of pregnancy in Brazilian population.",
"title": ""
},
{
"docid": "0b41e6fde6fb9a1f685ceec59fc5abc9",
"text": "Reflector antennas are widely used on satellites to communicate with ground stations. They simultaneously transmit and receive RF signals using separate downlink and uplink frequency bands. These antennas require compact and high-performance feed assemblies with small size, low mass, low passive intermodulation (PIM) products [1], low insertion loss, high power handling, and low cross-polar levels. The feeds must also be insensitive to large thermal variations, and must survive the launch environment. In order to achieve these desirable features without prototyping and/or bench tuning, Custom Microwave Inc. (CMI) has combined integrated RF design, precision CAD, and a precision manufacturing technique known as electroforming to closely integrate the various components of a feed or feed network, thereby achieving small size while maintaining high RF performance [2]. In addition to close integration, electroforming eliminates split joints and minimizes flanges by allowing several components to be realized in a single piece, making it the ideal manufacturing technique for ultra-low passive-intermodulation applications. This paper describes the use of precision design CAD tools along with electroforming to realize high-performance feed assemblies for various communication frequency bands for fixed satellite, broadcast satellite, and broadband satellite services.",
"title": ""
},
{
"docid": "878bdefc419be3da8d9e18111d26a74f",
"text": "PURPOSE\nTo estimate prevalence and chronicity of insomnia and the impact of chronic insomnia on health and functioning of adolescents.\n\n\nMETHODS\nData were collected from 4175 youths 11-17 at baseline and 3134 a year later sampled from managed care groups in a large metropolitan area. Insomnia was assessed by youth-reported DSM-IV symptom criteria. Outcomes are three measures of somatic health, three measures of mental health, two measures of substance use, three measures of interpersonal problems, and three of daily activities.\n\n\nRESULTS\nOver one-fourth reported one or more symptoms of insomnia at baseline and about 5% met diagnostic criteria for insomnia. Almost 46% of those who reported one or more symptoms of insomnia in Wave 1 continued to be cases at Wave 2 and 24% met DSM-IV symptom criteria for chronic insomnia (cases in Wave 1 were also cases in Wave 2). Multivariate analyses found chronic insomnia increased subsequent risk for somatic health problems, interpersonal problems, psychological problems, and daily activities. Significant odds (p < .05) ranged from 1.6 to 5.6 for poor outcomes. These results are the first reported on chronic insomnia among youths, and corroborate, using prospective data, previous findings on correlates of disturbed sleep based on cross-sectional studies.\n\n\nCONCLUSIONS\nInsomnia is both common and chronic among adolescents. The data indicate that the burden of insomnia is comparable to that of other psychiatric disorders such as mood, anxiety, disruptive, and substance use disorders. Chronic insomnia severely impacts future health and functioning of youths. Those with chronic insomnia are more likely to seek medical care. These data suggest primary care settings might provide a venue for screening and early intervention for adolescent insomnia.",
"title": ""
},
{
"docid": "df3c5a848c66dbd5e804242a93cdb998",
"text": "Handwritten character recognition has been one of the most fascinating research among the various researches in field of image processing. In Handwritten character recognition method the input is scanned from images, documents and real time devices like tablets, tabloids, digitizers etc. which are then interpreted into digital text. There are basically two approaches - Online Handwritten recognition which takes the input at run time and Offline Handwritten Recognition which works on scanned images. In this paper we have discussed the architecture, the steps involved, and the various proposed methodologies of offline and online character recognition along with their comparison and few applications.",
"title": ""
},
{
"docid": "a87ed525f9732e66e6c172867ef8b189",
"text": "We examine corporate financial and investment decisions made by female executives compared with male executives. Male executives undertake more acquisitions and issue debt more often than female executives. Further, acquisitions made by firms with male executives have announcement returns approximately 2% lower than those made by female executive firms, and debt issues also have lower announcement returns for firms with male executives. Female executives place wider bounds on earnings estimates and are more likely to exercise stock options early. This evidence suggests men exhibit relative overconfidence in significant corporate decision making compared with women. & 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "87447383afe36c38a5f0a7066614336e",
"text": "The current study examined whether self-compassion, the tendency to treat oneself kindly during distress and disappointments, would attenuate the positive relationship between body mass index (BMI) and eating disorder pathology, and the negative relationship between BMI and body image flexibility. One-hundred and fifty-three female undergraduate students completed measures of self-compassion, self-esteem, eating disorder pathology, and body image flexibility, which refers to one's acceptance of negative body image experiences. Controlling for self-esteem, hierarchical regressions revealed that self-compassion moderated the relationships between BMI and the criteria. Specifically, the positive relationship between BMI and eating disorder pathology and the negative relationship between BMI and body image flexibility were weaker the higher women's levels of self-compassion. Among young women, self-compassion may help to protect against the greater eating disturbances that coincide with a higher BMI, and may facilitate the positive body image experiences that tend to be lower the higher one's BMI.",
"title": ""
},
{
"docid": "7da294f96055210548a1b9f33204c234",
"text": "ARGUS is a multi-agent visitor identification system distributed over several workstations. Human faces are extracted from security camera images by a neuralnetwork-based face detector, and identified as frequent visitors by ARENA, a memory-based face recognition system. ARGUS then uses a messaging system to notify hosts that their guests have arrived. An interface agent enables users to submit feedback, which is immediately incorporated by ARENA to improve its face recognition performance. The ARGUS components were rapidly developed using JGram, an agent framework that is also detailed in this paper. JGram automatically converts high-level agent specifications into Java source code, and assembles complex tasks by composing individual agent services into a JGram pipeline. ARGUS has been operating successfully in an outdoor environment for several months.",
"title": ""
},
{
"docid": "badb04b676d3dab31024e8033fc8aec4",
"text": "Review was undertaken from February 1969 to January 1998 at the State forensic science center (Forensic Science) in Adelaide, South Australia, of all cases of murder-suicide involving children <16 years of age. A total of 13 separate cases were identified involving 30 victims, all of whom were related to the perpetrators. There were 7 male and 6 female perpetrators (age range, 23-41 years; average, 31 years) consisting of 6 mothers, 6 father/husbands, and 1 uncle/son-in-law. The 30 victims consisted of 11 daughters, 11 sons, 1 niece, 1 mother-in-law, and 6 wives of the assailants. The 23 children were aged from 10 months to 15 years (average, 6.0 years). The 6 mothers murdered 9 children and no spouses, with 3 child survivors. The 6 fathers murdered 13 children and 6 wives, with 1 child survivor. This study has demonstrated a higher percentage of female perpetrators than other studies of murder-suicide. The methods of homicide and suicide used were generally less violent among the female perpetrators compared with male perpetrators. Fathers killed not only their children but also their wives, whereas mothers murdered only their children. These results suggest differences between murder-suicides that involve children and adult-only cases, and between cases in which the mother rather than the father is the perpetrator.",
"title": ""
},
{
"docid": "7ce79a08969af50c1712f0e291dd026c",
"text": "Collaborative filtering (CF) is valuable in e-commerce, and for direct recommendations for music, movies, news etc. But today's systems have several disadvantages, including privacy risks. As we move toward ubiquitous computing, there is a great potential for individuals to share all kinds of information about places and things to do, see and buy, but the privacy risks are severe. In this paper we describe a new method for collaborative filtering which protects the privacy of individual data. The method is based on a probabilistic factor analysis model. Privacy protection is provided by a peer-to-peer protocol which is described elsewhere, but outlined in this paper. The factor analysis approach handles missing data without requiring default values for them. We give several experiments that suggest that this is most accurate method for CF to date. The new algorithm has other advantages in speed and storage over previous algorithms. Finally, we suggest applications of the approach to other kinds of statistical analyses of survey or questionaire data.",
"title": ""
}
] |
scidocsrr
|
73a537f621468311eabaa37761cef16e
|
Self-Organizing Scheme Based on NFV and SDN Architecture for Future Heterogeneous Networks
|
[
{
"docid": "4d66a85651a78bfd4f7aba290c21f9a7",
"text": "Mobile carrier networks follow an architecture where network elements and their interfaces are defined in detail through standardization, but provide limited ways to develop new network features once deployed. In recent years we have witnessed rapid growth in over-the-top mobile applications and a 10-fold increase in subscriber traffic while ground-breaking network innovation took a back seat. We argue that carrier networks can benefit from advances in computer science and pertinent technology trends by incorporating a new way of thinking in their current toolbox. This article introduces a blueprint for implementing current as well as future network architectures based on a software-defined networking approach. Our architecture enables operators to capitalize on a flow-based forwarding model and fosters a rich environment for innovation inside the mobile network. In this article, we validate this concept in our wireless network research laboratory, demonstrate the programmability and flexibility of the architecture, and provide implementation and experimentation details.",
"title": ""
}
] |
[
{
"docid": "f452650f3b003e6cd35d0303823e9277",
"text": "With the cloud storage services, users can easily form a group and share data with each other. Given the fact that the cloud is not trustable, users need to compute signatures for blocks of the shared data to allow public integrity auditing. Once a user is revoked from the group, the blocks that were previously signed by this revoked user must be re-signed by an existing user, which may result in heavy communication and computation cost for the user. Proxy re-signatures can be used here to allow the cloud to do the re-signing work on behalf of the group. However, a malicious cloud is able to use the re-signing keys to arbitrarily convert signatures from one user to another deliberately. Moreover, collusions between revoked users and a malicious cloud will disclose the secret values of the existing users. In this paper, we propose a novel public auditing scheme for the integrity of shared data with efficient and collusion-resistant user revocation utilizing the concept of Shamir secret sharing. Besides, our scheme also supports secure and efficient public auditing due to our improved polynomial-based authentication tags. The numerical analysis and experimental results demonstrate that our proposed scheme is provably secure and highly efficient.",
"title": ""
},
{
"docid": "6a4638a12c87b470a93e0d373a242868",
"text": "Unfortunately, few of today’s classrooms focus on helping students develop as creative thinkers. Even students who perform well in school are often unprepared for the challenges that they encounter after graduation, in their work lives as well as their personal lives. Many students learn to solve specific types of problems, but they are unable to adapt and improvise in response to the unexpected situations that inevitably arise in today’s fast-changing world.",
"title": ""
},
{
"docid": "9648c6cbdd7a04c595b7ba3310f32980",
"text": "Increase in identity frauds, crimes, security there is growing need of fingerprint technology in civilian and law enforcement applications. Partial fingerprints are of great interest which are either found at crime scenes or resulted from improper scanning. These fingerprints are poor in quality and the number of features present depends on size of fingerprint. Due to the lack of features such as core and delta, general fingerprint matching algorithms do not perform well for partial fingerprint matching. By using combination of level1 and level 2 features accuracy of partial matching cannot be increased. Therefore, we utilize extended features in combination with other feature set. Efficacious fusion methods for coalesce of different modality systems perform better for these types of prints. In this paper, we propose a method for partial fingerprint matching using score level fusion of minutiae based radon transform and pores based LBP extraction. To deal with broken ridges and fragmentary information, radon transform is used to get local information around minutiae. Finally, we evaluate the performance by comparing Equal Error Rate (ERR) of proposed method and existing method and proposed method reduces the error rate to 1.84%.",
"title": ""
},
{
"docid": "23aa04378f4eed573d1290c6bb9d3670",
"text": "The ability to compare systems from the same domain is of central importance for their introduction into complex applications. In the domains of named entity recognition and entity linking, the large number of systems and their orthogonal evaluation w.r.t. measures and datasets has led to an unclear landscape regarding the abilities and weaknesses of the different approaches. We present GERBIL—an improved platform for repeatable, storable and citable semantic annotation experiments— and its extension since being release. GERBIL has narrowed this evaluation gap by generating concise, archivable, humanand machine-readable experiments, analytics and diagnostics. The rationale behind our framework is to provide developers, end users and researchers with easy-to-use interfaces that allow for the agile, fine-grained and uniform evaluation of annotation tools on multiple datasets. By these means, we aim to ensure that both tool developers and end users can derive meaningful insights into the extension, integration and use of annotation applications. In particular, GERBIL provides comparable results to tool developers, simplifying the discovery of strengths and weaknesses of their implementations with respect to the state-of-the-art. With the permanent experiment URIs provided by our framework, we ensure the reproducibility and archiving of evaluation results. Moreover, the framework generates data in a machine-processable format, allowing for the efficient querying and postprocessing of evaluation results. Additionally, the tool diagnostics provided by GERBIL provide insights into the areas where tools need further refinement, thus allowing developers to create an informed agenda for extensions and end users to detect the right tools for their purposes. Finally, we implemented additional types of experiments including entity typing. GERBIL aims to become a focal point for the state-of-the-art, driving the research agenda of the community by presenting comparable objective evaluation results. Furthermore, we tackle the central problem of the evaluation of entity linking, i.e., we answer the question of how an evaluation algorithm can compare two URIs to each other without being bound to a specific knowledge base. Our approach to this problem opens a way to address the deprecation of URIs of existing gold standards for named entity recognition and entity linking, a feature which is currently not supported by the state-of-the-art. We derived the importance of this feature from usage and dataset requirements collected from the GERBIL user community, which has already carried out more than 24.000 single evaluations using our framework. Through the resulting updates, GERBIL now supports 8 tasks, 46 datasets and 20 systems.",
"title": ""
},
{
"docid": "f3b4a9b49a34d56c32589cee14e6b900",
"text": "The paper reports on mobile robot motion estimation based on matching points from successive two-dimensional (2D) laser scans. This ego-motion approach is well suited to unstructured and dynamic environments because it directly uses raw laser points rather than extracted features. We have analyzed the application of two methods that are very different in essence: (i) A 2D version of iterative closest point (ICP), which is widely used for surface registration; (ii) a genetic algorithm (GA), which is a novel approach for this kind of problem. Their performance in terms of real-time applicability and accuracy has been compared in outdoor experiments with nonstop motion under diverse realistic navigation conditions. Based on this analysis, we propose a hybrid GA-ICP algorithm that combines the best characteristics of these pure methods. The experiments have been carried out with the tracked mobile robot Auriga-alpha and an on-board 2D laser scanner. _____________________________________________________________________________________ This document is a PREPRINT. The published version of the article is available in: Journal of Field Robotics, 23: 21–34. doi: 10.1002/rob.20104; http://dx.doi.org/10.1002/rob.20104.",
"title": ""
},
{
"docid": "15657f493da77021df3406868e6949ff",
"text": "Brushless dc motors controlled by Hall-effect sensors are used in variety of applications, wherein the Hall sensors should be placed 120 electrical degrees apart. This is difficult to achieve in practice especially in low-precision motors, which leads to unsymmetrical operation of the inverter/motor phases. To mitigate this phenomenon, an approach of filtering the Hall-sensor signals has been recently proposed. This letter extends the previous work and presents a very efficient digital implementation of such filters that can be easily included into various brushless dc motor-drive systems for restoring their operation in steady state and transients.",
"title": ""
},
{
"docid": "41b83a85c1c633785766e3f464cbd7a6",
"text": "Distributed systems are easier to build than ever with the emergence of new, data-centric abstractions for storing and computing over massive datasets. However, similar abstractions do not exist for storing and accessing meta-data. To fill this gap, Tango provides developers with the abstraction of a replicated, in-memory data structure (such as a map or a tree) backed by a shared log. Tango objects are easy to build and use, replicating state via simple append and read operations on the shared log instead of complex distributed protocols; in the process, they obtain properties such as linearizability, persistence and high availability from the shared log. Tango also leverages the shared log to enable fast transactions across different objects, allowing applications to partition state across machines and scale to the limits of the underlying log without sacrificing consistency.",
"title": ""
},
{
"docid": "fe31348bce3e6e698e26aceb8e99b2d8",
"text": "Web-based enterprises process events generated by millions of users interacting with their websites. Rich statistical data distilled from combining such interactions in near real-time generates enormous business value. In this paper, we describe the architecture of Photon, a geographically distributed system for joining multiple continuously flowing streams of data in real-time with high scalability and low latency, where the streams may be unordered or delayed. The system fully tolerates infrastructure degradation and datacenter-level outages without any manual intervention. Photon guarantees that there will be no duplicates in the joined output (at-most-once semantics) at any point in time, that most joinable events will be present in the output in real-time (near-exact semantics), and exactly-once semantics eventually.\n Photon is deployed within Google Advertising System to join data streams such as web search queries and user clicks on advertisements. It produces joined logs that are used to derive key business metrics, including billing for advertisers. Our production deployment processes millions of events per minute at peak with an average end-to-end latency of less than 10 seconds. We also present challenges and solutions in maintaining large persistent state across geographically distant locations, and highlight the design principles that emerged from our experience.",
"title": ""
},
{
"docid": "e76a9cef74788905d3d8f5659c2bfca2",
"text": "In this paper, we present a novel configuration for realizing monolithic substrate integrated waveguide (SIW)-based phased antenna arrays using Ferrite low-temperature cofired ceramic (LTCC) technology. Unlike the current common schemes for realizing SIW phased arrays that rely on surface-mount component (p-i-n diodes, etc.) for controlling the phase of the individual antenna elements, here the phase is tuned by biasing of the ferrite filling of the SIW. This approach eliminates the need for mounting of any additional RF components and enables seamless monolithic integration of phase shifters and antennas in SIW technology. As a proof of concept, a two-element slotted SIW-based phased array is designed, fabricated, and measured. The prototype exhibits a gain of 4.9 dBi at 13.2 GHz and a maximum E-plane beam-scanning of ±28° using external windings for biasing the phase shifters. Moreover, the array can achieve a maximum beam-scanning of ±19° when biased with small windings that are embedded in the package. This demonstration marks the first time a fully monolithic SIW-based phased array is realized in Ferrite LTCC technology and paves the way for future larger size implementations.",
"title": ""
},
{
"docid": "a820e52486283ae0b1dd5c1ce07daa34",
"text": "The striatal dopaminergic system has been implicated in reinforcement learning (RL), motor performance, and incentive motivation. Various computational models have been proposed to account for each of these effects individually, but a formal analysis of their interactions is lacking. Here we present a novel algorithmic model expanding the classical actor-critic architecture to include fundamental interactive properties of neural circuit models, incorporating both incentive and learning effects into a single theoretical framework. The standard actor is replaced by a dual opponent actor system representing distinct striatal populations, which come to differentially specialize in discriminating positive and negative action values. Dopamine modulates the degree to which each actor component contributes to both learning and choice discriminations. In contrast to standard frameworks, this model simultaneously captures documented effects of dopamine on both learning and choice incentive-and their interactions-across a variety of studies, including probabilistic RL, effort-based choice, and motor skill learning.",
"title": ""
},
{
"docid": "2f045a9bfabe7adb71085ac29be39990",
"text": "Changes in functional connectivity across mental states can provide richer information about human cognition than simpler univariate approaches. Here, we applied a graph theoretical approach to analyze such changes in the lower alpha (8-10 Hz) band of EEG data from 26 subjects undergoing a mentally-demanding test of sustained attention: the Psychomotor Vigilance Test. Behavior and connectivity maps were compared between the first and last 5 min of the task. Reaction times were significantly slower in the final minutes of the task, showing a clear time-on-task effect. A significant increase was observed in weighted characteristic path length, a measure of the efficiency of information transfer within the cortical network. This increase was correlated with reaction time change. Functional connectivity patterns were also estimated on the cortical surface via source localization of cortical activities in 26 predefined regions of interest. Increased characteristic path length was revealed, providing further support for the presence of a reshaped global topology in cortical connectivity networks under fatigue state. Additional analysis showed an asymmetrical pattern of connectivity (right>left) in fronto-parietal regions associated with sustained attention, supporting the right-lateralization of this function. Interestingly, in the fatigue state, significance decreases were observed in left, but not right fronto-parietal connectivity. Our results indicate that functional network organization can change over relatively short time scales with mental fatigue, and that decreased connectivity has a meaningful relationship with individual difference in behavior and performance.",
"title": ""
},
{
"docid": "ed13193df5db458d0673ccee69700bc0",
"text": "Interest in meat fatty acid composition stems mainly from the need to find ways to produce healthier meat, i.e. with a higher ratio of polyunsaturated (PUFA) to saturated fatty acids and a more favourable balance between n-6 and n-3 PUFA. In pigs, the drive has been to increase n-3 PUFA in meat and this can be achieved by feeding sources such as linseed in the diet. Only when concentrations of α-linolenic acid (18:3) approach 3% of neutral lipids or phospholipids are there any adverse effects on meat quality, defined in terms of shelf life (lipid and myoglobin oxidation) and flavour. Ruminant meats are a relatively good source of n-3 PUFA due to the presence of 18:3 in grass. Further increases can be achieved with animals fed grain-based diets by including whole linseed or linseed oil, especially if this is \"protected\" from rumen biohydrogenation. Long-chain (C20-C22) n-3 PUFA are synthesised from 18:3 in the animal although docosahexaenoic acid (DHA, 22:6) is not increased when diets are supplemented with 18:3. DHA can be increased by feeding sources such as fish oil although too-high levels cause adverse flavour and colour changes. Grass-fed beef and lamb have naturally high levels of 18:3 and long chain n-3 PUFA. These impact on flavour to produce a 'grass fed' taste in which other components of grass are also involved. Grazing also provides antioxidants including vitamin E which maintain PUFA levels in meat and prevent quality deterioration during processing and display. In pork, beef and lamb the melting point of lipid and the firmness/hardness of carcass fat is closely related to the concentration of stearic acid (18:0).",
"title": ""
},
{
"docid": "bf19f897047ba130afd7742a9847e08c",
"text": "Neural Machine Translation (NMT) has been shown to be more effective in translation tasks compared to the Phrase-Based Statistical Machine Translation (PBMT). However, NMT systems are limited in translating low-resource languages (LRL), due to the fact that neural methods require a large amount of parallel data to learn effective mappings between languages. In this work we show how so-called multilingual NMT can help to tackle the challenges associated with LRL translation. Multilingual NMT forces words and subwords representation in a shared semantic space across multiple languages. This allows the model to utilize a positive parameter transfer between different languages, without changing the standard attentionbased encoder-decoder architecture and training modality. We run preliminary experiments with three languages (English, Italian, Romanian) covering six translation directions and show that for all available directions the multilingual approach, i.e. just one system covering all directions is comparable or even outperforms the single bilingual systems. Finally, our approach achieve competitive results also for language pairs not seen at training time using a pivoting (x-step) translation. Italiano. La traduzione automatica con reti neurali (neural machine translation, NMT) ha dimostrato di essere più efficace in molti compiti di traduzione rispetto a quella basata su frasi (phrase-based machine translation, PBMT). Tuttavia, i sistemi NMT sono limitati nel tradurre lingue con basse risorse (LRL). Questo è dovuto al fatto che i metodi di deep learning richiedono grandi quantit di dati per imparare una mappa efficace tra le due lingue. In questo lavoro mostriamo come un modello NMT multilingua può aiutare ad affrontare i problemi legati alla traduzione di LRL. La NMT multilingua costringe la rappresentrazione delle parole e dei segmenti di parole in uno spazio semantico condiviso tra multiple lingue. Questo consente al modello di usare un trasferimento di parametri positivo tra le lingue coinvolte, senza cambiare l’architettura NMT encoder-decoder basata sull’attention e il modo di addestramento. Abbiamo eseguito esperimenti preliminari con tre lingue (inglese, italiano e rumeno), coprendo sei direzioni di traduzione e mostriamo che per tutte le direzioni disponibili l’approccio multilingua, cioè un solo sistema che copre tutte le direzioni è confrontabile o persino migliore dei singolo sistemi bilingue. Inoltre, il nostro approccio ottiene risultati competitivi anche per coppie di lingue non viste durante il trainig, facendo uso di traduzioni con pivot.",
"title": ""
},
{
"docid": "43d5236bd9e2afc2882b662e4626bfce",
"text": "Mindfulness meditation (or simply mindfulness) is an ancient method of attention training. Arguably, developed originally by the Buddha, it has been practiced by Buddhists over 2,500 years as part of their spiritual training. The popularity in mindfulness has soared recently following its adaptation as Mindfulness-Based Stress Management by Jon Kabat-Zinn (1995). Mindfulness is often compared to hypnosis but not all assertions are accurate. This article, as a primer, delineates similarities and dissimilarities between mindfulness and hypnosis in terms of 12 specific facets, including putative neuroscientific findings. It also provides a case example that illustrates clinical integration of the two methods.",
"title": ""
},
{
"docid": "9058505c04c1dc7c33603fd8347312a0",
"text": "Fear appeals are a polarizing issue, with proponents confident in their efficacy and opponents confident that they backfire. We present the results of a comprehensive meta-analysis investigating fear appeals' effectiveness for influencing attitudes, intentions, and behaviors. We tested predictions from a large number of theories, the majority of which have never been tested meta-analytically until now. Studies were included if they contained a treatment group exposed to a fear appeal, a valid comparison group, a manipulation of depicted fear, a measure of attitudes, intentions, or behaviors concerning the targeted risk or recommended solution, and adequate statistics to calculate effect sizes. The meta-analysis included 127 articles (9% unpublished) yielding 248 independent samples (NTotal = 27,372) collected from diverse populations. Results showed a positive effect of fear appeals on attitudes, intentions, and behaviors, with the average effect on a composite index being random-effects d = 0.29. Moderation analyses based on prominent fear appeal theories showed that the effectiveness of fear appeals increased when the message included efficacy statements, depicted high susceptibility and severity, recommended one-time only (vs. repeated) behaviors, and targeted audiences that included a larger percentage of female message recipients. Overall, we conclude that (a) fear appeals are effective at positively influencing attitude, intentions, and behaviors; (b) there are very few circumstances under which they are not effective; and (c) there are no identified circumstances under which they backfire and lead to undesirable outcomes.",
"title": ""
},
{
"docid": "e5ecbd3728e93badd4cfbf5eef6957f9",
"text": "Live-cell imaging has opened an exciting window into the role cellular heterogeneity plays in dynamic, living systems. A major critical challenge for this class of experiments is the problem of image segmentation, or determining which parts of a microscope image correspond to which individual cells. Current approaches require many hours of manual curation and depend on approaches that are difficult to share between labs. They are also unable to robustly segment the cytoplasms of mammalian cells. Here, we show that deep convolutional neural networks, a supervised machine learning method, can solve this challenge for multiple cell types across the domains of life. We demonstrate that this approach can robustly segment fluorescent images of cell nuclei as well as phase images of the cytoplasms of individual bacterial and mammalian cells from phase contrast images without the need for a fluorescent cytoplasmic marker. These networks also enable the simultaneous segmentation and identification of different mammalian cell types grown in co-culture. A quantitative comparison with prior methods demonstrates that convolutional neural networks have improved accuracy and lead to a significant reduction in curation time. We relay our experience in designing and optimizing deep convolutional neural networks for this task and outline several design rules that we found led to robust performance. We conclude that deep convolutional neural networks are an accurate method that require less curation time, are generalizable to a multiplicity of cell types, from bacteria to mammalian cells, and expand live-cell imaging capabilities to include multi-cell type systems.",
"title": ""
},
{
"docid": "c91cbf47f1c506b4d512adc752fff039",
"text": "OBJECTIVE\nSodium benzoate, a common additive in popular beverages, has recently been linked to ADHD. This research examined the relationship between sodium benzoate-rich beverage ingestion and symptoms related to ADHD in college students.\n\n\nMETHOD\nCollege students (N = 475) completed an anonymous survey in class in fall 2010. The survey assessed recent intake of a noninclusive list of sodium benzoate-rich beverages and ADHD-related symptoms using a validated screener.\n\n\nRESULTS\nSodium benzoate-rich beverage intake was significantly associated with ADHD-related symptoms (p = .001), and significance was retained after controlling for covariates. Students scoring ≥4 on the screener (scores that may be consistent with ADHD; n = 67) reported higher intakes (34.9 ± 4.4 servings/month) than the remainder of the sample (16.7 ± 1.1 servings/month).\n\n\nCONCLUSION\nThese data suggest that a high intake of sodium benzoate-rich beverages may contribute to ADHD-related symptoms in college students and warrants further investigation.",
"title": ""
},
{
"docid": "1d1caa539215e7051c25a9f28da48651",
"text": "Physiological changes occur in pregnancy to nurture the developing foetus and prepare the mother for labour and delivery. Some of these changes influence normal biochemical values while others may mimic symptoms of medical disease. It is important to differentiate between normal physiological changes and disease pathology. This review highlights the important changes that take place during normal pregnancy.",
"title": ""
},
{
"docid": "9b71c5bd7314e793757776c6e54f03bb",
"text": "This paper evaluates the application of Bronfenbrenner’s bioecological theory as it is represented in empirical work on families and their relationships. We describe the ‘‘mature’’ form of bioecological theory of the mid-1990s and beyond, with its focus on proximal processes at the center of the Process-Person-Context-Time model. We then examine 25 papers published since 2001, all explicitly described as being based on Bronfenbrenner’s theory, and show that all but 4 rely on outmoded versions of the theory, resulting in conceptual confusion and inadequate testing of the theory.",
"title": ""
}
] |
scidocsrr
|
1057b1673d4ac7b30c702bff9c449e9e
|
Malware Detection with Deep Neural Network Using Process Behavior
|
[
{
"docid": "4ca5fec568185d3699c711cc86104854",
"text": "Attackers often create systems that automatically rewrite and reorder their malware to avoid detection. Typical machine learning approaches, which learn a classifier based on a handcrafted feature vector, are not sufficiently robust to such reorderings. We propose a different approach, which, similar to natural language modeling, learns the language of malware spoken through the executed instructions and extracts robust, time domain features. Echo state networks (ESNs) and recurrent neural networks (RNNs) are used for the projection stage that extracts the features. These models are trained in an unsupervised fashion. A standard classifier uses these features to detect malicious files. We explore a few variants of ESNs and RNNs for the projection stage, including Max-Pooling and Half-Frame models which we propose. The best performing hybrid model uses an ESN for the recurrent model, Max-Pooling for non-linear sampling, and logistic regression for the final classification. Compared to the standard trigram of events model, it improves the true positive rate by 98.3% at a false positive rate of 0.1%.",
"title": ""
}
] |
[
{
"docid": "45636bc97812ecfd949438c2e8ee9d52",
"text": "Single-image super-resolution is a fundamental task for vision applications to enhance the image quality with respect to spatial resolution. If the input image contains degraded pixels, the artifacts caused by the degradation could be amplified by superresolution methods. Image blur is a common degradation source. Images captured by moving or still cameras are inevitably affected by motion blur due to relative movements between sensors and objects. In this work, we focus on the super-resolution task with the presence of motion blur. We propose a deep gated fusion convolution neural network to generate a clear high-resolution frame from a single natural image with severe blur. By decomposing the feature extraction step into two task-independent streams, the dualbranch design can facilitate the training process by avoiding learning the mixed degradation all-in-one and thus enhance the final high-resolution prediction results. Extensive experiments demonstrate that our method generates sharper super-resolved images from low-resolution inputs with high computational efficiency.",
"title": ""
},
{
"docid": "d11c2dd512f680e79706f73d4cd3d0aa",
"text": "We describe the class of convexified convolutional neural networks (CCNNs), which capture the parameter sharing of convolutional neural networks in a convex manner. By representing the nonlinear convolutional filters as vectors in a reproducing kernel Hilbert space, the CNN parameters can be represented in terms of a lowrank matrix, and the rank constraint can be relaxed so as to obtain a convex optimization problem. For learning two-layer convolutional neural networks, we prove that the generalization error obtained by a convexified CNN converges to that of the best possible CNN. For learning deeper networks, we train CCNNs in a layerwise manner. Empirically, we find that CCNNs achieve competitive or better performance than CNNs trained by backpropagation, SVMs, fully-connected neural networks, stacked denoising auto-encoders, and other baseline methods.",
"title": ""
},
{
"docid": "096249a1b13cd994427eacddc8af3cf6",
"text": "Many factors influence the adoption of cloud computing. Organizations must systematically evaluate these factors before deciding to adopt cloud-based solutions. To assess the determinants that influence the adoption of cloud computing, we develop a research model based on the innovation characteristics from the diffusion of innovation (DOI) theory and the technology-organization-environment (TOE) framework. Data collected from 369 firms in Portugal are used to test the related hypotheses. The study also investigates the determinants of cloud-computing adoption in the manufacturing and services sectors. 2014 Elsevier B.V. All rights reserved. * Corresponding author. Tel.: +351 914 934 438. E-mail addresses: [email protected] (T. Oliveira), [email protected] (M. Thomas), [email protected] (M. Espadanal).",
"title": ""
},
{
"docid": "873a24a210aa57fc22895500530df2ba",
"text": "We describe the winning entry to the Amazon Picking Challenge. From the experience of building this system and competing in the Amazon Picking Challenge, we derive several conclusions: 1) We suggest to characterize robotic system building along four key aspects, each of them spanning a spectrum of solutions—modularity vs. integration, generality vs. assumptions, computation vs. embodiment, and planning vs. feedback. 2) To understand which region of each spectrum most adequately addresses which robotic problem, we must explore the full spectrum of possible approaches. To achieve this, our community should agree on key aspects that characterize the solution space of robotic systems. 3) For manipulation problems in unstructured environments, certain regions of each spectrum match the problem most adequately, and should be exploited further. This is supported by the fact that our solution deviated from the majority of the other challenge entries along each of the spectra.",
"title": ""
},
{
"docid": "7e51bffe62c16cdc517a7c1cbd4ac3fe",
"text": "Information is a perennially significant business asset in all organizations. Therefore, it must be protected as any other valuable asset. This is the objective of information security, and an information security program provides this kind of protection for a company’s information assets and for the company as a whole. One of the best ways to address information security problems in the corporate world is through a risk-based approach. In this paper, we present a taxonomy of security risk assessment drawn from 125 papers published from 1995 to May 2014. Organizations with different size may face problems in selecting suitable risk assessment methods that satisfy their needs. Although many risk-based approaches have been proposed, most of them are based on the old taxonomy, avoiding the need for considering and applying the important criteria in assessing risk raised by rapidly changing technologies and the attackers knowledge level. In this paper, we discuss the key features of risk assessment that should be included in an information security management system. We believe that our new risk assessment taxonomy helps organizations to not only understand the risk assessment better by comparing different new concepts but also select a suitable way to conduct the risk assessment properly. Moreover, this taxonomy will open up interesting avenues for future research in the growing field of security risk assessment.",
"title": ""
},
{
"docid": "82a4bac1745e2d5dd9e39c5a4bf5b3e9",
"text": "Meaning can be as important as usability in the design of technology.",
"title": ""
},
{
"docid": "c95e58c054855c60b16db4816c626ecb",
"text": "Markerless tracking of human pose is a hard yet relevant problem. In this paper, we derive an efficient filtering algorithm for tracking human pose using a stream of monocular depth images. The key idea is to combine an accurate generative model — which is achievable in this setting using programmable graphics hardware — with a discriminative model that provides data-driven evidence about body part locations. In each filter iteration, we apply a form of local model-based search that exploits the nature of the kinematic chain. As fast movements and occlusion can disrupt the local search, we utilize a set of discriminatively trained patch classifiers to detect body parts. We describe a novel algorithm for propagating this noisy evidence about body part locations up the kinematic chain using the un-scented transform. The resulting distribution of body configurations allows us to reinitialize the model-based search. We provide extensive experimental results on 28 real-world sequences using automatic ground-truth annotations from a commercial motion capture system.",
"title": ""
},
{
"docid": "90a9e56cc5a2f9c149dfb33d3446f095",
"text": "The author explores the viability of a comparative approach to personality research. A review of the diverse animal-personality literature suggests that (a) most research uses trait constructs, focuses on variation within (vs. across) species, and uses either behavioral codings or trait ratings; (b) ratings are generally reliable and show some validity (7 parameters that could influence reliability and 4 challenges to validation are discussed); and (c) some dimensions emerge across species, but summaries are hindered by a lack of standard descriptors. Arguments for and against cross-species comparisons are discussed, and research guidelines are suggested. Finally, a research agenda guided by evolutionary and ecological principles is proposed. It is concluded that animal studies provide unique opportunities to examine biological, genetic, and environmental bases of personality and to study personality change, personality-health links, and personality perception.",
"title": ""
},
{
"docid": "67d141b8e53e1398b6988e211d16719e",
"text": "the recent advancement of networking technology has enabled the streaming of video content over wired/wireless network to a great extent. Video streaming includes various types of video content, namely, IP television (IPTV), Video on demand (VOD), Peer-to-Peer (P2P) video sharing, Voice (and video) over IP (VoIP) etc. The consumption of the video contents has been increasing a lot these days and promises a huge potential for the network provider, content provider and device manufacturers. However, from the end user's perspective there is no universally accepted existing standard metric, which will ensure the quality of the application/utility to meet the user's desired experience. In order to fulfill this gap, a new metric, called Quality of Experience (QoE), has been proposed in numerous researches recently. Our aim in this paper is to research the evolution of the term QoE, find the influencing factors of QoE metric especially in video streaming and finally QoE modelling and methodologies in practice.",
"title": ""
},
{
"docid": "f393b6e00ef1e97f683a5dace33e40ff",
"text": "s on human factors in computing systems (pp. 815–828). ACM New York, NY, USA. Hudlicka, E. (1997). Summary of knowledge elicitation techniques for requirements analysis (Course material for human computer interaction). Worcester Polytechnic Institute. Kaptelinin, V., & Nardi, B. (2012). Affordances in HCI: Toward a mediated action perspective. In Proceedings of CHI '12 (pp. 967–976).",
"title": ""
},
{
"docid": "77ec1741e7a0876a0fe9fb85dd57f552",
"text": "Despite growing recognition that attention fluctuates from moment-to-moment during sustained performance, prevailing analysis strategies involve averaging data across multiple trials or time points, treating these fluctuations as noise. Here, using alternative approaches, we clarify the relationship between ongoing brain activity and performance fluctuations during sustained attention. We introduce a novel task (the gradual onset continuous performance task), along with innovative analysis procedures that probe the relationships between reaction time (RT) variability, attention lapses, and intrinsic brain activity. Our results highlight 2 attentional states-a stable, less error-prone state (\"in the zone\"), characterized by higher default mode network (DMN) activity but during which subjects are at risk of erring if DMN activity rises beyond intermediate levels, and a more effortful mode of processing (\"out of the zone\"), that is less optimal for sustained performance and relies on activity in dorsal attention network (DAN) regions. These findings motivate a new view of DMN and DAN functioning capable of integrating seemingly disparate reports of their role in goal-directed behavior. Further, they hold potential to reconcile conflicting theories of sustained attention, and represent an important step forward in linking intrinsic brain activity to behavioral phenomena.",
"title": ""
},
{
"docid": "229605eada4ca390d17c5ff168c6199a",
"text": "The sharing economy is a new online community that has important implications for offline behavior. This study evaluates whether engagement in the sharing economy is associated with an actor’s aversion to risk. Using a web-based survey and a field experiment, we apply an adaptation of Holt and Laury’s (2002) risk lottery game to a representative sample of sharing economy participants. We find that frequency of activity in the sharing economy predicts risk aversion, but only in interaction with satisfaction. While greater satisfaction with sharing economy websites is associated with a decrease in risk aversion, greater frequency of usage is associated with greater risk aversion. This analysis shows the limitations of a static perspective on how risk attitudes relate to participation in the sharing economy.",
"title": ""
},
{
"docid": "65a990303d1d6efd3aea5307e7db9248",
"text": "The presentation of news articles to meet research needs has traditionally been a document-centric process. Yet users often want to monitor developing news stories based on an event, rather than by examining an exhaustive list of retrieved documents. In this work, we illustrate a news retrieval system, eventNews, and an underlying algorithm which is event-centric. Through this system, news articles are clustered around a single news event or an event and its sub-events. The algorithm presented can leverage the creation of new Reuters stories and their compact labels as seed documents for the clustering process. The system is configured to generate top-level clusters for news events based on an editorially supplied topical label, known as a ‘slugline,’ and to generate sub-topic-focused clusters based on the algorithm. The system uses an agglomerative clustering algorithm to gather and structure documents into distinct result sets. Decisions on whether to merge related documents or clusters are made according to the similarity of evidence derived from two distinct sources, one, relying on a digital signature based on the unstructured text in the document, the other based on the presence of named entity tags that have been assigned to the document by a named entity tagger, in this case Thomson Reuters’ Calais engine. Copyright c © 2016 for the individual papers by the paper’s authors. Copying permitted for private and academic purposes. This volume is published and copyrighted by its editors. In: M. Martinez, U. Kruschwitz, G. Kazai, D. Corney, F. Hopfgartner, R. Campos and D. Albakour (eds.): Proceedings of the NewsIR’16 Workshop at ECIR, Padua, Italy, 20-March-2016, published at http://ceur-ws.org",
"title": ""
},
{
"docid": "5ed74b235edcbcb5aeb5b6b3680e2122",
"text": "Self-paced learning (SPL) mimics the cognitive mechanism o f humans and animals that gradually learns from easy to hard samples. One key issue in SPL is to obtain better weighting strategy that is determined by mini zer function. Existing methods usually pursue this by artificially designing th e explicit form of SPL regularizer. In this paper, we focus on the minimizer functi on, and study a group of new regularizer, named self-paced implicit regularizer th at is deduced from robust loss function. Based on the convex conjugacy theory, the min imizer function for self-paced implicit regularizer can be directly learned fr om the latent loss function, while the analytic form of the regularizer can be even known. A general framework (named SPL-IR) for SPL is developed accordingly. We dem onstrate that the learning procedure of SPL-IR is associated with latent robu st loss functions, thus can provide some theoretical inspirations for its working m echanism. We further analyze the relation between SPL-IR and half-quadratic opt imization. Finally, we implement SPL-IR to both supervised and unsupervised tasks , nd experimental results corroborate our ideas and demonstrate the correctn ess and effectiveness of implicit regularizers.",
"title": ""
},
{
"docid": "e141a1c5c221aa97db98534b339694cb",
"text": "Despite the tremendous popularity and great potential, the field of Enterprise Resource Planning (ERP) adoption and implementation is littered with remarkable failures. Though many contributing factors have been cited in the literature, we argue that the integrated nature of ERP systems, which generally requires an organization to adopt standardized business processes reflected in the design of the software, is a key factor contributing to these failures. We submit that the integration and standardization imposed by most ERP systems may not be suitable for all types of organizations and thus the ‘‘fit’’ between the characteristics of the adopting organization and the standardized business process designs embedded in the adopted ERP system affects the likelihood of implementation success or failure. In this paper, we use the structural contingency theory to identify a set of dimensions of organizational structure and ERP system characteristics that can be used to gauge the degree of fit, thus providing some insights into successful ERP implementations. Propositions are developed based on analyses regarding the success of ERP implementations in different types of organizations. These propositions also provide directions for future research that might lead to prescriptive guidelines for managers of organizations contemplating implementing ERP systems. r 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "2cc55b2cf34d363de50b220a5ced5676",
"text": "We report an imaging scheme, termed aperture-scanning Fourier ptychography, for 3D refocusing and super-resolution macroscopic imaging. The reported scheme scans an aperture at the Fourier plane of an optical system and acquires the corresponding intensity images of the object. The acquired images are then synthesized in the frequency domain to recover a high-resolution complex sample wavefront; no phase information is needed in the recovery process. We demonstrate two applications of the reported scheme. In the first example, we use an aperture-scanning Fourier ptychography platform to recover the complex hologram of extended objects. The recovered hologram is then digitally propagated into different planes along the optical axis to examine the 3D structure of the object. We also demonstrate a reconstruction resolution better than the detector pixel limit (i.e., pixel super-resolution). In the second example, we develop a camera-scanning Fourier ptychography platform for super-resolution macroscopic imaging. By simply scanning the camera over different positions, we bypass the diffraction limit of the photographic lens and recover a super-resolution image of an object placed at the far field. This platform's maximum achievable resolution is ultimately determined by the camera's traveling range, not the aperture size of the lens. The FP scheme reported in this work may find applications in 3D object tracking, synthetic aperture imaging, remote sensing, and optical/electron/X-ray microscopy.",
"title": ""
},
{
"docid": "a4790fdc5f6469b45fa4a22a871f3501",
"text": "NSGA ( [5]) is a popular non-domination based genetic algorithm for multiobjective optimization. It is a very effective algorithm but has been generally criticized for its computational complexity, lack of elitism and for choosing the optimal parameter value for sharing parameter σshare. A modified version, NSGAII ( [3]) was developed, which has a better sorting algorithm , incorporates elitism and no sharing parameter needs to be chosen a priori. NSGA-II is discussed in detail in this.",
"title": ""
},
{
"docid": "790895861cb5bba78513d26c1eb30e4c",
"text": "This paper develops an integrated approach, combining quality function deployment (QFD), fuzzy set theory, and analytic hierarchy process (AHP) approach, to evaluate and select the optimal third-party logistics service providers (3PLs). In the approach, multiple evaluating criteria are derived from the requirements of company stakeholders using a series of house of quality (HOQ). The importance of evaluating criteria is prioritized with respect to the degree of achieving the stakeholder requirements using fuzzy AHP. Based on the ranked criteria, alternative 3PLs are evaluated and compared with each other using fuzzy AHP again to make an optimal selection. The effectiveness of proposed approach is demonstrated by applying it to a Hong Kong based enterprise that supplies hard disk components. The proposed integrated approach outperforms the existing approaches because the outsourcing strategy and 3PLs selection are derived from the corporate/business strategy.",
"title": ""
},
{
"docid": "c6347c06d84051023baaab39e418fb65",
"text": "This paper presents a complete approach to a successful utilization of a high-performance extreme learning machines (ELMs) Toolbox for Big Data. It summarizes recent advantages in algorithmic performance; gives a fresh view on the ELM solution in relation to the traditional linear algebraic performance; and reaps the latest software and hardware performance achievements. The results are applicable to a wide range of machine learning problems and thus provide a solid ground for tackling numerous Big Data challenges. The included toolbox is targeted at enabling the full potential of ELMs to the widest range of users.",
"title": ""
},
{
"docid": "db570f8ff8d714dc2964a9d9b7032bf4",
"text": "Pain related to the osseous thoracolumbar spine is common in the equine athlete, with minimal information available regarding soft tissue pathology. The aims of this study were to describe the anatomy of the equine SSL and ISL (supraspinous and interspinous ligaments) in detail and to assess the innervation of the ligaments and their myofascial attachments including the thoracolumbar fascia. Ten equine thoracolumbar spines (T15-L1) were dissected to define structure and anatomy of the SSL, ISL and adjacent myofascial attachments. Morphological evaluation included histology, electron microscopy and immunohistochemistry (S100 and Substance P) of the SSL, ISL, adjacent fascial attachments, connective tissue and musculature. The anatomical study demonstrated that the SSL and ISL tissues merge with the adjacent myofascia. The ISL has a crossing fibre arrangement consisting of four ligamentous layers with adipose tissue axially. A high proportion of single nerve fibres were detected in the SSL (mean = 2.08 fibres/mm2 ) and ISL (mean = 0.75 fibres/mm2 ), with the larger nerves located between the ligamentous and muscular tissue. The oblique crossing arrangement of the fibres of the ISL likely functions to resist distractive and rotational forces, therefore stabilizing the equine thoracolumbar spine. The dense sensory innervation within the SSL and ISL could explain the severe pain experienced by some horses with impinging dorsal spinous processes. Documentation of the nervous supply of the soft tissues associated with the dorsal spinous processes is a key step towards improving our understanding of equine back pain.",
"title": ""
}
] |
scidocsrr
|
9fb6202d6d18b99a484bb9a3b41b1132
|
Car Number Plate Recognition (CNPR) system using multiple template matching
|
[
{
"docid": "9185a7823e699c758dde3a81f7d6d86d",
"text": "Reading text from photographs is a challenging problem that has received a significant amount of attention. Two key components of most systems are (i) text detection from images and (ii) character recognition, and many recent methods have been proposed to design better feature representations and models for both. In this paper, we apply methods recently developed in machine learning -- specifically, large-scale algorithms for learning the features automatically from unlabeled data -- and show that they allow us to construct highly effective classifiers for both detection and recognition to be used in a high accuracy end-to-end system.",
"title": ""
},
{
"docid": "a81c87374e7ea9a3066f643ac89bfd2b",
"text": "Image edge detection is a process of locating the e dg of an image which is important in finding the approximate absolute gradient magnitude at each point I of an input grayscale image. The problem of getting an appropriate absolute gradient magnitude for edges lies in the method used. The Sobel operator performs a 2-D spatial gradient measurement on images. Transferri ng a 2-D pixel array into statistically uncorrelated data se t enhances the removal of redundant data, as a result, reduction of the amount of data is required to represent a digital image. The Sobel edge detector uses a pair of 3 x 3 convolution masks, one estimating gradient in the x-direction and the other estimating gradient in y–direction. The Sobel detector is incredibly sensit ive o noise in pictures, it effectively highlight them as edges. Henc e, Sobel operator is recommended in massive data communication found in data transfer.",
"title": ""
}
] |
[
{
"docid": "275afb5836acf741593f6bac90e5ffce",
"text": "We propose algorithms to address the spectrum efficiency and fairness issues of multi band multiuser Multiple-Input and Multiple-Output (MIMO) cognitive ad-hoc networks. To improve the transmission efficiency of the MIMO system, a cross layer antenna selection algorithm is proposed. Using the transmission efficiency results, user data rate of the cognitive ad-hoc network is determined. Objective function for the average data rate of the multi band multiuser cognitive MIMO ad-hoc network is also defined. For the average data rate objective function, primary users interference is considered as performance constraint. Furthermore, using the user data rate results, a learning-based channel allocation algorithm is proposed. Finally, numerical results are presented for performance evaluation of the proposed antenna selection and channel allocation algorithms.",
"title": ""
},
{
"docid": "97353be7c54dd2ded69815bf93545793",
"text": "In recent years, with the rapid development of deep learning, it has achieved great success in the field of image recognition. In this paper, we applied the convolution neural network (CNN) on supermarket commodity identification, contributing to the study of supermarket commodity identification. Different from the QR code identification of supermarket commodity, our work applied the CNN using the collected images of commodity as input. This method has the characteristics of fast and non-contact. In this paper, we mainly did the following works: 1. Collected a small dataset of supermarket goods. 2. Built Different convolutional neural network frameworks in caffe and trained the dataset using the built networks. 3. Improved train methods by finetuning the trained model.",
"title": ""
},
{
"docid": "39debcb0aa41eec73ff63a4e774f36fd",
"text": "Automatically segmenting unstructured text strings into structured records is necessary for importing the information contained in legacy sources and text collections into a data warehouse for subsequent querying, analysis, mining and integration. In this paper, we mine tables present in data warehouses and relational databases to develop an automatic segmentation system. Thus, we overcome limitations of existing supervised text segmentation approaches, which require comprehensive manually labeled training data. Our segmentation system is robust, accurate, and efficient, and requires no additional manual effort. Thorough evaluation on real datasets demonstrates the robustness and accuracy of our system, with segmentation accuracy exceeding state of the art supervised approaches.",
"title": ""
},
{
"docid": "625b96d21cb9ff05785aa34c98c567ff",
"text": "The number of mitoses per tissue area gives an important aggressiveness indication of the invasive breast carcinoma. However, automatic mitosis detection in histology images remains a challenging problem. Traditional methods either employ hand-crafted features to discriminate mitoses from other cells or construct a pixel-wise classifier to label every pixel in a sliding window way. While the former suffers from the large shape variation of mitoses and the existence of many mimics with similar appearance, the slow speed of the later prohibits its use in clinical practice. In order to overcome these shortcomings, we propose a fast and accurate method to detect mitosis by designing a novel deep cascaded convolutional neural network, which is composed of two components. First, by leveraging the fully convolutional neural network, we propose a coarse retrieval model to identify and locate the candidates of mitosis while preserving a high sensitivity. Based on these candidates, a fine discrimination model utilizing knowledge transferred from cross-domain is developed to further single out mitoses from hard mimics. Our approach outperformed other methods by a large margin in 2014 ICPR MITOS-ATYPIA challenge in terms of detection accuracy. When compared with the state-of-the-art methods on the 2012 ICPR MITOSIS data (a smaller and less challenging dataset), our method achieved comparable or better results with a roughly 60 times faster speed.",
"title": ""
},
{
"docid": "8bc418be099f14d677d3fdfbfa516248",
"text": "The present study examines the influence of social context on the use of emoticons in Internet communication. Secondary school students (N = 158) responded to short internet chats. Social context (task-oriented vs. socio-emotional) and valence of the context (positive vs. negative) were manipulated in these chats. Participants were permitted to respond with text, emoticon or a combination of both. Results showed that participants used more emoticons in socio-emotional than in task-oriented social contexts. Furthermore, students used more positive emoticons in positive contexts and more negative emoticons in negative contexts. An interaction was found between valence and kind of context; in negative, task-oriented contexts subjects used the least emoticons. Results are related to research about the expression of emotions in face-to-face interaction. 2004 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "813a45c7cae19fcd548a8b95a670d65a",
"text": "In this paper, conical monopole type UWB antenna which suppress dual bands is proposed. The SSRs were arranged in such a way that the interaction of the magnetic field with them enables the UWB antenna to reject the dual bands using the resonance of SRRs. The proposed conical monopole antenna has a return loss less than -10dB and antenna gain greater than 5dB at 2GHz~11GHz frequency band, except the suppressed bands. The return loss and gain at WiMAX and WLAN bands is greater than -3dB and less than 0dB respectively.",
"title": ""
},
{
"docid": "ca3c3dec83821747896d44261ba2f9ad",
"text": "Building discriminative representations for 3D data has been an important task in computer graphics and computer vision research. Convolutional Neural Networks (CNNs) have shown to operate on 2D images with great success for a variety of tasks. Lifting convolution operators to 3D (3DCNNs) seems like a plausible and promising next step. Unfortunately, the computational complexity of 3D CNNs grows cubically with respect to voxel resolution. Moreover, since most 3D geometry representations are boundary based, occupied regions do not increase proportionately with the size of the discretization, resulting in wasted computation. In this work, we represent 3D spaces as volumetric fields, and propose a novel design that employs field probing filters to efficiently extract features from them. Each field probing filter is a set of probing points — sensors that perceive the space. Our learning algorithm optimizes not only the weights associated with the probing points, but also their locations, which deforms the shape of the probing filters and adaptively distributes them in 3D space. The optimized probing points sense the 3D space “intelligently”, rather than operating blindly over the entire domain. We show that field probing is significantly more efficient than 3DCNNs, while providing state-of-the-art performance, on classification tasks for 3D object recognition benchmark datasets.",
"title": ""
},
{
"docid": "62b345b0aa68a909fbbded8ba18ea75c",
"text": "The transmission of malaria is highly variable and depends on a range of climatic and anthropogenic factors. In addition, the dispersal of Anopheles mosquitoes is a key determinant that affects the persistence and dynamics of malaria. Simple, lumped-population models of malaria prevalence have been insufficient for predicting the complex responses of malaria to environmental changes. A stochastic lattice-based model that couples a mosquito dispersal and a susceptible-exposed-infected-recovered epidemics model was developed for predicting the dynamics of malaria in heterogeneous environments. The It$$\\hat{o}$$ o^ approximation of stochastic integrals with respect to Brownian motion was used to derive a model of stochastic differential equations. The results show that stochastic equations that capture uncertainties in the life cycle of mosquitoes and interactions among vectors, parasites, and hosts provide a mechanism for the disruptions of malaria. Finally, model simulations for a case study in the rural area of Kilifi county, Kenya are presented. A stochastic lattice-based integrated malaria model has been developed. The applicability of the model for capturing the climate-driven hydrologic factors and demographic variability on malaria transmission has been demonstrated.",
"title": ""
},
{
"docid": "dfaa6e183e70cbacc5c9de501993b7af",
"text": "Traditional buildings consume more of the energy resources than necessary and generate a variety of emissions and waste. The solution to overcoming these problems will be to build them green and smart. One of the significant components in the concept of smart green buildings is using renewable energy. Solar energy and wind energy are intermittent sources of energy, so these sources have to be combined with other sources of energy or storage devices. While batteries and/or supercapacitors are an ideal choice for short-term energy storage, regenerative hydrogen-oxygen fuel cells are a promising candidate for long-term energy storage. This paper is to design and test a green building energy system that consists of renewable energy, energy storage, and energy management. The paper presents the architecture of the proposed green building energy system and a simulation model that allows for the study of advanced control strategies for the green building energy system. An example green building energy system is tested and simulation results show that the variety of energy source and storage devices can be managed very well.",
"title": ""
},
{
"docid": "a827f7ceabd844453dcf81cf7f87c7db",
"text": "Steganography means hiding the secret message within an ordinary message and extraction of it as its destination. In the texture synthesis process here re-samples smaller texture image which gives a new texture image with a similar local appearance. In the existing system, work is done for the texture synthesis process but the embedding capacity of those systems is very low. In the project introduced the method SURTDS (steganography using reversible texture synthesis) for enhancing the embedding capacity of the system by using the difference expansion method with texture synthesis. Initially, this system evaluates the binary value of the secret image and converts this value into a decimal value. The process of embedding is performed by using the difference expansion techniques. Difference expansion computes the average and difference in a patch and embedded the value one by one. This system improves the embedding capacity of the stego image. The experimental result has verified that this system improves the embedding capacity of the SURTDS is better than the existing system.",
"title": ""
},
{
"docid": "61a782f8797b76d6d5ce581729c3cfc0",
"text": "Wordnets are lexico-semantic resources essential in many NLP tasks. Princeton WordNet is the most widely known, and the most influential, among them. Wordnets for languages other than English tend to adopt unquestioningly WordNet’s structure and its net of lexicalised concepts. We discuss a large wordnet constructed independently of WordNet, upon a model with a small yet significant difference. A mapping onto WordNet is under way; the large portions already linked open up a unique perspective on the comparison of similar but not fully compatible lexical resources. We also try to characterise numerically a wordnet’s aptitude for NLP applications.",
"title": ""
},
{
"docid": "336d83fd5628d9325fed0d88c56bc617",
"text": "Influence of fruit development and ripening on the changes in physico-chemical properties, antiradical activity and the accumulation of polyphenolic compounds were investigated in Maoluang fruits. Total phenolics content (TP) was assayed according to the Folin-Ciocalteu method, and accounted for 19.60-8.66 mg GAE/g f.w. The TP gradually decreased from the immature to the over ripe stages. However, the total anthocyanin content (TA) showed the highest content at the over ripe stage, with an average value of 141.94 mg/100 g f.w. The antiradical activity (AA) of methanolic extracts from Maoluang fruits during development and ripening were determined with DPPH (2,2-diphenyl-1-picrylhydrazyl) radical scavenging. The highest AA was observed at the immature stage accompanied by the highest content of gallic acid and TP. Polyphenols were quantified by HPLC. The level of procyanidin B2, procyanidin B1, (+)-catechin, (–)-epicatechin, rutin and tran-resveratrol as the main polyphenol compounds, increased during fruit development and ripening. Other phenolic acids such as gallic, caffeic, and ellagic acids significantly decreased (p < 0.05) during fruit development and ripening. At over ripe stage, Maoluang possess the highest antioxidants. Thus, the over ripe stage would be the appropriate time to harvest when taking nutrition into consideration. This existing published information provides a helpful daily diet guide and useful guidance for industrial utilization of Maoluang fruits.",
"title": ""
},
{
"docid": "105f34c3fa2d4edbe83d184b7cf039aa",
"text": "Software development methodologies are constantly evolving due to changing technologies and new demands from users. Today's dynamic business environment has given rise to emergent organizations that continuously adapt their structures, strategies, and policies to suit the new environment [12]. Such organizations need information systems that constantly evolve to meet their changing requirements---but the traditional, plan-driven software development methodologies lack the flexibility to dynamically adjust the development process.",
"title": ""
},
{
"docid": "82917c4e6fb56587cc395078c14f3bb7",
"text": "We can leverage data and complex systems science to better understand society and human nature on a population scale through language — utilizing tools that include sentiment analysis, machine learning, and data visualization. Data-driven science and the sociotechnical systems that we use every day are enabling a transformation from hypothesis-driven, reductionist methodology to complex systems sciences. Namely, the emergence and global adoption of social media has rendered possible the real-time estimation of population-scale sentiment, with profound implications for our understanding of human behavior. Advances in computing power, natural language processing, and digitization of text now make it possible to study a culture’s evolution through its texts using a “big data” lens. Given the growing assortment of sentiment measuring instruments, it is imperative to understand which aspects of sentiment dictionaries contribute to both their classification accuracy and their ability to provide richer understanding of texts. Here, we perform detailed, quantitative tests and qualitative assessments of 6 dictionary-based methods applied to 4 different corpora, and briefly examine a further 20 methods. We show that while inappropriate for sentences, dictionary-based methods are generally robust in their classification accuracy for longer texts. Most importantly they can aid understanding of texts with reliable and meaningful word shift graphs if (1) the dictionary covers a sufficiently large enough portion of a given text’s lexicon when weighted by word usage frequency; and (2) words are scored on a continuous scale. Our ability to communicate relies in part upon a shared emotional experience, with stories often following distinct emotional trajectories, forming patterns that are meaningful to us. By classifying the emotional arcs for a filtered subset of 4,803 stories from Project Gutenberg’s fiction collection, we find a set of six core trajectories which form the building blocks of complex narratives. We strengthen our findings by separately applying optimization, linear decomposition, supervised learning, and unsupervised learning. For each of these six core emotional arcs, we examine the closest characteristic stories in publication today and find that particular emotional arcs enjoy greater success, as measured by downloads. Within stories lie the core values of social behavior, rich with both strategies and proper protocol, which we can begin to study more broadly and systematically as a true reflection of culture. Of profound scientific interest will be the degree to which we can eventually understand the full landscape of human stories, and data driven approaches will play a crucial role. Finally, we utilize web-scale data from Twitter to study the limits of what social data can tell us about public health, mental illness, discourse around the protest movement of #BlackLivesMatter, discourse around climate change, and hidden networks. We conclude with a review of published works in complex systems that separately analyze charitable donations, the happiness of words in 10 languages, 100 years of daily temperature data across the United States, and Australian Rules Football games.",
"title": ""
},
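As a rough illustration of the dictionary-based scoring the passage above describes, the sketch below computes a text's sentiment as the usage-frequency-weighted average of per-word scores and reports dictionary coverage, the two properties the passage flags as critical. The tiny dictionary, the 1-9 scale, and all function names are placeholders invented for this example, not the instruments evaluated in the passage.

```python
# Hedged sketch: frequency-weighted dictionary sentiment with a coverage check.
from collections import Counter

EXAMPLE_DICTIONARY = {  # word -> score on a continuous 1..9 scale (made-up values)
    "happy": 8.2, "love": 8.4, "good": 7.9,
    "sad": 2.4, "hate": 1.9, "bad": 3.2,
}

def score_text(text, dictionary=EXAMPLE_DICTIONARY):
    """Return (frequency-weighted mean score, dictionary coverage) for a text."""
    counts = Counter(w.strip('.,!?;:"()').lower() for w in text.split())
    total = sum(counts.values())
    matched = {w: c for w, c in counts.items() if w in dictionary}
    covered = sum(matched.values())
    if covered == 0:
        return None, 0.0
    mean = sum(dictionary[w] * c for w, c in matched.items()) / covered
    return mean, covered / max(total, 1)

if __name__ == "__main__":
    score, coverage = score_text("I love this good, happy story, but the ending was sad.")
    # Per the passage, trust the score (and any word shift analysis) only when coverage is high.
    print(f"sentiment={score:.2f}, coverage={coverage:.0%}")
```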
{
"docid": "8d7af01e003961cbf2a473abe32d8b7e",
"text": "This paper presents a series of control strategies for soft compliant manipulators. We provide a novel approach to control multi-fingered tendon-driven foam hands using a CyberGlove and a simple ridge regression model. The results achieved include complex posing, dexterous grasping and in-hand manipulations. To enable efficient data sampling and a more intuitive design process of foam robots, we implement and evaluate a finite element based simulation. The accuracy of this model is evaluated using a Vicon motion capture system. We then use this simulation to solve inverse kinematics and compare the performance of supervised learning, reinforcement learning, nearest neighbor and linear ridge regression methods in terms of their accuracy and sample efficiency.",
"title": ""
},
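The passage above maps CyberGlove readings to tendon commands with "a simple ridge regression model". The sketch below shows what such a mapping could look like; the sensor and tendon counts, the synthetic training pairs, and the regularization strength are assumptions made up for illustration, not details taken from the paper.

```python
# Illustrative-only sketch: learn a regularised linear map from glove joint
# sensors to tendon motor commands from paired demonstrations.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_samples, n_glove_sensors, n_tendons = 500, 22, 8   # assumed dimensions

# Stand-in for recorded (glove reading, tendon command) pairs.
X = rng.uniform(0.0, 1.0, size=(n_samples, n_glove_sensors))
true_W = rng.normal(size=(n_glove_sensors, n_tendons))
Y = X @ true_W + 0.05 * rng.normal(size=(n_samples, n_tendons))

model = Ridge(alpha=1.0).fit(X, Y)           # regularised least squares, multi-output
new_pose = rng.uniform(0.0, 1.0, size=(1, n_glove_sensors))
tendon_cmd = model.predict(new_pose)         # one command per tendon motor
print(tendon_cmd.round(2))
```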
{
"docid": "96973058d3ca943f3621dfe843baf631",
"text": "Many organizations are gradually catching up with the tide of adopting agile practices at workplace, but they seem to be struggling with how to choose the agile practices and mix them into their IT software project development and management. These organizations have already had their own development styles, many of which have adhered to the traditional plan-driven methods such as waterfall. The inherent corporate culture of resisting to change or hesitation to abandon what they have established for a whole new methodology hampers the process change. In this paper, we will review the current state of agile adoption in business organizations and propose a new approach to IT project development and management by blending Scrum, an agile method, into traditional plan-driven project development and management. The management activity involved in Scrum is discussed, the team and meeting composing of Scrum are investigated, the challenges and benefits of applying Scrum in traditional IT project development and management are analyzed, the blending structure is illustrated and discussed, and the iterative process with Scrum and planned process without Scrum are compared.",
"title": ""
},
{
"docid": "c31ffcb1514f437313c2f3f0abaf3a88",
"text": "Identifying temporal relations between events is an essential step towards natural language understanding. However, the temporal relation between two events in a story depends on, and is often dictated by, relations among other events. Consequently, effectively identifying temporal relations between events is a challenging problem even for human annotators. This paper suggests that it is important to take these dependencies into account while learning to identify these relations and proposes a structured learning approach to address this challenge. As a byproduct, this provides a new perspective on handling missing relations, a known issue that hurts existing methods. As we show, the proposed approach results in significant improvements on the two commonly used data sets for this problem.",
"title": ""
},
{
"docid": "245a31291c7d8fbaac249f9e4585c652",
"text": "A recent advancement of the mobile web has enabled features previously only found in natively developed apps. Thus, arduous development for several platforms or using cross-platform approaches was required. The novel approach, coined Progressive Web Apps, can be implemented through a set of concepts and technologies on any web site that meets certain requirements. In this paper, we argue for progressive web apps as a possibly unifying technology for web apps and native apps. After an introduction of features, we scrutinize the performance. Two cross-platform mobile apps and one Progressive Web App have been developed for comparison purposes, and provided in an open source repository for results’ validity verification. We aim to spark interest in the academic community, as a lack of academic involvement was identified as part of the literature search.",
"title": ""
},
{
"docid": "c0db1cd3688a18c853331772dbdfdedc",
"text": "In this review we describe the challenges and opportunities for creating magnetically active metamaterials in the optical part of the spectrum. The emphasis is on the sub-wavelength periodic metamaterials whose unit cell is much smaller than the optical wavelength. The conceptual differences between microwave and optical metamaterials are demonstrated. We also describe several theoretical techniques used for calculating the effective parameters of plasmonic metamaterials: the effective dielectric permittivity eff(ω) and magnetic permeability μeff(ω). Several examples of negative permittivity and negative permeability plasmonic metamaterials are used to illustrate the theory. c © 2008 Elsevier Ltd. All rights reserved. PACS: 42.70.-a; 41.20.Gz; 78.67.Bf",
"title": ""
},
{
"docid": "2e7b03f13b1c33a42b3ff77886e0683e",
"text": "Internet of Things (IoT), cloud computing and integrated deployment are becoming central topics in Internet development. In this paper, a cloud containerization solution, Docker, has been introduced to the original \"Bluetooth Based Software Defined Function\" (BT-SDF), a framework designed to simplify the IoT function redefining process. With the assistance of Docker and its clustering, Docker Swarm, the BT-SDF will be transformed to a scalable, extensible and flexible IoT function redefining framework named \"Cloud and Bluetooth based software defined function\".",
"title": ""
}
] |
scidocsrr
|
89d1b9e0fc2d35058a88b50565026c6c
|
Inductorless DC-AC Cascaded H-Bridge Multilevel Boost Inverter for Electric/Hybrid Electric Vehicle Applications
|
[
{
"docid": "913709f4fe05ba2783c3176ed00015fe",
"text": "A generalization of the PWM (pulse width modulation) subharmonic method for controlling single-phase or three-phase multilevel voltage source inverters (VSIs) is considered. Three multilevel PWM techniques for VSI inverters are presented. An analytical expression of the spectral components of the output waveforms covering all the operating conditions is derived. The analysis is based on an extension of Bennet's method. The improvements in harmonic spectrum are pointed out, and several examples are presented which prove the validity of the multilevel modulation. Improvements in the harmonic contents were achieved due to the increased number of levels.<<ETX>>",
"title": ""
}
] |
[
{
"docid": "7e7d4a3ab8fe57c6168835fa1ab3b413",
"text": "Massively parallel architectures such as the GPU are becoming increasingly important due to the recent proliferation of data. In this paper, we propose a key class of hybrid parallel graphlet algorithms that leverages multiple CPUs and GPUs simultaneously for computing k-vertex induced subgraph statistics (called graphlets). In addition to the hybrid multi-core CPU-GPU framework, we also investigate single GPU methods (using multiple cores) and multi-GPU methods that leverage all available GPUs simultaneously for computing induced subgraph statistics. Both methods leverage GPU devices only, whereas the hybrid multicore CPU-GPU framework leverages all available multi-core CPUs and multiple GPUs for computing graphlets in large networks. Compared to recent approaches, our methods are orders of magnitude faster, while also more cost effective enjoying superior performance per capita and per watt. In particular, the methods are up to 300 times faster than a recent state-of-the-art method. To the best of our knowledge, this is the first work to leverage multiple CPUs and GPUs simultaneously for computing induced subgraph statistics.",
"title": ""
},
{
"docid": "a2cbc2b95b1988dae97d501c141e161d",
"text": "We present a fast and simple method to compute bundled layouts of general graphs. For this, we first transform a given graph drawing into a density map using kernel density estimation. Next, we apply an image sharpening technique which progressively merges local height maxima by moving the convolved graph edges into the height gradient flow. Our technique can be easily and efficiently implemented using standard graphics acceleration techniques and produces graph bundlings of similar appearance and quality to state-of-the-art methods at a fraction of the cost. Additionally, we show how to create bundled layouts constrained by obstacles and use shading to convey information on the bundling quality. We demonstrate our method on several large graphs.",
"title": ""
},
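A hedged, single-resolution sketch of the density-driven bundling loop described above: splat edge sample points into a grid, smooth the grid with a Gaussian kernel (the kernel density estimate), then advect interior sample points up the density gradient and repeat. The grid size, kernel width, step size, and iteration count are arbitrary choices; the actual method is GPU-accelerated and uses a progressively shrinking kernel, which is not reproduced here.

```python
# Simplified sketch of KDE-based edge bundling: density map + gradient advection.
import numpy as np
from scipy.ndimage import gaussian_filter

def bundle(edges, grid=256, sigma=12.0, iters=10, step=2.0, samples=30):
    """edges: list of (p0, p1) endpoint pairs with coordinates in [0, grid)."""
    t = np.linspace(0.0, 1.0, samples)
    polylines = [p0[None, :] * (1 - t[:, None]) + p1[None, :] * t[:, None]
                 for p0, p1 in edges]
    for _ in range(iters):
        density = np.zeros((grid, grid))
        for line in polylines:                       # splat edge samples into the map
            ij = np.clip(line.astype(int), 0, grid - 1)
            np.add.at(density, (ij[:, 1], ij[:, 0]), 1.0)
        density = gaussian_filter(density, sigma)    # kernel density estimate
        gy, gx = np.gradient(density)
        for line in polylines:                       # move interior points uphill
            ij = np.clip(line.astype(int), 0, grid - 1)
            line[1:-1, 0] += step * gx[ij[1:-1, 1], ij[1:-1, 0]]
            line[1:-1, 1] += step * gy[ij[1:-1, 1], ij[1:-1, 0]]
            np.clip(line, 0, grid - 1, out=line)     # endpoints (nodes) stay fixed
    return polylines

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    ends = [(rng.uniform(0, 255, 2), rng.uniform(0, 255, 2)) for _ in range(50)]
    print(bundle(ends)[0][:3])   # first few bundled sample points of edge 0
```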
{
"docid": "180a271a86f9d9dc71cc140096d08b2f",
"text": "This communication demonstrates for the first time the capability to independently control the real and imaginary parts of the complex propagation constant in planar, printed circuit board compatible leaky-wave antennas. The structure is based on a half-mode microstrip line which is loaded with an additional row of periodic metallic posts, resulting in a substrate integrated waveguide SIW with one of its lateral electric walls replaced by a partially reflective wall. The radiation mechanism is similar to the conventional microstrip leaky-wave antenna operating in its first higher-order mode, with the novelty that the leaky-mode leakage rate can be controlled by virtue of a sparse row of metallic vias. For this topology it is demonstrated that it is possible to independently control the antenna pointing angle and main lobe beamwidth while achieving high radiation efficiencies, thus providing low-cost, low-profile, simply fed, and easily integrable leaky-wave solutions for high-gain frequency beam-scanning applications. Several prototypes operating at 15 GHz have been designed, simulated, manufactured and tested, to show the operation principle and design flexibility of this one dimensional leaky-wave antenna.",
"title": ""
},
{
"docid": "64a14e3dfc292fb4d1dc16160e89dedf",
"text": "Approaches to climate change impact, adaptation and vulnerability assessment: towards a classification framework to serve decision-making.",
"title": ""
},
{
"docid": "9cab244eeb45f9553fc25ecca2c37bbd",
"text": "BACKGROUND\nPeriorbital skin hyperpigmentation, so-called dark circles, is of major concern for many people. However, only a few reports refer to the morbidity and treatment, and as far as the authors know, there are no reports of the condition in Asians.\n\n\nMETHODS\nA total of 18 Japanese patients underwent combined therapy using Q-switched ruby laser to eliminate dermal pigmentation following topical bleaching treatment with tretinoin aqueous gel and hydroquinone ointment performed initially (6 weeks) to reduce epidermal melanin. Both steps were repeated two to four times until physical clearance of the pigmentation was confirmed and patient satisfaction was achieved. Skin biopsy was performed at baseline in each patient and at the end of treatment in three patients, all with informed consent. Clinical and histologic appearances of periorbital hyperpigmentation were evaluated and rated as excellent, good, fair, poor, or default.\n\n\nRESULTS\nSeven of 18 patients (38.9 percent) showed excellent clearing after treatment and eight (44.4 percent) were rated good. Only one (5.6 percent) was rated fair and none was rated poor. Postinflammatory hyperpigmentation was observed in only two patients (11.1 percent). Histologic examination showed obvious epidermal hyperpigmentation in 10 specimens. Dermal pigmentation was observed in all specimens but was not considered to be melanocytosis. Remarkable reduction of dermal pigmentation was observed in the biopsy specimens of three patients after treatment.\n\n\nCONCLUSION\nThe new treatment protocol combining Q-switched ruby laser and topical bleaching treatment using tretinoin and hydroquinone is considered effective for improvement of periorbital skin hyperpigmentation, with a low incidence of postinflammatory hyperpigmentation.",
"title": ""
},
{
"docid": "3a2740b7f65841f7eb4f74a1fb3c9b65",
"text": "Getting a better understanding of user behavior is important for advancing information retrieval systems. Existing work focuses on modeling and predicting single interaction events, such as clicks. In this paper, we for the first time focus on modeling and predicting sequences of interaction events. And in particular, sequences of clicks. We formulate the problem of click sequence prediction and propose a click sequence model (CSM) that aims to predict the order in which a user will interact with search engine results. CSM is based on a neural network that follows the encoder-decoder architecture. The encoder computes contextual embeddings of the results. The decoder predicts the sequence of positions of the clicked results. It uses an attentionmechanism to extract necessary information about the results at each timestep. We optimize the parameters of CSM by maximizing the likelihood of observed click sequences. We test the effectiveness ofCSMon three new tasks: (i) predicting click sequences, (ii) predicting the number of clicks, and (iii) predicting whether or not a user will interact with the results in the order these results are presented on a search engine result page (SERP). Also, we show that CSM achieves state-of-the-art results on a standard click prediction task, where the goal is to predict an unordered set of results a user will click on.",
"title": ""
},
{
"docid": "9a6a724f8aa0ae4fa9de1367f8661583",
"text": "In this paper, we develop a simple algorithm to determine the required number of generating units of wind-turbine generator and photovoltaic array, and the associated storage capacity for stand-alone hybrid microgrid. The algorithm is based on the observation that the state of charge of battery should be periodically invariant. The optimal sizing of hybrid microgrid is given in the sense that the life cycle cost of system is minimized while the given load power demand can be satisfied without load rejection. We also report a case study to show the efficacy of the developed algorithm.",
"title": ""
},
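The sizing idea above rests on the battery state of charge being periodically invariant. The sketch below shows one way such a feasibility test might look: simulate one representative period for a candidate sizing and require that no load rejection occurs and that the state of charge returns to its starting value. All profiles, efficiencies, and limits are invented placeholders, and the life-cycle cost minimization itself is not implemented.

```python
# Hedged sketch of a sizing feasibility check based on periodic SOC invariance.
import numpy as np

def feasible(n_wt, n_pv, cap_kwh, wt_kw, pv_kw, load_kw,
             soc0=0.5, eta=0.95, soc_min=0.2, soc_max=1.0, dt=1.0, tol=1e-3):
    soc = soc0 * cap_kwh
    for p_wt, p_pv, p_load in zip(wt_kw, pv_kw, load_kw):
        net = n_wt * p_wt + n_pv * p_pv - p_load           # kW surplus (+) or deficit (-)
        if net >= 0:
            soc = min(soc + eta * net * dt, soc_max * cap_kwh)   # charge, curtail excess
        else:
            soc += net * dt / eta                          # discharge to cover the deficit
            if soc < soc_min * cap_kwh:                    # would require load rejection
                return False
    return abs(soc - soc0 * cap_kwh) <= tol * cap_kwh      # periodically invariant SOC

if __name__ == "__main__":
    hours = np.arange(24)
    pv = 5.0 * np.clip(np.sin((hours - 6) / 12 * np.pi), 0, None)       # kW per PV array
    wind = 10.0 * (0.4 + 0.2 * np.cos(hours / 24 * 2 * np.pi))          # kW per turbine
    load = 30.0 + 15.0 * np.clip(np.sin((hours - 8) / 10 * np.pi), 0, None)  # kW demand
    print(feasible(n_wt=2, n_pv=4, cap_kwh=60, wt_kw=wind, pv_kw=pv, load_kw=load))
```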
{
"docid": "7efa3543711bc1bb6e3a893ed424b75d",
"text": "This dissertation is concerned with the creation of training data and the development of probability models for statistical parsing of English with Combinatory Categorial Grammar (CCG). Parsing, or syntactic analysis, is a prerequisite for semantic interpretation, and forms therefore an integral part of any system which requires natural language understanding. Since almost all naturally occurring sentences are ambiguous, it is not sufficient (and often impossible) to generate all possible syntactic analyses. Instead, the parser needs to rank competing analyses and select only the most likely ones. A statistical parser uses a probability model to perform this task. I propose a number of ways in which such probability models can be defined for CCG. The kinds of models developed in this dissertation, generative models over normal-form derivation trees, are particularly simple, and have the further property of restricting the set of syntactic analyses to those corresponding to a canonical derivation structure. This is important to guarantee that parsing can be done efficiently. In order to achieve high parsing accuracy, a large corpus of annotated data is required to estimate the parameters of the probability models. Most existing wide-coverage statistical parsers use models of phrase-structure trees estimated from the Penn Treebank, a 1-million-word corpus of manually annotated sentences from the Wall Street Journal. This dissertation presents an algorithm which translates the phrase-structure analyses of the Penn Treebank to CCG derivations. The resulting corpus, CCGbank, is used to train and test the models proposed in this dissertation. Experimental results indicate that parsing accuracy (when evaluated according to a comparable metric, the recovery of unlabelled word-word dependency relations), is as high as that of standard Penn Treebank parsers which use similar modelling techniques. Most existing wide-coverage statistical parsers use simple phrase-structure grammars whose syntactic analyses fail to capture long-range dependencies, and therefore do not correspond to directly interpretable semantic representations. By contrast, CCG is a grammar formalism in which semantic representations that include long-range dependencies can be built directly during the derivation of syntactic structure. These dependencies define the predicate-argument structure of a sentence, and are used for two purposes in this dissertation: First, the performance of the parser can be evaluated according to how well it recovers these dependencies. In contrast to purely syntactic evaluations, this yields a direct measure of how accurate the semantic interpretations returned by the parser are. Second, I propose a generative model that captures the local and non-local dependencies in the predicate-argument structure, and investigate the impact of modelling non-local in addition to local dependencies.",
"title": ""
},
{
"docid": "56205e79e706e05957cb5081d6a8348a",
"text": "Corpus-based set expansion (i.e., finding the “complete” set of entities belonging to the same semantic class, based on a given corpus and a tiny set of seeds) is a critical task in knowledge discovery. It may facilitate numerous downstream applications, such as information extraction, taxonomy induction, question answering, and web search. To discover new entities in an expanded set, previous approaches either make one-time entity ranking based on distributional similarity, or resort to iterative pattern-based bootstrapping. The core challenge for these methods is how to deal with noisy context features derived from free-text corpora, which may lead to entity intrusion and semantic drifting. In this study, we propose a novel framework, SetExpan, which tackles this problem, with two techniques: (1) a context feature selection method that selects clean context features for calculating entity-entity distributional similarity, and (2) a ranking-based unsupervised ensemble method for expanding entity set based on denoised context features. Experiments on three datasets show that SetExpan is robust and outperforms previous state-of-the-art methods in terms of mean average precision.",
"title": ""
},
{
"docid": "fcca051539729b005271e4f96563538d",
"text": "!is paper presents a novel methodological approach of how to design, conduct and analyse robot-assisted play. !is approach is inspired by non-directive play therapy. !e experimenter participates in the experiments, but the child remains the main leader for play. Besides, beyond inspiration from non-directive play therapy, this approach enables the experimenter to regulate the interaction under speci\"c conditions in order to guide the child or ask her questions about reasoning or a#ect related to the robot. !is approach has been tested in a longterm study with six children with autism in a school setting. An autonomous robot with zoomorphic, dog-like appearance was used in the studies. !e children’s progress was analyzed according to three dimensions, namely, Play, Reasoning and A#ect. Results from the case-study evaluations have shown the capability of the method to meet each child’s needs and abilities. Children who mainly played solitarily progressively experienced basic imitation games with the experimenter. Children who proactively played socially progressively experienced higher levels of play and constructed more reasoning related to the robot. !ey also expressed some interest in the robot, including, on occasion, a#ect.",
"title": ""
},
{
"docid": "9ae435f5169e867dc9d4dc0da56ec9fb",
"text": "Renewable energy is currently the main direction of development of electric power. Because of its own characteristics, the reliability of renewable energy generation is low. Renewable energy generation system needs lots of energy conversion devices which are made of power electronic devices. Too much power electronic components can damage power quality in microgrid. High Frequency AC (HFAC) microgrid is an effective way to solve the problems of renewable energy generation system. Transmitting electricity by means of HFAC is a novel idea in microgrid. Although the HFAC will cause more loss of power, it can improve the power quality in microgrid. HFAC can also reduce the impact of fluctuations of renewable energy in microgrid. This paper mainly simulates the HFAC with Matlab/Simulink and analyzes the feasibility of HFAC in microgrid.",
"title": ""
},
{
"docid": "2e167507f8b44e783d60312c0d71576d",
"text": "The goal of this paper is to study different techniques to predict stock price movement using the sentiment analysis from social media, data mining. In this paper we will find efficient method which can predict stock movement more accurately. Social media offers a powerful outlet for people’s thoughts and feelings it is an enormous ever-growing source of texts ranging from everyday observations to involved discussions. This paper contributes to the field of sentiment analysis, which aims to extract emotions and opinions from text. A basic goal is to classify text as expressing either positive or negative emotion. Sentiment classifiers have been built for social media text such as product reviews, blog posts, and even twitter messages. With increasing complexity of text sources and topics, it is time to re-examine the standard sentiment extraction approaches, and possibly to redefine and enrich the definition of sentiment. Next, unlike sentiment analysis research to date, we examine sentiment expression and polarity classification within and across various social media streams by building topical datasets within each stream. Different data mining methods are used to predict market more efficiently along with various hybrid approaches. We conclude that stock prediction is very complex task and various factors should be considered for forecasting the market more accurately and efficiently.",
"title": ""
},
{
"docid": "bfae60b46b97cf2491d6b1136c60f6a6",
"text": "Educational data mining concerns with developing methods for discovering knowledge from data that come from educational domain. In this paper we used educational data mining to improve graduate students’ performance, and overcome the problem of low grades of graduate students. In our case study we try to extract useful knowledge from graduate students data collected from the college of Science and Technology – Khanyounis. The data include fifteen years period [1993-2007]. After preprocessing the data, we applied data mining techniques to discover association, classification, clustering and outlier detection rules. In each of these four tasks, we present the extracted knowledge and describe its importance in educational domain.",
"title": ""
},
{
"docid": "73bee7e59be3d6a044965a512abdf115",
"text": "The underlaying equations for the models we consider are hyperbolic systems of conservation laws in one dimension: ut + f(u)x = 0, where x ∈ R, u ∈ R and Df(u) is assumed to have real distinct eigenvalues. The main mathematical novelty is to describe the dynamics on a network, represented by a directed topological graph, instead of a real line. The more advanced results are available for the scalar case, i.e. n = 1.",
"title": ""
},
{
"docid": "6bda457a005dbb2ff6abf84392d7b197",
"text": "One of the major problems in developing media mix models is that the data that is generally available to the modeler lacks sufficient quantity and information content to reliably estimate the parameters in a model of even moderate complexity. Pooling data from different brands within the same product category provides more observations and greater variability in media spend patterns. We either directly use the results from a hierarchical Bayesian model built on the category dataset, or pass the information learned from the category model to a brand-specific media mix model via informative priors within a Bayesian framework, depending on the data sharing restriction across brands. We demonstrate using both simulation and real case studies that our category analysis can improve parameter estimation and reduce uncertainty of model prediction and extrapolation.",
"title": ""
},
{
"docid": "b7e78ca489cdfb8efad03961247e12f2",
"text": "ASR short for Automatic Speech Recognition is the process of converting a spoken speech into text that can be manipulated by a computer. Although ASR has several applications, it is still erroneous and imprecise especially if used in a harsh surrounding wherein the input speech is of low quality. This paper proposes a post-editing ASR error correction method and algorithm based on Bing’s online spelling suggestion. In this approach, the ASR recognized output text is spell-checked using Bing’s spelling suggestion technology to detect and correct misrecognized words. More specifically, the proposed algorithm breaks down the ASR output text into several word-tokens that are submitted as search queries to Bing search engine. A returned spelling suggestion implies that a query is misspelled; and thus it is replaced by the suggested correction; otherwise, no correction is performed and the algorithm continues with the next token until all tokens get validated. Experiments carried out on various speeches in different languages indicated a successful decrease in the number of ASR errors and an improvement in the overall error correction rate. Future research can improve upon the proposed algorithm so much so that it can be parallelized to take advantage of multiprocessor computers. KeywordsSpeech Recognition; Error Correction; Bing Spelling",
"title": ""
},
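The correction loop described above is straightforward to sketch: tokenize the ASR output, ask a spelling-suggestion service about each token, and substitute any returned suggestion. In the sketch below, `get_spelling_suggestion` is a stand-in stub with a toy lookup table; the paper queries Bing's online suggestions, whose actual API is not reproduced here.

```python
# Hedged sketch of post-editing ASR error correction via spelling suggestions.
from typing import Optional

def get_spelling_suggestion(word: str) -> Optional[str]:
    """Placeholder stub: return a suggested correction, or None when none is returned."""
    toy_suggestions = {"recieve": "receive", "speach": "speech"}  # illustrative only
    return toy_suggestions.get(word.lower())

def correct_asr_output(asr_text: str) -> str:
    corrected = []
    for token in asr_text.split():
        suggestion = get_spelling_suggestion(token)
        corrected.append(suggestion if suggestion else token)  # keep the token when no suggestion
    return " ".join(corrected)

if __name__ == "__main__":
    print(correct_asr_output("did you recieve my speach notes"))
    # expected: did you receive my speech notes
```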
{
"docid": "c82cecc94eadfa9a916d89a9ee3fac21",
"text": "In this paper, we develop a supply chain network model consisting of manufacturers and retailers in which the demands associated with the retail outlets are random. We model the optimizing behavior of the various decision-makers, derive the equilibrium conditions, and establish the finite-dimensional variational inequality formulation. We provide qualitative properties of the equilibrium pattern in terms of existence and uniqueness results and also establish conditions under which the proposed computational procedure is guaranteed to converge. Finally, we illustrate the model through several numerical examples for which the equilibrium prices and product shipments are computed. This is the first supply chain network equilibrium model with random demands for which modeling, qualitative analysis, and computational results have been obtained.",
"title": ""
},
{
"docid": "6de71e8106d991d2c3d2b845a9e0a67e",
"text": "XML repositories are now a widespread means for storing and exchanging information on the Web. As these repositories become increasingly used in dynamic applications such as e-commerce, there is a rapidly growing need for a mechanism to incorporate reactive functionality in an XML setting. Event-condition-action (ECA) rules are a technology from active databases and are a natural method for supporting suchfunctionality. ECA rules can be used for activities such as automatically enforcing document constraints, maintaining repository statistics, and facilitating publish/subscribe applications. An important question associated with the use of a ECA rules is how to statically predict their run-time behaviour. In this paper, we define a language for ECA rules on XML repositories. We then investigate methods for analysing the behaviour of a set of ECA rules, a task which has added complexity in this XML setting compared with conventional active databases.",
"title": ""
},
{
"docid": "dd01a74456f7163e3240ebde99cad89e",
"text": "Features of consciousness difficult to understand in terms of conventional neuroscience have evoked application of quantum theory, which describes the fundamental behavior of matter and energy. In this paper we propose that aspects of quantum theory (e.g. quantum coherence) and of a newly proposed physical phenomenon of quantum wave function \"self-collapse\"(objective reduction: OR Penrose, 1994) are essential for consciousness, and occur in cytoskeletal microtubules and other structures within each of the brain's neurons. The particular characteristics of microtubules suitable for quantum effects include their crystal-like lattice structure, hollow inner core, organization of cell function and capacity for information processing. We envisage that conformational states of microtubule subunits (tubulins) are coupled to internal quantum events, and cooperatively interact (compute) with other tubulins. We further assume that macroscopic coherent superposition of quantum-coupled tubulin conformational states occurs throughout significant brain volumes and provides the global binding essential to consciousness. We equate the emergence of the microtubule quantum coherence with pre-conscious processing which grows (for up to 500 milliseconds) until the mass-energy difference among the separated states of tubulins reaches a threshold related to quantum gravity. According to the arguments for OR put forth in Penrose (1994), superpositioned states each have their own space-time geometries. When the degree of coherent mass-energy difference leads to sufficient separation of space-time geometry, the system must choose and decay (reduce, collapse) to a single universe state. In this way, a transient superposition of slightly differing space-time geometries persists until an abrupt quantum classical reduction occurs. Unlike the random, \"subjective reduction\"(SR, or R) of standard quantum theory caused by observation or environmental entanglement, the OR we propose in microtubules is a self-collapse and it results in particular patterns of microtubule-tubulin conformational states that regulate neuronal activities including synaptic functions. Possibilities and probabilities for postreduction tubulin states are influenced by factors including attachments of microtubule-associated proteins (MAPs) acting as \"nodes\"which tune and \"orchestrate\"the quantum oscillations. We thus term the self-tuning OR process in microtubules \"orchestrated objective reduction\"(\"B>Orch OR\", and calculate an estimate for the number of tubulins (and neurons) whose coherence for relevant time periods (e.g. 500 milliseconds) will elicit Orch OR. In providing a connection among 1) pre-conscious to conscious transition, 2) fundamental space-time notions, 3) noncomputability, and 4) binding of various (time scale and spatial) reductions into an instantaneous event (\"conscious now\", we believe Orch OR in brain microtubules is the most specific and plausible model for consciousness yet proposed.",
"title": ""
},
{
"docid": "018b25742275dd628c58208e5bd5a532",
"text": "Multivariate time series (MTS) datasets broadly exist in numerous fields, including health care, multimedia, finance, and biometrics. How to classify MTS accurately has become a hot research topic since it is an important element in many computer vision and pattern recognition applications. In this paper, we propose a Mahalanobis distance-based dynamic time warping (DTW) measure for MTS classification. The Mahalanobis distance builds an accurate relationship between each variable and its corresponding category. It is utilized to calculate the local distance between vectors in MTS. Then we use DTW to align those MTS which are out of synchronization or with different lengths. After that, how to learn an accurate Mahalanobis distance function becomes another key problem. This paper establishes a LogDet divergence-based metric learning with triplet constraint model which can learn Mahalanobis matrix with high precision and robustness. Furthermore, the proposed method is applied on nine MTS datasets selected from the University of California, Irvine machine learning repository and Robert T. Olszewski's homepage, and the results demonstrate the improved performance of the proposed approach.",
"title": ""
}
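A minimal sketch of the measure described above: dynamic time warping whose local cost between frame vectors is a Mahalanobis distance. Here the metric matrix is simply the inverse covariance of some training frames; the paper instead learns the matrix with LogDet-divergence metric learning under triplet constraints, which is not implemented in this sketch.

```python
# Hedged sketch: DTW between multivariate series with a Mahalanobis local distance.
import numpy as np

def mahalanobis_dtw(A, B, M):
    """A: (n, d), B: (m, d) multivariate series; M: (d, d) positive-definite metric."""
    n, m = len(A), len(B)
    diff = A[:, None, :] - B[None, :, :]                          # (n, m, d) frame differences
    local = np.sqrt(np.einsum("nmd,de,nme->nm", diff, M, diff))   # Mahalanobis local cost
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):                                     # classic DTW recursion
        for j in range(1, m + 1):
            acc[i, j] = local[i - 1, j - 1] + min(acc[i - 1, j],
                                                  acc[i, j - 1],
                                                  acc[i - 1, j - 1])
    return acc[n, m]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train = rng.normal(size=(200, 3))                 # frames used to estimate the metric
    M = np.linalg.inv(np.cov(train, rowvar=False))    # stand-in for the learned matrix
    A, B = rng.normal(size=(40, 3)), rng.normal(size=(55, 3))
    print(round(mahalanobis_dtw(A, B, M), 3))
```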
] |
scidocsrr
|