Aghakhani
Present: Niall Walsh, Stefan Kennedy, Hojjat Aghakhani
- 1:38: Why was FakeGAN only trained on positive fake reviews?
- 5:50: Isn't every review generated by a machine technically 'fake'?
- 7:35: Why didn't FakeGAN use the Yelp data?
- 10:20: Novelty of FakeGAN in comparison to SeqGAN
- 12:50: Generating reviews and using them as an extra data source could be viable
- 15:00: Conditional GAN (Multiclass GAN), how it would help the mode collapse issue
- 17:40: Abstraction of food names, hotel names (noise) from data when training and addition later
- 22:51: What is the reward, and how does Monte Carlo rollout calculate it and pass it back to the generator?
- 34:57: How are the discrete tokens combined to represent the word embeddings?
- 39:22: Fake review generation paper from CCS to look at
- 42:10: Which CNN kernel was used for the discriminator and how was that chosen?
- 43:50: Not much time should be spent experimenting with hyperparameters when community-derived ones will work
- 45:05: The LSTM generator in FakeGAN is pretty much the same as SeqGAN
- 46:00: Any pointers on writing the paper, submission to the conference, general tips, etc.
- 48:00-End: Wrap up, thanks, chit chat
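The Monte Carlo reward question at 22:51 refers to the SeqGAN-style rollout that FakeGAN also uses: a partial sequence is completed several times by the generator policy, the discriminator scores each completion, and the mean score is the reward passed back as the policy-gradient signal. A minimal toy sketch, with made-up stand-ins for the generator and discriminator (the real ones are an LSTM and a CNN):

```python
import random

# Toy setup: sequences are lists of token ids from a tiny vocabulary.
VOCAB = [0, 1, 2, 3]

def generator_step(prefix):
    """Toy generator policy: appends a random token (stands in for the LSTM)."""
    return prefix + [random.choice(VOCAB)]

def rollout(prefix, seq_len):
    """Complete a partial sequence to full length using the generator policy."""
    seq = list(prefix)
    while len(seq) < seq_len:
        seq = generator_step(seq)
    return seq

def discriminator(seq):
    """Toy discriminator: 'probability' the sequence is real.
    Here just the fraction of even tokens, standing in for the CNN."""
    return sum(1 for t in seq if t % 2 == 0) / len(seq)

def mc_reward(prefix, seq_len, n_rollouts=16):
    """SeqGAN-style reward for a partial sequence: average discriminator
    score over N Monte Carlo completions. This scalar is what gets passed
    back to the generator as the reward for its most recent token."""
    return sum(discriminator(rollout(prefix, seq_len))
               for _ in range(n_rollouts)) / n_rollouts

random.seed(0)
r = mc_reward([0, 2], seq_len=8)
print(0.0 <= r <= 1.0)  # the reward is an estimated probability
```

More rollouts lower the variance of the reward estimate at the cost of more generator forward passes, which is the main practical knob in this step.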
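The embedding question (34:57) and the CNN-kernel question (42:10) both concern the discriminator front end: discrete token ids index into an embedding matrix, then kernels of several widths are convolved over the embedded sequence and max-pooled over time. A numpy sketch of this Kim-style text CNN; all dimensions and the random weights are illustrative, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions only.
vocab_size, embed_dim, seq_len = 50, 8, 12
embedding = rng.normal(size=(vocab_size, embed_dim))

def conv_features(tokens, kernel_widths=(3, 4, 5), filters_per_width=4):
    """For each kernel width, slide a window over the embedded sequence,
    take a tanh activation per position, max-pool over time, and
    concatenate the pooled features from all widths.
    (Kernels are freshly randomised per call; a real model would learn them.)"""
    x = embedding[tokens]                      # token ids -> (seq_len, embed_dim)
    feats = []
    for w in kernel_widths:
        kernels = rng.normal(size=(filters_per_width, w, embed_dim))
        acts = np.array([[np.tanh(np.sum(k * x[i:i + w]))
                          for i in range(len(tokens) - w + 1)]
                         for k in kernels])    # (filters, positions)
        feats.append(acts.max(axis=1))         # max over time
    return np.concatenate(feats)               # (len(widths) * filters,)

tokens = rng.integers(0, vocab_size, size=seq_len)
f = conv_features(tokens)
print(f.shape)  # (12,)
```

Using a few kernel widths (e.g. 3/4/5) rather than one is the common community default for text CNNs, which also bears on the 43:50 point about not over-tuning hyperparameters.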
Niall Walsh niall.walsh58@mail.dcu.ie, Sun, 10 Feb, 19:13
Hello, Nice to meet you Hojjat. My name is Niall and I'm reaching out to you regarding your paper on fake review detection using GANs. My colleagues and I are currently working on a research project in the same domain (GAN-based fake review detection) and I'm extremely interested in the work done in your paper. I would like to commend you and your team for the work done on this project. There is a (mis)conception that GANs are useless/impossible to use in text-based domains, and I feel you made large progress towards changing that. I'm just wondering if you may have some spare time for a quick chat about some of the work done in this project, or whether you're available to answer some questions we may have about GANs and your approach to adapting them for a text classification task? Thanks very much for your time, Niall Walsh
Hojjat Aghakhani, Wed, 13 Feb, 19:53
Hello Niall, Thanks for your interest. :) Sorry for the late reply. Honestly, my schedule is pretty tight right now (we have a deadline at the end of February). I'm more than happy to be of any help to you and very interested in hearing about your project. I'm available for a quick chat regarding the project (what we have done, what we have tried...), but for the tiny details of the project, as it was almost a year ago, I unfortunately do not have much time to refresh my memory currently. So, if waiting two weeks is fine with you, let's have a talk in early March; that's ideal for me and honestly the best option. But I don't want to keep you waiting, so if you prefer to chat sooner, let's have a quick discussion in the upcoming days. Friday, Feb. 12, 10am-5pm (GMT-8) would be fine for me. Best, Hojjat
Niall Walsh, Sat, 16 Feb, 15:45
Hey Hojjat, Thanks for getting back to me. Early March is perfect for us; until then we will experiment ourselves and surely learn a lot more about GANs, making our conversation much more valuable. I appreciate your willingness to help us out; it will surely prove very useful for us. Thanks and regards, Niall