Freelancer / Digital Nomad

Content is king.

When Bill Gates promised the kingdom in his famous 1996 essay ‘Content is King’, the Internet was newly born. Producing content as the medium went digital became a compelling mission for me. As a content producer born into the Internet, I have been part of this evolution; my journey so far has run from publishing magazines to creating multimedia experiences (using sensors, projections, video, audio, VR, AR, etc.). I have worked as a project manager with teams of engineers, designers, architects, creatives, storytellers, and developers.

Research is the initial stage of each project. Be a detective; use deep googling skills.


Progress without a creative process remains fruitless. Invent, innovate, solve.


With the right tools, every project ends up satisfying. Identify the need, use the tool, or create it.


The only thing that can be seen is your documentation. Keep recording!

“People are interested in why you do, not what you do. It is not enough to want to change the world; you also have to believe in it and naturally consider the interests of humanity.”


Silicon Valley is an important center of technology, where many of the ideas and tools we use daily are first conceived and brought to life. Unfortunately, humanity today remains rooted in post-Industrial Revolution modes of production and treats its own needs as primary over those of the earth. Instead of working toward a sustainable future, modern society prefers paths that are selfish and destructive. It is a crucial time for engineers, creative thinkers, and entrepreneurs who genuinely care about the future of humanity.

‘The Adventures of Elon Mars’ is a fictional story starring Elon Musk, the grittiest and most naive person in the Valley!

As a follower of Saganism, approaching science with a romantic attitude, I am simply inspired by his personality and celebrate it through my creative expression. That is what this book is all about.




Biased Algorithms Are Everywhere,
and No One Seems to Care

This week a group of researchers, together with the American Civil Liberties Union, launched an effort to identify and highlight algorithmic bias. The AI Now initiative was announced at an event held at MIT to discuss what many experts see as a growing challenge.

Algorithmic bias is shaping up to be a major societal issue at a critical moment in the evolution of machine learning and AI. If the bias lurking inside the algorithms that make ever-more-important decisions goes unrecognized and unchecked, it could have serious negative consequences, especially for poorer communities and minorities. The eventual outcry might also stymie the progress of an incredibly useful technology (see “Inspecting Algorithms for Bias”).

Algorithms that may conceal hidden biases are already routinely used to make vital financial and legal decisions. Proprietary algorithms are used to decide, for instance, who gets a job interview, who gets granted parole, and who gets a loan.

The founders of the new AI Now Initiative, Kate Crawford, a researcher at Microsoft, and Meredith Whittaker, a researcher at Google, say bias may exist in all sorts of services and products.


Forget Killer Robots—
Bias Is the Real AI Danger

Google’s AI chief isn’t fretting about super-intelligent killer robots. Instead, John Giannandrea is concerned about the danger that may be lurking inside the machine-learning algorithms used to make millions of decisions every minute.

“The real safety question, if you want to call it that, is that if we give these systems biased data, they will be biased,” Giannandrea said before a recent Google conference on the relationship between humans and AI systems.

The problem of bias in machine learning is likely to become more significant as the technology spreads to critical areas like medicine and law, and as more people without a deep technical understanding are tasked with deploying it. Some experts warn that algorithmic bias is already pervasive in many industries, and that almost no one is making an effort to identify or correct it (see “Biased Algorithms Are Everywhere, and No One Seems to Care”).

“It’s important that we be transparent about the training data that we are using, and are looking for hidden biases in it, otherwise we are building biased systems,” Giannandrea added. “If someone is trying to sell you a black box system for medical decision support, and you don’t know how it works or what data was used to train it, then I wouldn’t trust it.”

The various biases we hold (which can be quite subtle) influence the decisions we make. This is called bias, and we know that humans have a surprising number of cognitive biases. The same applies to the AI systems we tend to trust blindly as decision-makers. Before killer super-intelligent AI robots arrive, the AI systems that are affected by biases while deciding on our behalf may pose the stronger threat.
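Giannandrea's "biased data in, biased system out" point can be sketched in a few lines of Python. The dataset, the group labels, and all the numbers below are entirely invented for illustration; the only claim is that a model which learns from skewed historical decisions will reproduce that skew.

```python
from collections import defaultdict

# Hypothetical historical loan decisions: (group, approved).
# Group "A" was approved far more often than group "B" for reasons
# unrelated to merit -- this is the hidden bias in the training data.
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 30 + [("B", False)] * 70

def train(records):
    """'Learn' each group's historical approval rate."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        approvals[group] += approved
    return {g: approvals[g] / totals[g] for g in totals}

def predict(model, group):
    """Approve whenever the learned rate for the group exceeds 50%."""
    return model[group] > 0.5

model = train(history)
print(model)                 # {'A': 0.8, 'B': 0.3}
print(predict(model, "A"))   # True  -- bias in, bias out
print(predict(model, "B"))   # False
```

Nothing in the code is malicious; the unfairness lives entirely in the training data, which is exactly why it is so easy for a black-box system to conceal it.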


Artificial intelligence can help
warfighters on many fronts

AI makes it possible for machines to learn from experience, adapt to new data, and perform human-like tasks. Deep learning and natural-language processing techniques are helping to train computers to accomplish specific tasks by processing large amounts of data and recognizing patterns within it. The technology is increasingly seen as helpful to the warfighter.

You may recall Putin's statement that whoever leads in artificial intelligence will rule the world. Many of us know that the war industry has been one of the driving forces of technology for centuries. Changing that fact is impossible, but research on autonomous weapons, and the ethical concerns around it, is back on the agenda. Recently, Google employees even pushed the company to announce that it would step back from its military support. Still, AI and machine learning continue to advance very quickly in tactical military technology. Research on how big data can cut costs and inform military tactics also keeps the topic in the spotlight.


AI is acquiring a sense of smell that
can detect illnesses in human breath

Artificial intelligence (AI) is best known for its ability to see (as in driverless cars) and listen (as in Alexa and other home assistants). Soon, it may also smell. My colleagues and I are developing an AI system that can smell human breath and learn to identify a range of illness-revealing substances that we might breathe out.


Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris -- and not just in some theoretical way. We're going to build superhuman machines, says Harris, but we haven't yet grappled with the problems associated with creating something that may treat us the way we treat ants.

Should we fear super-intelligent AI? Sam Harris tries to answer the questions in this field on the foundations of neuroscience and philosophy. Yet he too, like all researchers, weighs human inclinations and the ways we might use this intelligence.
